intertwine-expel committed
Commit bb62fc3 · 1 Parent(s): fe2e900

Upload blog post json files

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 10-tips-for-protecting-computer-security-and-privacy-at-home.json +6 -0
  2. 12-revealing-questions-to-ask-when-evaluating-an-mssp-or.json +6 -0
  3. 12-ways-to-tell-if-your-managed-security-provider-won-t-suck.json +6 -0
  4. 2023-great-expeltations-report-top-six-findings.json +6 -0
  5. 3-must-dos-when-you-re-starting-a-threat-hunting-program.json +6 -0
  6. 3-steps-to-figuring-out-where-a-siem-belongs-in-your.json +6 -0
  7. 45-minutes-to-one-minute-how-we-shrunk-image-deployment.json +6 -0
  8. 5-best-practices-to-get-to-production-readiness-with.json +6 -0
  9. 5-cybersecurity-predictions-for-2023.json +6 -0
  10. 5-pro-tips-for-detecting-in-aws.json +6 -0
  11. 5-tips-for-writing-a-cybersecurity-policy-that-doesn-t-suck.json +6 -0
  12. 6-things-to-do-before-you-bring-in-a-red-team.json +6 -0
  13. 7-habits-of-highly-effective-remote-socs-expel.json +6 -0
  14. 7-habits-of-highly-effective-socs.json +6 -0
  15. a-beginner-s-guide-to-getting-started-in-cybersecurity.json +6 -0
  16. a-cheat-sheet-for-managing-your-next-security-incident.json +6 -0
  17. a-common-sense-approach-for-assessing-third-party-risk.json +6 -0
  18. a-defender-s-mitre-att-ck-cheat-sheet-for-google-cloud.json +6 -0
  19. a-tough-goodbye.json +6 -0
  20. a-year-in-review-an-honest-look-at-a-developer-s-first-12.json +6 -0
  21. add-context-to-supercharge-your-security-decisions-in.json +6 -0
  22. an-easier-way-to-navigate-our-security-operations-platform.json +6 -0
  23. an-expel-guide-to-cybersecurity-awareness-month-2022.json +6 -0
  24. an-inside-look-at-what-happened-when-i-finally-took.json +6 -0
  25. announcing-open-source-python-client-pyexclient-for.json +6 -0
  26. applying-the-nist-csf-to-u-s-election-security-expel.json +6 -0
  27. attack-trend-alert-aws-themed-credential-phishing-technique.json +6 -0
  28. attack-trend-alert-email-scams-targeting-donations-to-ukraine.json +6 -0
  29. attack-trend-alert-revil-ransomware.json +6 -0
  30. attacker-in-the-middle-phishing-how-attackers-bypass-mfa.json +6 -0
  31. back-in-black-hat-black-hat-usa-2022day-1-recap.json +6 -0
  32. bec-and-a-visionary-scam.json +6 -0
  33. behind-the-scenes-building-azure-integrations-for-asc-alerts.json +6 -0
  34. behind-the-scenes-in-the-expel-soc-alert-to-fix-in-aws.json +6 -0
  35. better-web-shell-detections-with-signal-sciences-waf.json +6 -0
  36. blog.json +6 -0
  37. budget-planning-determining-your-security-spend.json +6 -0
  38. cloud-attack-trends-what-you-need-to-know-and-how.json +6 -0
  39. cloud-security-archives.json +6 -0
  40. come-sea-how-we-tackle-phishing.json +6 -0
  41. companies-with-250-1000-employees-suffer-high-security.json +6 -0
  42. connect-hashicorp-vault-and-google-s-cloudsql-databases.json +6 -0
  43. containerizing-key-pipeline-with-zero-downtime.json +6 -0
  44. could-you-go-a-week-without-meetings-at-work.json +6 -0
  45. creating-data-driven-detections-with-datadog-and.json +6 -0
  46. customer-context-beware-the-homoglyph.json +6 -0
  47. cutting-through-the-noise-riot-enrichment-drives-soc.json +6 -0
  48. dear-fellow-ceo-do-these-seven-things-to-improve-your-org-s.json +6 -0
  49. detecting-coin-miners-with-palo-alto-networks-ngfw.json +6 -0
  50. detection-and-response-in-action-an-end-to-end-coverage.json +6 -0
10-tips-for-protecting-computer-security-and-privacy-at-home.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "10 tips for protecting computer security and privacy at home",
+ "url": "https://expel.com/blog/10-tips-protecting-computer-security-privacy-at-home/",
+ "date": "Apr 23, 2020",
+ "contents": "Subscribe \u00d7 EXPEL BLOG 10 tips for protecting computer security and privacy at home Tips \u00b7 7 MIN READ \u00b7 DAVID SCHUETZ \u00b7 APR 23, 2020 \u00b7 TAGS: Get technical / Heads up / How to Whether you\u2019re at home or at the office, there\u2019s a good chance you\u2019re relying on the internet. At the office you might have a security team who works hard to ensure your data is protected. But what about protecting your security at home? As of late, it seems like nearly everything is connected to our Wi-Fi. From multiple laptops and cell phones, to thermostats and light switches, smart technology makes our lives easier. And now in the age of social distancing, we are relying on our home networks more than ever. But the idea of being responsible for keeping your personal network connections and devices secure can be daunting. Does this mean you should live in a constant state of fear that someone will hack into your network or devices? No. But you do need to know about some steps to take to protect yourself. So \u2026 what threats should you be worried about, exactly? Most common threats For the purpose of this post, let\u2019s put vulnerabilities into three buckets \u2013 networks, endpoints and online behavior \u2013 and talk about why you should care. Networks If it\u2019s connected to the internet (laptops, TVs, voice assistants, etc.), then it can probably access other devices at home. Which means there are ample opportunities for attackers to find entry as we transmit data throughout our networks. But, unless you live off the grid, you don\u2019t have much choice except to rely on the internet to function in society. Think about securing your networks like locking your doors at home. You don\u2019t want attackers to come in and steal your belongings. And you definitely don\u2019t want them using your home to conduct criminal activity (resulting in the FBI busting down your door). Opening a port on your router for a game, connecting a thermostat to the cloud, even giving a visitor your Wi-Fi password for their phone \u2013 these can all open our networks to potential threats. Luckily, there are relatively simple ways you can make sure no one is slipping in your back door while you aren\u2019t paying attention (check out the 10 tips at the end of this post). I also get a lot of questions about using public Wi-Fi. Here\u2019s my advice: getting attacked while using public Wi-Fi isn\u2019t probable if you aren\u2019t a big target, but it is possible. That\u2019s why it\u2019s important to be thoughtful when you are using networks outside of your home. Improve your security on public Wi-Fi by using a VPN, or avoid the Wi-Fi altogether and tether to your cell phone (ideally with a cable). Endpoints Many built-in services on laptops can create more opportunities for attackers. A well-known attack is a fake \u201chelp desk\u201d call, tricking someone into granting remote access to their screen. Unless you directly call for IT support, no one needs you to share your screen or to enable remote control. Avoid keeping file sharing features like AirDrop on (and even then, set to accept files from contacts only). Turn on file sharing and remote access only when you need it, and turn it off again once you\u2019re done. Think about the apps you use, too. Be careful when installing an app that asks you to change network settings \u2013 it could be trying to watch your web traffic. 
And if an application asks for access to your location, contacts, or other privacy-related content, don\u2019t say \u201cYes\u201d unless you understand exactly why it\u2019s asking. As a general rule, lock your computer screen if you get up to grab a cup of coffee and put a lock on your cell phone screen. It\u2019s helpful to update your settings so your screen locks automatically after being idle for five minutes. Sure, locking screens might matter a little less if you live alone and are working from home, but these are still good habits to adopt. Online behavior Attackers often count on us to make a mistake and accidentally open the door for them. Think about the number of times you enter your bank and credit card information when you\u2019re ordering groceries from Amazon. Make sure you\u2019re shopping through reputable dealers and avoid storing your credit card information on a website. Many banks will allow you to set up text message alerts for large purchases or unusual activity \u2013 a smart feature to enable, to be on the safe side. Then there\u2019s phishing. What makes something look suspicious? Emails with a sense of urgency or a time limit, obscure invoices and warnings of disastrous outcomes are all red flags. Pop-ups that won\u2019t go away or are asking you to download something are often nefarious. Make sure you also hover over links and investigate them before clicking them. Do I need to bother mentioning that you shouldn\u2019t plug an unknown USB drive into your computer? Just in case\u2026don\u2019t do that. Don\u2019t be too quick when granting access to shared documents in G-suite or iCloud, for example. Make sure people and organizations can be vouched for and are trusted before granting access. Watch what you share on social media. Never give out your address or personal information. Hackers can search on social media sites to find answers to security questions. Tips and tricks for computer safety and privacy We\u2019ve only scratched the surface and already this looks like a lot of work. How can you make sure you aren\u2019t allowing yourself to be a target without spending your entire day thinking of all the ways you can be attacked? Use these 10 tips and tricks. Create strong passwords, don\u2019t reuse them on different sites, and ALWAYS use MFA \u2013 multi-factor authentication \u2013 when given the option (these are one-time passwords, push messages, even text messages in a pinch). Also, use a password manager application! A good password manager can make it easy to select strong, unique passwords, and should support many built-in MFA systems. They can warn you if you\u2019ve accidentally reused a password, or if you forgot to enable MFA. They can even alert you when sites you visit have had a recent password breach. Keep your software updated on operating systems, apps, laptops, cell phones and routers. Vendors are constantly patching bugs and security holes, some of which can be critical entry points for an attacker. Most operating systems and app stores can automatically update their software for you. Keeping your home network updated (Wi-Fi routers, etc.) isn\u2019t quite as critical, but if it\u2019s been years since you looked at your router, it may be a good idea to check for updates. Use WPA2 with a strong password when setting up Wi-Fi at home. For your visitors, consider setting up a guest network with a different network name and password. Disallow remote access to your network and desktop (remote login, screen and file sharing, etc.) 
by disabling it on your computers and limiting the number of ports you let through the internet router. When you do need it, enable it only for the time you\u2019ll be using it, and then immediately turn it back off again. Create a separate administrator account, and use a non-admin account for day-to-day activity. By keeping your administrator \u201cpersona\u201d separate from your daily use account, you lessen the chance that you may accidentally install malicious software without paying attention (many of us are a little too quick to click that \u201cOK\u201d button when we are prompted). By forcing you to switch to a different account, you ensure that a random, \u201cOh, I need your admin password now,\u201d prompt isn\u2019t going to break your computer, and makes installation of software and system-level changes a much more explicit action. Be careful with what you share online. Many sites still use \u201csecret questions\u201d to help you recover passwords. But a secret question like \u201cWhat brand was your first car?\u201d is only secret if that information is hard to find. Many common secret questions end up being things that people frequently share online (as part of a Facebook profile, or some forgotten tweet that might be easily searched for). Still others may be found from common data aggregation services \u2013 it\u2019s surprisingly easy to find the last five home addresses for just about anyone, often for no charge. Also, you should be careful not to give away too much about where you are (\u201cI\u2019m in Europe for a month, and our dogs are at the kennel, so our big suburban home in the wooded neighborhood is COMPLETELY UNATTENDED.\u201d) It\u2019s not likely that burglars are trolling social media to find targets, but you shouldn\u2019t make it too easy for them, either. Be thoughtful about the apps you install and always download from a trusted app store when possible. The \u201cbig\u201d app stores (Apple, Google, etc.) do a pretty good job of making sure that malicious software is kept out, and sticking to just those sources will go a long way to keeping you safe and secure. Whenever something (especially a website) prompts you to download a \u201cspecial app,\u201d don\u2019t download it right then and there. Instead, note what the file is (or does) and try to find it, or a suitable equivalent, in one of the main app stores. Even if you can\u2019t find it in the app store, if you can independently source it on the web, rather than taking the version the website just offered, that\u2019s usually a better plan. Have a keen eye for phishing and social engineering. Scams still come through email more than any other method, but the phone is a growing source of computer attacks. The most common is some variant of a \u201chelp desk\u201d calling to warn you that your computer is compromised, and asking you to do things to help them secure it (which instead just opens it up to their attacks). Plus there are all manner of old-school confidence tricks that people still succeed in pulling off, through phone calls, text messages and email. Learn how to recognize these, and swiftly ignore them when they happen (hang up, delete, etc.). If your router (and tech-fu) supports it, put all your internet of things, er, things (security cameras, baby monitors, refrigerators, smart-locks, etc.) on a totally separate network with its own access point. This is a great place to put your guest network as well, though they\u2019ll lose the ability to interact with your TV, etc. 
Backups, backups, BACKUPs! Backing up your data is a pain. Do it anyway. Follow the 3-2-1 rule: Keep 3 copies of your data; on 2 different systems (for example, one in the den, one in the basement); and 1 off-site (like at a friend or relative\u2019s house). Keeping two copies at home protects you against a single computer failure or breach; keeping one outside of the house protects you against a house fire. Cloud-based services like Backblaze are fantastic for offsite backups. Have a question about keeping your stuff secure at home? We\u2019ve got lots of security nerds over here who\u2019d love to help you. Just send us a note."
+ }
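Every file in this commit follows the flat schema visible in the diff above: a single JSON object with four string fields (title, url, date, contents). Here is a minimal Python sketch for loading and sanity-checking one of these files; the load_post helper and the schema check are illustrative, not part of the commit:

```python
import json

# Each uploaded file holds a single flat JSON object with four string fields.
EXPECTED_KEYS = {"title", "url", "date", "contents"}

def load_post(path: str) -> dict:
    """Load one blog-post file and verify it matches the expected schema."""
    with open(path, encoding="utf-8") as f:
        post = json.load(f)
    missing = EXPECTED_KEYS - post.keys()
    if missing:
        raise ValueError(f"{path} is missing fields: {sorted(missing)}")
    return post

post = load_post("10-tips-for-protecting-computer-security-and-privacy-at-home.json")
print(post["title"], "|", post["date"])  # e.g. 10 tips for protecting ... | Apr 23, 2020
```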
12-revealing-questions-to-ask-when-evaluating-an-mssp-or.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "12 revealing questions to ask when evaluating an MSSP or ...",
+ "url": "https://expel.com/blog/12-revealing-questions-when-evaluating-mssp-mdr-vendor/",
+ "date": "Feb 19, 2019",
+ "contents": "Subscribe \u00d7 EXPEL BLOG 12 revealing questions to ask when evaluating an MSSP or MDR vendor Tips \u00b7 9 MIN READ \u00b7 YANEK KORFF \u00b7 FEB 19, 2019 \u00b7 TAGS: How to / Managed security / Planning / Selecting tech / Tools Over the last 20 years, we\u2019ve heard all kinds of interesting questions as prospective customers evaluate which type of managed cybersecurity service is right for them. The questions are often buried in a big spreadsheet, otherwise known as a request for proposal (RFP). Some of them are remarkably well thought out and put together. However, the vast majority follow a well-worn path and are kind of predictable (check out Gartner\u2019s MSSP RFP Toolkit for some of the greatest hits). But the thing about predictable questions is they generate \u2014 you guessed it \u2014 predictable answers that leave one provider sounding a lot like the rest. So in an attempt to arm you with a few questions that\u2019ll make your prospective MSSP or managed detection and response (MDR) provider stop and think, we\u2019ve compiled a short list of revealing questions that we think any service provider should be able to answer with flying colors. (Although sadly, we find that many don\u2019t.) Without further ado, here we go. Can you provide an example of ways you\u2019ve adapted your service to your customers\u2019 environments? You know as well as we do that one size doesn\u2019t fit all. Your industry, your geography, your company, your strategy, your tactics, your team \u2026 all of these variables mean every company is different. Even if you find a service provider that\u2019s a good fit today, will they adapt so they can be a good fit tomorrow? How will they continue to tune their service so you\u2019re always getting what you need? Many providers will talk about \u201cbusiness context.\u201d It\u2019s a bit of a holy grail to security service providers so make sure you understand what it is and how it works. Can your provider differentiate an attacker from that weird PowerShell blip when Jenna the sysadmin runs her same PowerShell command every Wednesday morning? Can they react faster if the CFO gets phished? Are they able to ignore PUP/PUA at one customer because it\u2019s noise, but report it every time at another because it\u2019s the CISO\u2019s priority? Without this ability, over time you\u2019ll feel like you\u2019re being served the same gruel day after day. How long, on average, did it take to fully onboard your last 10 customers, and at what point did you consider the onboarding complete? There are few activities in the managed security space that evoke more dread than onboarding. Notorious for exceptionally long, complex and error-prone disasters rife with miscommunication, onboarding roadmaps and project plans can get complex quickly. What\u2019s worse, success may mean one thing to the provider and something else to you. But it doesn\u2019t have to be this way. During the RFP process, make sure you understand what activities mark the onboarding process as being complete and ask your provider how long it took them to go through that process for their last 10 customers. Get real data. Or, even better, ask the provider if some of these customers can be references and validate this data. 
Remember, onboarding time has three components: calendar time (end to end how long it took), your organization\u2019s time (how much new customers have to do, and how long it takes) and the provider\u2019s time (you should care about this because it contributes to component #1 \u2014 calendar time). One last #protip for ya: ask your provider if you\u2019re going to have to pay for service during onboarding. Can you use my existing security technology or will you require that we implement new technology? You\u2019d think this one would be obvious, but many providers will mandate that you either buy new technology, add their technology (because they won\u2019t use what you already have) or introduce a duplicate technology (usually their SIEM) because their architecture demands it. A service provider in this space should be using the technology you already have in play and operationalizing it. That means ingesting the data your security products are already producing, analyzing that information and delivering answers about what matters and what doesn\u2019t. Now, not all technology is created equal. Some categories of security tech are best suited to detection, other categories are more useful when you\u2019re investigating an incident or proactively hunting for bad things in your environment. You\u2019ll want to make sure the tech you have in place can actually do what it needs to do. That said, this shouldn\u2019t come across as a requirement from your MSSP or MDR \u2014 a provider should not tell you that you need to buy this and that for anything to work. Instead, you should get a higher fidelity answer like: \u201cWithout an endpoint detection and response (EDR) tool, our ability to investigate will be limited, as will our hunting capability \u2014 some of which relies on EDR.\u201d How does your detection and response strategy differ among on-prem technology, cloud infrastructure and cloud applications? \u201cWe monitor your AWS, Azure and O365 environments for threats and respond immediately!\u201d Have you heard this one before? This isn\u2019t an answer. The way you differentiate between providers that \u201cspeak cloud\u201d and those that don\u2019t is by listening closely to their detection and response philosophy. What\u2019s different about security in the cloud versus on-prem? How are the approaches they take for static versus elastic cloud infrastructure different? Or are they? What about cloud applications? How do they think about the security of configuration settings versus the security of data residing in containers? Validating a security provider\u2019s ability to handle your cloud security is one of the more challenging aspects in the assessment process. Consider looping in people from your own organization that are responsible for your cloud strategy and implementation. They\u2019ll ask good questions and can help you evaluate the answers you receive. How will we work together during a security incident? When a security incident arises, communication is key. You and your service provider begin in a fog of war. Keeping exceptional clarity on \u201cwhat we know\u201d and \u201cwhat we don\u2019t know for sure yet\u201d is essential to navigate the investigation and response process that follows. Understanding how your provider will communicate this info (and how quickly) is important. Do you have to log into a portal and review a mostly static page updated once every few hours? That\u2019s a useful artifact, but not a useful communication method. Do you submit a ticket? 
Ugh. Instead, look for effective methods that include rapid info sharing and multi-person communication. Of course, during an incident you\u2019ll have to communicate with all sorts of people \u2014 inside and outside of your organization. Your service provider might have relationships with law firms who have experience in breach communications. They may also have relationships with incident response providers who can show up on-site at a moment\u2019s notice. Either way, do your own research and find firms that are a good fit for your organization. Of course, it\u2019s always easier to do this before an incident than during one. Running your own incident response tabletop exercises can reveal a lot (we\u2019ve even created a role-playing game to try and make it fun \u2014 give it a go and let us know what you think). Can you provide an example of a time you learned something from a customer that improved your service? A security service that fails to learn and grow isn\u2019t actually a security service. It\u2019s \u2026 well, we\u2019re not sure what it is, but at the end of the day it\u2019s pretty useless to you. Sure, it might provide the illusion of security, but in reality there\u2019s a lot of time spent turning cranks that produce nothing. We\u2019ve heard this complaint from more than a few CISOs: \u201cMy MSSP is a black box. I put my money in and nothing comes out.\u201d Your prospective service provider should have crisp examples of how they\u2019ve learned and improved the way they help all of their customers. And it should be material. Not something simple like, \u201cI found this threat here so I added it to my intel database.\u201d That\u2019s table stakes. What caused your service provider to rethink something and say to themselves, \u201cI think the way we\u2019re tackling this is wrong based on this customer feedback \u2026 let\u2019s do it differently?\u201d Demonstrating the ability to adapt ensures your service provider will grow with you. How will you give me the visibility I need to be confident that you\u2019re making the right decisions for my organization? Don\u2019t just trust, but verify. It\u2019s what you\u2019re paying your service provider to do after all, so you should have confidence not only that they\u2019re doing the right thing \u2026 but that they\u2019re doing it right too. Take a moment to think through the steps that comprise \u201csecurity operations.\u201d Triage. This is the process analysts go through to evaluate (often quickly) whether something is a false positive or warrants investigation. Sometimes these analysts are humans. Sometimes they\u2019re robots. Does your provider tell you both who made the decision and why? If they filter out something important very early but were wrong, that\u2019s a problem. Investigations. Will your provider show you what information their analysts pulled from your environment? Can you get a sense of the thought process they use to decide what to retrieve? And what to make of it? This is where expertise really comes into play. Reporting and response. Is the output you receive easy to understand? Are response actions clear, and do you have control over who-gets-to-meddle-with-what in your infrastructure? If you have to translate everything your provider is telling you so that mere mortals who don\u2019t speak security can understand it, that\u2019ll become frustrating \u2026 fast. 
As you take a step back and look through what\u2019s been done, does the provider have timestamps for every step that was taken so you can evaluate this information and measure whether their overall performance is improving or degrading? Ultimately, you have to answer this question: Did they show their work? That\u2019s the only way to verify that they\u2019re doing what you\u2019re paying them to do. When things start to break, how (quickly) do you find and fix the problem? When do I find out about it? If you\u2019ve worked with an MSSP before, you\u2019re familiar with this problem we\u2019re about to summarize. Nine months after a piece of technology stopped sending data, the provider found out it was broken. Because you told them. That\u2019s a big hit to your visibility and a lot of risk you took on without any warning. Not cool. How will your new prospective provider handle this? Can they detect when a device becomes unreachable? How fast? What about if the device stays online but stops sending data? Or worse \u2013 what if there\u2019s a significant and unexpected drop in data volume? Who\u2019s responsible for monitoring this stuff and how quickly can they recover? Get examples if you can, and bonus points if they provide you direct visibility into this kind of monitoring. How did you identify and report on an active red team engagement conducted on one of your customers\u2019 networks? Yeah, we know this one feels pretty specific, but we\u2019ve run into too many instances where customers brought in a relatively sophisticated red team partner only to discover their managed security provider was blind to these mock adversaries. They couldn\u2019t even detect them, let alone investigate or respond. To be clear, when we say red team , we\u2019re talking about a group of whitehats who try to break into your network, escalate privileges, move laterally and steal stuff \u2026 and then report on things you can do to improve your defenses. Can your new potential partner provide an example of this exercise playing out? How did they detect the \u201cattacker\u201d in this case and to what extent were they able to provide ongoing reporting? Once again, bonus points for the provider if they\u2019ll let you hear all of this directly from one of their current customers. When I have a question or concern how do I engage with your team? We talked about communication during an incident. What about when there\u2019s no incident? Is it the same process, or are there two different processes? The more you have to adapt to your provider\u2019s modes of communication, the less likely you\u2019ll remember to do the right thing when the time is right. Watch out for laggy ticketing systems and be cautious about support portals where the identity of the people you\u2019re talking to is hidden. Your partner\u2019s security analysts will have exceptionally generous access to your data. You should be able to get to know who they are and interact with them directly from time to time. Can you show me how you calculate the price of your service? Every provider will give you a price. But can you understand how and why they got to that number? Be wary of long rambling answers. If your prospective provider can\u2019t give you a crisp answer or, better yet, quote you a price on your first sales call, imagine how the conversation will go once you become their customer. If selected, can you provide a free 30-day proof of concept to demonstrate you can deliver on the expectations you\u2019ve set? 
After you\u2019ve asked all of your questions, appraised the responses and picked a winner there\u2019s a good chance you\u2019ll still be asking yourself, \u201cCan they really do all of these great things in my environment?\u201d Exaggerated sales and marketing claims are, unfortunately, one of the biggest scourges on the security industry. You don\u2019t want to get a few weeks into a new agreement and learn your new provider can\u2019t do everything they promised or, even worse, find out when they missed something important. One of the most effective ways to mitigate this risk is to hop on your provider\u2019s service on an interim basis. It gives you a chance to get a feel for what the interactions will be like and gives your potential partner an opportunity to prove themselves. And if your prospective service provider can\u2019t even get this operational within 30 days? Well, that tells you all you need to know. So there you have it. Twelve questions that can help you sleuth out what it will be like to work with your managed security provider. If you\u2019ve got other questions, we\u2019d love to hear them. Or if you\u2019re reading this and thinking \u201cmaybe I\u2019ll just build my own SOC,\u201d check out our post on all the things you\u2019ll need to consider if you\u2019re thinking of building a 24\u00d77 SOC."
+ }
12-ways-to-tell-if-your-managed-security-provider-won-t-suck.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "12 ways to tell if your managed security provider won't suck ...",
+ "url": "https://expel.com/blog/12-ways-to-tell-managed-security-provider-wont-suck-next-year/",
+ "date": "Mar 22, 2019",
+ "contents": "Subscribe \u00d7 EXPEL BLOG 12 ways to tell if your managed security provider won\u2019t suck next year Security operations \u00b7 9 MIN READ \u00b7 YANEK KORFF \u00b7 MAR 22, 2019 \u00b7 TAGS: CISO / How to / Managed security / Selecting tech / Tools I used to love my iPhone. Now, at best, it works fine when new features aren\u2019t getting in my way. I also remember when AOL was amazing, ICQ was the best chat client and Netscape was the go-to browser. Maybe it\u2019s inevitable that the things we love will eventually be superseded, though hopefully not too quickly. Let\u2019s take a look at \u201csecurity operations.\u201d Turning logs and other forms of security signal into useful actions is an activity that\u2019s been around for decades. Whether companies have their own internal capability or have outsourced to a managed security provider, the breach headlines have continued unabated. Okay, that\u2019s not entirely true \u2014 they\u2019ve accelerated. And yet, even in this morass that is the security industry, every once in a while you\u2019ll find someone truly delighted about the products or services they\u2019re using. But delighted customers are the exception when it comes to managed security service providers (MSSPs). Some will tell you that MSSPs take your money and give you nothing in return or that they\u2019re a black stain on our industry. In fact, according to Forrester\u2019s 2017 Global Business Technographics\u00ae Security Survey, 34 percent of responding organizations were actively evaluating alternatives or actively planning replacement of their existing MSSP . In an industry where three-year contracts are common, a third of the market was in the process of switching at the time of the survey. Math doesn\u2019t paint a pretty picture here. In this ten billion dollar industry that\u2019s growing nearly 10 percent each year, thousands of companies are beyond disgruntled: they\u2019re looking to get rid of their current provider. If you\u2019re somewhere in that one-third of the market that\u2019s looking to switch to another MSSP, you\u2019re probably thinking to yourself, \u201cI thought my provider would be better \u2026 and they were for a little while. Then it all went down the toilet.\u201d So, before you sign that next contract how do you determine the likelihood that the quality of the service will last? How long will you be happy with the quality of your service provider? You might be able to get a sense of this through a proof-of-concept exercise but that won\u2019t tell you much about how you\u2019ll feel a year (or five) from now. Delighters will become table-stakes over time \u2014 so, to truly satisfy you, any new service will have to do more than just not deteriorate. It has to improve. Constantly. Creating a culture that searches for quality Why is it so essential that quality is core to your provider\u2019s DNA? Well, because it\u2019s already part of yours. You\u2019ve got a limited budget and a part of your job is to get the most bang for your buck over time. So you\u2019ll constantly be changing your investments to ensure you\u2019re getting the most for your money. A dollar you spend a year from now should be doing more than a dollar today. This translates directly to your service provider: an hour of work your service provider does today had better do more for you a year from now than it does right this minute. 
This means everyone (yes, everyone) at your service provider\u2019s organization needs to be looking at ways to improve quality constantly. So how can you tell if an organization\u2019s got it? Here are some key characteristics that we\u2019ve seen that create an environment where a persistent focus on quality can emerge: People feel a sense of trust and psychological safety, People have ownership of the problems they\u2019re trying to solve, People have the energy to engage in quality-seeking behaviors, and People can honestly self-assess throughout the process. You\u2019re probably thinking \u201cthat sounds pretty soft and squishy.\u201d So how do you assess whether a company you\u2019re talking to has built this sort of culture? Well, without further ado, here are a dozen things you can do to sniff out whether \u201cthe search for quality\u201d exists at an organization. 1. In search of trust \u2013 look for transparency Transparency means more than just being forthcoming. It means making the effort to be easily understood. There\u2019s no shortage of places you can go to find examples of an org\u2019s transparency. Start with the website and see if you can figure out what the company does and how they do it. As you ask questions to fill in the gaps, take note of whether you can understand the answers or if they\u2019re wrapped in marketing buzzwords or technical mumbo-jumbo. See how deeply transparency extends into the organization. Spend some time to understand the company\u2019s high-level goals. As you run into various employees in your evaluation process, ask them what these goals are and what they think about them. Ask what\u2019s going well and what\u2019s challenging. If employees can\u2019t (or won\u2019t) be forthcoming when they\u2019re literally trying to sell you something, what are the chances they\u2019ll be honest when they screw up? 2. In search of trust \u2013 look for simple execution Trust is a fickle thing. As we approach new relationships, we come with some amount of default trust in the new partner. I like to call this the \u201ctrust bank.\u201d If you\u2019ve had your trust violated a little too often, you won\u2019t be very generous when it comes to initial your initial deposit in the trust bank. If you\u2019re a bit more optimistic you might make a huge trust deposit up front, thinking the best of people. The unfair thing about trust banks is that deposits are always small, but withdrawals are easily five times as large. During your conversations, the service provider will promise to do many things. They\u2019ll send you a summary. They\u2019ll put you in touch with another customer. They\u2019ll get you on the phone for a chat with someone with greater technical depth in an area that\u2019s important to you. They\u2019ll promise you a quote. Do they follow through on those things? And do they meet the expectations they set within the timeframes they promised? It is surprisingly difficult for people to consistently meet simple obligations like doing what they said they\u2019d do. So when you find that in an organization, it really stands out. 3. In search of trust \u2013 look for failure It\u2019s easy to provide examples of past successes. It\u2019s a lot harder to admit failure. You\u2019re about to sign up for a long-term service. You\u2019ve got a right to know what sort of problems there will be. How will they be identified, communicated and handled? Ask for an example, and ask for artifacts (redacted and/or anonymized presumably). 
Get the full story and ask a lot of questions to fill in the blanks. An organization that knows how to handle failures and turn them into success stories is well positioned to earn (and keep) your trust. 4. In search of ownership \u2013 identify roles and responsibilities You\u2019ll have the opportunity to meet several people at a potential provider during the courtship process. Pick two or three different roles and get a copy of their job description (this may or may not be what\u2019s posted on the company\u2019s website). Ask those employees what their responsibilities are and make sure things line up. Do employees seem to understand where their responsibilities start and end? Can they point to other teams within the org and tell you how the teams work together? Sounds pretty basic, but having a strong sense of ownership often breaks down when this foundation is missing. 5. In search of ownership \u2013 ask about projects When you\u2019re meeting with mid-level and senior people at the organization who aren\u2019t part of the management team, ask about what they\u2019re working on. Usually, technical people are more than happy to share some of the projects they have in flight. Then, ask why they\u2019re working on those projects. In organizations where employees feel a strong sense of ownership, they look at their work not as tasks, but as solving business problems or customer problems. They articulate their work in the context of something greater. 6. In search of energy \u2013 ask about work and life People think about \u201cwork/life balance\u201d differently. As you interact with people at your service provider, ask them how they view the work/life balance at the company. Does it meet their needs? Do they get vacation time? Sick leave? How much? Do people actually take vacation? Do people feel like they can disconnect? In environments where there are lots of \u201csingle points of failure,\u201d people tend to work hard constantly, be stressed out and make more mistakes. While this might happen from time to time due to shifts in staffing, it shouldn\u2019t be the norm. On the other hand, where people feel like they get the space they need to bring all their enthusiasm to bear, they\u2019ll do better work and you\u2019ll be happier for it. 7. In search of energy \u2013 ask about celebrations and praise One of the factors that contributes the most to quality work is recognition that individuals and teams have done well. Contrast this with environments in which \u201cthe beatings will continue until morale improves.\u201d Yeah, you\u2019ve been there and seen that. Ask about the last few company events, what they were and why they happened. What were they celebrating? What about the last spot award or \u201ckudos\u201d someone got? Can they remember when something like that happened? 8. In search of quality-seeking behaviors \u2013 ask about conflict There\u2019s plenty of info out on the interwebs about the negative effects of groupthink and the need for constructive debate. Yet \u201cconflict\u201d seems to be a dirty word in most office environments. Instead of having a difficult conversation we hear \u201clet\u2019s take it offline\u201d which is office lingo for \u201clet\u2019s stop talking about this because it\u2019s making me uncomfortable.\u201d Ask about disagreements, technical or otherwise, and how they\u2019re resolved within the organization. Ask for an example. 
You\u2019ll quickly get a sense as to how the environment supports constructive disagreement and the extent to which \u201coffice politics\u201d play a role. 9. In search of quality-seeking behaviors \u2013 ask about metrics You may only get operational insight into a subset of the metrics your service provider uses to measure the quality and efficacy of what they do every day. Have someone walk you through it. How does the org measure the effectiveness of detection logic? How do they measure the availability of technology, whether it\u2019s their own or yours? Can someone provide an example of a metric he or she thought was useful \u2014 but turns out it wasn\u2019t? Is there a metric the org recently added because they\u2019ve learned something new? Look for this engine of continuous improvement within the things they count and measure. 10. In search of quality-seeking behaviors \u2013 ask about hiring When you were hired, someone entrusted you to make good hiring decisions. When you hired a manager, you entrusted her to do the same. Maybe you provided feedback, coaching or training to help her be more effective. As you bring on a service provider, you have the same need. Their hiring practices will directly impact the quality of the service you experience over time. How do they think about hiring? Talk to the head of HR. Do they use a structured hiring process? How do they think about evaluating experience, skills and traits? What key traits do they look for in hires throughout the organization? Any organization with rich answers around these questions (especially when these answers are consistent throughout the organization) clearly has a high hiring bar. 11. In search of self-assessment \u2013 ask about evaluations Do employees have the opportunity to think about how they\u2019re doing and how they\u2019re growing? And does anyone guide them through this process? The answer here can\u2019t be as simple as \u201cyeah, we do annual reviews \u2026 and they\u2019re super stressful.\u201d A huge component of perpetually increasing quality is making sure that every employee has real, ongoing opportunities for learning and growth. As you meet security practitioners, engineers and managers, ask what they\u2019ve learned since they started. What technical and non-technical growth have they experienced and how has this helped them grow their careers? Who supported this growth and how much did the company do to help? Are there programs in place to encourage this development? The more a company does to invest in its employees, the more likely it is that those employees will be investing in improving the service you receive. 12. In search of self-assessment \u2013 look out for hubris We started this blog talking about some iconic names in technology like AOL and Apple. Do you remember when AOL \u201cbought\u201d Time Warner? Have you seen what happens to technology companies that become so full of themselves they feel like you\u2019re obligated to buy their stuff? That only lasts so long. This is a difficult area to assess but an important one. If everyone you talk to is convinced they\u2019re the best at everything they do, that\u2019s a warning sign. If everyone is taking themselves a little too seriously, there might not be enough room for fallibility. If it\u2019s \u201cour way or the highway\u201d and compromise is out of the question, then that provider probably isn\u2019t a good fit for you. 
These warning signs create blinders for an organization, making it difficult for them to see when they\u2019ve done something wrong and learn from that mistake. What if we\u2019re wrong about all of this? Perhaps we\u2019re wrong about what it takes to maintain a culture that generates quality over time. But we do know this for certain: When you\u2019re evaluating an MSSP, you should walk away feeling pretty confident that over the course of your working relationship you\u2019ll both get better together. Or maybe you\u2019re sitting there wondering what our answers would be for some of these questions. Well, you\u2019re welcome to ask \u2026 or maybe in the not-too-distant future, we\u2019ll publish some of them right here."
+ }
2023-great-expeltations-report-top-six-findings.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "2023 Great eXpeltations report: top six findings",
+ "url": "https://expel.com/blog/2023-great-expeltations-report-top-six-findings/",
+ "date": "Jan 31, 2023",
+ "contents": "Subscribe \u00d7 EXPEL BLOG 2023 Great eXpeltations report: top six findings Security operations \u00b7 2 MIN READ \u00b7 BEN BRIGIDA \u00b7 JAN 31, 2023 \u00b7 TAGS: MDR Bad news: 2022 was a big year in cybersecurity. Good news: We stopped a lot of attacks. Better news: We sure learned a lot, didn\u2019t we? We just released our Great eXpeltations annual report, which details the major trends we saw in the security operations center (SOC) last year\u2026and what you can do about them this year. You can grab your copy now , and here\u2019s a taste of what you\u2019ll find. Top findings from the Great eXpeltations report 1: Business email compromise (BEC) accounted for half of all incidents, and remains the top threat facing our customers. This finding is consistent with what we saw in 2021. Key numbers: Of the BEC attempts we identified: more than 99% were in Microsoft 365 (M365\u2014previously known as Office 365, or O365) and fewer than 1% occurred in Google Workspace. Fifty-three percent of all organizations experienced at least one BEC attempt, and one organization was targeted 104 times throughout the year. 2: Threat actors started moving away from authenticating via legacy protocols to bypass multi-factor authentication (MFA) in M365. Instead, the bad guys have adopted frameworks such as Evilginx2, facilitating adversary-in-the-middle (AiTM) phishing attacks to steal login credentials and session cookies for initial access and MFA bypass. FIDO2 (Fast ID Online 2) and certificate-based authentication stop AiTM attacks. However, many organizations don\u2019t use FIDO factors for MFA. 3: Threat actors targeted Workday to perpetrate payroll fraud. In July, our SOC team began seeing BEC attempts, across multiple customer environments, seeking illicit access to human capital management systems\u2014specifically, Workday. The goal of these attacks? Payroll and direct deposit fraud. Once hackers access Workday, they modify a compromised user\u2019s payroll settings to add their direct deposit information and redirecting the victim\u2019s paycheck into the attacker\u2019s account. (Which is just evil.) The lesson? Enforce MFA within Workday and implement approval workflows for changes to direct deposit information. 4: Eleven percent of incidents could have resulted in deployment of ransomware if we hadn\u2019t intervened. This represents a jump of seven percentage points over 2021. Microsoft has made it easier to block macros in files downloaded from the internet , so ransomware threat groups and their affiliates are abandoning use of visual basic for application (VBA) macros and Excel 4.0 macros to break into Windows-based environments. Instead, they\u2019re now using disk image (ISO), short-cut (LNK), and HTML application (HTA) files. Here are some stats we find interesting: Hackers used zipped JavaScript files to gain initial access in 44% of all ransomware incidents. ISO files were used to gain initial access in 12% of all ransomware incidents. This attack vector didn\u2019t make our list in 2021. Nine percent of all ransomware incidents started with an infected USB drive. 5: Six percent of business application compromise (BAC) attempts used push notification fatigue to satisfy MFA. Push notification fatigue occurs when attackers send repeated push notifications until the targeted employee \u201cauthorizes\u201d or \u201caccepts\u201d the request. This allows the attacker to satisfy MFA. (Hackers may or may not have learned this technique from their four year-olds at home.) 
6: Credential harvesters represented 88% of malicious email submissions. Credential theft via phishing continues to grow, with identity as the main focus of today\u2019s attacks. The top subject lines in malicious emails that resulted in an employee click or compromise were \u201cIncoming Voice Message,\u201d \u201cChecking in,\u201d and \u201cVoice Mail Call received for <user\u2019s email>.\u201d Our data shows that actionable, time-sensitive, and financially driven social engineering themes are most successful. The full report tells you more\u2014lots more\u2014and provides insights and advice to help you defend against these threats. Give it a look, and if you have questions, drop us a line."
+ }
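Finding 5 above turns on push-notification fatigue: the attacker keeps sending MFA prompts until the target approves one. Below is a hedged sketch of how a burst of prompts might be flagged from authentication logs; the event shape, ten-minute window, and threshold are assumptions for illustration, not Expel's actual detection logic:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical MFA push-prompt log: (username, time the prompt was issued).
events = [
    ("jenna", datetime(2023, 1, 31, 9, 0)),
    ("jenna", datetime(2023, 1, 31, 9, 1)),
    ("jenna", datetime(2023, 1, 31, 9, 2)),
    ("jenna", datetime(2023, 1, 31, 9, 3)),
]

WINDOW = timedelta(minutes=10)   # assumed lookback window
THRESHOLD = 3                    # more prompts than this in one window looks like spam

def flag_push_fatigue(events):
    """Return users who received a burst of MFA prompts in a short window."""
    by_user = defaultdict(list)
    for user, ts in events:
        by_user[user].append(ts)
    flagged = set()
    for user, times in by_user.items():
        times.sort()
        for i, start in enumerate(times):
            # Count prompts landing inside [start, start + WINDOW].
            burst = sum(1 for t in times[i:] if t - start <= WINDOW)
            if burst > THRESHOLD:
                flagged.add(user)
                break
    return flagged

print(flag_push_fatigue(events))  # {'jenna'}
```

A real detection would also weigh whether any prompt in the burst was ultimately approved, since that is the moment MFA is actually satisfied.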
3-must-dos-when-you-re-starting-a-threat-hunting-program.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "3 must-dos when you're starting a threat hunting program",
+ "url": "https://expel.com/blog/3-must-dos-when-starting-threat-hunting-program/",
+ "date": "Aug 13, 2019",
+ "contents": "Subscribe \u00d7 EXPEL BLOG 3 must-dos when you\u2019re starting a threat hunting program Security operations \u00b7 4 MIN READ \u00b7 KATE DREYER \u00b7 AUG 13, 2019 \u00b7 TAGS: How to / Hunting / Planning / SOC / Threat hunting This is a recap of a talk two of our Expletives gave at Carbon Black\u2019s CB Connect in San Diego. Let us know what Qs you\u2019ve got about threat hunting \u2014 drop us a note or message us on Twitter to chat. So you\u2019ve decided you want to build a threat hunting program, but where do you start? There are several paths you can follow in building a threat hunting program. And, depending on what your hunting goals are, there are lots of options for how to hunt and what tools to use. However, figuring out exactly what approach is going to achieve your outcomes is often challenging too, especially when there are loads of fancy new tools being marketed at you every day and security buzzwords flying at you left and right. Our goal is to help you filter out the shiny stuff and think about the brass tacks of your program\u2014and what\u2019s going to make it (and you) successful. What Is Threat Hunting? Threat hunting is the process of creating a hypothesis, gathering past data, applying filtering criteria that supports the hypothesis, and investigating the leads that you generate. It\u2019s an important proactive way to look for attackers. If you\u2019ve got existing security tech, you can use that for threat hunting, or you can think about what tools you\u2019ll need to meet the goals of a new threat hunting program. And don\u2019t forget that using tools you already have and combining that data with other information\u2014like open-source intelligence\u2014is an option too. We recently put together a list of the pros and cons of using different security tech for threat hunting, which is a helpful read if you\u2019re wondering how to use the tech you already own to conduct a hunt, as well as finding new tech that can help you in generating hypotheses for successful threat hunting. Is Hunting Right For Your Org? There are plenty of reasons to start a threat hunting program. The biggest perk is that, when planned out and executed well, it\u2019ll provide you with an extra layer of security. However, like any investment it takes time and resources. And so you\u2019ll want to consider whether it\u2019s right for you and the business you\u2019re protecting. Before building your own threat hunting program, consider the risks facing your organization versus your available resources. For example, if you operate in a high-risk or highly-targeted environment\u2014maybe you work at a financial institution, a health facility or another company that stores large amounts of sensitive information about customers\u2014then hunting probably makes sense because there are plenty of adversaries who\u2019ll find your organization to be an attractive target. But if your organization\u2019s risk profile is medium- to low-risk, your time and budget might be better spent on less sophisticated threats like commodity malware. If you don\u2019t operate in a high-risk environment, hunting might distract you from things that should probably be higher on the priority list like implementing effective anti-phishing controls. 
3 Tips As You Start Building Your Own Threat Hunting Program If you\u2019ve determined that you do want to build a threat hunting program, there are a couple considerations to mull over before knocking on your CISO\u2019s office door to ask for more people and budget. Think through your objectives, how you\u2019ll report on what you find and how you\u2019ll eventually scale your hunting program. Here are our three must-dos before you start a threat hunting program and how you can determine what information and technology to include within yours. Must-do 1: Know Your Threat Hunting Objectives Before you start talking about what tech you\u2019ll use for hunting or how many people you\u2019ll need, figure out what you\u2019re trying to accomplish and why. With threat hunting, you\u2019re assuming that something has already failed and you\u2019ve been compromised. So as you\u2019re defining your objectives, make sure to: Validate your existing controls: Your objective is to validate existing security controls. This means your hunting hypothesis should be focused on an attacker that\u2019s already bypassed one or more of your security controls to get into your network. Where are there known (or suspected) vulnerabilities, or what controls have failed in the past? Assess the quality of your alert management and triage capabilities: Threat hunting is a great way to perform Quality Assurance (QA) on your alert management and triage efforts. You probably want to have someone reviewing the hunt results who didn\u2019t spend a ton of time in the past month reviewing alerts. You\u2019ll want to run techniques where the hypothesis is looking for activity where you would\u2019ve expected alerts to be generated. A good example here could be looking for suspicious powershell usage. Identify notable events in your environment: If you\u2019re hunting, the goal doesn\u2019t always have to be to identify threats. Notable events are events that your hunting techniques identified that were previously unknown. You might uncover policy violations like discovering unauthorized software, or you may find activities that software or employees performed that you (or your team or customer) didn\u2019t know about. Evolve your detection libraries: If you have hunting techniques in place, a long-term goal is to figure out ways to make them high enough fidelity without losing their value so that they can become detections. Similarly, if you have detections that are too prone to false positives, think about how you can build a hypothesis around them and turn them into hunting techniques. Must-do 2: Decide How and What Information to Report On After defining your objectives, think about how you\u2019ll report on the findings from your hunts. Not only that, but also consider who you\u2019re going to brief on those insights. For example, what hunt technique are you using and why? What data did you review and what did you discover? Then talk about the outcome of your hunt, including what steps you should take\u2014if any\u2014to make your org more resilient in the future. Must-do 3: Consider Long-Term Scaling of the Program Conducting a first successful hunt is great, but how do you plan to make threat hunting part of your ongoing security practices going forward? Can you maintain an effective threat hunting program with the resources you have today or do you need new tech or more people? Think about what scale looks like based on your goals and the business\u2019s needs. 
Be prepared to have a conversation about all of your ideas on future scaling of your threat hunting program with your CISO or team lead. Have More Questions About Threat Hunting? To learn how Expel can help with your threat hunting program, contact us ."
6
+ }
3-steps-to-figuring-out-where-a-siem-belongs-in-your.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "3 steps to figuring out where a SIEM belongs in your ...",
3
+ "url": "https://expel.com/blog/3-steps-to-figuring-out-where-siem-belongs-in-security-program/",
4
+ "date": "Sep 22, 2020",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 3 steps to figuring out where a SIEM belongs in your security program Tips \u00b7 9 MIN READ \u00b7 MATT PETERS, DAN WHALEN AND PETER SILBERMAN \u00b7 SEP 22, 2020 \u00b7 TAGS: MDR / SIEM / Tech tools Spin up a conversation about someone\u2019s security operations and chances are the conversation will quickly move to their security information and event management (SIEM) tool. A SIEM can play an important role in your security strategy. But figuring out where it belongs (and what type of SIEM is best for you) depends on a few things. So, where to begin? We\u2019ve pinpointed three steps that can help you figure out where a SIEM fits within your security program. This post walks you through each of these steps and we hope it will help you decide what makes the most sense for you, your team and your business. Step 1: Figure out where you are on your SIEM journey Working with different customers, we\u2019ve seen most orgs fall into one of three different categories. Which one are you? Just getting started Maybe you\u2019re just starting to get serious about security or you reached an inflection point and are looking for a SIEM to take your security program to the next level. You\u2019re optimistic about the prospects of a SIEM and how it can help address some of your pain points, whether that\u2019s addressing visibility gaps or keeping your auditors happy! As you explore all of the SIEM options out there, you\u2019re pretty quickly realizing there are a ton of opportunities (especially around automation) but it\u2019s also hard to get a handle on what factors should influence your decision. You may also be wondering: if it\u2019s so easy to automate why isn\u2019t everyone doing this successfully? You\u2019re excited to bring in a SIEM and up level your team but you\u2019re also wondering what pitfalls you should avoid and how to steer clear of a path that will end up costing too much and bogging down your team with low value work. Doubling down You\u2019ve had a SIEM or two (or three) and know what it takes to keep it singing. You\u2019ve learned through trial and error what works, what doesn\u2019t and the level of investment (people and money) you need internally (or through third-party partners ) to accomplish your use cases. You\u2019ve also had time to really figure out what use cases matter to you. All of those flashy selling points you thought would be a great value add? You\u2019ve come to terms with the fact that many of them aren\u2019t for you. You know what you want of your SIEM and are looking to get the most you can with your existing investment \u2013 this could mean dedicating internal resources to managing your SIEM or looking outward for help. Disillusioned skeptic You aren\u2019t sold on the tale that a SIEM can solve all of your security woes and you aren\u2019t afraid to talk about it. How did you get here? It may have had something to do with your past experiences \u2013 you\u2019ve tried to make a SIEM work in the past and have gotten burned . Maybe the product (or products) didn\u2019t do what you wanted, or it ended up costing way more than you could justify. Regardless, you now view your security program more holistically and don\u2019t see a SIEM as the single source of truth. Sure, there are use cases where it makes sense (you may still have a SIEM kicking around in a corner for your application and OS logs) but you\u2019re reluctant to hinge the success of your security program on a single solution. 
You prefer to rely on your various security products and services to get you the visibility and response capabilities you need to be successful. Now that you\u2019ve figured out where you are in the SIEM journey, it\u2019s time to move on to the next step! Step 2: Determine what use cases are most important to you No matter where you are in your journey, it\u2019s important to clarify (and often re-clarify) what you\u2019re expecting your SIEM to do. You can make a SIEM do just about anything with enough effort (and consultants and money) and that\u2019s exactly what many organizations have done. Don\u2019t know where to begin? Consider the following use cases and who (you or a third party) you envision taking responsibility: Use Case Description Examples Compliance and reporting Do you have regulatory requirements for retaining certain types of data? A SIEM could help you aggregate all of this required data and make it easy to satisfy audit requirements. ISO 27001 certification Threat detection Depending on the maturity of your security program, you may have the need/desire to write your own detection rules. A SIEM can provide these capabilities, but also requires a definite investment in content management. Consider whether you want to invest in internal teams to write and maintain detection rules or whether you want to leverage security products or services to accomplish this use case. You want to invest in a team to build custom detections for your unique application data You want alerts, but don\u2019t want to be responsible for content. (This is when you may want to look to products or services like Expel!) Investigative support A SIEM can be a powerful investigative tool if it\u2019s fed with the right data and given the love and attention it needs. Using a SIEM for investigation is a very common use case, whether you\u2019re investing in an internal team or partnering with a third party to respond to your alerts. For this use case, consider how easy it is to add new log sources and how intuitive/fast searching that data is. An easy and fast search capability will empower your analysts to get to the bottom of an alert without unnecessary frustration. Building an internal security team that investigates with your SIEM Partnering with a third party like Expel to investigate with your SIEM Response automation Containing and remediating an incident can be challenging, especially in large enterprise environments. If this is a challenge for your organization, consider how you can apply technology to this problem. Some SIEM technologies have built-in response capabilities or SOAR integrations that can help in this area. As you explore these options, pay close attention to the level of effort required to configure these tools and make sure your investment will actually help solve your problem. Also consider who you want to be responsible for managing the tool (you vs. a third party). Splunk with Phantom integration A SOAR tool like Demisto Case management Who did what and when? As your security program matures, process becomes more important. Once you have multiple analysts responsible for responding to alerts, knowing \u201cwho\u2019s got it\u201d and how issues were resolved helps you understand what\u2019s happening across the environment. You can communicate that upwards to drive change.
As you think about this use case, you\u2019ll need to decide where you want incident management to occur \u2013 is it in your SIEM, a ticketing system, or is a partner/third-party service responsible for managing alerts? Splunk with Enterprise Security serving as an incident management tool A ticketing system like Jira or ServiceNow Step 3: Know what type of SIEM you have (or want) Finally, whether you have a SIEM or are going shopping for one, it\u2019s important to first understand use cases. Once you identify your needs, you can figure out which SIEMs are best for you. Traditional SIEM Traditional SIEMs are typically large, multifunction applications. They tend to have highly structured data models (think SQL vs. full-text indexing) which enable certain types of use cases but make others more difficult. If given proper care, they can be very powerful but often aren\u2019t very flexible to changing requirements over time. Sample vendors: QRadar, ArcSight, LogRhythm What are they good at? Highly opinionated data models make querying data and writing detections easy (once you understand the data model) One \u201cright\u201d way to do things keeps things relatively simple (accessibility is often better) Often come with a lot of out-of-the-box features for detection, compliance and reporting Strong incident management feature sets; a good candidate for a \u201csingle source of truth\u201d Products have been around for a long time and are generally mature and stable What are some common pain points? Hampered solutions (limited by opinionated data models/the vendor\u2019s way of doing things) For on-prem installations, management can be a significant investment, so you need to plan for that Slower to accommodate new use cases/features and can become \u201cbehind the times\u201d Search-based SIEM Search-based SIEMs are essentially log aggregation and search tools first, with other features added on top of that core function. They have flexible data models and everything \u2013 from rules to reporting and dashboards \u2013 is driven by a search. But they often require a lot of expertise to satisfy certain use cases (like detection) \u2013 meaning you\u2019ve got to live and breathe their search language to see value. Sample vendors: Splunk ES, Sumo Logic, Exabeam What are they good at? Strong investigative support due to powerful search capabilities Flexible and accommodating for new use cases Often easier to manage (particularly for cloud-based/SaaS products) What are some common pain points? Incident management feature sets often lag behind traditional SIEMs as they have a less structured data model Requires expertise to accomplish your use cases (you need to be an expert in their search language) DIY SIEM TL;DR \u2013 you\u2019re starting from scratch. DIY SIEM options are usually open-source projects organizations invest in and build additional tooling around. These options offer a lot of flexibility and can be much more cost effective; however, they require a significant investment in engineering and in-house security expertise to build out security use cases. Sample vendors: Elastic stack, OSSIM What are they good at? Potential long-term cost savings (if you have significant in-house expertise to build and manage!) Flexibility: You have complete control over the solution and can build out the use cases you need What are some common pain points?
Organizations often realize they\u2019ve \u201cbitten off more than they can chew\u201d in terms of the engineering and security expertise required to build and manage a DIY SIEM On-going operational cost of maintenance is on your internal team instead of a third party, which potentially distracts you from the things that are important to your business Open-source options are often significantly limited in feature sets and deployment size May not be compatible with security services (if you ever choose to partner) No SIEM Some organizations forgo a SIEM altogether. This may be an option in cases where your use cases can be satisfied with other existing tools or partnerships with third-party services. For example, if you have no regulatory requirements and have limited log sources (perhaps a few SaaS applications), there may be no good reason to invest heavily in a SIEM if a third party like Expel can address your use cases directly! Sample vendors: Expel and other similar MSSPs/MDRs/XDRs What are the advantages of forgoing a SIEM? One less security tool you have to pay for Reduced complexity and less responsibility What are some reasons you might need a SIEM? Regulatory requirements You have use cases your existing products and services can\u2019t accomplish (like writing rules against your custom application logs or helping your internal teams investigate issues) What\u2019s your next step? There\u2019s a lot to consider as you think (or re-think) how a SIEM should fit into your security program. By identifying where you are in your SIEM journey (and where you want to go), prioritizing use cases and choosing the right SIEM product, you can set your team up for long-term success. There\u2019s likely no \u201cone-size-fits-all\u201d solution, but here are some common models we\u2019ve seen: SIEM model cheat sheet (steal me!) Decentralized model Some organizations do not have a significant need or desire to invest in a SIEM. These organizations may still have a SIEM off in a corner somewhere for a very specific purpose, but it is not central to their security program. Instead, security signal is often consumed directly from security products or from a third-party monitoring service like Expel. Hybrid model A SIEM can help layer additional capabilities on top of existing security controls. A hybrid approach (where a SIEM is used in combination with other security tools) can help deliver capabilities that are the \u201cbest of both worlds.\u201d As an example, many organizations choose to use their SIEM for investigation and compliance, but rely on their security products for detections and a ticketing system for incident management. A service like Expel in this model can help by integrating with all of the various sources of signal directly while leveraging the capabilities of the SIEM to provide visibility across the environment. Centralized model (single pane of glass) In this model, the SIEM is the center of the organization\u2019s security program. The organization is investing significantly in its SIEM and wants it to be the place where everything happens \u2013 from alerting to response and incident management. This model requires expertise, either internal or third party (like a co-managed SIEM service), to succeed. It also requires that all security signals be routed through the SIEM for detection and response. This is an expensive but effective approach for large security teams that have the resources to go this route.
Organizations considering this approach should consider their use cases carefully and ensure the long-term investment is worth it! In many cases, the same use cases can be accomplished with a hybrid approach at a lower cost. Parting thoughts We\u2019ve seen all of these models work. Your decision depends on what makes sense for your business. The key to success is understanding what is important to you and what options you have in front of you. We\u2019ve gone through this very process at Expel and hope this framework can work for you too! Want to talk to someone before making a decision about your information security? Let\u2019s chat ."
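One use case called out above (writing rules against your custom application logs) is easy to picture in code, whatever SIEM you land on. Here is a minimal, hypothetical sketch in Go of a threshold-style rule over newline-delimited JSON application logs; the field names and the five-failures-in-ten-minutes threshold are our own illustrative assumptions, not something from the post or any particular SIEM product.

// detect_bruteforce.go: a sketch of a custom detection rule against your
// own application logs. Illustrative only, not a production rule engine.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

type appLog struct {
	User      string    `json:"user"`
	Event     string    `json:"event"` // e.g. "login_failed"
	Timestamp time.Time `json:"timestamp"`
}

const (
	window    = 10 * time.Minute
	threshold = 5
)

func main() {
	recent := map[string][]time.Time{} // user -> recent failed-login times
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var l appLog
		if err := json.Unmarshal(sc.Bytes(), &l); err != nil || l.Event != "login_failed" {
			continue
		}
		// Keep only failures inside the sliding window, then alert on volume.
		kept := recent[l.User][:0]
		for _, t := range recent[l.User] {
			if l.Timestamp.Sub(t) <= window {
				kept = append(kept, t)
			}
		}
		recent[l.User] = append(kept, l.Timestamp)
		if len(recent[l.User]) >= threshold {
			fmt.Printf("ALERT: %d failed logins for %s within %s\n", len(recent[l.User]), l.User, window)
		}
	}
}

In a search-based SIEM the same logic is usually one line of its query language; the point is that someone has to own writing and tuning it, which is exactly the responsibility question the table above asks you to settle.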
6
+ }
45-minutes-to-one-minute-how-we-shrunk-image-deployment.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "45 minutes to one minute: how we shrunk image deployment ...",
3
+ "url": "https://expel.com/blog/how-we-shrunk-image-deployment-time/",
4
+ "date": "Dec 13, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 45 minutes to one minute: how we shrunk image deployment time Engineering \u00b7 5 MIN READ \u00b7 BJORN STANGE \u00b7 DEC 13, 2022 \u00b7 TAGS: Tech tools We use a GitOps workflow. In practice, this means that all of our infrastructure is defined in YAML (either plain or templated YAML using jsonnet) and continuously applied to our Kubernetes (k8s) cluster using Flux. Initially, we set up Flux v1 for image auto updates. This meant that in addition to applying all the k8s manifests from our Git repo to the cluster, Flux also watched our container registry for new tags on certain images and updated the YAML directly in that repo. This seems great on paper, but in practice it ended up not scaling very well. One of my first projects when I joined Expel was to improve the team\u2019s visibility into the health of Flux. It was one of the main reasons that other teams came to the #ask-core-platform Slack channel for help. Here are a few such messages: Is Flux having issues right now? I made an env var change to both staging and prod an hour ago and I\u2019m not seeing it appear in the pods, even after restarting them Could someone help me debug why my auto deploys have stopped? Hi team, Flux isn\u2019t deploying the latest image in staging Hi! Is Flux stuck again? Waiting 30m+ on a deploy to staging Deployment smoketest We decided to build a deployment smoketest after realizing that Flux wasn\u2019t providing enough information about its failure states. This allowed us to measure the time between when an image was built and when it went live in the cluster. We were shocked to find that it took Flux anywhere between 20 to 45 minutes to find new tags that had been pushed to our registry and update the corresponding YAML file. (To be clear, Flux v1 is no longer maintained and has been replaced with Flux v2.) These scalability issues were even documented by the Flux v1 team. (Those docs have since been taken down, otherwise I would link them.) I believe it was because we had so many tags in Google Container Registry (GCR), but the lack of visibility into the inner workings of the Flux image update process meant that we couldn\u2019t reach any definitive conclusions. We were growing rapidly, teams were shipping code aggressively, and more and more tags were added to GCR every day. We\u2019re at a modest size (~350 images and ~40,000 tags). I did some pruning of tags older than one year to help mitigate the issue, but that was only a temporary fix to hold us over until we had a better long-term solution. The other failure state we noticed is that sometimes invalid manifests found their way into our repo. This would result in Flux not being able to apply changes to the cluster, even after the image had been updated in the YAML. This scenario was usually pretty easy to diagnose and fix since the logs made it clear what was failing to apply. Flux also exposes prometheus metrics that expose how many manifests were successfully and unsuccessfully applied to the cluster, so creating an alert for this is straightforward. Neither the Flux logs nor the metrics had anything to say about the long registry scan times, though. Image updater We decided to address the slow image auto-update behavior by writing our own internal service. Initially, I thought we should just include some bash scripts in CircleCI to perform the update (we got a proof-of-concept working in a day) but decided against it as a team since it wouldn\u2019t provide the metrics/observability we wanted. 
We evaluated ArgoCD and Flux v2, but decided that it would be better to just write something in-house that did exactly what we wanted. We had hacked together a solution to get Flux v1 to work with our jsonnet manifests and workflow, but it wasn\u2019t so easy to do with the image-update systems that came with ArgoCD and Flux v2. Also, we wanted more visibility/metrics around the image update process. Design and architecture This relatively simple service does text search + replace in our YAML/jsonnet files, then pushes a commit to the main branch. We decided to accomplish this using a \u201ckeyword comment\u201d so we\u2019d be able to find the files, and the lines within those files, to update. Here\u2019s what that looks like in practice for YAML and jsonnet files. image: gcr.io/demo-gcr/demo-app:0.0.1 # expel-image-automation-prod local staging_image = 'gcr.io/demo-gcr/demo-app:staging-98470dcc'; // expel-image-automation-staging local prod_image = 'gcr.io/demo-gcr/demo-app:0.0.1'; // expel-image-automation-prod We also decided to use an \u201cevent-based\u201d system, instead of one that continuously polls GCR. The new system would have to be sent a request by CircleCI to trigger an \u201cimage update.\u201d The new application would have two components, each with its own responsibilities. We decided to write this in Go, since everyone on the team was comfortable maintaining an internal Go service (we already maintain a few). Server The server is responsible for receiving requests over HTTP and updating a database with the \u201cdesired\u201d tag of an image, and which repo and branch we\u2019re working with. The requests and responses are JSON, for simplicity. We use Kong to provide authentication to the API. Syncer The syncer is responsible for implementing most of the \u201clogic\u201d of an image update. It first finds all \u201cout of sync\u201d images in the database, then it clones all repos/branches it needs to work with, then does all the text search/replace using regex, and then pushes a commit with the changes to GitHub. We decided to use ripgrep to find all the files because it would be much faster than anything we would implement ourselves. We try to batch all image updates into a single commit, if possible. The less often we have to perform a git pull, git commit, and git push, the faster we\u2019ll be. The syncer will find all out-of-date images and update them in a single commit. If this fails for some reason, then we fall back to updating one image at a time and creating a commit + pushing + pulling for each image. This is how image-updater fits into our GitOps workflow today. Improvements Performance Performance is obviously the main benefit here. The image update operation takes, on average, two to four seconds. From clicking release on GitHub to traffic being served by the new replica set usually takes around seven minutes (including running tests, building the Docker image, and waiting for the two-minute Flux cluster sync loop). The image-update portion of that takes only one sync loop, which runs every minute. Hence, 45 minutes to one \ud83d\ude42. We\u2019re still migrating folks off of Flux and onto image-updater, but as far as we can tell, things are humming away smoothly and the developers can happily ship their code to staging and production without having to worry about whether Flux will find their new image.
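For the curious, the search-and-replace at the heart of the syncer can be approximated in a few lines of Go. This sketch is ours, not Expel's actual implementation: updateImageTag is a hypothetical helper, and the regex is simplified to cover just the two keyword-comment styles shown above.

// A sketch of the keyword-comment rewrite. It swaps the tag on any line
// carrying the matching comment, in both the YAML ("# ...") and jsonnet
// ("// ...") forms shown above.
package main

import (
	"fmt"
	"regexp"
)

func updateImageTag(contents, image, newTag, env string) string {
	// (?m) makes ^ and $ match per line. Group 1 keeps everything up to the
	// tag; group 2 keeps the closing quote (if any) and the keyword comment.
	pattern := fmt.Sprintf(`(?m)^(.*%s:)[^\s'"]+(['"]?;?\s*(?:#|//)\s*expel-image-automation-%s)\s*$`,
		regexp.QuoteMeta(image), regexp.QuoteMeta(env))
	re := regexp.MustCompile(pattern)
	return re.ReplaceAllString(contents, "${1}"+newTag+"${2}")
}

func main() {
	yaml := "image: gcr.io/demo-gcr/demo-app:0.0.1 # expel-image-automation-prod\n"
	fmt.Print(updateImageTag(yaml, "gcr.io/demo-gcr/demo-app", "0.0.2", "prod"))
	// prints: image: gcr.io/demo-gcr/demo-app:0.0.2 # expel-image-automation-prod
}

In the real service, ripgrep narrows the candidate files first, so a rewrite like this only ever runs against lines already known to carry the keyword comment.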
Observability The nice thing about writing your own software is that you can implement logging and metrics exactly how you\u2019d like. We now have more visibility into our image update pipeline than ever. We implemented tracing to give us more granular visibility into how long it takes our sync jobs to run. This allows us to identify bottlenecks in the future if we ever need to, as we can see exactly how long each operation takes (git pull, git commit, find files to update, perform the update, git push, etc). As expected, the git pull and push operations are the most expensive. We also have more visibility into which images are getting pushed through our system. We implemented structured logging that follows the same pattern as the rest of the Go applications at Expel. We now know exactly if/when images fail to get updated and why, via metrics and logs. jsonnet This system natively supports jsonnet, our preferred method of templating our k8s YAML. Flux v1 did not natively support jsonnet. We even made a few performance improvements to the process that renders our YAML along the way. Plans for the future Flux v1 is EOL so we\u2019re planning on moving to ArgoCD to perform the cluster sync operation from GitHub. We prototyped ArgoCD already and really like it. We\u2019ve got a bunch of ideas for the next version of image updater, including a CLI, opening a pull request with the change instead of just committing directly to main, and integrating with Argo Rollouts to automatically roll back a release if smoketests fail."
6
+ }
5-best-practices-to-get-to-production-readiness-with.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "5 best practices to get to production readiness with ...",
3
+ "url": "https://expel.com/blog/production-readiness-hashicorp-vault-kubernetes/",
4
+ "date": "Mar 9, 2021",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 5 best practices to get to production readiness with Hashicorp Vault in Kubernetes Engineering \u00b7 6 MIN READ \u00b7 DAVID MONTOYA \u00b7 MAR 9, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools At Expel, we\u2019ve been long-time users of Hashicorp Vault. As our business and engineering organization has grown, so has our core engineering platform\u2019s reliance on Hashicorp Vault to secure sensitive data and the need to have a highly-available Vault that guarantees the continuity of our 24\u00d77 managed detection and response (MDR) service. We also found that as our feature teams advanced on their Kubernetes adoption journey, we needed to introduce more Kubernetes idiomatic secret-management workflows that would enable teams to self-service their secret needs for containerized apps. Which meant that we needed to increase our Vault infrastructure\u2019s resilience and deployment efficiency, and unlock opportunities for new secret-access and encryption workflows. So, we set out to migrate our statically-provisioned VM-based Vault to Google Kubernetes Engine (GKE). We knew the key to success is following best security practices in order to incorporate Hashicorp Vault into our trusted compute base. There are a variety of documented practices online for running Vault in Kubernetes. But some of them aren\u2019t up-to-date with Kubernetes specific features added on newer versions of Vault, or fail to describe the path to take Vault securely to production-readiness. Let\u2019s connect That\u2019s why I created a list of architectural and technical recommendations for Expel\u2019s site reliability engineering (SRE) team. And I\u2019d like to share these recommendations with you. (Hi, I\u2019m David and I\u2019m a senior SRE here at Expel.) After reading this post, you\u2019ll be armed with some best practices that\u2019ll help you to reliably and securely deploy, run and configure a Vault server in Kubernetes. What is Hashicorp Vault? Before we dive into best practices, let\u2019s cover the basics. Hashicorp Vault is a security tool rich in features to enable security-centric workflows for applications. It allows for secret management for both humans and applications, authentication federation with third-party APIs (e.g.: Kubernetes), generation of dynamic credentials to access infrastructure (e.g.: a PostgreSQL database), secure introduction (for zero trust infrastructure) and encryption-as-a-service. All of these are guided by the security tenet that all access to privileged resources should be short-lived. As you read this post, it\u2019s also important to keep in mind that a Kubernetes cluster is a highly dynamic environment. Application pods are often shuffled around based on system load, workload priority and resource availability. This elasticity should be taken into account when deploying Vault to Kubernetes in order to maximize the availability of the Vault service and reduce the chances of disruption during Kubernetes rebalancing operations. Now on to the best practices. Initialize and bootstrap a Vault server To get a Vault server operational and ready for configuration, it must first be initialized, unsealed and bootstrapped with enough access policies for admins to start managing the vault. 
When initializing a Vault server, two critical secrets are produced: the \u201cunseal keys\u201d and the \u201croot token.\u201d These two secrets must be securely kept somewhere else \u2013 by the person or process that performs the vault initialization. A recommended pattern for performing this initialization process and any subsequent configuration steps is to use an application sidecar. Using a sidecar to initialize the vault, we secured the unseal keys and root token in the Google Secret Manager as soon as they were produced, without requiring human interaction. This prevents the secrets from being printed to standard output. The bootstrapping sidecar application can be as simple as a Bash script or a more elaborate program depending on the degree of automation desired. In our case, we wanted the bootstrapping sidecar to not only initialize the vault, but to also configure access policies for the provisioner and admin personas, as well as issue a token with the \u201cprovisioner\u201d policy and secure it in the Google Secret Manager. Later, we used this \u201cprovisioner\u201d token in our CI workflow in order to manage Vault\u2019s authentication and secret backends using Terraform and Atlantis. We chose Go for implementing our sidecar because it has idiomatic libraries to interface with Google Cloud Platform (GCP) APIs, and because Vault itself is written in Go, it\u2019s easy to reuse the Vault client library included in the project. Pro tip: Vault policies govern the level of access for authenticated clients. A common scenario, documented in Vault\u2019s policy guide, is to model the initial set of policies after an admin persona and a provisioner persona. The admin persona represents the team that operates the vault for other teams or an org, and the provisioner persona represents an automated process that configures the vault for tenant access. Considering the workload rebalancing that often happens in a Kubernetes cluster, we can expect the sidecar and vault server containers to restart suddenly, which is why it\u2019s important to ensure the sidecar can be gracefully stopped, can accurately determine the health of the server before proceeding with any configuration, and produces log entries for the admins with an initial diagnosis on the status of the vault. By automating this process, we also made it easier to consistently deploy vaults in multiple environments, or to easily create a new vault and migrate snapshotted data in a disaster recovery scenario. Run Vault in isolation We deploy Vault in a cluster dedicated to services offered by our core engineering platform, and fully isolated from all tenant workloads. Why? We use separation of concerns as a guiding principle in order to guarantee the principle of least privilege when granting access to infrastructure. We recommend running the Vault pods on a dedicated nodepool to have finer control over their upgrade cycle and enabling additional security controls on the nodes. When implementing high availability for applications, as a common practice in Kubernetes, pod anti-affinity rules should be used to ensure no more than one Vault pod is allocated to the same node. This will isolate each vault server from zonal failures and node rebalancing activities.
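To make that initialization step concrete, here is a compressed sketch of the sidecar's first job, using Vault's official Go client (github.com/hashicorp/vault/api). This is not Expel's actual sidecar: persistSecret is a hypothetical stand-in for the Google Secret Manager write, and the share/threshold values are just common defaults.

package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

// persistSecret stands in for writing to Google Secret Manager. It logs
// only the secret's name and size, never the value.
func persistSecret(name, value string) error {
	log.Printf("would persist %q (%d bytes) to Secret Manager", name, len(value))
	return nil
}

func main() {
	// DefaultConfig honors VAULT_ADDR and friends from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	initialized, err := client.Sys().InitStatus()
	if err != nil {
		log.Fatal(err)
	}
	if initialized {
		return // another replica got here first; nothing to do
	}
	resp, err := client.Sys().Init(&vault.InitRequest{
		SecretShares:    5,
		SecretThreshold: 3,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Secure the two critical secrets as soon as they are produced.
	for i, share := range resp.Keys {
		if err := persistSecret(fmt.Sprintf("vault-unseal-key-%d", i), share); err != nil {
			log.Fatal(err)
		}
	}
	if err := persistSecret("vault-root-token", resp.RootToken); err != nil {
		log.Fatal(err)
	}
}

A real sidecar layers retries, health checks and graceful shutdown around this, per the guidance above, but the shape of the flow is the same: check InitStatus, initialize once, and get the outputs somewhere safe without printing them.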
When exposing a vault server through a load balanced address using a Kubernetes Ingress, make sure the underlying Ingress controller supports TLS passthrough traffic to terminate TLS encryption at the pods, and not anywhere in between. Enabling TLS passthrough is the equivalent of performing transmission control protocol (TCP) load balancing to the Vault pods. Also, enable forced redirection from HTTP to HTTPS. When using kubernetes/ingress-nginx as the Ingress controller, you can configure TLS passthrough with the Ingress annotation nginx.ingress.kubernetes.io/ssl-passthrough. Configuration for the Ingress resource should look like the first sketch below. Ensure traffic is routed to the active server In its simplest deployment architecture, Vault runs with an active server and a couple of hot standbys that periodically check the storage backend for the write lock. A common challenge when dealing with active-standby deployments in Kubernetes is ensuring that traffic is only routed to the active pod. A couple of common approaches are to either use readiness probes to determine the active pod or to use an Ingress controller that supports upstream health checking. Both approaches come with their own trade-offs. Luckily, after Vault 1.4.0, we can use the service_registration stanza to allow Vault to \u201cregister\u201d within Kubernetes and update the pods\u2019 labels with the active status. This ensures traffic to the vault\u2019s Kubernetes service is only routed to the active pod. Make sure you create a Kubernetes RoleBinding for the Vault service account that binds to a Role with permissions to get, update and patch pods in the vault namespace. The vault\u2019s namespace and pod name must be specified using the Downward API. Enable service registration in the vault .hcl configuration file, set VAULT_K8S_POD_NAME and VAULT_K8S_NAMESPACE with the current pod name and namespace, and point the vault\u2019s Kubernetes service at the active pod (all of these are sketched below). Configure and manage Vault for tenants with Terraform Deploying, initializing, bootstrapping and routing traffic to the active server are only the first steps toward operationalizing a vault in production. Once a Hashicorp Vault server is ready to accept traffic and there is a token with \u201cprovisioner\u201d permissions, you\u2019re ready to start configuring the vault authentication methods and secrets engines for tenant applications. Depending on the environment needs, this type of configuration can be done using the Terraform provider for Vault or using a Kubernetes Operator. Using an operator allows you to use YAML manifests to configure Vault and keep their state in sync thanks to the operator\u2019s reconciliation loop. Using an operator, however, comes at the cost of complexity. This can be hard to justify when the intention is to only use the operator to handle configuration management. That\u2019s why we opted for using the Terraform provider to manage our vault configuration. Using Terraform also gives us a place to centralize and manage other supporting configurations for the authentication methods. A couple of examples of this are configuring the Kubernetes service account required to enable authentication delegation to a cluster\u2019s API server, or enabling authentication for the vault admins using their GCP service account credentials.
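The config samples this section originally pointed at did not survive the trip into plain text, so here are reconstructed sketches based on the surrounding description and the public ingress-nginx and Vault docs. All names, namespaces and hostnames are placeholders. First, the TLS-passthrough Ingress (the ingress-nginx controller itself must run with --enable-ssl-passthrough for the annotation to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    # Terminate TLS at the vault pods, not at the controller
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: vault.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200

Next, service registration in the vault .hcl file, with the pod name and namespace fed in through Downward API environment variables on the vault container:

service_registration "kubernetes" {}

env:
  - name: VAULT_K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: VAULT_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

Finally, a Service that routes only to the active server by selecting on the vault-active label that service registration keeps up to date:

apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: vault
spec:
  selector:
    app: vault
    vault-active: "true"
  ports:
    - name: https
      port: 8200
      targetPort: 8200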
When using the Kubernetes authentication backend for applications running in a Kubernetes cluster, each application can authenticate to Vault by providing a Kubernetes service account token (a JWT) that the Vault server uses to validate the caller identity. It does this by invoking the Kubernetes TokenReview API on the target API server configured via the Terraform resource vault_kubernetes_auth_backend_config. Allow Vault to delegate authentication to the tenants\u2019 Kubernetes cluster, as in the Terraform sketch below. Once you\u2019ve configured Vault to allow for Kubernetes authentication, you\u2019re ready to start injecting vault agents onto tenant application pods so they can access the vault using short-lived tokens. But this is a subject for a future post. Are you cloud native? At Expel, we\u2019re on a journey to adopt zero trust workflows across all layers of our cloud infrastructure. With Hashicorp Vault, we\u2019re able to introduce these workflows when accessing application secrets or allowing dynamic access to infrastructure resources. We also love to protect cloud native infrastructure. But getting a handle on your infrastructure\u2019s security observability is easier said than done. That\u2019s why we look to our bots and tech to improve productivity. We\u2019ve created a platform that helps you triage Amazon Web Services (AWS) alerts with automation. So, in addition to these best practices, I want to share an opportunity to explore this product for yourself and see how it works. It\u2019s called Workbench\u2122 for Engineers, and you can get a free two-week trial here. Check it out and let us know what you think!"
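The Terraform that the delegation sentence above points at was a code sample in the original post; here is a reconstructed sketch using the Vault provider's resources, with placeholder values for the tenant cluster:

resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

resource "vault_kubernetes_auth_backend_config" "tenant" {
  backend = vault_auth_backend.kubernetes.path

  # Placeholders: the tenant cluster's API server, its CA bundle and a
  # token-reviewer service account JWT, however you choose to supply them.
  kubernetes_host    = "https://tenant-cluster.example.internal:443"
  kubernetes_ca_cert = file("${path.module}/tenant-ca.crt")
  token_reviewer_jwt = var.token_reviewer_jwt
}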
6
+ }
5-cybersecurity-predictions-for-2023.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "5 cybersecurity predictions for 2023",
3
+ "url": "https://expel.com/blog/5-cybersecurity-predictions-for-2023/",
4
+ "date": "Dec 21, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 5 cybersecurity predictions for 2023 Expel insider \u00b7 3 MIN READ \u00b7 DAVE MERKEL, GREG NOTCH, MATT PETERS AND CHRIS WAYNFORTH \u00b7 DEC 21, 2022 \u00b7 TAGS: Cloud security / MDR It\u2019s that magical time of year when security folks dust off their crystal balls and do their best to gaze into the future\u2014hazarding a (well-informed) guess at what\u2019s on the horizon for cybersecurity in 2023. A few leaders on the Expel team took some time to reflect on learnings from this year\u2014from our own customers and the broader security community\u2014to share what they think is next for the industry in the new year. Here are their thoughts. 1. The cyber-insurance industry is ripe for disruption. Cyber insurance is an expensive, complex, and difficult necessity in the cybersecurity industry. It\u2019s rapidly becoming a more expensive line item in a Chief Information Security Officer\u2019s (CISO\u2019s) budget, and we can expect new and innovative approaches to risk assessment to emerge. As companies look to secure cyber insurance, they\u2019ll apply additional pressure on their supply chain to provide demonstrable proof that their downstream suppliers are able to respond effectively and in near real-time to cyber incidents\u2014incidents that have the potential to affect the company\u2019s own response (like when Toyota halted production following an attack on a supplier earlier this year). \u2013 Chris Waynforth, General Manager, EMEA 2. Everything old is new again, as attackers bypass MFA by targeting the user. Since \u201csecure by default\u201d configurations have become more common, we\u2019re going to see attackers investing more of their time targeting the user. Our security operations center (SOC) saw this trend in the third quarter (Q3) of 2023, as users increasingly let attackers in by approving fraudulent multi-factor authentication (MFA) pushes to enact business application compromise (BAC) attacks. In fact, MFA and conditional access were configured for more than 80% of the cases where the attackers were successful in Q3. (More on this in our quarterly threat report recap for Q3.) In theory, none of these hacks should have succeeded, but the attacker tricked users into satisfying the request by hitting them with a barrage of MFA notifications until they eventually accepted one. For some organizations, this shift in attacker strategy will drive adoption of technologies like Fast Identity Online (FIDO). For others, especially those that struggled to implement MFA in the first place, it won\u2019t. For those companies that do button up effectively, attackers will turn back to targeting the infrastructure and applications. \u2013 Matt Peters, Chief Product Officer 3. CISOs will have to learn to frame security risk as a business factor. Company boards are having broader conversations around risk and as a result, security leaders will need to translate risk into business outcomes enabled by security investment. As macroeconomic conditions drive changing priorities, security leaders will need to adopt a more framework-based approach to demonstrate return on investment (ROI) for their boards. Security leaders unable to make the connection to business outcomes will struggle career-wise, struggle for budget, and struggle for relevance in the business decision-making processes of their organization. \u2013 Dave Merkel, Chief Executive Officer, Co-founder 4. Macroeconomic impacts will force companies to scrutinize security spend. 
For many security leaders, the changing macroeconomic climate will shift the focus toward cost-conscious decisions and the consolidation of cybersecurity investments. Until now, companies have taken a \u201cmore is more\u201d approach to cybersecurity products and services, tacking on tools to their arsenals to combat the growing threat landscape. But next year, they\u2019ll face tighter budgets and the need to prioritize. This consolidation can be a good thing, as it will force focus on quality outcomes, and a move away from the model of loosely integrated solutions that simply deliver more alerts. Companies have increasingly turned to managed detection and response (MDR) providers to help manage this, and that trend is only going to continue. Many security leaders recognize it can be more effective and economical to optimize their operations with outside experts. For those that do continue to handle this internally, they\u2019ll be pressured to drive cost efficiency, and with greater urgency than in previous years. \u2013 Greg Notch, Chief Information Security Officer 5. The available cybersecurity talent pool is about to get a lot bigger. As tech companies are forced to enact layoffs because of the macroeconomic climate, more professionals with technical skills will enter the job market. For companies fortunate enough to still be in the position to hire, this will present a unique opportunity to select from an increased talent pool of skilled technical workers\u2014at a time when the cybersecurity \u201cskills gap\u201d still makes the headlines daily. Not to mention, the diversity that comes from an expanded hiring pool leads to organizations that are more successful at attracting and retaining employees. \u2013 Dave Merkel, Chief Executive Officer, Co-founder At the beginning of this year, we took a deep dive into the data our SOC ingested from the previous year to predict what was in store for 2022 with our first-ever Great eXpeltations annual report. Keep an eye out for the next iteration of this report, full of year-end analysis and predictions like these, coming in January 2023."
6
+ }
5-pro-tips-for-detecting-in-aws.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "5 pro tips for detecting in AWS",
3
+ "url": "https://expel.com/blog/5-pro-tips-for-detecting-in-aws/",
4
+ "date": "Feb 15, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 5 pro tips for detecting in AWS Tips \u00b7 3 MIN READ \u00b7 BRANDON DOSSANTOS, BRITTON MANAHAN, SAM LIPTON, IAN COOPER AND CHRISTOPHER VANTINE \u00b7 FEB 15, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools Detection and response in a cloud infrastructure is, in one word: confusing. And untangling the web of Amazon Web Services (AWS) can be daunting, even for the most experienced among us. So where do you start? Sometimes better security practices begin with basic, but critical, changes. In this post, we\u2019ll walk you through five pro tips for threat detection in AWS so you can free yourself from a bunch of alerts and get the space back to focus on the alerts that matter most. Prioritize security as part of your culture\u2026 like, yesterday News flash: your security team shouldn\u2019t be the only people concerned about security \u2014 just ask your colleague that fell for yet another phishing scam. If you want a security program that works, it needs to be ingrained into all parts of your business and culture. That means educating all of your users so they understand security best practices, and keeping these best practices fresh in their minds with consistent, office-wide trainings. When security is baked into your culture, frameworks, and solutions, it becomes a day-to-day priority. Set goals along the way to see what does and doesn\u2019t work for your org. Changing the way employees think and feel about security might be an incremental process, and that\u2019s okay! At the end of the day, every employee should at least understand the importance of security, and your Chief Information Security Officer (CISO) should always have a seat at the table. Giving your CISO insight into business decisions upfront helps keep security a top line priority for your whole org from the beginning, so that you\u2019re not playing catch-up down the line. Forget what you know about \u201cnormal\u201d What\u2019s \u201cnormal\u201d anyway \u2014 right? Every AWS environment is unique, which means what\u2019s usual in one environment can be suspicious in another. Before you can automate or write detections, you need to know what\u2019s exposed to the outside world in your cloud environment, take a serious look at container security, and understand what normal looks like in your environment. If you spot unusual user or role behavior, dig deeper. Look at it through a wider lens over the past 24 hours. Does anything look interesting, like multiple failed API calls? Understanding what\u2019s the norm in your environment helps you efficiently tune alerts (and helps tune out that security engineer who\u2019s constantly running penetration tests). Automate, automate, automate Automating elements of your security program helps with consistency, but do it strategically. Start by asking, \u201cWhat problem are we trying to solve?\u201d and work from there to free up resources and speed up time-to-detect. All AWS services are available as APIs, so you can automate just about anything. Know which servers are mission critical and use automation to adjust those alerts for impact so your team doesn\u2019t miss anything. Not to mention, it might help your security team sleep through the night without waking up in a cold-sweat because an alert slipped through the cracks. Lean on logging for better context clues It\u2019s hard to tell a story and determine what happened if there\u2019s no [cloud]trail to follow. Your detections are only as good as your logging. 
Make sure CloudTrail is logging all of your accounts in all regions, not just certain ones, and that no one is tampering with your logging (like turning it off entirely \u2014 yikes). Then, use CloudTrail as an event source to find anomalous or aggressive API usage. We recommend linking MITRE ATT&CK tactics with AWS APIs to filter for the most interesting activity. By the way, here\u2019s a mind map for AWS investigations that lays out some preliminary tactic mapping to make this part easier. Take your time laying the breadcrumbs (re: make sure your logging is up to par). It helps your detections and ultimately speeds up triage and investigation after your team sees an alert. Get back to the basics We get it \u2014 for an industry vet, it can be easy to overlook the basics. But when misconfigurations are a leading vector behind attacks in the cloud, it\u2019s important to make sure you\u2019re brushing up on best security practices in your AWS environment. It sounds simple, but the best way to understand AWS well enough to write detections \u2014 and the key to red team research \u2014 is learning the basics of Identity and Access Management (IAM). Similarly, when thinking about container security, make sure you\u2019re securing every point an attacker can infiltrate. Covering the basics, from IAM to parts of a container, helps you protect your environment and improve your detection writing. See? Simple. Want to know more about some or all of these tips? We did a deep dive into these tips and all things detecting in AWS during Expel\u2019s AWS Detection Day. You can check out each of our session videos here. Still have questions? We\u2019d love to chat!"
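To make the "no one is tampering with your logging" check above concrete, here is a small sketch (ours, not from the original post) that uses the AWS SDK for Go v2 to look up recent StopLogging calls through CloudTrail's LookupEvents API:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := cloudtrail.NewFromConfig(cfg)
	// Look for anyone turning CloudTrail off; any hit deserves a closer look.
	out, err := client.LookupEvents(context.TODO(), &cloudtrail.LookupEventsInput{
		LookupAttributes: []types.LookupAttribute{{
			AttributeKey:   types.LookupAttributeKeyEventName,
			AttributeValue: aws.String("StopLogging"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, ev := range out.Events {
		fmt.Printf("%s by %s at %s\n", aws.ToString(ev.EventName), aws.ToString(ev.Username), ev.EventTime)
	}
}

The same LookupEvents pattern works for any of the ATT&CK-mapped API names mentioned above; swap the attribute value for the call you care about.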
6
+ }
5-tips-for-writing-a-cybersecurity-policy-that-doesn-t-suck.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "5 tips for writing a cybersecurity policy that doesn't suck",
3
+ "url": "https://expel.com/blog/5-tips-writing-cybersecurity-policy-doesnt-suck/",
4
+ "date": "Sep 17, 2019",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 5 tips for writing a cybersecurity policy that doesn\u2019t suck Tips \u00b7 4 MIN READ \u00b7 JOHN LAWRENCE \u00b7 SEP 17, 2019 \u00b7 TAGS: CISO / Framework / How to / Planning Ask anyone who\u2019s worked in cybersecurity for any length of time and I\u2019ll bet you they\u2019ve been asked to draft or contribute to a cybersecurity policy for their org. Creating a \u201cpolicy\u201d sounds simple, but those same people who\u2019ve been tapped to contribute will tell you that it\u2019s not easy. That\u2019s because enterprise-level cybersecurity policy is still a new thing and with new things comes many different interpretations and implementations. It\u2019s also not always easy for policy writers to work with other teams to find that sweet spot where security needs and business needs are balanced \u2026 and without slowing employees down, of course. But drafting a comprehensive cybersecurity policy is critical for enforcing guidelines and reducing liability. Here are some pro tips on what goes into a good cybersecurity policy and how you might use these tips in your own org. What does policy really mean? Before putting pen to paper, you\u2019ve gotta understand what \u201cpolicy\u201d means in the first place. There are lots of terms that get tossed around when a policy is being created, but they\u2019re not interchangeable (even though some people use them that way). Here are a couple terms you might hear during a discussion about policy, along with their definitions: Term Definition Policy What it is: A plan or course of action to guide future decisions. What it answers: What to do and why to do it. Procedure What it is: Describes the exact steps for a policy to be executed. What it answers: Who does what, when they do it, how they do it and what to do specifically. Audit What it is: Measures against a set standard. An objective measurement of security. Common standards include NIST , PCI, IEC 62443. What it answers: Are we meeting our goals? Are we following our policies? Assessment What it is: Measures against the experience of others. A subjective measurement of security. What it answers: Does it seem like we are meeting our goals? How do we feel about how the policy is being followed? Now that we\u2019ve got the basic definitions out of the way, I\u2019ll use them in an example to see how they might actually be used in a conversation about your own org\u2019s policy: \u201cWe\u2019re creating a new cybersecurity policy for the company. This policy will outline goals to guide us in our most important cybersecurity tasks. The policy will state that we\u2019ll conduct an assessment every three months to verify employees are following policy and procedure and an audit every year to ensure that we\u2019re meeting PCI compliance . Further, procedures will be made to provide guidelines and steps on accomplishing the goals set forth by the policy.\u201d Pro tips for writing a policy that doesn\u2019t suck The Valve Employee Handbook , Microsoft Standards of Business Conduct and even the US Constitution \u2014 all of these works come from large organizations and at their core is strong policy writing. What are some of the most important rules of policy writing these works use that we can use as we\u2019re doing our own drafting? 
The stuff you decide to include in your cybersecurity policy will be unique to your org \u2014 and companies\u2019 needs when it comes to cybersecurity vary so widely that we can\u2019t try and cram all of those nuances into a single blog post. But all good cybersecurity policies do share some similar traits. After chatting with lots of Expletives who\u2019ve written and contributed to countless policies over the course of their careers, here\u2019s the final list of pro tips we came up with to help you as you\u2019re drafting your own: Know your business goals. Sounds obvious, but it\u2019s always good to gut check the direction of your policy against the broader business goals. If you\u2019re not aligned with the same stuff the business cares about, you run the risk of cybersecurity being seen as a cost center or deadweight on the company \u2014 not exactly a position you want to be in. Michael Sutton goes into greater depth here on how to create or grow relationships with the other execs on your team so that you\u2019re all on the same page when it comes to goals. Make it practical. Of course you want to create the ideal policy \u2014 but make sure the guidelines you\u2019re creating are realistic for both your users and your own security team (if you\u2019re lucky enough to have one). A common example of an impractical policy is one that includes lots of mandates around sensitive data protection. In these policies, orgs might say things like \u201call confidential data must be marked\u201d and \u201call external transmission of data must be encrypted.\u201d Sure, it sounds good on paper, but your users won\u2019t do this because it\u2019s a headache for them to do manually. Instead, you could ask employees to only mark the data when it\u2019s leaving your org, and then have tech in place to do the secure transfer automatically. Setting realistic expectations for users and your own team gives you a much better chance that the rules you set forth will be followed. Make it applicable. Make sure the policy you\u2019re writing is applicable to your org. For example, every so often a policy will get caught up covering too many specific security examples and how to resolve them. This turns the policy from a document providing direction to a document that\u2019s applicable in only a few specific circumstances. And when a policy is not always applicable people start to ignore it. Be concise. You\u2019re not drafting the Magna Carta here. Keep the policy short and to the point so that employees will actually read it. There\u2019s sometimes a tendency to include a bunch of boilerplate language that \u201call policies must have\u201d \u2014 but don\u2019t do that. The longer the policy, the less likely your users are to internalize it. Write in plain English. All of us cybersecurity folks love speaking in APTs, CVEs, XSS, and LEET (sometimes). But remember that Mike in finance and Karen in sales don\u2019t \u201cspeak\u201d cybersecurity. Write your policy in everyday language so that anyone in your org \u2014 regardless of their knowledge level about cyber threats \u2014 can understand it. Got a draft? Here are your next steps Once you\u2019ve got a draft of your policy, a great way to determine whether your policy passes the sniff test in the five areas mentioned above is to share it with others and ask for feedback. (Bonus: This is a great way to socialize the policy with your executive team and make some new friends.) 
There are also numerous resources you can review as you\u2019re drafting your policy that might help you get a better understanding of what a policy should and shouldn\u2019t cover \u2014 take a look at NATO CCDCOE (NATO Cooperative Cyber Defence Centre of Excellence), NCCoE (National Cybersecurity Center of Excellence) or the NIST CSF (National Institute of Standards and Technology Cybersecurity Framework) for starters. With that, you\u2019re well on your way to becoming the policy whiz kid of the office \u2026 don\u2019t let it all go to your head. John Lawrence is a Security Operations Center intern at Expel. Check out his LinkedIn profile."
6
+ }
6-things-to-do-before-you-bring-in-a-red-team.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "6 things to do before you bring in a red team",
3
+ "url": "https://expel.com/blog/6-things-to-do-before-you-bring-in-red-team/",
4
+ "date": "Jul 8, 2020",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 6 things to do before you bring in a red team Tips \u00b7 6 MIN READ \u00b7 JON HENCINSKI, TYLER FORNES AND DAVID BLANTON \u00b7 JUL 8, 2020 \u00b7 TAGS: How to / Managed detection and response / Managed security / Planning / SOC Remember that time we almost brought down our point of sale environment on a busy holiday weekend because we thought the red team was a real bad guy? Whoah, that would\u2019ve been bad. But we didn\u2019t because we did our prep work. The SOC had a bat phone to the red team and was able to quickly verify the evil \u201c whoami \u201d and \u201c net \u201d commands were from the red team. Crisis averted. Red team assessments are a great way to understand your detection and investigative capabilities, and stress test your Incident Response (IR) plan . But good intentions can lead to bad outcomes if you don\u2019t do your prep work. A red team will generate activity that looks similar to a targeted attack (cue the adrenaline). So a little planning goes a long way. Here\u2019s six things you should do before taking on the red team. 1. Start with objectives Start here. Get clear on your objective(s) to set the direction of the assessment and define the rules of engagement. Worried that an attacker could gain access to a segmented part of your network? Or perhaps you\u2019re worried that an attacker could compromise credentials and spin up resources in Amazon Web Services (AWS)? Clear objectives help everyone. Business-focused objectives usually look like: Break into a segmented part of your network Obtain a VIP user\u2019s credentials (CEO, CTO, IT Administrator, etc.) Access/exfiltrate customer data While these drive the overall theme and end-game for the red team, there\u2019s a set of objectives that often surround the organization\u2019s ability to respond as well. From a defensive perspective some reasonable objectives are: Assess detection capabilities and identify gaps Stress test response and remediation capabilities Assess investigative capabilities in Windows and Linux environments Assess investigative capability in the cloud Goals bring purpose to the assessment. Purpose that should be measured along the way. Some key questions we measure are: How long did it take us to spot the red team? At what phase in the attack lifecycle did we spot them? How long did it take us to remediate? What challenges did we encounter when remediating? Do we need to update our response playbooks? What didn\u2019t we detect? Document these to be actioned later. Were there investigative challenges that prevented us from answering key questions? Document these to be actioned later. 2. Review your IR plan with the team It\u2019s so important to build muscle memory around your IR process before a bad thing happens. This way everyone knows what to do, including how to communicate. One of the biggest challenges is getting over the \u201cadrenaline rush\u201d that comes with responding to an incident. Panic will happen, and chaos will ensue the first couple of times through it. But as everyone gets comfortable with the process and goes through some of the unknowns together, the response process will become a well-oiled machine that everyone is ready for instead of afraid of. From an operator\u2019s perspective, we\u2019re a huge fan of running threat emulations for our analysts. These are miniature versions of a red team assessment that help train our analysts in responding to a specific threat, or testing our own response process. 
There\u2019s a lot of fun to be had here for a blue-teamer who is red curious (remember rule #1 is that objectives are key). For the broader org, we\u2019re biased, but \u201cOh Noes\u201d is a great place to start if you need some help organizing a simulated walk-through of your IR plan (and have some fun in the process). 3. Emphasize remediation We agree with Tim MalcomVetter. The emphasis of a red team should be response. Talk about remediation ahead of time. Ask hard questions like, \u201cwhat would we do if that account was compromised?\u201d Pro-tip: Know ahead of time who in your org to contact for infrastructure questions, service accounts, etc. Sometimes knowing who to call is the biggest hurdle. Plan your response, know who to contact, and then stress test your plans. If your SOC doesn\u2019t have a lot of reps responding to red team activity, remediation may happen without considering business impact. Consider the following: The red team appears to be using the account \u201csql_boss\u201d to move laterally. We should disable that account. Red teams love service accounts. Service accounts typically have privileged access and can be tough to reset. In this scenario, disabling the account \u201csql_boss\u201d would cause the red team some pain. But what else would it do? What does that account run? How is it used? Is it responsible for the backend of a business critical application? Should we disable this account? Can we disable this account right now? There are some not-so-funny stories we can tell here about how this oversight has caused major pain for some organizations. But in essence the major theme is: Do your homework, plan your response and talk about it ahead of time. 4. Set expectations Your blue team just spotted a bad guy moving laterally via WMI to dump credentials on a server? Great find! Will you let them know it\u2019s an authorized red team? There are many theories on how to appropriately assess the response to a red team. Some organizations prefer not to tell their defenders, some prefer to operate more openly in the purple team model. In any case, there will be a moment between detection of the initial threat and the recognition that this is authorized red team activity that you\u2019ll want to plan for. Your SOC will think this is a real threat, and your playbooks for a real threat will (hopefully) be followed. Consider that when you make the decision to include/exclude knowledge of the assessment from key stakeholders in your security organization. One way to think about this is: \u201cat 2am who/how many will be woken up to respond, and how soon in our IR plan do things become a risk to the business?\u201d Our take: The more people in the know, the better. Don\u2019t gas the team responding to an authorized assessment. Save some capacity and energy for the real thing (we\u2019ve seen the real thing happen at the same time as the assessment). 5. Chat with your MSSP/MDR Use an MSSP or MDR? Chat with them. Understand rules of the road for responding to red team activity. It\u2019s likely one of your red team goals includes assessing your MSSP/MDR. That\u2019s great! But understand what you can expect before you get started. At Expel, we like to treat red team engagements as a real threat to exercise our analysts\u2019 investigative muscle, and also showcase our response process. This helps build confidence between us and our customers. 
It also helps them understand how we will communicate with them (Slack, email, PagerDuty) when there\u2019s an incident in their environment. Additionally, it showcases our analysts\u2019 investigative mindset, including a full report to show the detail of our response and the thoroughness of our investigation. Now, as mentioned above, there\u2019s a cost to responding to a red team exercise. Response is time-consuming and analyst resources are extremely valuable. We believe that showcasing the initial response is important, and the extended response can wait. That means if a red team is detected and confirmed at 2am, let everyone go back to bed and pick up the response during normal business hours. For red team response, we operate M-F 9am-5pm and will continue to chase new leads for two business days before delivering a final report. That report is comprehensive, and includes everything our normal critical response would contain, but everyone is much happier at the end of the day when our off-hour energy is saved for the real thing. 6. Have a bat phone to the red team Your MDR or SOC just spotted activity they believe is the red team. Prove it with evidence. Don\u2019t assume! Call them. Show them. Verify it\u2019s the red team using evidence. You would be surprised at how often the lines get crossed when the actions taken during an assessment don\u2019t line up with what was documented as in-scope. The quicker these actions can be confirmed, the happier everyone is \u2013 especially when they turn out not to be the actions of an actual threat. Most SOCs will not stand down until this is confirmed, and we\u2019ve sometimes waited more than 12 hours to get confirmation that something we identified is related to an authorized test. That\u2019s a lot of energy expended on both ends. Have cell phone numbers, Zoom bridges, etc. before you get started. Always have a deconfliction process on hand prior to launching the assessment (see the deconfliction sketch after this post). This will save a lot of your team\u2019s time and energy when the red team gets in. Parting thoughts Red team assessments come in all shapes and sizes, and we believe that they are essential for understanding not only the security posture of an organization but also its overall response readiness. If you\u2019re in a position to influence how a red team assessment is organized, we encourage you to talk about these points not only internally but also with the red team you have chosen to carry out the assessment and with the SOC/MSSP/MDR you will be relying on for defense. Some quick planning and expectation setting can prevent a lot of pain and create an overall better engagement for everyone involved!"
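A note on measuring the engagement: the key questions in section 1 of the post above are easy to operationalize once you record a few timestamps. Here's a minimal Python sketch; the timestamps and field names are invented for illustration and aren't from the post or from any Expel tooling:

# Hypothetical sketch: turn red team engagement timestamps into the
# metrics discussed in the post. All values below are made up.
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

engagement = {
    "initial_access": "2020-07-06 09:15",    # red team gains a foothold
    "first_detection": "2020-07-06 11:40",   # first SOC alert fires
    "remediation_done": "2020-07-07 10:05",  # accounts reset, hosts contained
    "phase_detected": "lateral movement",    # where in the attack lifecycle
}

print(f"Time to detect: {hours_between(engagement['initial_access'], engagement['first_detection']):.1f}h")
print(f"Time to remediate: {hours_between(engagement['first_detection'], engagement['remediation_done']):.1f}h")
print(f"Detected at phase: {engagement['phase_detected']}")

Tracked across engagements, these numbers give you a trend line for detection and remediation speed, which is far more useful than a single pass/fail verdict.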
6
+ }
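One more sketch for the post above: the \u201cbat phone\u201d deconfliction in section 6 boils down to comparing what you observed against the indicators the red team declared in the rules of engagement. This is a toy illustration with invented indicator lists, not a real deconfliction tool:

# Toy deconfliction check: does observed activity match the indicators
# the red team declared up front? (All values are invented.)
declared = {
    "source_ips": {"203.0.113.10", "203.0.113.11"},  # declared red team egress IPs
    "accounts": {"redteam_svc"},                     # accounts they said they'd use
}

observed = {"source_ip": "203.0.113.10", "account": "sql_boss"}

if (observed["source_ip"] in declared["source_ips"]
        and observed["account"] in declared["accounts"]):
    print("Matches declared indicators; still confirm over the bat phone.")
else:
    print("Doesn't fully match declared indicators; treat as real until deconflicted.")

The point isn't the code; it's that the declared-indicator list has to exist before the assessment starts, or there's nothing to compare against at 2am.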
7-habits-of-highly-effective-remote-socs-expel.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "7 habits of highly effective (remote) SOCs - Expel",
3
+ "url": "https://expel.com/blog/seven-habits-highly-effective-remote-socs/",
4
+ "date": "Mar 25, 2020",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 7 habits of highly effective (remote) SOCs Security operations \u00b7 5 MIN READ \u00b7 JON HENCINSKI \u00b7 MAR 25, 2020 \u00b7 TAGS: Employee retention / Managed detection and response / SOC Last week, along with many other businesses, we moved to 100 percent remote work as a company. That included our 24\u00d77 SOC. Expel\u2019s CEO and co-founder, Merk, shared his thoughts on some of the things he witnessed during our shift to an all remote workforce, but I wanted to share some of the changes we made to keep our SOC highly effective in this new setup. Security operations is a team sport at Expel. One of our SOC guiding principles is this: teamwork makes the dream work. It\u2019s simple: great outcomes happen when people work together . But as of last week, our SOC analysts are no longer sitting together. It\u2019s a change I knew that would require us to adapt a bit. Because in order to maintain the texture of the team in a completely remote setting we\u2019d need to commit to a new set of daily habits \u2013 seven in fact, to keep our (remote) SOC highly effective. To be candid: It\u2019s a big change for us and we\u2019re still adjusting. You may be going through something similar right now too. Or you and your SOC team may consider yourselves veterans of an all-remote setting. That\u2019s great too. Now we\u2019re all in the same boat. We\u2019ll share what\u2019s worked for us (so far) and we\u2019d love to hear what\u2019s worked for you too. 1. Prioritize video conferencing Workplace camaraderie and trust are key ingredients of an effective SOC. Trust brings safety and camaraderie adds a sense of \u201ctogetherness.\u201d We trust each other to operate in the best interest of achieving our goal (protecting our customers and helping them improve) and to work with a \u201cwe\u2019re in this together\u201d mentality. We need to maintain and nurture these key ingredients in an all-remote setting. But how? Queue the SOC party line. The SOC party line is the name of our Zoom meeting that\u2019s open 24\u00d77 for the team. Instead of walking onto the SOC floor, our analysts start their day by joining this Zoom meeting. While we\u2019re no longer able to sit next to each other we can be with each other. It matters. We\u2019re emulating the texture of the SOC floor by staying connected via Zoom and maintaining our sense of \u201ctogetherness.\u201d And yes, there\u2019s an endless pursuit to find a funny Zoom virtual background . (Side note: Security is serious business. We have the privilege of helping organizations manage risk. We take our work very seriously but don\u2019t take ourselves too seriously. It\u2019s okay to find the bad guys and have fun while doing it.) 2. When in pursuit: To the breakout room! While our 24\u00d77 Zoom meeting, aka the SOC party line, emulates the SOC floor and brings us together, pursuing threats and coordinating response in this main Zoom meeting wouldn\u2019t yield the precise, coordinated response we\u2019re seeking. Too many cooks in the kitchen. Instead, as work enters the system and the team spots activity that warrants investigation or follow-up, the lead investigator spins up a Zoom breakout room and invites the necessary resources required to run the item to ground. As an individual contributor you\u2019re provided with a virtual conference room with a clear goal and objective. 
As a manager, you have a clear understanding of current utilization based on the number of folks in the main Zoom room versus breakout rooms. You\u2019re enabling a highly coordinated response and have a clear line of sight on capacity. A win-win. 3. Emphasize empathy Empathy is a core competency for leaders. I personally believe that no other skill makes a bigger difference than empathy when it comes to leadership. Simon Sinek agrees with me on this one. And now more than ever, during these stressful times, we need to emphasize empathy. We\u2019re all going through something significant right now. It\u2019s okay to acknowledge that and talk about it with one another. As a SOC management team, we\u2019re spending more time with our people, not less. And most of our 1:1s right now are centered around how our folks are doing and what else we could be doing to set them up for success in this all-remote setting. We listen really hard and most importantly we let them know we\u2019ve got their back. Pro tip: Empathy builds trust. And as you already know, trust is a key ingredient to an effective SOC. 4. Be transparent about quality We\u2019re doing everything we can to make our shift to a remote SOC seamless for the team. But we\u2019re also being super transparent about the quality of our work output. Has our quality gone down as a result of this change? I wrote about our SOC quality program in a previous post, but as a quick recap: we use a quality control (QC) standard, Acceptable Quality Limits (AQL), to tell us how many alerts and incidents we should review each day. We then randomly select a number (based on AQL) of alerts, investigations and incidents and review them using a check sheet. We send the results to the team using a Slack workflow. Here\u2019s an example: Reviewing the results with the team lets us know how we\u2019re doing. It lets us know where we\u2019re having problems so we can adjust and improve. And no, we never expect perfection. (There\u2019s a sketch of this sampling idea after this post.) 5. Over-communicate This one is a bit obvious but it\u2019s worth stating. Since we\u2019re no longer working alongside each other, effective communication is crucial. And working in an all-remote setup may mean more distractions for some folks, not less. We\u2019re emphasizing empathy and listening really hard to learn what these distractions are for the team, and we landed on the need to over-communicate. Repeat important messages in team meetings and 1:1s. In our SOC, \u201cI don\u2019t know\u201d or \u201cI\u2019m having difficulty understanding that\u201d is always an acceptable answer to a question (if you\u2019re not testing for candor in your interview process, you totally should be, by the way). Bottom line: remote work may mean more distractions. Over-communicate like your team depends on it. 6. Seek out fun In these stressful times, not only is it okay to have fun \u2026 but you should seek it out for your team. We\u2019re still finding our way here a bit, but we\u2019ve experimented with happy hours, coffee breaks and book clubs all over Zoom (don\u2019t worry, we\u2019re always watching). The digital happy hour has been the biggest hit so far but we\u2019re still coming up with new ideas. If you don\u2019t have Zoom, Skype, Google Hangouts, FaceTime and Facebook Messenger are all good alternatives. Seeking out fun for your team is a great way to take care of them. You\u2019ll reduce stress and build camaraderie. 7. Test, learn, iterate Completely remote work may be our new normal for a while. 
Do I think the adjustments we\u2019ve made are all of the right moves? Nope. But we\u2019ll continue to test new things, learn from our mistakes and iterate our way to an even more successful remote setup. We\u2019re never afraid to ask: Is there a better way to do this? We\u2019re always trying to learn and improve. Parting words We\u2019re still getting adjusted to our all-remote setup, but we\u2019ve landed on some things that work and wanted to share them with you. We\u2019ll continue to learn and improve, as we always do, but I\u2019d love to hear from you if there are daily habits you and your team practice that make your remote SOC highly effective. Finally, we\u2019re all going through something significant right now. It\u2019s okay to acknowledge that and talk about it. Emphasize empathy with your team and the people around you. Listen really hard. Prioritize effective communication. Over-communicate. And try to have a little fun while doing it."
6
+ }
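The QC sampling described in habit 4 of the post above fits in a few lines of code. A minimal sketch, assuming an invented lot-size-to-sample-size table; real AQL sample sizes come from the ANSI/ASQ Z1.4 tables, and Expel's actual numbers aren't public:

# Hypothetical sketch of AQL-style QC sampling: pick a random subset of
# yesterday's alerts for check-sheet review. The table below is invented.
import random

SAMPLE_SIZES = [(50, 5), (150, 20), (500, 50)]  # (max lot size, sample size)

def sample_for_review(alert_ids: list[str]) -> list[str]:
    size = next((s for lot, s in SAMPLE_SIZES if len(alert_ids) <= lot), 80)
    return random.sample(alert_ids, min(size, len(alert_ids)))

yesterday = [f"alert-{i}" for i in range(230)]
print(sample_for_review(yesterday))  # e.g. 50 alerts to run through the check sheet

Random selection matters here: reviewing only the incidents people remember biases QC toward the interesting cases and away from the routine ones where quality quietly slips.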
7-habits-of-highly-effective-socs.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "7 habits of highly effective SOCs",
3
+ "url": "https://expel.com/blog/7-habits-highly-effective-socs/",
4
+ "date": "Nov 5, 2019",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG 7 habits of highly effective SOCs Talent \u00b7 6 MIN READ \u00b7 JON HENCINSKI \u00b7 NOV 5, 2019 \u00b7 TAGS: Employee retention / Managed detection and response / Managed security / Planning / SOC Before I talk about effective SOCs that run like well-oiled machines, let\u2019s get one thing straight. SOC isn\u2019t a dirty word. But I totally understand the negative connotation and that\u2019s exactly why I\u2019m writing this post. Alert fatigue is real , repetition leads to exhaustion and those two things in tandem create an environment ripe for analyst burnout . I get it. Here\u2019s the thing: When built right, a job working in a SOC can be so much fun, not to mention you get the learning and experience you thought you signed up for. Since launching our 24\u00d77 service almost two years ago we\u2019ve experimented a ton, learned a bunch, and through a lot of iteration landed on some habits \u2014 seven, in fact \u2014 that we believe help us \u201cSOC\u201d the right way at Expel. If you\u2019re working in or managing a SOC with a ton of turnover \u2014 or just want tips on how to shape an effective and more productive team \u2014 here are seven habits to adopt right now. 1. Have a clear mission and guiding principles Get explicit about the mission and your culture. At Expel, the SOC\u2019s mission is to protect our customers and help them improve. The mission is centered around problem solving and being a strategic partner for our customers. Notice that there are zero mentions of looking at as many security blinky lights as possible. That\u2019s intentional. Take it a step further and create some guiding principles. Guiding principles define what you as a team believe in and how you operate together. Here are some (but not all) of the guiding principles in the Expel SOC: Teamwork makes the dream work. Service with passion is our competitive advantage. We embrace positive change. Articulating guiding principles is the first step in creating a SOC culture that you can turn into your competitive advantage. Security tech and process are easily replicated but culture is hard to copy. 2. Prioritize learning Our analysts love to learn new things; it\u2019s even one of the traits we hire for. One thing that we\u2019ve learned in building out our program is that the best way to foster improvement is to combine this love of learning with a collaborative \u2014 not adversarial \u2014 approach. The best example of this is how we use attack simulations to help our team learn new techniques. During these, we have to celebrate progress and opportunities to learn \u2014 it doesn\u2019t take much to make someone feel foolish and have that metastasize into a reluctance to try a new thing or stretch a new skill. If you don\u2019t run attack simulations regularly, start building them into your schedule. But don\u2019t overthink it. You can run one right now in eight simple steps: Talk to the team and given them background so they don\u2019t feel ambushed. Open a PowerShell console. Run wmic /node:localhost process call create \u201ccmd.exe /c notepad\u201d from your PowerShell console to simulate remote process creation using WMI. Run winrs:localhost \u201ccmd.exe /c calc\u201d from your PowerShell console to simulate remote process creation using WinRm. Finally run schtasks /create /tn legit /sc daily /tr c:users <user>appdatalegit.exe to simulate the creation of a malicious Windows scheduled task. Interrogate your SIEM and EDR. Talk about it as a team. 
Find ways to improve. Want to run more sophisticated simulations? Here\u2019s our threat emulation framework along with an example of how to simulate an incident in AWS. 3. Empower the team Analysts want to spend time finding new things, pursuing quality leads and working with people to solve complex problems \u2014 not chasing the same false positive over and over again. Trust the team to filter out the noise and then enable them to do so. How did we build this capability at Expel? We took the DevOps processes used by our engineering teams and adapted them to detection deployment. Here\u2019s a high-level overview of what this looks like: We manage our detection rules using GitHub. We have unit tests for every detection (just like you would expect of code). We use CircleCI to build our detection packages. During the CircleCI build process, we apply linting and perform additional error checking. If a CircleCI build fails, we\u2019ll automatically fail the PR so an analyst knows some additional tweaks are required. We create error codes that are easy to understand. We use Ansible to deploy new detection packages. Now an analyst can deploy a new detection package at any time as long as the content passes automated tests and has been peer-reviewed. Here\u2019s how this plays out in practice. @subtee just tweeted about a new remote process execution technique \u2026 An analyst creates the rule in GitHub and submits a new PR. That PR is picked up by CircleCI, linted and checked for errors. Assuming all goes well, the PR is marked as \u201call checks passed.\u201d The analyst requests peer review. The detection package is deployed using Ansible. Everyone\u2019s happy. (A toy example of a detection unit test follows at the end of this post.) Empower the team to tackle false positives and write rules to find new things. Give them control of the end-to-end system and back them up with good error checking. In doing so, your team members will feel more connected to their work and the mission. 4. Automate SOC work can be repetitive. Automation FTW! But what should you automate? Decision support is a great place to start. What\u2019s decision support? In our context, decision support is all of the automation, contextual enrichment and user interface attributes that make our analysts more effective in answering the following question: \u201cIs this a thing?\u201d How does this play out at Expel? As part of our integration with Office 365, we collect signal and generate alerts when accounts are compromised or user activity doesn\u2019t seem quite right. Investigating patterns of user authentication behavior can be a tedious task when done manually \u2026 but the good news is that it\u2019s a series of repeated steps that can be automated. Take a look at this example where, with the help of some automation, we\u2019re able to quickly review 30 days of login activity based on IP address and user-agent combinations: Automate the repetitive tasks so the team can focus their efforts on making important decisions versus clicking buttons. 5. Use a capacity model Understand your available capacity (AKA analyst hours) and utilization. Are you consistently exceeding your available capacity? Is there always way more work to do than your people can handle? If so, cue the burnout. If capacity modeling is new to you, that\u2019s okay. There are plenty of resources available to help get you started. Bottom line: Know your capacity utilization. If you discover that your team is oversubscribed, you\u2019ll need to act fast. 
6. Perform time series analysis I agree with Yanek\u2019s philosophy here. Effective managers are able to look out into the future and, with reasonable certainty, predict what needs to change today. I think effective management is centered around asking the right questions and using data to answer them. You already know that alert fatigue leads to burnout. As a manager, I ask a ton of questions about the alert management process: How many alerts did we send to the team last month? How many alerts will we send to the team next month? What day of the week is the busiest? Do we get more alerts during the day or at night? How many alerts will we send to the team next year? All of these questions are centered around time. Time series analysis allows you to analyze data in order to learn what happened in the past, and to inform you on what things will likely look like in the future. By performing time series analysis you can forecast how things will change and react before it\u2019s too late. We perform time series analysis on the historical volume of alerts sent to our team for triage. From this data, we pull out different components including trend, seasonality and the noise, AKA \u201cthe residual,\u201d so that we can use patterns from historical behavior to help us predict future behavior (see the sketch below). It lets us not only analyze what\u2019s already happened more deeply, but also look ahead and react now before it\u2019s too late. 7. Measure quality I love this tweet. Quality control doesn\u2019t get in the way. It pushes you forward. At Expel, we use a quality control (QC) standard, Acceptable Quality Limits (AQL), to tell us how many alerts and incidents we should review each day. We then randomly select a number (based on AQL) of alerts, investigations and incidents and review them using a check sheet. QC allows us to spot problems, understand them and then fix them. And fast. Parting words I\u2019ll be candid. At one point I thought about rebranding our SOC as a Computer Incident Response Team (CIRT) to distance ourselves from all the general negativity associated with a SOC. But a SOC can be a great place to work if you solve problems the right way and empower your teams. As an industry, let\u2019s \u201cSOC\u201d the right way and reshape everyone\u2019s thinking about SOCs."
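Here's a minimal sketch of the trend/seasonality/residual decomposition described in habit 6, using statsmodels on synthetic daily alert counts (the data and every number below are invented, not Expel's):

# Hypothetical sketch: decompose daily alert volume into trend,
# weekly seasonality and residual. The series below is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

days = pd.date_range("2019-01-01", periods=180, freq="D")
volume = (200 + np.linspace(0, 40, 180)                    # slow upward trend
          + 25 * (days.dayofweek < 5)                      # weekday bump
          + np.random.default_rng(0).normal(0, 10, 180))   # noise

result = seasonal_decompose(pd.Series(volume, index=days), model="additive", period=7)
print(result.trend.dropna().tail())  # where volume is heading
print(result.seasonal.head(7))       # the weekly pattern

The trend component tells you whether you're drifting toward oversubscription; the seasonal component tells you how to staff weekdays versus weekends.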
6
+ }
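One more sketch for the post above: habit 3 treats detections like code, with unit tests, linting and peer review before deploy. Expel's real rule format and pipeline aren't public, so this toy unittest example just illustrates the "unit tests for every detection" idea:

# Toy unit test for a detection rule, using an invented rule format.
import re
import unittest

RULE = {
    "name": "suspicious_wmic_remote_process_create",
    "pattern": re.compile(r"wmic\s+/node:\S+\s+process\s+call\s+create", re.I),
}

def matches(rule, command_line: str) -> bool:
    return bool(rule["pattern"].search(command_line))

class TestWmicRule(unittest.TestCase):
    def test_fires_on_remote_process_create(self):
        self.assertTrue(matches(RULE, 'wmic /node:host1 process call create "cmd.exe /c evil.exe"'))

    def test_ignores_benign_wmic_query(self):
        self.assertFalse(matches(RULE, "wmic os get caption"))

if __name__ == "__main__":
    unittest.main()

A rule that ships with tests like these can be deployed by any analyst with confidence, which is the whole point of the CI pipeline the post describes.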
a-beginner-s-guide-to-getting-started-in-cybersecurity.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A beginner's guide to getting started in cybersecurity",
3
+ "url": "https://expel.com/blog/a-beginners-guide-to-getting-started-in-cybersecurity/",
4
+ "date": "May 31, 2018",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A beginner\u2019s guide to getting started in cybersecurity Talent \u00b7 9 MIN READ \u00b7 YANEK KORFF, BEN BRIGIDA AND JON HENCINSKI \u00b7 MAY 31, 2018 \u00b7 TAGS: Career / Guide / How to / NIST It happens from time to time. Someone tweets something incendiary, it creates a hubbub and before long you\u2019ve got yourself a veritable online brouhaha. One topic that seems to have piqued everyone\u2019s interest lately is this question: is there such a thing as an entry-level security job? It\u2019s a good one. And there seem to be two schools of thought: Never start off in security. Start with IT infrastructure, helpdesk, or development. Don\u2019t waste time, dive into security and fill in the technical gaps as you go. Here at Expel, we agree with Dino\u2019s philosophy . First of all, start anywhere you damn well want to start. \u201cFocus on what you want to do , versus what you want to be . Then, focus on finding the best place to do that and stay there.\u201d We\u2019ve seen it first hand. We\u2019ve hired several analysts straight out of college, and they\u2019re doing excellent work (If you\u2019re an employer and not plugged into the community at the Rochester Institute of Technology , and specifically working with their Computer Security program, you\u2019re definitely missing out). So we know there are degree programs out there that will prepare you for security jobs right off the bat. Now that you know where we stand, we\u2019ve got some tips on how to break into security . But there are lots of different jobs with the title \u201csecurity\u201d in them (and lots of jobs involving security that don\u2019t have \u201csecurity\u201d in the title) so it\u2019ll be important to make sure we know which ones we\u2019re talking about. Which cybersecurity jobs are we talking about? Wouldn\u2019t you know it, not only does NIST have a pretty great cybersecurity framework to help you manage risk , they\u2019ve also got another nice framework that can help job seekers figure out what employers are looking for. A good first step towards finding the work you want to do is to identify the tasks that float your boat and map them to jobs that give you the opportunity to do just that. Worried you don\u2019t have the technical depth for some of these roles? Entirely possible! If you drill into the framework a bit you\u2019ll see some jobs (like Cyber Defense Analysis , which we call a \u201cSOC Analyst\u201d) have an enormously long list of knowledge areas you\u2019ll need to be proficient in. If that\u2019s the kind of job you want to do, it might make sense to start off with a less technically demanding role that has a lot of the same baseline prerequisites like an IT Program Auditor . You could use that as a stepping stone into other security roles as you develop a deeper understanding of the security space. And yes, you could certainly start with a role in Systems Administration or Network Operations to gain technical chops too. \u201cWait a sec,\u201d you might be thinking to yourself, \u201cisn\u2019t this just a cop out by defining non-security roles as security?\u201d Yes, it absolutely is. You got us. Frankly, as the NICE Framework makes clear, security is extraordinarily broad. While some argue it\u2019s \u201cniche,\u201d it\u2019s really a compendium of niche knowledge across several vastly different work areas. That means if your mind (or your heart) is set on security, you can enter any of these domains and work your way into security. 
Or \u2026 you can start in security-specific domains and work your way into more technical roles over time. Okay, so maybe you buy into the argument that the security domain is pretty diverse. Maybe you go one step farther and believe several of these roles include security responsibility even if they don\u2019t have \u201csecurity\u201d in their title. After all, we\u2019ve been saying that security needs to be built-in, not a bolt-on, for years, right? Perhaps what\u2019s going on here is that the online brouhaha around \u201centry-level security jobs\u201d is really focused on the security jobs where technical depth is essential. Maybe the argument is it\u2019s these jobs that require starting out in technical non-security roles first. Let\u2019s poke at that a bit. But first, there are a few things that\u2019ll apply no matter what direction you\u2019re coming from. Let\u2019s try to agree on three things Anyone can cook Have you seen the movie Ratatouille? No? Yeah, that seems to be the most common answer. Ok, let\u2019s summarize [SPOILER ALERT]. There\u2019s this chef, Auguste Gusteau, who authors \u201cAnyone Can Cook.\u201d Throughout the movie, you\u2019re made to believe that the message of the book (and the movie) is that literally anyone can become a great chef. Even the protagonist, a rat, can do it because you can learn how to do it from a book. Yet, by the end of the movie, you realize the point is substantially more profound and realistic. Actually, no. Not everyone who picks up the book can become a great chef. But, in fact, a great chef could potentially come from anywhere. There are so many paths to \u201csuccess.\u201d There are exceptions to every rule. Anyone can cyber. \u201cNever\u201d is rarely the right word A few years ago one of us was walking up Main Street, USA at the Magic Kingdom. It was 8:30am and he refused to buy his younger daughter funnel cake first (oh, the humanity!). \u201cYou never buy me anything!\u201d she exclaimed. He stopped. He looked around. He kept walking. The notion that you should avoid absolutes isn\u2019t new. And in the tech space, it\u2019s particularly important. A great engineer and former colleague once said: \u201cWhen the customer says it never happens, we need to build support for it to happen 5-10% of the time.\u201d So we\u2019re going to be cautious about these words when we\u2019re talking about career paths too. Broad-scale discouragement is a Bad Thing\u2122 When you engage in an argument or even a mild discussion, there\u2019s a decent chance your conversation partner is already coming to the table with an opinion. If it\u2019s a strongly-held opinion, your counter-argument may actually galvanize their original belief. In that case, your discouragement is going to fall on deaf ears \u2026 so why bother? In other cases, people may have a more flexible mindset. Think about a scout versus a soldier mindset. To a soldier, everything is black and white. Good and evil. Kill or be killed. Compare that to a scout, who\u2019s in information gathering mode all the time. Drawing conclusions is some general\u2019s job. Discouragement, in this case, could actually be effective! So good job, you\u2019ve managed to discourage a portion of the population who could actually have been amazing contributors in the field. What harm is there in succeeding or failing on one\u2019s own merit? Why encourage people to punt on first down? 
Five habits that are helpful for (entry-level) security jobs If you don\u2019t agree with the three items above, well \u2026 it might be a good idea to stop reading now because we\u2019re about to do some hardcore encouragement, and that might make you grumpy. After all, the next great information security practitioner could be reading this blog right now. Also, we promised in the title to explain how to get into cybersecurity. So here are a few practical next steps. There are all sorts of resources out there that\u2019ll help you on the path towards becoming a super-nerdy cyber superhero. Here\u2019s our list of five things you can do to take the first steps to an entry-level technical cybersecurity career. 1. Survey the field Follow influential cybersecurity evangelists on Twitter. The most successful ones probably aren\u2019t calling themselves cybersecurity evangelists. They\u2019re just constantly dropping knowledge bombs, tips and tricks that can help your career. Here\u2019s a short list to get you going: @bammv, @cyb3rops, @InfoSecSherpa, @InfoSystir, @JohnLaTwC, @armitagehacker, @danielhbohannon, @_devonkerr_, @enigma0x3, @gentilkiwi, @hacks4pancakes, @hasherezade, @indi303, @jackcr, @jenrweedon, @jepayneMSFT, @jessysaurusrex, @k8em0, @lnxdork, @mattifestation, @mubix, @pwnallthethings, @pyrrhl, @RobertMLee, @ryankaz42, @_sn0ww, @sroberts, @spacerog, @subtee, @taosecurity 2. Combine reading and practice This may shock you, but there\u2019s this security company called Expel that has a bunch of great content (full disclosure: we\u2019re biased). Self-serving comments aside, there are several companies that produce high-value security content on a pretty regular basis. High on our list are CrowdStrike, Endgame, FireEye, Kaspersky, Palo Alto\u2019s Unit 42, and TrendLabs. As you read, try to figure out how you\u2019d go about detecting the activity they describe. Then, how would you investigate it? Are you looking to grow your technical foundation for something like an analyst role? The breadth of what you need to know can be daunting. Perhaps the most foundational knowledge to pick up is around the TCP/IP protocol suite. Be prepared to answer the \u201cwhat happens when\u201d question confidently. For learning about endpoint forensics, you probably can\u2019t get a better foundation than Incident Response and Computer Forensics 3rd Edition. The chapter on Windows forensics is gold. Dive into PowerShell and its associated attack frameworks, and learn how to increase visibility into PowerShell activity with logging. Pair this knowledge with some of the best free training out there at Cobalt Strike. Watch the (most excellent) videos and apply the concepts you\u2019ve learned as part of Cobalt Strike\u2019s 21-day trial. Not enough time? Consider making the investment. The Blue Team Field Manual and Red Team Field Manual round out our recommendations on this front. In parallel, set up a lab with Windows 7 (or later) workstations joined to a domain. Compromise the workstation using some of the easier techniques, then explore post-exploitation activity. Your goal is to get a feel for both the attack and defense sides of the aisle here. On the network side, consider The Practice of Network Security Monitoring, Practical Packet Analysis, and Applied Network Security Monitoring. 
When it comes time to take some of this book learning and make it real, resources like the malware traffic analysis blog and PacketTotal can help you get a sense for what\u2019s \u201cnormal\u201d versus what\u2019s not. Your goal here should be to understand sources of data (network evidence) that can be used to detect and explain the activity. To refine your investigative processes on the network, consider Security Onion. Set up some network sensors, monitor traffic and create some Snort/Suricata signatures to alert on offending traffic (there\u2019s a sample signature after this post). Your goal is to establish a basic investigative process and, like on the endpoint side, understand both the attack and defense sides of the equation. 3. Seek deep learning, not just reading Have you ever taken a class and then months later tried to use the knowledge you allegedly learned only to discover you\u2019ve forgotten all the important stuff? Yeah, if you disconnect learning from using the knowledge, you\u2019re going to be in a hard spot. This might be one of the biggest challenges in diving into a more technical security role up front. To help offset this, in addition to combining reading with practice, consider the Feynman technique. Never heard of it? Well, it\u2019s easy to skim over bits and pieces you don\u2019t understand \u2026 but if you can distill it down into simple language such that others could understand it, then you\u2019ll have understood it better in the process. Nothing helps you learn quite like teaching. 4. Develop a malicious mindset Years ago, a security practitioner was explaining how you can become a better defender by thinking like an adversary. The story came with some awkward (and humorous) interchanges. He walked into a hotel room with his family while on vacation, saw the unsecured dispenser installed into the shower wall and said out loud, \u201cWow, it would be so easy to replace the shampoo with Nair!\u201d His family was horrified. To be clear: we\u2019re not advocating that you replace shampoo with Nair, or similarly nefarious anti-hair products. And the concept of thinking like an attacker is not new. Eight years ago when Lance Cottrell was asked what makes a good cybersecurity professional, he said they put \u201cthemselves in the shoes of the attacker and look at the network as the enemy would look at the network and then think about how to protect it.\u201d The best way to do that these days is by wrapping your head around the MITRE ATT&CK framework. It\u2019s quickly becoming the go-to model for putting some structure around developing an investigative process and understanding where (and how) you can apply detection and investigation. You might want to familiarize yourself with it prior to doing extensive reading and then come back to it from time to time as needed. 5. Be dauntless Don\u2019t let your lack of knowledge stop you. There are organizations out there willing to invest in people with the right traits and a desire to learn. Apply for the job, even if you don\u2019t think you\u2019re qualified. Maybe you get a no. So what? Try again at a different company. Or try again at that same company later. Reading will only get you so far \u2026 applying your knowledge will get you to the next level. And guess what, remember that Feynman technique? Yeah, teaching that knowledge you\u2019ve acquired to others will get you one level farther. Good luck, happy hunting! 
Finally \u2026 to those who say \u201can IT background and deep technical skills will help you get a job in security,\u201d we say: \u201cWe agree!\u201d And \u2026 to those who say \u201csecurity roles can be broad and you can use them to develop technical expertise over time,\u201d we say: \u201cWe also agree!\u201d What we don\u2019t believe in is telling people we don\u2019t know that they can\u2019t do something without understanding their unique situation. There may be paths that are generally easier, or generally harder. But assuming you can\u2019t do something is a headwind you don\u2019t need. Hopefully you\u2019ve found some guidance here that gives you the push you need to consider an entry-level (or later) security job \u2013 and that you\u2019ll apply. To that end, we say \u2026 best of luck!"
6
+ }
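Following up on step 2 of the guide above (the Security Onion exercise): a first Snort/Suricata signature really can be one line. A toy example, with an arbitrary message and a SID picked from the local-use range:

alert icmp any any -> $HOME_NET any (msg:"LOCAL lab test - ICMP echo request"; itype:8; sid:1000001; rev:1;)

Drop it into your local rules file, trigger it with a ping, and then walk the alert back to the packets that fired it; that end-to-end loop is exactly the investigative process the books describe.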
a-cheat-sheet-for-managing-your-next-security-incident.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A cheat sheet for managing your next security incident",
3
+ "url": "https://expel.com/blog/cheat-sheet-managing-next-security-incident/",
4
+ "date": "Aug 24, 2017",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A cheat sheet for managing your next security incident Tips \u00b7 5 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 24, 2017 \u00b7 TAGS: Planning / Security Incident Surviving the unexpected. On the face of it, security is pretty straightforward. We\u2019re operating in one of two modes. In Mode A we\u2019re focused on keeping evildoers at bay (and other generally bad things from happening). In Mode B the bad things have happened and we\u2019re doing the best we can to manage them. For most people A > B. But we don\u2019t get to choose when the bad guys show up. When they do , we\u2019re often out of practice because we have so much less experience responding to attacks than we do preparing for them. In a perfect world, there\u2019s a comprehensive incident response plan that involves legal, communications, the board, and technical response processes. In an even more perfect world, you\u2019ve put that plan through a table-top exercise, refined it based on your learnings, and drilled it to the point of muscle memory. But few of us live in that perfect world. That\u2019s OK. All is not lost. If you haven\u2019t yet got that perfect incident plan in place you can still make the best of a bad situation and manage your organization back on level ground. Here are six things I recommend. 1. Control your emotions and the velocity First and foremost, it\u2019s important not to freak out. Your job is to manage the incident in front of you and return the organization to \u201cnormal.\u201d Letting your emotions get the better of you will just get in the way of reaching that goal. It may be difficult to settle your emotions, but there are ways to help. First, get organized by putting a set of facts and tasks together to help you focus on the event at hand rather than the emotions surrounding it . Also, take care of yourself. Eat. Rest. Don\u2019t be afraid to take a step back (or a walk around the block) once in a while. It will help you maintain perspective and control your emotions. Pace of response is also important. You need to drive response activities but \u2013 like Icarus \u2013 you\u2019ll only be successful if you stay away from the extremes. Move too fast and you\u2019ll have wasted work, missed opportunities and poor decisions that could make you look like the Keystone Cops . Move too slowly, and you\u2019ll jeopardize the integrity of your organization as attackers continue to have access and do damage. There\u2019s no clear rule of thumb here, but as each meeting goes by and each day passes, make sure you\u2019re thinking about the velocity of activities and adjust tasking appropriately. 2. Build a team and assign roles You can\u2019t respond to an incident all by yourself. No matter how big or small your organization is, you need help. Build a team that\u2019s appropriate for the response and assign everyone discrete roles. Without roles, you\u2019ll have people stepping on each other\u2019s toes and gaps where there should be work. You\u2019ll want to engage legal, communications, key executives, IT leaders and technical staff. Make sure each person knows what they\u2019re expected to do, the level of effort and the need for confidentiality. But be careful. Don\u2019t bring in too many people \u2013 especially if you\u2019re dealing with an insider incident. Controlling information gets harder as more people get involved. So, think carefully about who you involve when insiders are involved. 3. 
3. Communication is key Regular meetings are important to keep everyone on the same page. You\u2019ll be bringing together individuals from across the organization. They don\u2019t normally work together and they won\u2019t be familiar with each other\u2019s communication styles or skills. By meeting at least once or twice a day, you\u2019ll help the team integrate rapidly and ensure your response activity doesn\u2019t suffer from lack of information sharing. And while internal communication is critical, make sure you\u2019re also looking beyond your own four walls to your customers, vendors, board, and the public at large. Controlling the message while an incident is unfolding is difficult. And it shouldn\u2019t be your responsibility \u2013 not just because you\u2019re busy, but because you are probably not good at it. Being transparent but also communicating facts externally in a way that is consistent with your brand is complicated. Educate your communications staff about the incident and hold them accountable for messaging the appropriate parties. 4. Don\u2019t jump to conclusions Nothing is worse than a public statement about an incident that later has to be completely changed because an organization made an assumption during the incident that turned out to be false. I was once pulled away from a vacation with my family because my corporate website was \u201cunder attack\u201d according to our network operations center. We spent half a day working with that hypothesis, trying to shore up our DDoS defenses and control traffic. When we actually stepped back and looked at the facts, we discovered our marketing department had launched a new ad campaign without telling IT. It was swamping us with new users. Within a few minutes, we contacted marketing and had them turn the dial down to levels our infrastructure could handle. Deal with the facts you have, not the facts you want or the assumptions you brought to the table. Jumping to conclusions without sufficient facts damages your credibility with stakeholders. More important, it can lead to poor assignment of resources and cause greater harm to your organization as attackers are allowed continued room to operate. 5. Save the post-mortem for the actual \u201cpost\u201d While you\u2019re figuring out \u201cwhat\u201d happened, it\u2019s often easy to drift into thinking about \u201cwhy\u201d it happened. Assigning blame and tracking down the root cause of an incident may seem like a good idea, but it can inflame emotions and distract you from the task at hand. If you see your teammates diving into the \u201cwhy\u201d of the incident, remind them that the team will do a post-mortem after the incident and ask them to stay focused on their tasking. Usually, the promise of the post-mortem is enough to keep things on track. Then, once the incident is resolved, make sure you actually do the post-mortem analysis. Addressing the root cause of an event is important to the long-term integrity of your organization. Give everyone a few days to rest and deal with their normal job functions, but try to have a post-mortem meeting within a week after the event. 6. Start building a real incident response plan When the dust has settled, sit down with all your notes, emails, and random facts. Marvel that you were able to deal with such a complex situation with nothing but your wits and your skills. And vow to never, ever do it like that again. 
Creating a solid incident response plan will ensure that when things go wrong again (and they will go wrong), your organization is better prepared to deal with the event. Did you notice something? None of these recommendations are overly technical. In my experience, when incident response goes wrong it\u2019s not because there wasn\u2019t competent technical staff. It\u2019s because there was no clear leadership for the staff to follow. \u2014 So today, while you\u2019re still working on your full incident response plan (and before anything bad has happened), let me offer a three-minute plan and a three-hour plan that will leave you better prepared to manage your organization the next time you face an incident. If you\u2019ve only got three minutes: get your phone out, make a list of the people across the organization that you\u2019ll need to work with if an incident happens and make sure you have them on speed dial. If you\u2019ve got three hours, go a step further: set up meetings with each of them and tell them what their role would be if an incident ever arises. Trust me, the time you spend doing this will be paid back tenfold when that time is most valuable \u2013 during your next incident."
6
+ }
a-common-sense-approach-for-assessing-third-party-risk.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A common sense approach for assessing third-party risk",
3
+ "url": "https://expel.com/blog/a-common-sense-approach-for-assessing-third-party-risk/",
4
+ "date": "Jul 26, 2018",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A common sense approach for assessing third-party risk Security operations \u00b7 12 MIN READ \u00b7 BRUCE POTTER \u00b7 JUL 26, 2018 \u00b7 TAGS: Example / How to / Planning \u201cHow secure is your supply chain?\u201d It\u2019s a question that can strike terror into the heart of a CISO \u2013 even one who\u2019s in charge of a mature security organization. With the move (sprint?) to cloud-based infrastructure, and business departments subscribing to SaaS apps left and right (\u201cOops! was I supposed to tell IT?\u201d), every day we rely more and more on other peoples\u2019 services to serve our customers. Here at Expel, we\u2019re a \u201ccloud first\u201d organization. Our entire enterprise\u2019s physical infrastructure fits easily on one desk. But we use the capability of nearly 50 vendors to bring our services to our customers. That\u2019s a lot of infrastructure that\u2019s not ours. And we\u2019re a relatively small company. Large companies may depend on hundreds of outside services. Understanding how all those services keep their customers (meaning \u2026 you) secure is no trivial matter. But it\u2019s super important. CISOs manage cyber risk in their own infrastructure every day. But once you leave your own infrastructure, it gets harder. And there aren\u2019t a lot of playbooks for how to manage the risk of someone else\u2019s infrastructure. Third parties are out of your control. You give them money, they provide a good or service in return. Sometimes, there\u2019s even contractual language that says \u201cwe\u2019ll do our best to secure your data.\u201d But, in practice, those words don\u2019t really mean much. What matters is the practices, procedures, and policies your vendors follow. At Expel, like many companies, we\u2019ve created a third-party assessment program for our vendors to try to manage our supply chain risk. We\u2019ve used other companies\u2019 third-party assessment programs as input, consulted our vendors and done a lot of research. It works well for us, and so we\u2019re sharing it with you, along with the third-party risk assessment questionnaire we\u2019ve developed. Watch the video overview \u2026 or keep scrolling to read on First \u2026 be realistic about who chooses your suppliers Unfortunately (at least for CISOs), security doesn\u2019t control who the organization does business with. Business owners do. And the questions they have on their mind are very different than what most CISOs are wondering. As you roll your program out, it\u2019s important to understand the business owner\u2019s mindset so you can figure out when, where and how to insert your own process into theirs. When a business owner has a problem, they probably want to fix it fast. They want to know if the product or service they\u2019ve got their eye on will do the trick. If the answer is \u201cyes\u201d (and they\u2019ve got the budget) they\u2019ll move forward, negotiating contracts, agreeing on cost and ultimately making the purchase. Meanwhile, the CISO is thinking, \u201cDoes this vendor create an acceptable level of risk?\u201d Getting answers means acting fast \u2013 while the business owners are chasing down answers to their own questions. If a potential vendor doesn\u2019t address security in a way you\u2019re comfortable with, the sooner you know that the better. 
It\u2019s much easier to guide the business away from potentially toxic companies early in the process than to stop a contract that\u2019s gone through all the redlining and negotiation and is one inch from the finish line. Next \u2026 set realistic expectations (aka understand the constraints) Setting realistic expectations for your third-party assessment program requires understanding two important equations that\u2019ll govern how much time you and your vendors are willing to put in. They seem simple. But it\u2019s easy to get so caught up in the weeds perfecting your process that you lose sight of them. Violate equation number one and vendors will start stretching the truth to get through all of your questions or bury the bad stuff to try and get your business. Violate the second equation and you\u2019ll find yourself giving away a free risk assessment or pen test to every potential vendor (more on that later). Remember, SaaS providers are getting bombarded left and right with third-party assessments. Short, easy questionnaires will get their attention before long, complex ones. Likewise, you don\u2019t have a lot of time to dedicate to this either. The more complex the questions, the longer you\u2019ll have to spend vetting the results. Short, simple and to the point is far more likely to get to a result that\u2019s useful \u2013 both for you and your vendors \u2013 than some crazy, multi-page questionnaire. Keeping things simple has multiple benefits. When in doubt, use the \u201c50 at 50\u201d rule Striking the balance between thorough and brief reminds me of a saying from when I used to crew for a friend who raced cars out in West Virginia. The sanctioning body for the races required that cars be painted in a professional manner. Anyone who\u2019s been around amateur racing knows that very little about it qualifies as \u201cprofessional.\u201d The rule of thumb the officials used was \u201c50 at 50\u201d \u2026 that is, when you looked at a car traveling 50 miles per hour from 50 feet away, did the car look like it was painted? If the answer was \u201cyes,\u201d you were good to race. That\u2019s sort of how I view third-party assessments. If your process gives you the same level of assurance about your vendors\u2019 security processes as \u201c50 at 50\u201d gives racing officials, you\u2019re doing things right. Sure, there are some situations that require far more diligence than that (stay tuned!), but in most cases, you\u2019re just trying to get a general feel for things. Ultimately, even organizations with great practices and procedures will screw up sometimes. Nothing you do in your third-party assessment program will change that. The common sense process for third-party assessments There are three big chunks to any third-party assessment program: creating the questionnaire, designing the process and running it (told you it would be \u201ccommon sense\u201d). Of course, not every situation will fit neatly into your process. We\u2019ll cover the outliers too. But, to get started, you need to create your questionnaire. 1. Creating your questionnaire The questions you ask your vendors will be taken seriously by them \u2026 or at least they\u2019ll look at them seriously and try to figure out what you mean. It\u2019s important to write crisp, clear questions that vendors can easily understand and have a clear way to answer. The meat of your questionnaire is the questions themselves. We\u2019re providing our third-party risk assessment questionnaire as a starting point for you. 
Hopefully this\u2019ll let you speed through this step. We like these questions because they cover a wide swath of cybersecurity without being too detailed. They\u2019re also aimed at making it easy for vendors to re-use work they\u2019ve already done. Asking about existing certifications and the results of previous testing reduces friction in the process. Really, we want to ask questions we think will get answered truthfully and quickly. Focusing on reuse is one strategy for that. We\u2019ve also designed our questionnaire to sleuth out how much thought and care a vendor has put into security in general. For example, when we ask \u201cDo you have a formally appointed information security officer?\u201d we get a different vibe when the answer is \u201cYes, here\u2019s our CISO\u2019s contact info,\u201d versus \u201cNot really. Our lead developer cares a lot about security though.\u201d Simple questions like this give you a great window into how a potential vendor thinks about security. 2. Building the process Developing the questions is only one piece of the prep work that you\u2019ll need to do. How you\u2019re actually going to manage the process is equally important. The process we\u2019ve designed breaks down into the following six steps. Your exact process will, no doubt, have to be tailored a bit to the way your organization buys products and services. We\u2019re not suggesting that you can do a direct cut-and-paste of our process. But hopefully it can be an advanced starting point for you. Here\u2019s a quick overview of how we thought about each step as we created our own third-party assessment process. Step 1: Kicking off the process We created a set of criteria to determine which external vendors need to go through the process. Vendors that make the cut include: Services that will impact production systems Services that contain customer or other sensitive data Systems that aggregate data from multiple data sources. If someone is trying to use a new service that fits one of these situations, they send a request for review to a security review email alias, noting what the service is and how we\u2019re going to use it and providing points of contact at the vendor. Step 2: Send an introduction It\u2019s a bit awkward to send an email to a potential vendor demanding a bunch of information without first introducing yourself, the process and what they should expect. At Expel, the first thing we send to the vendor is a cordial email describing our process, the relatively casual and light-touch nature of it and an invitation to ask questions or engage if they have concerns. We also let them know our desired timeline (usually we ask for a response within about two weeks). Step 3: Send the real email Next, we send the real email. We use our secure file sharing system to send this email so that all communications are encrypted and their response is protected on its way back to us. You don\u2019t have to do this, but it\u2019s advisable, especially if you\u2019re asking for copies of sensitive documents such as their SOC 2 and pen test executive reports. Step 4: Send a reminder After a week and a half has gone by, we\u2019ll send a gentle reminder if we haven\u2019t heard anything. That\u2019s usually enough prodding to get us answers right under our two-week request. Step 5: Receive and analyze the results Hopefully, when you get the vendor\u2019s answers back they make sense, are reasonably complete and, if you\u2019re lucky, they\u2019re even comprehensible. 
Sometimes we\u2019ve had to go back to ask vendors for clarification on an answer or two, and that\u2019s OK. Keeping in mind the \u201c50 at 50\u201d mentality, once you have the answers, balance them against the business request and determine if you\u2019re willing to move forward with the vendor or if there are concerns that need to be addressed. Step 6: Brief the business owner(s) Once we\u2019ve got our heads around all of the vendor\u2019s answers, we give the business owner our opinion. When the results are positive, the conversations are easy. When we have concerns, that\u2019s when things get more difficult. It\u2019s a good idea in those cases to involve more people on the business side than just the requester (team leads, managers, etc.). You\u2019re going to get into a risk-oriented decision about how important this specific vendor is to the company and what the security risks are. The results of that meeting can vary wildly, but usually will fall into one of four buckets: Yep. Cool. Go for it. We can put in compensating controls to make up for lack of assurance in the vendor. We need a deeper dive to better understand the risks. No. Nope. Negative. Not going to use them. It\u2019s very important not to treat these decisions as binary. The reason you\u2019re doing a third-party assessment in the first place is to manage risk. Risk is a continuum, as it were, and you should treat your third-party vendor assessment process the same way. 3. Running the process Once you\u2019ve got your questionnaire and process figured out, test it on a few vendors. Be very up-front with them; let them know this is your first time trying out your third-party vendor assessment questionnaire and you\u2019d love feedback on both the material itself and the overall process. You\u2019ll find some vendors are well prepared for these kinds of requests and will have a team dedicated to answering them. Other vendors will respond with \u201chuh, this is the first time anyone\u2019s asked us about security.\u201d Be prepared for that and everything in between. Take any feedback you get and weigh it appropriately against the work you\u2019ve already done and your objectives for your third-party assessment program. After you\u2019ve tested the process on a few vendors (or, later, after you\u2019ve run the process for a year or two), iterate. Feel free to change it up. As you grow, your risk appetite changes. As the state of the art among your vendors improves, you might want to modify your process to suit your needs. You don\u2019t need a forever \u201capples to apples\u201d comparison over the years. Rather, you need each response to provide you the information you need right now to make the decision that\u2019s in front of you. That information will change over time, and your process should too. Keeping track of the results You\u2019ll likely get lots of confidential documents back from your vendors when they reply to your questionnaire. You\u2019ll want to make sure you protect them according to the terms of any non-disclosure agreements you signed with them. Be sure to follow whatever your internal procedures are with respect to protecting that information. Also, we\u2019ve found that it\u2019s helpful to create one place to track all of the assessments \u2013 upcoming requests, active ones, and assessments we\u2019ve completed. We store all the responses, supporting documents and our notes in one place. 
We\u2019ve chosen Confluence for that since we use the Atlassian suite for a lot of our engineering and security workflow already. You should choose whatever makes sense in your organization. But be aware, you\u2019ll build up quite a pile of information quickly, so being organized early will pay off as your program grows. Hooking the process into the way your organization buys stuff Having a process is all well and good. But, unless you socialize it and have a clear way to plug it into the way your organization buys stuff, your third-party assessment program can quickly turn into shelfware. It\u2019s important to set the hook early in the process to get the best results. That hook can take many shapes: The procurement process: When a business unit requests a new PO, your purchasing department can simply ask, \u201cWhat does Security think of this?\u201d Knowing a PO won\u2019t be cut unless there\u2019s a clear answer to that question will force business owners to engage your process early so you\u2019re not playing catch up. Contract review: A slightly different take, but the same basic idea. When a contract is put in front of legal to review, they can ask, \u201cWhat does Security think of this?\u201d as well. Again, if business owners know they can\u2019t get through legal without clearing security, they\u2019re going to engage you early. That\u2019s just the way it is: Rather than have a specific gate, you can communicate with leaders and purchasers that new products and services are subject to a third-party assessment as part of doing business. If it\u2019s discovered that someone bought something without an assessment, There Will Be Consequences\u2122. Just like there are when people buy product outside of purchasing, right? Right? Whatever you decide, be sure to communicate it widely and often. New processes that affect how you buy services tend to take a while for everyone to understand and accept, so putting together a good PR campaign can\u2019t hurt your cause. Also, be sure the \u201chow to submit\u201d part of your process is clear. At Expel we use Jira\u2019s Service Desk as the portal where users can submit third-party assessment requests and track progress. We already use Service Desk for IT and other ticket tracking so it was an easy solution. YMMV and all that\u2026 be sure to choose a method of engagement that works for you and your company. Vendors that are bigger than your breadbox There may be times when the product or service you\u2019re evaluating is too big, too important or represents too much risk to apply the \u201c50 at 50\u201d rule. In these cases, you\u2019ll likely end up doing a more formal risk assessment to understand the risks they present in more depth so you can compensate for any issues you can\u2019t get the vendor to fix. Risk assessments are complicated (I addressed them in an O\u2019Reilly Security talk here if you\u2019re interested). They can be done either by your own staff or a third party. Either way, I have two points of caution: Don\u2019t give out a free pen test If you engage a third party to assess your vendor\u2019s product it\u2019s easy for your vendor to ultimately get a free pen test that you unwittingly pay for. So, if you hire a third party, make sure they\u2019re working on your behalf and use your business needs as the backstop for their work. That\u2019ll make sure the final product is geared towards you and your business, not the vendor and their product. 
Make sure you don\u2019t accidentally do a pen test or risk assessment The other common mistake when you dive deeper is not realizing that you\u2019re diving deeper. You get the questionnaire back and you have questions \u2026 so you ask the vendor a few more questions. Things are clearer, but still not clear. So, you ask \u201cHey, can we take it for a test drive?\u201d You get their product, configure it, start testing it and suddenly realize you\u2019re doing a product assessment and you\u2019re already 40 hours into the process and probably have 80 more hours to go before you\u2019re done. As you start peeling back the onion, make sure you\u2019re doing it deliberately and for a reason. Don\u2019t spend more time and effort on a third-party assessment than you need to. Oh \u2026 and make sure to avoid these common pitfalls Finally, there are a couple of other pitfalls you\u2019ll want to make sure you avoid as you launch (or refine) your third-party vendor assessment program. Adding to the questionnaire Be wary of asking too many questions or diving too deep. You\u2019ll quickly reach a point where vendors don\u2019t want to answer and it takes you too long to assess the results. It\u2019s not worth it. If you decide to do a full-fledged risk assessment, then by all means, dive in the deep end. But if you\u2019ve got a question you feel you must add to your questionnaire, find one (or two?) that aren\u2019t giving you any value and swap them out. Again, the simpler and shorter your questionnaire is, the more likely you\u2019ll get accurate and timely responses. Believing all the answers It\u2019s human nature to not want to fail tests. That applies to vendors responding to third-party assessment requests. They want to appear as compliant as possible, so you can expect they\u2019ll take a few liberties in their answers. While it\u2019s unusual to find a vendor that flat-out lies (saying they\u2019re SOC2 Type 2 compliant when they\u2019re not, for example), you may find vendors occasionally stretch the truth enough to \u201cpass.\u201d So, when you\u2019re answering the question \u201cAm I OK using this vendor,\u201d assume their answers are eighty percent correct. That\u2019s it There you go. That\u2019s Expel\u2019s third-party vendor assessment program in a nutshell. There are many like it, but this one is ours. Hopefully it gives you a jump start on building your own program. Please take a look at our questionnaire, and feel free to use, modify, and comment on it as you see fit. I\u2019d also suggest taking a look at the NIST cybersecurity framework self-scoring tool that I created. It allows you to create charts that show your current and future security posture based on the NIST CSF, and it includes a section on supply chain risk. If you have comments you\u2019d like to share on this process, the questionnaire or the NIST tool, please reach out to us and let us know. We\u2019re always trying to improve and would love for you to help us with that."
6
+ }
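The questionnaire step in the post above stops short of showing example items. As a purely illustrative sketch (the fields and wording below are our own, not Expel's published questionnaire), a few reuse-friendly questions plus a one-place tracker record might look like this:

```python
# Hypothetical questionnaire items in the spirit of the post: short, reuse-friendly,
# and aimed at gauging how much thought a vendor has put into security.
QUESTIONNAIRE = [
    {"id": "Q1", "question": "Do you have a formally appointed information security officer?"},
    {"id": "Q2", "question": "Which certifications or attestations do you hold (e.g., SOC 2 Type 2)?"},
    {"id": "Q3", "question": "Can you share the executive summary of your most recent pen test?"},
]

# A minimal tracker record, mirroring the advice to keep upcoming, active, and
# completed assessments organized in one place. All field names are illustrative.
assessment = {
    "vendor": "ExampleCo",         # hypothetical vendor
    "status": "active",            # upcoming | active | completed
    "requested": "2020-01-06",
    "reminder_due": "2020-01-16",  # the "week and a half" gentle nudge
    "answers": {},                 # filled in as responses arrive
}
```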
a-defender-s-mitre-att-ck-cheat-sheet-for-google-cloud.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "A defender's MITRE ATT&CK cheat sheet for Google Cloud ...",
3
+ "url": "https://expel.com/blog/mitre-attack-cheat-sheet-for-gcp/",
4
+ "date": "Aug 5, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A defender\u2019s MITRE ATT&CK cheat sheet for Google Cloud Platform (GCP) Security operations \u00b7 2 MIN READ \u00b7 KYLE PELLETT \u00b7 AUG 5, 2022 \u00b7 TAGS: Cloud security / MDR Our security operations center (SOC) sees its share of attackers in Google Cloud Platform (GCP). Seriously\u2014check out this recent incident report to see what we mean. Attackers commonly gain unauthorized access to a customer\u2019s cloud environment through misconfigurations and long-lived credentials\u2014100% of cloud incidents we identified in the first quarter of 2022 stemmed from this root cause. As we investigated these incidents, we noticed patterns emerge in the tactics attackers use most often in GCP. We also noticed those patterns map nicely to the MITRE ATT&CK Framework \u2026 (See where we\u2019re going with this?) Cue: our new defender\u2019s cheat sheet to MITRE ATT&CK in GCP. What\u2019s inside? In this handy guide, we mapped the GCP services where these common tactics often originate to the API calls they make to execute on these techniques, giving you a head start on protecting your own GCP environment. We also sprinkled in a few tips and tricks to help you investigate incidents in GCP. It\u2019s an easy-to-use resource that informs your organization\u2019s GCP alert triage, investigations, and incident response. Our goal? Help you identify potential attacks and quickly map them to ATT&CK tactics by providing the lessons learned and takeaways from our own investigations. Depending on which phase of an attack you\u2019re investigating, you can also use the cheat sheet to identify other potential attack paths and tactics the cyber criminal used, painting a bigger (clearer) picture of any risky activity and behaviors that can indicate compromise and require remediation. For example, if you see suspected credential access, you can investigate by checking how that identity authenticated to GCP, if they\u2019ve assumed any other roles, and if there are other suspicious API calls indicating the presence of an attacker. Other tactics that an attacker may execute prior to credential access include discovery, persistence, and privilege escalation. What\u2019s the bottom line? Chasing down GCP alerts and combing through audit logs isn\u2019t easy if you don\u2019t know what to look for (and even if you do). Full disclosure: the cheat sheet doesn\u2019t cover every API call and the associated ATT&CK tactic. But it can serve as a resource during incident response and help you tell the story (to your team and customers) after the fact. Knowing which API calls are associated with which attack tactics isn\u2019t intuitive, and we don\u2019t think you should have to go it alone. We hope this guide serves as a helpful tool as you and your team tackle GCP incident investigations. Want a defender\u2019s cheat sheet of your own? Click here to get our GCP mind map! P.S. Operating in Amazon Web Services (AWS) or Azure too? We didn\u2019t forget about you\u2014check out this AWS Mind Map and Azure Guidebook for more helpful guidance. Special thanks to Ryan Gott for his contributions to this defender\u2019s cheat sheet and mind map."
6
+ }
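To make the mapping idea in the post concrete: the cheat sheet pairs GCP API calls with ATT&CK tactics so an analyst can tag suspicious audit-log entries. Here is a minimal sketch; the methodName values are real Cloud Audit Logs methods, but the tiny mapping and helper are our illustration, not the cheat sheet's contents:

```python
from typing import Optional

# Illustrative mapping of GCP audit-log API methods to ATT&CK tactics.
# A simplified example for demonstration, not Expel's actual cheat sheet.
METHOD_TO_TACTIC = {
    "google.iam.admin.v1.CreateServiceAccountKey": "Persistence / Credential Access",
    "SetIamPolicy": "Privilege Escalation",
    "v1.compute.instances.insert": "Execution",
}

def tag_log_entry(entry: dict) -> Optional[str]:
    """Return the suspected ATT&CK tactic for a Cloud Audit Logs entry, if any."""
    method = entry.get("protoPayload", {}).get("methodName", "")
    return METHOD_TO_TACTIC.get(method)

# A truncated, hypothetical audit-log entry for a new service account key:
entry = {"protoPayload": {"methodName": "google.iam.admin.v1.CreateServiceAccountKey"}}
print(tag_log_entry(entry))  # -> Persistence / Credential Access
```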
a-tough-goodbye.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "A tough goodbye",
3
+ "url": "https://expel.com/blog/a-tough-goodbye/",
4
+ "date": "Aug 10, 2021",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A tough goodbye Expel insider \u00b7 2 MIN READ \u00b7 BRUCE POTTER \u00b7 AUG 10, 2021 \u00b7 TAGS: Company news After nearly five years serving as Expel\u2019s CISO (pronounced \u201cciz-oh,\u201d for those wondering), I\u2019m moving on to new adventures. But before I leave, I wanted to share a bit about my journey with Expel. Expel is an incredible company. I honestly mean that. Even from the beginning, Expel impressed me. In 2016, I had the opportunity to be the technical advisor to the Obama administration\u2019s Commission on Enhancing National Cybersecurity. It was a fascinating experience, to be sure. One of the things I heard from all the companies and agencies I interacted with was that many of them had a similar shared experience that can be best summed up like this: \u201cI\u2019ve done everything I\u2019m supposed to do and bought all the tech I\u2019m supposed to buy. I still don\u2019t feel like I see what\u2019s happening in my environment, and don\u2019t think my provider is actually finding the bad things.\u201d At the time, I remember thinking, \u201cYep, that\u2019s how it is,\u201d and I didn\u2019t have any real ideas on how to do better. How it started I got a call from Yanek, one of Expel\u2019s founders, who was on the hunt for a CISO for this new company he was helping to start and was hoping I might have some recommendations. Always happy to help a friend, I asked him what Expel was doing and told him I\u2019d see if I could find anyone who might be interested. He told me the plan for Expel: The founders wanted to disrupt the managed security space, hook into existing investments companies have made and automate not just the detection but also the investigative and recommended remediation activities. After listening to the pitch, I thought, \u201cThat\u2019s it! That\u2019s the thing nearly everyone I\u2019ve talked to in the last year needs.\u201d I offered up that I\u2019d be willing to be Expel\u2019s CISO. I interviewed with the other execs (including a really memorable one with Pete Silberman), and I ended up with the job\u2026even if we couldn\u2019t agree on how to pronounce C-I-S-O. How it\u2019s going Fast forward almost five years, and it\u2019s been a blast. Seeing the initial vision of the company come to fruition is awesome. I\u2019ve had customers tell me our service has changed their lives; that they finally get to see their kids\u2019 sporting events for the first time in forever\u2026I\u2019ve seen companies grow and build their internal security programs without having to deal with the day-to-day stress of security operations. And I\u2019ve seen Expel grow too. This company has always been an incredible place to work, a place where everyone supports each other both professionally and personally. In my role as CISO, I oversee not just security, but IT and facilities as well. I can\u2019t overstate the quality of work done by this team. We\u2019ve published some of the work we\u2019ve done (like our 3PA process , the NIST CSF self-scoring tool and NIST Privacy Framework self-scoring tool ) but there\u2019s lots of good work this team has done that the public doesn\u2019t get to see. I\u2019m thankful for them and so proud of their work. Although I\u2019m off to a new adventure and excited about the future, it\u2019s safe to say I\u2019ll miss Expel and its band of merry Expletives. Thanks and see you around To our customers: I\u2019m happy we\u2019ve been able to make a difference for you. 
To my coworkers, I\u2019ve enjoyed working with all of you and you\u2019ve made me a better person during my time at Expel. And to my family, thanks for your support on this adventure and the next one. I\u2019m not going far \u2014 if you want to chat about third-party risk (that\u2019s a great topic for cocktail parties, by the way) or just say hello, you can still find me in your favorite CISO Slack community, at ShmooCon and on Twitter."
6
+ }
a-year-in-review-an-honest-look-at-a-developer-s-first-12.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "A year in review: An honest look at a developer's first 12 ...",
3
+ "url": "https://expel.com/blog/developers-first-12-months-at-expel/",
4
+ "date": "Aug 16, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG A year in review: An honest look at a developer\u2019s first 12 months at Expel Talent \u00b7 8 MIN READ \u00b7 DAREN MCCULLEY \u00b7 AUG 16, 2022 \u00b7 TAGS: Careers / MDR At Expel, it should be no surprise that we value transparency. It\u2019s one of those core beliefs that makes us tick. One way we practice transparency is by providing open and candid insights into our interview and onboarding process, but what about beyond the first 90 days? Well, let\u2019s talk about it\u2014because that\u2019s what we do here. Recently, senior software engineer, Daren McCulley, used his Expel-oritory Time\u2014more on this later\u2014to reflect on his first year as a new developer at Expel. In this post, learn about Daren\u2019s experience with the interview process, major takeaways from the early days, and the personal and professional growth that came along the way. The goal? We hope that providing a peek behind the curtain will help you make the most informed decision when deciding if becoming an Expletive is right for you. Take it away, Daren! Let\u2019s start at the beginning When I think back to the interview process with Expel, what I remember most is that I was never in the dark about what was next or where I stood. In contrast to the other interviews I\u2019d been through, the process was transparent, respectful of my time, and gave me a window into Expel\u2019s culture. Our technical interviews are collaborative experiences, rather than inquisitions by whiteboard. It only took two weeks to go from my initial screen to my final interview, and my recruiter extended an offer that same evening\u2014allowing me plenty of time to compare it with offers from other companies. The thoughtfulness given to my personal circumstances\u2014understanding that I needed to weigh all of my options to make the best choice for me\u2014was the first of many times I\u2019ve witnessed Expel demonstrate another core belief: If we take care of our crew, they\u2019ll take care of our customers. At the risk of stating the obvious, I accepted my offer. What to expect in the early days At Expel, we definitely hit the ground running\u2014but don\u2019t expect to go it alone. In your first week, you can expect to commit code to production (from the comfort of your own home), but a group of people will come together to make it happen. It\u2019ll go something like this: Prior to day one, you\u2019ll get a new laptop, monitors, keyboard, trackpad, and dock in the mail. You\u2019ll also have access to some discretionary funds to make your home office sparkle. Daren\u2019s home office setup On your first day, someone from IT will guide you and other new Expletives through laptop and account setup. IT works hard to make this a fairly painless process, so things will probably work out of the box (if they don\u2019t, IT is always just a Slack message away). When joining the Engineering department, one of the first people you\u2019ll meet with over Zoom is a member of the Core Platform team to walk you through setting up your dev environment. Spoiler alert: I\u2019m a big fan of this team. They treat the rest of engineering as well as we treat our customers\u2014and they aren\u2019t alone. There are several teams at Expel whose primary mission is enabling the rest of us. 
Just check out this screenshot of a chat I had with one of our managers of site reliability engineering (SRE), Reilly Herrewig-Pope (hey, Reilly ), early on: Right off the bat, your manager provides a list of tasks and resources to help you get up to speed. For example, you can browse several recorded videos where subject matter experts introduce a cornerstone of Expel\u2019s tech stack\u2014which they helped design and build. Then, when you feel ready, one of your new teammates will hand-pick and shepherd you through your first issue. This is when the real fun begins\u2026 Completed Jira ticket, five days after Daren\u2019s start date TIL in year one We move fast and trust our tech At Expel, we use Gitflow for several of our primary repositories. All code is peer reviewed, checked for proper test coverage, and eventually merged into the develop branch\u2014kicking off continuous integration and continuous delivery (CI/CD) and ending in a deployment to our staging environment. We cut and merge a new tagged release every day from develop to main, which deploys the latest code to production. These daily releases require trust in the process and infrastructure to catch and handle human errors. I learned this lesson early on. On my third day, I pushed a bad database (DB) migration that would\u2019ve broken our staging environment. Not only did the automated migration process catch the error and rollback the transaction protecting the DB, but when the first Kubernetes pod failed to run the migration, the existing pods stayed live and didn\u2019t deploy the broken image. Staging kept working as expected for everyone depending on it, while I chased down and patched my bug. It was a huge relief to know that I had a safety net I didn\u2019t have earlier in my career because Expel invested in resilient infrastructure. Having a talented group of SREs designing, building, and maintaining a system that protects us from ourselves is only one part of what makes our daily release cycle work. Every feature team at Expel has a dedicated quality assurance (QA) engineer who considers each issue that needs testing carefully. I pride myself on attention to detail, but, more often than not, our QA still finds edge cases I didn\u2019t consider. That\u2019s because their involvement begins long before I merge code and mark an issue as pending acceptance. Our QAs take part in backlog grooming, where they help define testable acceptance criteria and ask questions. This pushes us to confront the devil in the details with all stakeholders present, so that we don\u2019t waste time writing code based on incorrect assumptions. We\u2019re still a startup If you want to maintain legacy Java code, or push pixels and patch bugs for a PHP application in LTS, this gig might not be for you. Similarly, if you like being a Software Engineer II and knowing that, if you meet your commit quota, you\u2019ll be eligible for Software Engineer III in two years\u2014this probably isn\u2019t for you. Even though Expel is no longer a handful of people in a barn with a dream and a whiteboard, it still feels scrappy out of necessity. Our chips are on the table behind two very ambitious bets that require constant evolution and development: We integrate with damn near anything, and We empower humans with the data to make sound judgements, and automate the rest. These bets are what keep things interesting, and demand creative problem solving from our engineers. 
We have swimlanes but don\u2019t operate in silos To build complex systems, software engineers rely on abstraction to hide complexity behind well-defined interfaces. There\u2019s a parallel to this in how our teams are structured at Expel. As an application developer, I don\u2019t bear the principal responsibility for designing user interfaces (UIs), setting sprint priorities, or managing infrastructure. Instead Expel offers me a seat at the table, where I can collaborate with designers, product managers, and SREs to build software that solves the highest-priority problems in a way that\u2019s scalable. Through these relationships, I\u2019ve grown my skills in all of these disciplines and, more importantly, my ability to effectively communicate with people in these roles. We run towards the fire We have a Slack channel called \u201cgotime.\u201d This is where high-visibility incidents are first reported before they\u2019re spun-off into dedicated channels and Zooms. One of the most remarkable affirmations of Expel\u2019s culture is the number of people that join the fight immediately following one of these incidents\u2014regardless of who is responsible or who owns the code. Our support of one another extends beyond incidents. Whenever I need help, I always find someone willing to lend a hand. There\u2019s a lot to like about Expel, but the people I have the privilege to work with will always be at the top of that list for me. Opportunities for personal growth In addition to the growth we experience on the day-to-day (that\u2019s the nature of the job), Expel encourages us to attend one conference per year and provides a budget of $2,500 to make that happen. This year, I flew out to San Jose for a Postgres conference. I was honestly surprised by how simple it was to get the trip approved, book travel, and submit expenses. Not to mention, we have access to tools like Pluralsight for curated online training. But access to material isn\u2019t enough. You also need time and space to invest in continued education. My team let me spend an entire sprint building a foundation in one of the JavaScript (JS) frameworks we use, so that I could approach future issues with more experience and confidence. FYI: we write the majority of our applications in Go, JS, or Python, which gives you the opportunity to become (or remain) proficient in three in-demand languages. Every quarter, we set aside two days called Expel-oritory Time (remember this from the intro?), where the entire product organization can work on whatever they want. Folks often elect to form small, cross-team groups to hack away on some experimental feature, explore our data in a new and interesting way, or use the time to write a blog post\u2014like this one. (Side bar: while I can\u2019t yet speak from experience, we also have a 12-month BUILD program for managers, designed to give you practical skills through ongoing learning and practice.) \u2026and professional growth Like I\u2019ve said, transparency is foundational at Expel. Information normally held close to the chest at other companies, like compensation or the state of the business, is shared openly. That principle applies to our workplace relationships as well. I have candid 1-on-1s with my manager every week where we discuss how things are going, any obstacles she can help me overcome, and what the next steps are for my journey at Expel and beyond. 
She\u2019s transparent about my performance, and we chat openly about challenges I\u2019m facing and what I should focus on to reach the next milestone in my career. From day one, I\u2019ve had someone in my corner considering my individual circumstances, who never made me feel like a replaceable cog in a corporate machine. We\u2019re building a product that meets customers where they are in their security journey, which means we need people with different points of view at the table. It\u2019s part of the reason equity, inclusion, and diversity are hugely important at Expel\u2014it\u2019s another one of those core beliefs: \u201cbetter when different.\u201d We\u2019re a stronger organization when we recognize, celebrate, and learn from those whose backgrounds and perspectives are different from our own. We also have four employee engagement groups (ERGs) to support that: BOLD (for Black employees), WE (for the women of Expel), The Treehouse (for LGBTQ+ employees), and The Connection (for mental wellbeing)\u2014all of which are open (and welcoming) to allies. We\u2019ve added more than 180 new Expletives since I started, and there are a whole lot of open positions and opportunities for career advancement (BTW, we\u2019re hiring). You won\u2019t be pigeonholed here. The opportunity to apply for new roles arises often, giving you a chance to find your perfect fit or try something new. Looking back (and ahead)\u2026 I knew from the interview process that Expel was the right choice for me\u2014and my confidence in that choice has only grown over my first year. Most professions require some amount of continued education, but the pace of change in software engineering takes this requirement up a notch. Working for a company that understands the value of investing in its workforce, and that provides the necessary space and time to experiment, truly supports my personal and professional growth. Every job comes with a unique set of challenges and Expel has no shortage of hard problems. The difference\u2014and the reason I\u2019m looking forward to year two\u2014is the people I get to face down those challenges with. If I\u2019ve sold you on Expel, or you think it\u2019s too good to be true and want to ask some questions, check out our open jobs. If you\u2019re anything like me, you won\u2019t be disappointed."
6
+ }
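The transactional safety net described in the post above (a failed migration rolls back, so the broken image never goes live) is a common pattern. Expel hasn't published its pipeline, so this is only a minimal sketch of the general idea using psycopg2 against a hypothetical database:

```python
import psycopg2

def apply_migration(dsn: str, migration_sql: str) -> bool:
    """Apply a schema migration inside a single transaction.

    Postgres DDL is transactional, so if any statement fails the whole
    migration rolls back and the schema is left untouched; a deploy script
    can then halt the rollout before new pods go live.
    """
    conn = psycopg2.connect(dsn)
    try:
        with conn:  # psycopg2 commits on success, rolls back on exception
            with conn.cursor() as cur:
                cur.execute(migration_sql)
        return True
    except psycopg2.Error as exc:
        print(f"migration failed and was rolled back: {exc}")
        return False
    finally:
        conn.close()

# Hypothetical usage: a False result tells the deploy script to stop.
ok = apply_migration("dbname=staging", "ALTER TABLE alerts ADD COLUMN triaged boolean;")
```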
add-context-to-supercharge-your-security-decisions-in.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "Add context to supercharge your security decisions in ...",
3
+ "url": "https://expel.com/blog/add-context-to-supercharge-your-security-decisions-in-expel-workbench/",
4
+ "date": "5 days ago",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Add context to supercharge your security decisions in Expel Workbench Security operations \u00b7 2 MIN READ \u00b7 PATRICK DUFFY \u00b7 MAY 12, 2023 \u00b7 TAGS: Cloud security / MDR / Tech tools Defenders need so much information to make good security decisions in the security operations center (SOC). Situations constantly evolve\u2014employees join and leave the org, new technology gets onboarded, unexpected risks surface, and so much more\u2014it\u2019s hard for the SOC to keep up with ever-changing conditions throughout the organization. The good news is that all of these changes create contextual information that Expel and our customers use to make smart decisions. The more we know about your environment and your users, the easier it is for our software\u2014and by extension our SOC analysts\u2014to determine which events require remediation. With this in mind, we\u2019ve introduced a new capability which allows you to add business context to Expel Workbench\u2122 that helps our SOC team reduce the time-to-decision on alerts and relieve the burden on your team. Adding context to Workbench Here\u2019s how it works: On the \u201cContext\u201d page in Workbench, users can add new context and see all existing context that has been previously added by your organization or our SOC team. Think of context as information about a user or situation that\u2019s helpful to know when making a decision about a security alert. It\u2019s like a virtual sticky note with directions like: Every time you see user X, be aware that they often travel outside the country. This gives Expel important information about the user\u2019s location that could help quickly resolve alerts generated about logins from different countries when traveling. On this page, you can edit context, add descriptions and notes, change users and more. You can also see a history of who created the context, who updated it, and when, and you can create categories to quickly group and find types of context being added in Workbench. You can also upload lists of context, like IP addresses or emails that belong to specific groups. Highlight essential information Once added, you can highlight this context in Workbench to call attention to important pieces of information. This serves as a digital sticky note for analysts to share information and learnings about an environment. For example, if we know that specific prefixes are used for admin hosts, we can add context calling out that host is an admin to provide situational awareness so analysts can make the right call on whether and how to act on an alert. This is visible to Expel SOC analysts and customers, meaning you have insight into how analysts work alerts, investigations, and incidents. More valuable ways to add context Context allows you to easily make updates as employees leave the organization or change roles. For example, you can add context for the CEO\u2019s email address along with specific intel into Workbench, knowing that CEOs are often targets of phishing attacks. If the CEO leaves the org, you can update or remove the email address and all the associated detections and workflows update automatically. Another way to use context is to make note that specific indicators of compromise (IOC) have been linked to a threat actor within the environment. For example, the SOC can take note that the auto host containment remediation action needs to be taken immediately if a specific IOC is seen as alert. 
For example, if they see the domain faceb00k.com using zeroes instead of O\u2019s. Making Expel work for you Context is just one more way to customize Expel to your specific environment. Be sure to check out the Context page under Organizational Settings to see what context you already have in place and consider additions that would be helpful."
6
+ }
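To picture the "virtual sticky note" idea as data, here is a hypothetical sketch of context records and the lookup an analyst workflow might perform. The schema is our illustration; Workbench's actual fields may differ:

```python
# Hypothetical context records in the "virtual sticky note" spirit of the post.
# Field names are illustrative, not Workbench's actual schema.
CONTEXT_RECORDS = [
    {
        "category": "travel",
        "value": "user.x@example.com",
        "note": "Frequently travels internationally; foreign logins may be expected.",
        "created_by": "customer",
    },
    {
        "category": "ioc",
        "value": "faceb00k.com",  # homoglyph domain: zeroes instead of O's
        "note": "Linked to a known threat actor; contain the host immediately.",
        "created_by": "soc",
    },
]

def notes_for(observed_value: str):
    """Return any sticky notes an analyst should see for an observed value."""
    return [r["note"] for r in CONTEXT_RECORDS if r["value"] == observed_value]

print(notes_for("faceb00k.com"))
```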
an-easier-way-to-navigate-our-security-operations-platform.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "An easier way to navigate our security operations platform ...",
3
+ "url": "https://expel.com/blog/an-easier-way-to-navigate-our-security-operations-platform-expel-workbench/",
4
+ "date": "Apr 4, 2023",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG An easier way to navigate our security operations platform, Expel Workbench Security operations \u00b7 4 MIN READ \u00b7 KIM BIELER \u00b7 APR 4, 2023 \u00b7 TAGS: MDR / Tech tools When it comes to security operations, speed and ease-of-use are critical for making the best decisions and judgments quickly. It\u2019s important that analysts see what they need to see, and can get to the information they need as intuitively as possible. That\u2019s why we\u2019re excited to announce upgrades to the navigation within our security operations platform, Expel Workbench\u2122. Our offerings and capabilities have evolved as the security needs of our customers have grown, so we redesigned the navigation to make it even easier for our clients to manage security operations. The new design makes navigation within Workbench even more flexible, easy-to-use, and downright good looking. And the kicker is that these changes were all driven by you\u2014our customers. Let\u2019s take a look at what\u2019s new. Sidebar navigation The most noticeable change is that we shifted the horizontal navigation to the sidebar. This gives us more room for the essential tools we offer today and the capabilities we plan to build in the future, and makes it easier for you to get to the tools you need, fast. Alert ticker You\u2019ll also notice we\u2019ve moved the alert ticker to the top of the interface, which makes it easier to see the most essential information first. The alert ticker links directly to all critical, high, medium, low, and tuning alerts, and is ever-present throughout Workbench for easy access. Custom detection rules We moved the Custom Detection Rules view from our Settings page to our Detections page. This improvement helps you better understand what will raise Expel alerts in your environment, in addition to any custom lookout, add-to investigation, and noisy alert suppressions created. New location for Actions One of the most important questions our customers ask when working with Expel\u2019s security operations center (SOC) during an investigation or incident is, \u201cWhat\u2019s on our team\u2019s plate?\u201d We\u2019ve made it simple to get to that to-do list by moving our Actions page to the top of our information architecture in the navigation. With one click, you now see all outstanding to-do items for the team, Expel\u2019s SOC, or our bots, for any investigation or incident. Breadcrumbs Sometimes you go down a rabbit hole, checking out all the awesome work done during an investigation or incident\u2014we get it. We\u2019ve introduced breadcrumbs at the top of each page to make it simple to jump back to the starting point of your journey through Workbench. Why we made these changes We continuously ask ourselves: how can we make our users\u2019 jobs easier and their experience in the product more intuitive? We spoke to customers, collected feedback and discovered new ways to simplify how clients use the product today and provide flexibility for how the product will expand in the future. Our mission with the new navigation design therefore centered around four goals: Use navigation space more efficiently and provide room to grow. Create a high-level information architecture that makes even more sense. Reduce clicks to the important and frequently used parts of the platform. Align Workbench with the brand palette and iconography. Since we launched, we\u2019ve scaled Workbench significantly to keep up with ever-evolving security needs. 
We\u2019ve added half a dozen dashboards; entire new offerings like threat hunting , phishing , and managed detection and response (MDR) support for Kubernetes ; and tools like context, configurable notifications, and the NIST CSF. The original horizontal navigation could no longer expand to accommodate existing features, never mind the accelerating pace of enhancements and new offerings we knew were coming soon. We wanted to make ground-breaking features like the detections strategy UI and additional offerings like hunting easier to find and use. When customers have a consistently good experience across touchpoints, that creates a sense of assurance and trust\u2014which is especially critical in security, when customers are trusting us to keep their organization safe. That\u2019s why the colors and icons you see on the website now carry through to our Workbench platform. How this helps you We hope that the new navigation makes your work easier and faster. We know that this is an essential tool you use every day\u2014so making it even more enjoyable to use will improve your workflow and help keep your organization safe. Here are a few specific details we think you\u2019ll appreciate: The features are there when you need it, and out of the way when you don\u2019t. You can get where you want to go with fewer clicks. It\u2019s easier to see how the platform is structured and where you are in that structure. More of the features are visible and discoverable. A glimpse into the design process To ensure our new Workbench navigation design aligns with your needs, we followed the proven user experience process of research, iteration, testing, and change management. Research: We had a lot of hunches and opinions about what needed to change, but we weren\u2019t designing for ourselves. So early on we conducted a card-sorting exercise with our customers, asking them to sort the features and categorize them. This research helped us understand what needed to be visible in the main navigation versus what could be listed in the secondary navigation. Iteration: There\u2019s never one right way to solve a design problem. The team experimented with different layouts, colors, icon choices, and organizational schemes. Testing: A key concern for the redesign was how it would affect analyst efficiency. We\u2019re proud of our response times, and if the new navigation slowed analysts down by even a second per alert, that could meaningfully affect our service level objectives (SLOs), which was out of the question. So we did a staggered release to the SOC and had analysts kick the tires for several weeks while we watched efficiency metrics. Change management: A project like this doesn\u2019t get designed, built, and released overnight. It\u2019s a change management effort that involved months of communication, resourcing and planning discussions with engineering, and the creation of a tiger team to execute the design and plan the roll-out. Check it out If you haven\u2019t logged into Workbench since this update, I encourage you to jump in and explore."
6
+ }
an-expel-guide-to-cybersecurity-awareness-month-2022.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "An Expel guide to Cybersecurity Awareness Month 2022",
3
+ "url": "https://expel.com/blog/expel-guide-to-cybersecurity-awareness-month-2022/",
4
+ "date": "Oct 4, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG An Expel guide to Cybersecurity Awareness Month 2022 Tips \u00b7 5 MIN READ \u00b7 GREG NOTCH \u00b7 OCT 4, 2022 \u00b7 TAGS: MDR Fall is in the air, which can only mean one thing: Cybersecurity Awareness Month is here. Every year, the National Cybersecurity Alliance (NCA) and the Cybersecurity and Infrastructure Security Agency (CISA) use October to share information and important resources to help people stay safer and more secure online. It\u2019s a favorite for us at Expel because it\u2019s about education and awareness at a time that isn\u2019t a reaction to the cyber-threat or attack du jour. Instead, we can take a step back to share information and resources within the defender community and anyone with an online presence\u2014which, let\u2019s face it, is just about everyone. Expel is also a proud Champion of Cybersecurity Awareness Month 2022 \u2014a collaborative effort among businesses, government agencies, colleges and universities, associations, nonprofit organizations, and individuals committed to improving online safety and security for all. This year, the CISA and NCA are promoting four key security behaviors to help equip everyone, from consumers to corporations, to better protect their data. To support this initiative, we\u2019ve curated some Expel resources to help your organization improve its cybersecurity posture\u2014this month, and beyond. 1. ICYMI: always enable multi-factor authentication (MFA), but also have a back-up plan. At this point, enabling MFA (when available) should be a no-brainer. But, we also know that MFA isn\u2019t always a silver bullet for protecting your environment. Our security operations center (SOC) has seen examples of this in the wild. We\u2019ve responded to phishing attacks that used a man-in-the-middle tactic to send users to a fake Okta login page. (Check out how it went down here .) We\u2019ve also seen attackers use BasicAuthentication to bypass MFA and target access to human capital management systems . Based on these novel incidents, here are a few lessons learned you can apply to your own organization: Deploy phish-resistant MFA wherever possible. If FIDO-only factors for MFA are unrealistic, disable email, SMS, voice, and time-based, one-time passwords (TOTPs). Instead, opt for push notifications. Then configure MFA or identity provider policies to restrict access to managed devices as an added layer of security. (More on this in our Quarterly Threat Report for Q2 2022 .) Enforce MFA prompts when users connect to any sensitive apps via app-level MFA. Don\u2019t let your sensitive apps (think: Okta, Workday, etc.) be a one-stop shop for attackers. To take it a step further, tell your users to always review the source of the MFA request (if via push notification) to verify the login isn\u2019t from an unusual area\u2014and if it is, encourage your people to report strange requests. Finally, be wary of brute force MFA requests, which involve an attacker continuously sending push notifications to the victim until they accept. Let your users know this is something to watch out for. 2. Don\u2019t rely on your memory or Sticky Notes to keep track of all your passwords. This year, a global survey conducted by open-source password manager, Bitwarden, revealed that 55% of people rely on their memory to manage passwords . Of those surveyed, only 32% of Americans were required to use a password manager at work. We know that memory can be fickle at best. 
Password managers are a great way to stay organized for anyone creating multiple (if not dozens of) usernames and passwords to do their job, but they can be difficult for your IT team to enforce. Instead, many businesses opt for a single sign-on (SSO) solution to allow employees to sign into an approved account one time for access to all connected apps. However, easy access for users also makes SSO services a popular target for attackers\u2014it\u2019s part of the reason business application compromise (BAC) attacks are evolving. Regardless, it\u2019s never a bad idea to encourage employees to create strong, unique passwords for different sites/apps, and of course\u2014we can\u2019t say this one enough\u2014enable MFA whenever possible. Want to be able to forget your passwords? Installing a password manager will help generate strong passwords, keep your accounts safer, and save you from memorizing countless strings of characters. Plus, it makes it easier to deal with constantly changing passwords for sites whose accounts have been compromised. BTW, we\u2019ve compiled more tips for maintaining security and privacy at home for remote workers (because, let\u2019s face it, that\u2019s most of us these days), as well as effective ways to encourage more secure behaviors. 3. Stop ignoring that \u201csoftware updates available\u201d notification. For security professionals, this might sound like an obvious one, but patching and updating software regularly can help prevent attacks. Vendors are constantly plugging security holes and patching bugs, some of which might represent entry points for attackers. A lot of operating systems and app stores will do this for you automatically, but keep an eye on those notifications prompting an update\u2014pushing it off might be convenient now, but cost you down the line. Updates to web browsers are particularly important, so try to install those right away. So how do you ensure your team keeps up with these updates? Try a combination of gamification and education. Entering employees into raffles for gift cards or other perks for applying OS updates is a generally inexpensive way to reduce risk for your organization and keep folks happy. (FYI: there are more tips like this from industry leaders grappling with similar challenges in Forbes, including this same sage advice from our own co-founder and CEO, Dave Merkel.) 4. Help your organization avoid taking the bait on a costly phishing scam. Recognizing and reporting phishing schemes is one of the first lines of defense when it comes to protecting your organization. We\u2019ve seen this in our SOC on countless occasions, from attackers targeting Amazon Web Services (AWS) login credentials, to malware-poisoned resum\u00e9s aimed at job recruiters\u2014and everything in between. We\u2019ve also seen how these campaigns can reveal larger, more malicious business email compromise (BEC) attacks if they aren\u2019t stopped in their tracks (get the full rundown on that incident here). Fortunately (or not), Expel\u2019s Phishing team reviews hundreds of emails a day and thousands of emails weekly, so we\u2019ve picked up a few things about how to protect your organization, including: Prevention starts with proper training. Make sure employees learn to recognize potential red flags associated with phishing emails when they land in their inbox. Even if this means an investment on your part, it\u2019ll pay dividends in the long run. 
Spend time on education for specific business units on the phishing campaigns that might target them. Finance teams might encounter financial-themed campaigns with subject lines such as \u201cURGENT:INVOICES,\u201d while recruiters may see resum\u00e9-themed lures. Once they know what to look for, make it easy for people to report suspicious activity. An effective way to do this is through a system for employees to validate suspicious emails or texts. This allows IT to provide guidance to the individual, and gives security team members enough insight to identify trends and sniff out a larger-scale attack early on. (More on preventing scams like this here.) We know. There\u2019s a lot to unpack here, and there\u2019s probably more we didn\u2019t include for the sake of space and your sanity. But hopefully these resources provide a glimpse into some of the ways you can help your organization toward an overall better security posture\u2014even after October. We\u2019re just getting started for Cybersecurity Awareness Month. Check out our #BeCyberSmart resources for curated content to follow along."
6
+ }
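One behavior the post above flags, brute-force MFA push requests, lends itself to a simple heuristic: many denied pushes for one user in a short window. Here is a toy sketch over hypothetical log records (field names and thresholds are ours, not any vendor's schema):

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # denied pushes per user per window; tune for your environment

def flag_push_bombing(events):
    """Flag users receiving a burst of denied MFA push notifications.

    `events` is an iterable of dicts like
    {"user": "...", "result": "denied", "time": datetime(...)}, a hypothetical
    log shape rather than any specific vendor's schema.
    """
    recent = defaultdict(list)
    flagged = set()
    for event in sorted(events, key=lambda e: e["time"]):
        if event["result"] != "denied":
            continue
        user = event["user"]
        recent[user].append(event["time"])
        # keep only denials inside the sliding window ending at this event
        recent[user] = [t for t in recent[user] if event["time"] - t <= WINDOW]
        if len(recent[user]) >= THRESHOLD:
            flagged.add(user)
    return flagged
```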
an-inside-look-at-what-happened-when-i-finally-took.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "title": "An inside look at what happened when I finally took ...",
3
+ "url": "https://expel.com/blog/inside-look-what-happened-finally-took-vacation/",
4
+ "date": "Aug 6, 2019",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG An inside look at what happened when I finally took a vacation (for realsies) Talent \u00b7 5 MIN READ \u00b7 AMY ROSSI \u00b7 AUG 6, 2019 \u00b7 TAGS: Career / Employee retention / Great place to work / Management I\u2019ve got a confession: I\u2019m terrible at relaxing. In fact, one of my college entrance essays centered around the fact that I have a hard time sitting still. And I once had a roommate look at me and ask, \u201cDo you ever just sit down and do nothing?!\u201d Sure, sometimes I sit to watch Netflix and Hulu, but I\u2019m usually folding clothes or thinking about next week\u2019s carpool schedule for my kids at the same time. Let\u2019s just say I\u2019m grateful I discovered the many benefits of yoga years ago. But this blog post isn\u2019t about my struggle with knowing how and when to slow down . It\u2019s about what happened when I finally took a real vacation \u2014 one that involved me and my family with zero cell phone or internet services for a whole seven days. Our view on vacays At Expel, we believe in the importance of taking vacation. It\u2019s so important to us that we\u2019ve included it in our Palimpsest (no, we didn\u2019t make up a word ) \u2014 it\u2019s a document our executive team developed together, and it outlines what we value about our culture and describes the way we want to work with each other. Among many other attributes we value here, our Palimpsest makes it clear that all employees should feel not just comfortable but encouraged to take the vacation time they need. But here\u2019s the thing: Words are just words \u2014 in a Palimpsest or anywhere else \u2014 unless what you do aligns with what you say (and what you tell everyone else). The TL;DR is this: If I want other people on my team to take real vacations where they truly unplug and stop worrying about whatever\u2019s happening back at the office, then I\u2019ve got to do the same. There\u2019s nothing worse than the leader who wants people to do as they say and not as they do. So earlier this summer, I boarded a cruise ship in Galveston, Texas to spend a week in Cozumel, Costa Maya and Roatan with my extended family. I purposely didn\u2019t buy an international phone plan for the trip. And when someone asked if I wanted (outrageously priced) internet access on the ship \u2014 I declined. I declined! That meant no email, Slack, LinkedIn or Instagram for the entire vacation. The seven (not-so-obvious) things I learned from my time away There are plenty of things that happened on my vacation that anyone could\u2019ve predicted \u2014 all the stuff that\u2019s already been well-documented across the interwebs. Without emails and text messages and meeting invites to distract me, I focused on the people around me and got to appreciate the beauty of the ocean. I read the book Where the Crawdads Sing , practiced yoga and made a conscious decision not to worry about anything happening off the ship. I returned from my trip not just with a little more sun, but also some new perspectives \u2014 including why it\u2019s so important for execs to step away and take a real vacation. If you take real vacations, so will your team. A \u201creal vacation\u201d is one when you take multiple days away and you truly disconnect from the office. This doesn\u2019t mean you have to go anywhere exotic or fancy \u2014 staycations work too. 
For this particular vacation, I was gone for one week but others at Expel are committed to taking vacations that are at least two weeks. As our head of user experience, Kim Bieler, once explained to me, two weeks is a proper vacation and a game-changer for your well being. Whatever length of time you choose to take, be sure to talk about your vacations and share pictures and stories. Talking about it is another signal to your team that it\u2019s healthy and encouraged to take that break and unplug. Your team gets more opportunities to shine. While I was out, my team members stepped up and into work they don\u2019t normally do on a day-to-day basis. This was a great experience for them, both in stretching their own capabilities and determining if this new work is something they want to continue to do in the future. It also gave them more of an appreciation for and a front-row seat to what I manage on a day-to-day basis. You discover what you should\u2019ve been delegating all along. If your team can do it while you\u2019re out, they can do it when you get back. And handing the reins to your team frees you up to focus on new things. If you\u2019re scared that delegating some of the things you normally do makes you replaceable, you\u2019re right \u2014 but I prefer to think about this concept in a different way. If someone else in my org can step up and take on some of the programs and tasks I used to be responsible for, that means I\u2019ve built a great and capable team. And that\u2019s a wonderful thing for your business, your employees and you . You discover where you\u2019ve got process gaps. We\u2019ve hired lots of new Expletives lately, which means my team has only been working together for a few months. Stepping away showed me where we needed to improve our processes and better share information. For example we encourage everyone to attend at least one conference a year and we budget $2,500 per person for this experience. While I was out, my team raised some good questions on how to best use this benefit, which prompted us to write some additional guidance for our employees. Your team has more opportunities to build relationships. While I was cruising, the people on my team connected directly and more often with our exec team. I try to encourage those connections while I\u2019m in the office, but removing myself from the equation helped this happen even more naturally while I was out. You\u2019re reminded there are more ways than your way to get work done. I know it sounds obvious, but seeing work get done differently is good for so many reasons. One of my favorite parts of my job is coaching managers and helping them think differently about ways to grow and support the people on their teams. During these conversations I draw upon my experience and the techniques I\u2019ve developed over time, in the same way others on my team draw upon their own unique experiences. This means that the same conversation can have different outcomes based on the questions asked and guidance provided. Usually in these situations there isn\u2019t one right way, but many ways to get to an outcome. I enjoyed returning from vacation and learning from the coaching provided during my absence. You realize why it\u2019s so important to communicate to your team the difference between a vacation and trip. Many of us blend work and personal time when we go away. I take these kinds of \u201cblended\u201d trips when I visit California. 
I get the chance to spend time with my family and friends while still staying connected to the office to get work done. I don\u2019t consider these trips to be vacations, but if you look at this travel from the lens of a traditional PTO policy, it\u2019d require vacation hours. If you work at a company with a flexible time off policy, the lines start to blur so it\u2019s important to communicate in advance the type of away time you\u2019re taking. If the travel is for a trip, then fine \u2014 define your rules. If the travel is for a vacation \u2014 then be clear that you\u2019ll be disconnecting in order to protect your time away. Moral of the story: If you come work at Expel, we want you to take a vacation. For realsies. And if you choose not to come work with us, I hope I\u2019ve at least encouraged you to spend a few days fully disconnected. Do it for your own sanity and the development of your team. Now \u2026 off to get my Vinyasa on."
6
+ }
announcing-open-source-python-client-pyexclient-for.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Announcing Open Source python client (pyexclient) for ...",
3
+ "url": "https://expel.com/blog/open-source-python-client-pyexclient-expel-workbench/",
4
+ "date": "Oct 27, 2020",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Announcing Open Source python client (pyexclient) for Expel Workbench Engineering \u00b7 2 MIN READ \u00b7 EVAN REICHARD, DAN WHALEN, MATT BLASIUS, PETER SILBERMAN, ROGER STUDNER, SHAMUS FIELD AND WES WU \u00b7 OCT 27, 2020 \u00b7 TAGS: Company news / MDR / Tech tools At Expel, we believe that human time is precious, and should be spent only on the tasks that humans are better at than machines \u2013 making decisions and building relationships. For the rest of the work, it\u2019s technology to the rescue. We\u2019ve built our platform, Expel Workbench\u2122, to provide an environment where our analysts can focus on high-quality decision making. In order to do this, we knew we needed the platform to be like fly paper for inventors \u2013 good ideas should be easy to experiment with and get into production. Everything you can do in our platform has a discoverable ( Open API FTW!), standard compliant ( JSON API anyone?) application-programming interface (API) behind it. If you can click it in the user interface (UI), you can automate it with client code. Internally at Expel, we\u2019ve been taking advantage of our APIs from the very beginning, but we\u2019ve always hoped to see customers do the same. Introducing pyexclient Today we\u2019re announcing the release of pyexclient , a python client for the Expel Workbench. We\u2019ve built on our learnings over the past few years and have beefed it up with documentation and lots of examples. With the release of pyexclient we\u2019re including: Snippets : we\u2019re releasing 25+ code snippets that give, in a few lines each, examples of how to accomplish a specific task. Want to create an investigation or update remediation actions? We\u2019ve got you. Scripts : In addition to the snippets, we\u2019re releasing some fully featured scripts that contain larger use cases. The three we\u2019re releasing today are: Data Export via CSV : Want to manipulate alert data in your favorite business intelligence (BI) analytics tool? This script provides an example of how to export alert data and fields as a CSV over a specified time range. Poll for new Incident : Want to build automation that runs when bad things are detected? This script provides an example that polls the API for new incidents. It also allows for filtering on keywords. Sync with JIRA : Want to expose artifacts from decisions our analysts make in Expel Workbench to your internal case management system? This script provides an example of syncing Expel activities that require customer action to a Jira project. This includes: Investigations assigned to the customer Investigative actions assigned to the customer Remediation actions assigned to the customer Comments added to an investigation Notebook : Want to see what change point analysis or off-hours alerting looks like in your environment? We\u2019ve got you. We\u2019re releasing a notebook that implements the following: ipywidget to Auth to Expel Workbench (feel free to re-use this!) Overview of alerts with some basic stats like number of alerts, percentage done without customer involvement and off-hours alerting (you can configure timezone and working hours) Heatmap of alert arrival times Time-to-action by severity w/ bar chart Change point analysis for Expel Alert time series! 
Here\u2019s a screenshot of change point analysis available in the notebook: Example alert time series w/ change points As we\u2019ve been working with our customers to protect and build out their cloud environments, we\u2019ve been impressed by the raw power that comes from composing APIs and configurable components. Work that used to require a huge team to customize enterprise software is now just a script away. We\u2019re really excited to get this client into the hands of our customers and partners, and to see the innovative ways they leverage the information available in Expel Workbench. Interested? We hope so! Getting started is as easy as \u201cpip install pyexclient\u201d. Head over to our pyexclient documentation page for more details."
6
+ }
applying-the-nist-csf-to-u-s-election-security-expel.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Applying the NIST CSF to U.S. election security - Expel",
3
+ "url": "https://expel.com/blog/applying-nist-csf-to-election-security/",
4
+ "date": "Sep 24, 2019",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Applying the NIST CSF to U.S. election security Security operations \u00b7 10 MIN READ \u00b7 BRUCE POTTER \u00b7 SEP 24, 2019 \u00b7 TAGS: Framework / Managed security / NIST / Planning / Vulnerability If you\u2019ve worked in security for any length of time, chances are good that you\u2019ve heard of the NIST Cyber Security Framework (CSF) . It\u2019s a useful tool for helping orgs increase their overall resilience and response to cyber threats. I\u2019ve personally used the CSF to guide cybersecurity activities in orgs of all sizes, ranging from startups and local governments to Fortune 500 companies. Even well-known tech brands like Amazon and Microsoft use the CSF to understand where they are and where they want to be with respect to cyber risk. Given the utility of the CSF, I\u2019d argue that it\u2019s not only useful for corporations \u2014 it\u2019s helpful for guiding security activities around processes like our national elections. As we march toward November 2020, there\u2019s continued dialogue around how to secure our democracy. That\u2019s because our election systems have been under attack by various adversaries ever since the United States was formed. Over the last few years, though, these attacks have come into sharp focus but the collective response to those attacks hasn\u2019t. Is election security an area where the CSF could lend some clarity to the \u201cas is\u201d and \u201cto be\u201d of the U.S. election infrastructure? I vote yes. (Pun fully intended.) The 3 challenges for state and local election operations Most of the mechanics of our elections process \u2014 like setting up ballot boxes or electronic voting machines, staffing the polls and recording and reporting votes \u2014 is managed at the state and local government level. So for the purpose of this CSF exercise, I\u2019ll focus on assessing state and local election operations at a high level. The three biggest challenges that these orgs face when it comes to election security are: Lack of standardization: Applying the CSF to election security isn\u2019t easy for many reasons \u2014 one of the biggest being the fact that there\u2019s no single organization that\u2019s in charge of U.S. elections. Unlike performing a CSF assessment on a bank or a car company, the election system isn\u2019t a monolithic organization with one executive team and one board of directors. Our election systems are governed (and funded) by various U.S., state and local laws and operated by thousands of local agencies and organizations around the country. This diversity in oversight means that any specific finding or recommendation made by any of those entities would need to be implemented by those thousands of organizations \u2014 all with varying degrees of cybersecurity knowledge and budgets. No small task. Voting infrastructure: The next challenge is the infrastructure itself. Localities run elections differently \u2014 there is no \u201cone size fits all\u201d approach that\u2019s taken by every single city, county and town throughout our country. Some use paper ballots at the voting booth, some go electronic only and some use both. Some have voter registration rolls stored on modern, cloud-based systems while others still use mainframes. Some have money for technology and security improvements but many don\u2019t. Think about running a penetration test on hundreds of different systems that have a common function but no common architecture. 
How would you develop recommendations after that exercise? Training for election volunteers: Lastly, many state and local governments provide training for the volunteers who show up to help you cast your vote \u2014 but just like the overall elections system, there\u2019s no standardization here. That means the election security training happening in your town might be vastly different from the depth of training happening a few towns over. Is this a hard problem? Yep. Is it unsolvable? Nope. Let\u2019s walk down the path of the CSF and see how it could apply to an important part of the election supply chain: state and local governments. U.S. Elections \u2013 Identify Looking at the NIST CSF, the first functional area is Identify. In Identify you\u2019ve got categories that deal with taking inventory of hardware and software systems, cybersecurity governance, cyber risk management and supply chain risks. Unsurprisingly, all these categories apply to securing election systems (I\u2019m hoping to quickly sway those who think election security begins at the election booth \u2014 it doesn\u2019t). Hardware and software inventories are historically complicated even for the big, seemingly tech-savvy enterprises. It\u2019s the first CSF control and arguably one of the hardest to do right, because understanding what you own and what you\u2019re running is a herculean task in organizations larger than a few dozen people. When you think of the scale of modern election systems, you might think the same is true in that case. But one thing local election boards do very well is hardware inventory. Understanding what voting systems they have and where they physically are at any given moment has been a core part of election security for as long as we\u2019ve been doing secret ballots. So while there may not be a unified hardware inventory method, there\u2019s still a concrete inventory that\u2019s well controlled. For those playing along with our NIST self-scoring tool (yeah, we have one of those and it\u2019s really easy to use \u2014 grab your own copy of the NIST CSF scoring tool here) that\u2019s probably a 3 on the verge of a 4. Software is a different animal. Election voter rolls are run on all kinds of different systems, and the software that runs those systems likely isn\u2019t well inventoried (at least in many cases). Also, electronic voting systems are often a black box, so while the vendor that built the system may know what\u2019s running on those machines, the local election boards probably don\u2019t. Thanks to researchers at organizations like the DEF CON Voting Village, the public now has a better inventory of what\u2019s on our voting machines. But even if the public has greater visibility into what\u2019s on the machines, that doesn\u2019t translate into election boards taking better inventory of the software on their systems. Let\u2019s score this area a 2. Another category in Identify is vendor and supply chain management. As a friend of mine says, government contracting is the land of LCTA \u2014 \u201clowest cost, technically acceptable.\u201d This applies to everything from traffic light controllers to law enforcement communication networks to voting machines. It\u2019ll come as no surprise that when you go the LCTA route, security may not be something that\u2019s a priority (if it\u2019s a consideration at all). 
While voting machines and voter roll systems are well regulated from a procurement perspective, there are wildly varying levels of due diligence done on the supply chain from a cyber risk perspective. Look at the state of Georgia, for example \u2014 officials purchased a voting system with known security vulnerabilities because the procurement was too far down the road and there were no perceived viable alternatives. In a conventional enterprise, these sorts of vulnerabilities would have stopped the procurement process cold. But in the relatively small world of government election systems, the transaction happened without anyone batting an eye. I\u2019m going to rate that a 2, but trending towards a 1. U.S. Elections \u2013 Protect Next up in the NIST CSF is the Protect functional area. This part deals explicitly with security controls that are designed to protect an organization from a successful attack by an adversary. Encryption and data protection, identity and access management, training and awareness and how you operate the system are all part of Protect. Again, the level of sophistication of these categories varies depending on your locality. Let\u2019s talk about elections and encryption. The biggest forcing function for encryption with elections is the voter rolls and associated personal data. Upcoming laws like the California Consumer Privacy Act (CCPA) will likely force officials to create a regulatory framework that requires encryption for voter rolls. And depending on how broad the definitions are in laws like the CCPA, officials might need to encrypt the vote itself as well, since it\u2019s arguably one of the most personal pieces of information someone gives away. Encrypting it makes perfect sense. We don\u2019t have concrete evidence of how much data is or is not currently encrypted in modern voting systems, so for now we\u2019ll have to label this as \u201cunknown\u201d in our NIST self-scoring tool. Lastly, Protect deals with conventional IT security controls such as change management, vulnerability management and auditing. The quality (or lack thereof) of these controls at the local level impacts the assurance of voter registration rolls as well as vote tallying and results communication processes. At the state and local level, these controls are managed by a patchwork of local officials, contractors and vendors. While orgs such as the National Conference of State Legislatures have guidelines on how to secure these systems, these guidelines are voluntary and compliance varies from state to state. Looking at these controls, we could score them a solid 2 with a few states trending toward a 3. U.S. Elections \u2013 Detect The Detect functional area of the NIST CSF is the sweet spot when it comes to cybersecurity operations. This is where the bad guys are caught doing bad things. Getting a good score in Detect typically means that an org has good security signals being generated by various security tech. From there, analytical technology and humans working in a security operations center are responsible for identifying malicious activity and notifying the appropriate parties. 
The question here is what state and local governments have in place when it comes to: Security technology installed on endpoints and networks Security signal generated by these technologies Aggregation and analysis capabilities SOC analysts and escalation paths The distinction between what\u2019s required for the overall voting ecosystem (that includes voter registration systems and vote reporting systems) versus what\u2019s required to secure just the voting machines is striking. While voter registration and vote reporting systems are essentially enterprise systems that can have commodity security technology installed for detection purposes, electronic voting systems are basically embedded systems. They have specialized hardware and software that requires vendor interaction and specialized processes to update. Plus, voting systems are offline for most of their lives and are generally not connected to a network even when they\u2019re in use. Getting real-time telemetry off of them with software that most other security and analytic systems can understand is highly unlikely (and may put the system in more danger, not less). So for many of the Detect subcategories, scores will be pulled down due to the nature of offline voting systems in general. Some of the slack has been picked up by organizations like CYBERCOM. During the 2018 midterm elections (and to some extent in the 2016 elections as well) CYBERCOM monitored its SIGINT assets and worked with various public and private sector entities to monitor election night activities for bad actors. This point-in-time monitoring is useful for detecting threat actors that may be attempting to interfere with the voting itself, but doesn\u2019t necessarily address attacks against other parts of the ecosystem. So for subcategories like Detect \u2013 Continuous Monitoring 1: \u201cThe network is monitored to detect potential cybersecurity events,\u201d most states would score a 2. U.S. Elections \u2013 Respond The Respond functional area is a part of the NIST CSF many of us hope to never get to. If you\u2019re responding to an incident, then a bad thing already happened and you\u2019ve got to deal with it. The reality for any enterprise is that you\u2019ll eventually have to respond to security incidents. For election systems, we know from public reports that they\u2019ve been under attack for years. And some of these attacks have been successful, unfortunately. We should expect future elections to have similar issues. The good news is that because of past events, we see lots more coordination between various stakeholders than we\u2019ve ever seen before. The federal civil and military agencies are actively communicating with state and local authorities. So for RS.CO-3 (\u201cInformation is shared consistent with response plan\u201d) and RS.CO-4 (\u201cCoordination with stakeholders occurs consistent with response plan\u201d), scores are probably at least a solid 3 with some localities trending toward a 4. But how good is each plan itself (RS.RP-1)? That likely varies dramatically based on how far down into the process you are. While states have response plans at a strategic level, once you get to the local precincts, IR processes for local cyberattacks start to disappear. The saving grace is that, mechanically, poll workers are looking for anything out of the ordinary and run their local precincts according to a common set of procedures. 
So while there\u2019s no plan per se at that level, there are compensating controls that somewhat act as a plan. Score? I\u2019ll give them a 2, trending towards 3. And how well do we understand the impact (RS.AN-2)? That\u2019s been a matter of national debate for the last several years. Regardless of the facts around specific incidents, it\u2019s almost impossible for outsiders to find truth due to ideological and partisan differences. The current mechanisms for discovering and communicating the impact of cyber incidents are unfortunately woefully inadequate, resulting in a score of 1. U.S. Elections \u2013 Recover Finally, we get to the shortest functional area of the CSF: Recover. Once all is said and done, how well do you get back to normal operations? How well do you handle the public relations aspect to deal with the event that occurred? And are you able to refine your recovery activities based on what you learned from the last incident? Much like Respond, past events help drive improvements in this functional area. States have practices on recovery operations now and are able to (in some cases) restore services in a timely and accurate way. There are plenty of situations in which data is still lost \u2014 it takes diligence and attention to get recovery operations to be smooth and easy to execute. Score on recovery planning? I\u2019ll give this area a 2. Public relations is a large part of recovery (RC.CO-1 and 2). Again, like Respond, recovery public relations relating to the election system isn\u2019t like public relations for a normal enterprise. The country is polarized and simply saying \u201cEverything is back to normal!\u201d may not be enough to satisfy most voters. Transparency is required, and that isn\u2019t a strong trait of current election recovery operations. We\u2019ll get there \u2026 but for now, we\u2019re still at a 2. Next steps This was a quick, back-of-the-napkin attempt to apply the CSF to U.S. elections. Certainly we\u2019d benefit from a detailed analysis \u2014 using the CSF as the driving framework \u2014 of election systems in all 50 states. Shining a bright light on what\u2019s working and what needs help in our election systems would assist in driving funding decisions at all levels of our democracy. With that kind of common assessment, the public could make apples-to-apples comparisons between different systems and architectures in different states. We\u2019d be able to monitor change over time and measure the progress being made by those responsible for the integrity of our elections. And over time, the public would put more trust in our election system. Who would do this and where would the funding come from? That\u2019s a question that a blog post can\u2019t answer. However, I hope that what this post does provide is evidence that the NIST CSF offers value in systems of all shapes and sizes, including our national election systems. Security for the broader election supply chain That said, remember that local agencies and organizations that are leading these election operations are only part of the election security supply chain. Many people\u2019s perceptions of the election process go something like this: They go vote at their local polling place, the magic happens and results show up on their nightly news a couple of hours later. But the system is much larger than that \u2014 elections are about far more than the voting machine. 
Consider voter registration efforts and election rolls, the campaigns and special interest groups that disseminate information about candidates and issues, and the reporting and validation of the results. If you consider all those distinct parts of the supply chain, there are plenty of opportunities for attack and the adversary can be lurking almost anywhere, whether that\u2019s at a polling place or behind a Twitter account. While state and local orgs play a role in a larger effort to protect our national elections, a NIST CSF-style assessment for all 50 states would be a fantastic step forward in making our future elections more secure."
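If you want to keep score at home, the back-of-the-napkin numbers above are easy to tally in a few lines. A minimal sketch using the illustrative scores from this post (the subcategory labels are informal, and the "unknown" encryption score is simply skipped):

```python
# Sketch: tally the back-of-the-napkin CSF scores from this post.
# Labels are informal; None marks the "unknown" encryption score.
scores = {
    "Identify": {"hardware inventory": 3, "software inventory": 2, "supply chain": 2},
    "Protect": {"encryption": None, "IT security controls": 2},
    "Detect": {"DE.CM-1 continuous monitoring": 2},
    "Respond": {"RS.CO-3/4 coordination": 3, "RS.RP-1 response plan": 2, "RS.AN-2 impact": 1},
    "Recover": {"recovery planning": 2, "RC.CO public relations": 2},
}

for function, subs in scores.items():
    known = [v for v in subs.values() if v is not None]
    avg = sum(known) / len(known)
    print(f"{function:8s} avg {avg:.1f} across {len(known)} scored subcategories")
```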
6
+ }
attack-trend-alert-aws-themed-credential-phishing-technique.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Attack trend alert: AWS-themed credential phishing technique",
3
+ "url": "https://expel.com/blog/attack-trend-alert-aws-themed-credential-phishing-technique/",
4
+ "date": "Feb 1, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: AWS-themed credential phishing technique Security operations \u00b7 4 MIN READ \u00b7 SIMON WONG AND EMILY HUDSON \u00b7 FEB 1, 2022 \u00b7 TAGS: Cloud security / MDR / Tech tools The Expel Phishing team spotted something \u2026fishy\u2026 recently. We came across a less common but well crafted Amazon Web Services (AWS)-themed phishing email that targets AWS login credentials. These emails have been reported in the past by security practitioners outside of Expel, but this is the first time our security operations center (SOC) encountered this technique. Now that we\u2019ve seen this tactic in the wild, we wanted to share what we learned about this attack and how our SOC analysts triage malicious emails here at Expel. What happened Expel\u2019s Phishing team reviews hundreds of emails a day and thousands of emails on a weekly basis; the vast majority of malicious emails we encounter are credential phishing attacks that are Microsoft themed. Why are they often Microsoft themed? We think it\u2019s because Microsoft and Google have dominant market share and both tech giants have highly reputable brands. Their cloud platforms and offerings are reliable cloud infrastructures, which cover most businesses\u2019 needs \u2013 like email, communications, and productivity applications. So this attack was interesting to us. Similar to Microsoft and Google, AWS is a popular cloud platform. If attackers were to obtain AWS credentials to an organization\u2019s cloud infrastructure, this can pose an immediate risk to their environment. On January 26, 2022, our customer\u2019s user submitted a suspicious email for review. We picked it up and immediately turned the email into an investigation based on some highly suspicious indicators (we\u2019ll dive into those below) that were surfaced to our analyst by one of our bots, Ruxie\u2122 . Based on these leads, we decided to dig into the submitted email for a closer look. How we triaged The way we triage emails here at Expel can be different from other managed security service providers. We use our platform, the Expel Workbench\u2122 , to ingest user submitted emails. From there, based on detections and rules created by our team, the Expel Workbench gives context for why the email is suspicious. This context provides decision support for our analysts as they review the email. That way the analyst can focus on applying what we call OSCAR (orient, strategize, collect evidence, analyze, and report), and perform analysis with decision support from our bots. We walked you through how Expel analysts use OSCAR in a previous post here . Here\u2019s what we can capture from our initial lead by applying OSCAR. By orienting , we notice that Ruxie\u2122 surfaced a suspicious link that isn\u2019t related to Amazon. And within the screenshot we noticed some poor grammar \u2013 there\u2019s no space in between \u201caccount\u201d and \u201crequires.\u201d We\u2019ve also noticed these bad actors are always cordial by making use of words like \u201ckindly.\u201d The image below is a screenshot of the suspicious email. Suspicious email submitted to Expel Phishing Let\u2019s strategize next! There are two common phishing tactics we see when it comes to phishing. One is suspicious hyperlinks and the other is file attachments. In this case, Ruxie informed us that there\u2019s no attachment. 
Back to the triage: there\u2019s a suspicious hyperlink that needs to be reviewed. The image below shows the suspicious link surfaced by Ruxie in the Expel Workbench. Expel Workbench initial alert Next step: let\u2019s collect evidence. Wow, these bots are really helping us out here! Without the need to download the email file and open it in an email client for analysis, our bots do all the heavy lifting for us. Here Ruxie\u2122 surfaces the URL, recognizes that there is a partial base64 string which looks to be an email address, and sanitizes that email address. Awesome! Ruxie actions in the Expel Workbench In a previous post, we mentioned how Expel managed phishing uses VMRay to analyze phishing emails. But not everyone has access to an advanced sandbox. Can you still analyze malicious emails? Absolutely! We\u2019ll show you how to do this by using free tools like a simple web browser sandbox and the built-in developer tools, which is one of our favorite methods of analysis. We recommend using Browserling, as this provides you with a safe environment to analyze suspicious hyperlinks. We\u2019ll be using Mozilla Firefox and its developer tools as the web browser in this example. Follow these steps to access the developer tools: Navigate to the malicious domain. Let the landing page load. Note that this page is convincing if you\u2019re not careful, since the threat actor has cloned the page. Fake AWS sign-in page Enter the faulty credentials: [email protected] Navigate to the browser\u2019s developer tools. Mozilla developer tools navigation Here is a side-by-side comparison of the two pages. As you can see, they\u2019ve cloned the AWS login page. If a user isn\u2019t careful in reviewing, they\u2019ll fall victim to this attack. Left: Real AWS login page. Right: Fake AWS login page There are a few important HTTP methods, like the \u201cGET\u201d request, you can use when you\u2019re attempting to get data from a web server. But what about when you\u2019re investigating where credentials are being stored? You\u2019ll want to follow the \u201cPOST\u201d request traffic. This HTTP method is used to send data to the web server and is most commonly used for HTTP authentication with PHP. After entering phony credentials, we see the \u201cPOST\u201d request sending the credentials to the same domain. Now we can scope using this indicator as evidence to identify potential account compromises. Mozilla developer tool In addition to our awesome bots (can you tell we love our bots here at Expel?), we also have automated workflows built into the Expel Workbench\u2122 that help our analysts be more efficient by reducing the cognitive load of triaging emails. By running our domain gather query, we observed no evidence of traffic to the malicious credential harvesting domain, which suggests no signs of compromise! Whew! Last but not least, we can now record that there was no evidence of compromise in our findings as a part of the investigation. Ruxie analysis that displays any POST requests made to the fake AWS webpage across the customer\u2019s environment Although tech is great and can help us be more efficient at running down investigations related to credential harvesting, it\u2019s not always necessary and we can still achieve the same goal manually. The technique we just walked you through in this post can be applied to triaging any suspicious credential harvesting email. How you can keep your org safe AWS users are just as vulnerable to credential phishing attacks as Microsoft users. 
And if an AWS user falls victim to phishing emails and social engineering techniques, putting their credentials in the hands of an attacker, there\u2019s a chance you\u2019ll be dealing with a cloud breach. Here are a few ways you can remediate if your AWS account was compromised: Reset Root/IAM user credentials. Disable, delete, or rotate access keys. Audit permissions and user activity through the use of CloudTrail. Enable AWS multi-factor authentication on user accounts. We hope you found this post helpful! Have questions or want to learn more about how the Expel Phishing team works? Let\u2019s chat (yes \u2013 with a real human)."
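One footnote for responders: the first two remediation steps above lend themselves to scripting. A rough boto3 sketch (the user name is hypothetical, and you'll want AWS credentials with IAM permissions scoped for incident response):

```python
# Sketch: disable a potentially compromised IAM user's access keys with
# boto3. The user name is hypothetical; adapt to your own IAM inventory.
import boto3

iam = boto3.client("iam")
user = "potentially-compromised-user"

# Rotate out existing keys by marking them inactive (delete them once
# you've confirmed nothing legitimate depends on them).
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=user,
        AccessKeyId=key["AccessKeyId"],
        Status="Inactive",
    )
    print("Deactivated", key["AccessKeyId"])

# Force a password reset at next sign-in (if the user has console access).
iam.update_login_profile(UserName=user, PasswordResetRequired=True)
```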
6
+ }
attack-trend-alert-email-scams-targeting-donations-to-ukraine.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Attack trend alert: Email scams targeting donations to Ukraine",
3
+ "url": "https://expel.com/blog/attack-trend-alert-email-scams-targeting-donations-to-ukraine/",
4
+ "date": "Mar 24, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: Email scams targeting donations to Ukraine Security operations \u00b7 5 MIN READ \u00b7 HIRANYA MIR, JOSE TALENO AND SIMON WONG \u00b7 MAR 24, 2022 \u00b7 TAGS: MDR / Tech tools As the Russian invasion of Ukraine continues, many people around the world are looking for opportunities to donate to Ukrainian relief efforts. For scammers, this presents an opportunity to prey on people\u2019s well-intentioned desire to help. Recently, we\u2019ve seen an increase in phishing emails masquerading as Ukrainian cryptocurrency and charitable apparel organizations. In this post, we\u2019ll show you what these emails look like and how to spot the tell-tale warning signs to ensure your donations are going to help those in need. It\u2019s both unsurprising and horrible that there are people out there trying to take advantage of the situation. We are not discouraging anyone from donating but, since there are bad actors at play, we do encourage people to verify their donations are going to a legitimate place to help those in need. Crypto scam emails If you\u2019re thinking about donating cryptocurrency to help victims in Ukraine, it\u2019s important to be aware of potential scam techniques before you hit \u201csend.\u201d Especially if you\u2019re prompted to donate via email solicitation, rather than seeking out a public wallet address associated with donation efforts. If you receive an email claiming to represent a charitable organization accepting crypto donations, there are some key clues to indicate whether it\u2019s genuine or not. The email below is a recent example of a crypto scam email: Crypto scam email Our first clue that things are amiss? The name and address listed in the \u201cFrom\u201d field. Let\u2019s zoom in a bit more\u2026 Email headers and signature field The doctor\u2019s name listed on the \u201cFrom\u201d field (Dr.Maxim Aronov), doesn\u2019t match the email address listed on the \u201cFrom\u201d field (fontbadia@). Also, the email address provided in the signature field, maximaronov40@gmail[d]com, isn\u2019t associated with the children\u2019s clinic. If we look up the email reputation for maximaronov40@gmail[d]com we can see that this address isn\u2019t linked to any social media profiles on major services like Facebook, LinkedIn, and iCloud. While this could also mean this is a new email address, it\u2019s also suspicious. Next, let\u2019s inspect the public wallet address listed in the email body. (We\u2019ve hidden the wallet address but for anyone wondering, it was an Ethereum public address.) Crypto transactions are stored on the blockchain \u2014 leaving us a nice digital footprint of transaction activity associated with a public wallet address. You can review the transaction history of a public address using block chain explorer sites like blockchain.com and Polkascan. Below is the transaction history of the public wallet address listed in the email body: Public Ethereum address transaction history What stands out? This public wallet address has recorded zero transactions. When donating crypto to Ukrainian relief efforts, be wary of public addresses with minimal transaction history and low balances. Would you buy an expensive watch from a seller on Ebay with zero transaction history? Probably a red flag, right? The same applies to crypto donations. 
For a comparison, the Ukraine government\u2019s (verified) Twitter account shared three cryptocurrency wallet addresses \u2014 a Bitcoin wallet address, Ethereum wallet address, and Polkadot address. Below is the transaction history for the Bitcoin public address 357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P: BTC transaction history for 357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P This public wallet address has recorded tens of thousands of transactions and is labeled as a \u201cUkraine Donation Address.\u201d This is a stark contrast to the transaction history of the Ethereum public wallet address listed in the email body. The bottom line? If you\u2019re thinking about donating crypto, double-check the public address and transaction history before hitting \u201csend.\u201d You can review the transaction history of a public address using blockchain explorer sites like blockchain.com and Polkascan. Be wary of public addresses with minimal transaction history and low balances. Also, perform a quick Google search of the public address. If it\u2019s not linked to Ukraine crypto donation efforts, that\u2019s a tell-tale sign that something is wrong. Fake charitable apparel emails Scammers don\u2019t just target people wanting to donate. They also target people looking to \u201cshow\u201d their support. If you\u2019re thinking about buying apparel to support Ukraine, here are a couple of things to look out for before you hit \u201cbuy it now.\u201d Here\u2019s a recent phishing email investigated by our SOC: Fake charitable apparel email Our first clue that something just doesn\u2019t feel right? The email address listed in the \u201cFrom\u201d field has no online presence according to our friends at EmailRep. Now focusing on the email body, if we were to click the \u201cClick Here to View\u201d hyperlink, that would connect our web browser to a domain hosted at u.danhramvaiqua[d]xyz. Email hyperlink For some quick context, the .xyz top-level domain has a history of domain abuse. We\u2019re in no way saying that all websites using the .xyz top-level domain lead to bad things, but used in this way \u2014 it\u2019s certainly enough to grab our analyst\u2019s attention. Let\u2019s take a look at the website reputation for u.danhramvaiqua[d]xyz. Reviewing a website\u2019s reputation is a great way to understand if a specific IP, URL, or domain name has a negative reputation or if it\u2019s been categorized as malicious. There are a number of free resources you can use. Submit the domain and review the results. It\u2019s that easy. Here are a few of our favorites: Symantec Site Review URLVoid Talos IP and Domain reputation WebPulse Site Review classified the u.danhramvaiqua[d]xyz domain as phishing. WebPulse domain reputation results So far, we have an email address with no digital presence sending an email with a hyperlink that points to a .xyz domain that has a reputation of phishing. This is enough evidence to make the decision to either delete the email in question or forward it on to your IT team for further review. But for folks looking to go an additional step, let\u2019s take a look at what happens when we load the \u201cu.danhramvaiqua[d]xyz\u201d page in a sandbox and browse the URL as if a user visited that page. We\u2019ll use URLScan \u2014 another free online resource. 
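(If you find yourself doing this often, urlscan.io also exposes an API you can script against. A rough sketch follows; the endpoint and response field names are from memory, so double-check the current urlscan.io docs, and note that a free API key is required:)

```python
# Sketch: submit a URL to urlscan.io and fetch the scan result.
# Endpoint and field names from memory -- verify against urlscan.io docs.
import time

import requests

API_KEY = "your-urlscan-api-key"  # hypothetical
headers = {"API-Key": API_KEY, "Content-Type": "application/json"}

submit = requests.post(
    "https://urlscan.io/api/v1/scan/",
    headers=headers,
    json={"url": "https://example-suspicious-domain.test", "visibility": "private"},
    timeout=10,
)
submit.raise_for_status()
result_api = submit.json()["api"]  # polling URL for the finished scan

time.sleep(30)  # scans take a little while; retry on 404 if needed
result = requests.get(result_api, timeout=10).json()
print("Effective URL:", result["page"]["url"])
```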
URLScan provided us with the effective URL (where the domain points), screenshots of the loaded page (which it captures if the page is active), and even let us know Cloudflare issued a TLS certificate for the site on February 28, 2022. The biggest takeaway is that if a user were to click the \u201cClick Here to View\u201d hyperlink, they\u2019d be redirected to www[d]mimoprint[d]shop. URLScan results You may be asking, should I look up the website reputation for www[d]mimoprint[d]shop? Absolutely! Spoiler: It\u2019s got a bad reputation. If you\u2019re considering a donation to support victims of the crisis in Ukraine, be aware of the prevalence of scams at play to make sure your donations are actually going to help those in need. We strongly recommend using official channels to make donations and researching your options before you hit \u201csend\u201d or \u201cbuy it now.\u201d Things you can do to spot potential scam emails Before clicking on hyperlinks, hover over them and check where that URL may lead you. Report suspicious emails to your security team and avoid interacting with any unsolicited emails. Ensure your org conducts frequent security awareness training sessions and that they\u2019re adapted to current events that might be used to mislead your end-users. Make sure your org has a good secure email gateway product in place for protection. Have questions about scams like these, or want to learn more about the Expel Phishing team? Reach out any time."
6
+ }
attack-trend-alert-revil-ransomware.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Attack trend alert: REvil ransomware",
3
+ "url": "https://expel.com/blog/attack-trend-alert-revil-ransomware/",
4
+ "date": "Feb 17, 2021",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Attack trend alert: REvil ransomware Security operations \u00b7 3 MIN READ \u00b7 JON HENCINSKI AND MICHAEL BARCLAY \u00b7 FEB 17, 2021 \u00b7 TAGS: MDR Over the past week, Expel detected ransomware activity targeting law firms attributed to REvil, a Ransomware-as-a-service (RaaS) operation. In this post, we\u2019ll share more about REvil, how we detected this latest attack and what you can do to make your own org more resilient to a REvil attack. What is REvil? REvil is a well-known ransomware group operating a Ransomware-as-a-Service (RaaS) program since early 2019. Given that initial access to a target organization is the job of RaaS affiliates contracted by the core REvil group, the delivery and initial infection vectors vary. But they\u2019ve been known to include phishing, the exploitation of known vulnerabilities in publicly accessible assets and collaboration with botnet owners who sell existing infections to REvil affiliates. In recent REvil campaigns , attackers deployed a modified version of Cobalt Strike\u2019s BEACON agent to compromised systems before escalating privileges and moving laterally in the target environment. Once REvil has administrator-level privileges inside an organization, they\u2019ll deploy REvil ransomware, aka SODINOKIBI or BLUECRAB. What\u2019s new about this particular REvil campaign? This most recent campaign is similar to activity we saw in fall 2020, where users visit a number of compromised yet legitimate third-party websites and are redirected to a Question & Answer (Q&A) forum instructing them to download a ZIP file that contains a malicious JScript file. It appears as though users weren\u2019t directed to these fake forum posts via phishing emails, but instead through their own Google searches. This suggests that the attackers responsible for this campaign invested considerable effort into boosting these malicious pages higher in Google result rankings. Many of these pages align with themes related to legal topics, while others talk about international defense agreements or even cover letter samples. In short, there\u2019s a wide range of topics being showcased on these various sites. The JScript file, when run, deploys a BEACON stager to the system. So far we\u2019ve seen REvil targeting users in Germany and in the United States. How to detect REvil activity in your own environment There are a few activities you can alert on in an effort to detect REvil activity: Alert when you see wscript.exe or cscript.exe execute a .vbs, .vbscript or .js file from a Windows user profile. If this generates too many false positives, try adding the condition where the wscript.exe or cscript.exe process also initiates an external network connection. Alert when wscript.exe or cscript.exe execute a .vbs, .vbscript or .js file from a Windows user profile and the process spawns a cmd.exe process. Alert when you see Windows PowerShell execute a base64 encoded command and the process initiates an external network connection. REvil process example How to remediate if you think you\u2019re affected #1: Contain the host(s) Isolate the host in question to remove attacker access. #2: Start the re-image Attempting to manually clean the fileless persistence mechanism used by this campaign may lead to re-infection on startup if not done properly. That\u2019s why re-imaging is critical. 
#3: Scope the environment for additional infections The PowerShell command executed as part of this activity occurs at the time of initial installation as well as at startup after persistence is established. This means that it\u2019s extremely important to determine when the initial download of the zipped JScript file occurred and compare that to the timestamp associated with the detected PowerShell activity. Network traffic destined for known command and control domains also provides a good way to timeline activity related to this campaign in your environment. If you discover that this infection persisted in your environment for more than a short period of time, it\u2019s possible that attackers already moved laterally within your environment and/or escalated their privileges within your Active Directory domain. RaaS actors typically wait until they have the privileges necessary to deploy ransomware to a large portion of your environment at once before moving on from the persistent implant portion of the attack lifecycle and actually deploying the ransomware. How to protect yourself against a REvil ransomware attack There are actions you can take in your environment today to better protect your org against a REvil ransomware attack: Configure Windows Script Host (WSH) files to open in Notepad Prevent the double-click of evil JavaScript files. Configure JScript (.js, .jse), Windows Scripting Files (.wsf, .wsh) and HTML Application (.hta) files to open with Notepad. By associating these file extensions with Notepad, you mitigate common remote code execution techniques. Pro tip: PowerShell files (.ps1) already open by default in Notepad. Enable PowerShell Constrained Language mode Constrained Language mode mitigates many PowerShell attacks by removing advanced features that these attack tools rely on, such as COM access and .NET and Windows API calls. The language mode of a PowerShell session determines which elements can be used in the session. Don\u2019t expose RDP directly to the internet Don\u2019t expose RDP services directly to the internet. Instead, consider putting RDP servers or hosts behind a VPN that\u2019s backed by two-factor authentication (2FA). Create and test backups of data Consider creating and testing backups of data within your org as part of your IT policy. Regularly creating valid backups that aren\u2019t accessible from your production environment will minimize business disruptions while recovering from ransomware attacks or data loss. Want to find out when we share updates from our SOC on attack trend alerts just like this one? Subscribe to our EXE blog to get our latest posts sent directly to your inbox."
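To make the first detection idea above concrete, here's a minimal sketch of it as a filter over process telemetry. The event schema is hypothetical, so adapt the field names to whatever your EDR or SIEM exports:

```python
# Sketch: flag wscript/cscript executing a script from a user profile and
# then making an external network connection (the first detection idea
# above). The event schema here is hypothetical -- adapt to your EDR/SIEM.
SCRIPT_HOSTS = {"wscript.exe", "cscript.exe"}
SCRIPT_EXTS = (".js", ".vbs", ".vbscript")

def is_suspicious(event: dict) -> bool:
    proc = event.get("process_name", "").lower()
    cmdline = event.get("command_line", "").lower()
    return (
        proc in SCRIPT_HOSTS
        and any(ext in cmdline for ext in SCRIPT_EXTS)
        and "\\users\\" in cmdline                   # ran from a user profile
        and event.get("external_connection", False)  # cuts false positives
    )

# Toy event resembling the REvil chain described above:
event = {
    "process_name": "wscript.exe",
    "command_line": r"wscript.exe C:\Users\jdoe\Downloads\forum_answer.js",
    "external_connection": True,
}
print(is_suspicious(event))  # True
```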
6
+ }
attacker-in-the-middle-phishing-how-attackers-bypass-mfa.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Attacker-in-the-middle phishing: how attackers bypass MFA",
3
+ "url": "https://expel.com/blog/attacker-in-the-middle-phishing-how-attackers-bypass-mfa/",
4
+ "date": "Nov 9, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Attacker-in-the-middle phishing: how attackers bypass MFA Security operations \u00b7 4 MIN READ \u00b7 ANDREW BENTLE \u00b7 NOV 9, 2022 \u00b7 TAGS: MDR TL;DR: Credential phishing is an established attack mode, but multi-factor authentication (MFA) made it much harder on hackers. A new tactic\u2013called \u201cattacker-in-the-middle\u201d\u2013can be effective at end-running MFA defenses. This case examines a recent AitM attack on one of our customers and provides useful advice on how to detect it in your own environment. Credential phishing is nothing new\u2013fooling users into giving away their logins and passwords has been hackers\u2019 bread and butter forever. But until recently the effects of credential phishing could be mitigated by using multi-factor authentication. The attacker might get a password, but the second factor is a lot more difficult. Also not new: attackers finding techniques to bypass security measures. One popular way around MFA is known as attacker-in-the-middle (AitM), where the user is tricked into accepting a bogus MFA prompt. What happened? AitM techniques look identical to regular credential phishing at first. Typically, an email directs the user to a fake login page, which steals credentials when the user attempts to sign in. With normal credential phishing, this fake login page has served its purpose\u2013it stores the credentials and the attacker will attempt to use them at a later time. AitM phishing does something different, though; it automatically proxies credentials to the real login page, and if the account requires MFA users get prompted. When they complete the MFA, the web page completes the login session and steals the session cookie. As long as the cookie is active, the attacker now has a session under the victim\u2019s account. Our SOC recently saw this technique used to bypass MFA and detection in a customer\u2019s environment. The attackers harvested a user\u2019s credentials and login session into their organization\u2019s Microsoft 365 portal using AitM techniques. The attacker evaded detection for 24 days until a suspicious Outlook rule was made in the compromised user\u2019s inbox. Our analysts identified the source IP as a hosting provider and noticed that no login events were seen from the IP address. They followed the related session ID to its earliest date and found that the session originated from another IP address, 137[.]220[.]38[.]57, nearly a month before. This address is related to a hosting provider (Vultr Holdings) and was anomalous for the user account. But something stranger was going on: not only was MFA satisfied from this login, but it was also supported by the Active Directory (AD) agent on the user\u2019s host. This didn\u2019t make sense\u2013how could a login from a random hosting provider IP address use the AD agent tied to the user\u2019s managed host? This is something we might see when a user logs in while using a VPN or proxy, but our analyst\u2019s OSINT research and Expel\u2019s automatic IP enrichment didn\u2019t connect this address with a VPN provider, so we kept digging. We checked logs from their Palo Alto firewall and DNS requests from the host in Darktrace and found DNS requests to rnechcollc[.]com with DNS A records pointing to 137[.]220[.]38[.]57, the same IP the first login was from. 
The rnechcollc[.]com site hosted an AitM credential harvesting page that proxied the credentials (and even the AD agent authentication) from the user\u2019s on-premises host through the Vultr Holdings infrastructure and onto the organization\u2019s Microsoft 365 portal. The page then recorded the session cookie and the attacker continued the active session from a VPN provider for the next 24 days. Confirming AitM in your environment AitM can be tricky to confirm, especially without network logs. But there are a few ways to investigate whether a compromise originated from an AitM credential harvesting page. Investigating using only cloud logs: this is the worst-case scenario. All you have are the logs from the cloud providers, be it Okta, Microsoft 365, or any number of other platforms, and the goal will be to determine the initial login IP address by following the session ID back to its earliest point. The initial login will likely, but not necessarily, be from an IP address associated with a hosting provider. Check passive DNS entries associated with the IP address (VirusTotal and PassiveTotal are good tools for this). Check the reputation on the recent DNS entries related to the IP address through OSINT\u2013it may be a known indicator of AitM, as was the case with the rnechcollc[.]com domain. Investigating using network and cloud logs: like the above method, you\u2019ll need to identify the initial login IP address through cloud logs. Follow the session ID back to the initial login and take note of the IP address. Check your firewall logs for URLs associated with the IP address. Confirming connections from within your environment to phishing domains associated with the initial login IP address is a strong indicator of AitM methodology. Investigating using EDR and cloud logs: again, identify the initial login IP address through the cloud logs. Follow the session ID back to the initial login and take note of the IP address for the initial login. Check EDR logs for network connections to the IP address. Some EDRs, like CrowdStrike and Defender for Endpoint, will record domain names related to IP connections. Confirming connections from within your environment to phishing domains associated with the initial login IP address is a strong indicator of AitM methodology. Things you can do to keep your org safe Don\u2019t discount the effectiveness of MFA\u2013it\u2019s still one of the most effective security tools you can implement in your organization. While AitM can bypass MFA, it represents a small portion of the credential phishing we\u2019ve seen in the wild to date. Consider implementing policies to shorten the time that session tokens can remain active; if attackers lose their sessions, they\u2019ll need to re-phish the user to get them back, or at least get them to accept another MFA prompt. Implement conditional access policies to prevent logins from unwanted countries, noncompliant devices, or untrusted IP spaces. Additionally, services like our Managed Phishing can identify malicious credential harvesting emails, inform your team of campaigns targeting your organization, and help block attacks before they succeed."
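To make the "follow the session ID back" step concrete, here's a minimal sketch over generic sign-in events. The field names are hypothetical, so map them onto your Okta or Microsoft 365 sign-in log schema:

```python
# Sketch: walk a session ID back to its earliest sign-in event to find the
# originating IP (the AitM pivot described above). Field names are
# hypothetical -- map to your Okta / Microsoft 365 sign-in log schema.
from collections import defaultdict

events = [
    {"session_id": "abc123", "timestamp": "2022-10-01T09:14:00Z", "ip": "137.220.38.57"},
    {"session_id": "abc123", "timestamp": "2022-10-12T13:02:00Z", "ip": "203.0.113.10"},
    {"session_id": "abc123", "timestamp": "2022-10-25T08:47:00Z", "ip": "203.0.113.10"},
]

sessions = defaultdict(list)
for e in events:
    sessions[e["session_id"]].append(e)

for sid, evts in sessions.items():
    evts.sort(key=lambda e: e["timestamp"])  # ISO-8601 sorts lexically
    first, last = evts[0], evts[-1]
    print(f"session {sid}: first seen {first['timestamp']} from {first['ip']}")
    if first["ip"] != last["ip"]:
        print("  later activity came from a different IP -- investigate the first one")
```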
6
+ }
back-in-black-hat-black-hat-usa-2022day-1-recap.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Back in Black (Hat): Black Hat USA 2022\u30fcDay 1 Recap",
3
+ "url": "https://expel.com/blog/black-hat-usa-2022-day-1-recap/",
4
+ "date": "Aug 11, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Back in Black (Hat): Black Hat USA 2022\u30fcDay 1 Recap Expel insider \u00b7 4 MIN READ \u00b7 ANDY RODGER \u00b7 AUG 11, 2022 \u00b7 TAGS: Company news Black Hat is more than a collection of successful events held around the world; it\u2019s a community. And if you needed a reminder of that fact, Black Hat USA 2022 will shake those cobwebs free! While Black Hat did hold its 2021 event at Mandalay Bay in Las Vegas, this year brings more people, more exhibitors, and more energy. From the moment Jeff Moss, founder of Black Hat, took the stage during the first keynote, community has been a common thread throughout the presentations. Moss kicked things off noting that 2022 marks the 25th year of Black Hat USA, and brought the crowd back in time to the conference\u2019s humble origins. At that time, Moss simply reached out to folks in his network to see if they\u2019d want to speak. (Did you know that he considered calling the event \u201cThe Network Security Conference\u201d?) Over the last quarter-century, the community of security practitioners has grown right alongside the expanding threat landscape. Until recently, Moss had thought there were three \u201cteams\u201d when it came to cybersecurity: Team Rule of Law, Team Undecided, and Team Authoritarian. Some teams were following the rules, others were limiting access to information, and there were even a few more somewhere in the middle. But now he sees a new team: a community of super-empowered individuals and organizations. These were people much like the attendees of Black Hat, who take action to right the wrongs in the world. For example, Moss noted how some companies simply stopped doing business with Russian companies in the wake of the Ukraine invasion. Some turned off access by Russian companies to their services and others shut down their websites. He used this example to remind attendees that this community has a significant influence in the world. Following Moss was Chris Krebs of the Krebs Stamos Group, and former director of the Department of Homeland Security\u2019s Cybersecurity and Infrastructure Security Agency (CISA). Krebs spoke about his time \u201cwandering the wilderness\u201d over the past few years, and talking to people in and outside the U.S. across a range of roles about their security challenges and concerns. He kept hearing three questions: Why is it so bad right now? What do you mean it\u2019s going to get worse? What can we do about it? These aren\u2019t easy questions to answer, but he sees the solution in this community of people who have the ability to make positive changes based on its principles. Krebs covered a lot of ground during his roughly 45 minutes on stage, but if there was a single takeaway, it\u2019s that he holds a lot of hope for cybersecurity and its role in improving the world. Black Hat explores those huge macro issues, but it also looks at smaller ones, too\u2014the ones that practitioners face day-in and day-out to better protect their organizations. Kyle Tobener led a session on taking a \u201charm reduction\u201d approach to cybersecurity best practices. Did you know that most organizations\u2019 security teams employ a \u201cuse reduction\u201d approach to security best practices? To quote the Five Man Electrical Band song \u201cSigns\u201d: Do this, don\u2019t do that, can\u2019t you read the signs? Tobener argued that simply telling people what to do isn\u2019t effective. 
In fact, he shared research that showed how this approach can have the opposite effect. He instead advocates for harm reduction, a commonly used approach in healthcare. Harm reduction offers a set of practical strategies and ideas aimed at reducing the negative consequences associated with various human behaviors. It focuses on the outcomes, not the original behaviors. His advice? Remove \u201cdon\u2019t do that\u201d from your vocabulary. Replace it with, \u201cTry not to do that, but if you do, then here are some ways to be safe.\u201d Adam Shostack of Shostack and Associates took the stage virtually in his session titled, \u201cA Fully Trained Jedi You Are Not.\u201d Shostack pointed out that while the Star Wars movies usually focused on the Jedi and their contribution to the rebellion, non-Jedi characters made huge contributions. He emphasized that the field of cybersecurity needs people of all different skill sets and experience levels, and the field isn\u2019t limited to Jedi-level cybersecurity masters. Instead he shared that a mix of more targeted training and education combined with an effort to \u201cshift left\u201d (incorporating security into the development process) can solve a lot of cybersecurity issues and better support developers and security personnel alike. After all, it takes more than Jedi knights for a successful rebellion. Burnout can have a major impact on cybersecurity professionals. Stacy Thayer, Ph.D., knows this all too well, and shared her knowledge on the topic in her session, \u201cTrying to be Everything to Everyone: Let\u2019s Talk About Burnout.\u201d A number of factors contribute to burnout in cybersecurity. Dr. Thayer named a few: High levels of mental workload Anticipating cyber-attacks A shortage in staffing and an increase in workload A struggle to find one\u2019s place within the organization Work is often not appreciated in the organization Dr. Thayer says that the usual advice for dealing with burnout is completely ineffective. Take a vacation? Sure! I\u2019ll just have more work waiting for me when I get back. Go to the gym? Okay, I feel like absolute garbage but sure let\u2019s get on the treadmill! Stop caring so much? Not possible! According to Dr. Thayer, the more that you learn about yourself and your relationship with burnout and your hidden triggers, the better you\u2019ll be at managing it. These are just a few of the topics that presenters covered on day one of the event. Presenters and attendees shared so much more in sessions and on the business hall floor, but if there\u2019s anything that\u2019s obvious about Black Hat USA 2022, it\u2019s that the community here is alive and well, and poised for great things."
6
+ }
bec-and-a-visionary-scam.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "BEC and a \u201cVisionary\u201d scam",
3
+ "url": "https://expel.com/blog/bec-and-a-visionary-scam/",
4
+ "date": "Jan 10, 2023",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG BEC and a \u201cVisionary\u201d scam Tips \u00b7 2 MIN READ \u00b7 SHARON BURTON \u00b7 JAN 10, 2023 \u00b7 TAGS: MDR What does business email compromise (BEC) have to do with the vanity anthology scam? \u201cTo be part of this exciting project, all you have to do is pay $700 by Jan 1!\u201d I\u2019m a writer. I\u2019m also a woman in tech. When I saw the call for writers in a Reddit channel, looking for women in tech to write an essay about their career for an upcoming book, I was interested. Very interested. I filled out the Google form. On December 22, I got a group email announcing a project meeting at 6pm that day. A little short notice and the message didn\u2019t indicate the time zone, but OK. Responding back to the group, I clarified the time zone and decided I could attend. We met on Google Teams. The woman running the meeting seemed uncertain how to work a virtual meeting, which seemed strange because she billed herself as the chief information officer (CIO) of a large organization and, well, it\u2019s 2022. \u201cWe\u2019re always learning!\u201d she announced to the 20 or so women as she struggled to get the video and screen share to work. She devoted the first 15 minutes of the presentation to her professional background, which demonstrated that she was a \u201cVisionary.\u201d She even referred to herself that way on the typo-ridden slides. Visionary, upper case. She covered the many benefits of the book project for this select group. Visibility in our profession, authority, marketing, inspiration, you can\u2019t be what you can\u2019t see. Our stories would inspire generations. Generations. By the time she got to the part where we needed to give her $700 nonrefundable dollars by Jan 1st to be included in this inspiring project\u30fcor $100 now and three easy payments!\u30fcI knew we were in the middle of a scam. Specifically, the vanity anthology scam . Most professional writer organization websites cover it in detail. Different con, same rules So why should this story interest cybersecurity people? I\u2019m fortunate to work for a security company. When this scam presented itself, I\u2019d just completed our annual internal security training, and was hyper-vigilant about everything, so I saw this swindle for what it was. Because we\u2019re assaulted by an array of ad, marketing, economic, and partisan pitches every day, we\u2019ve evolved pretty good BS detectors. But scammers are evolving too. In this case, the Visionary employed tactics very similar to what we commonly see in BEC scams. Sense of urgency: the first meeting happened just as most people were starting their holiday break, with all the bustle that goes with it. We were given about six hours notice of the meeting. Payment was due in a week. This was all very fast during a time of year where people are already overloaded with commitments and tasks. Typos and other language issues: writers are especially sensitive to typos and dropped words because, well, words are our air. The slides had typos and missing words. Not what I expect of a CIO. Uncertainty in using basic tech: the Visionary didn\u2019t know how to share her screen initially. In 2022. After two years of remote pandemic work. Additionally, she was a CIO. A basic familiarity with simple conferencing and presentation is expected. And this was for women in tech, so technological ability should be inherent. 
Person of authority: She used her r\u00e9sum\u00e9 to assert credibility and emphasized how important the Visionary is in the world of tech. Too good to be true: being included in this project would enhance our careers and inspire generations. She said the volume would be an Amazon Best Seller. That\u2019s a lot for any book, much less one that\u2019s essentially self-published. In the end, the message is that people are people and bad guys are bad guys. The lessons we learn from \u201creal life\u201d apply to the cyber world, and vice versa. My awareness of BEC tactics helped me sniff out the Visionary\u2019s grift. Take your sensitivity to the iffy product and service claims you encounter in everyday life with you when you log in. And maybe that\u2019s how we inspire generations."
6
+ }
behind-the-scenes-building-azure-integrations-for-asc-alerts.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Behind the scenes: Building Azure integrations for ASC alerts",
3
+ "url": "https://expel.com/blog/building-azure-integrations-asc-alerts/",
4
+ "date": "Feb 9, 2021",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Behind the scenes: Building Azure integrations for ASC alerts Engineering \u00b7 12 MIN READ \u00b7 MATTHEW KRACHT \u00b7 FEB 9, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools If you\u2019ve read the Azure Guidebook: Building a detection and response strategy , you learned that we\u2019ve built our own detections and response procedures here at Expel. Missed the guidebook? Download it here But what we didn\u2019t share in that guidebook is how we figured a lot of those things out. Anytime you learned some lessons the hard way, it makes for a long story; which is exactly what I\u2019m sharing with you here. This story begins with our need to replace a third-party tool we were using to pull logs from various cloud providers. Building it ourselves gave Expel new access to data, improved monitoring and ended up allowing us to update our entire detection strategy for Azure Security Center (ASC) alerts and Azure in general. Over the years that third-party application started to creak and groan under the pressure of our needs. Something needed to change. Let\u2019s connect That\u2019s where I came in. (Hi! I\u2019m Matt and I\u2019m a senior software engineer on Expel\u2019s Device Integrations [DI] team.) Building an integration isn\u2019t a simple or linear process. It\u2019s why we warned Azure guidebook readers to go into the process with eyes wide open. It\u2019s also why we harp on the importance of team communication. I\u2019ll walk you through how we built an integration on top of Azure signal to help our analysts do their jobs effectively and share some lessons learned along the way. Finding the right signal At Expel, building an integration is a collaborative effort. The DI team works with the Detection and Response (D&R) team and the SOC to identify sources of signal and additional data to include in our alerts. Early on in the process the DI and D&R teams evaluate the technology and to decide which security signals and raw events are accessible. For Azure, all security signals revolve around ASC. Once we decided on using ASC as our primary alert source, I got to work building out the data pipeline. D&R got to work generating sample alerts within our test environment. Before long we had a POC working that was generating ASC alerts within the Expel Workbench\u2122. If you don\u2019t already know, ASC provides unified security management across all Azure resources as well as a single pane of glass for reviewing security posture and real-time alerts. It\u2019s one of the primary sources of alerts across Microsoft\u2019s security solutions. But I still had to figure out the best way to access the data. The good part for Expel was that there are a lot of ways to access ASC alerts; the challenging part is that, well, there are a lot of ways to access ASC alerts. In the end, we went through three different approaches for accessing these alerts \u2013 each with their pros and cons: Microsoft Graph Security API Azure Log Analytics API Azure Management API When we began development of our Azure integration, the Security API was a relatively new offering within Microsoft Graph. It\u2019s intended to operate as an intermediary service between all Microsoft Security Providers and provides a common schema. 
Microsoft Graph Security API presents two advantages for Expel: The single point of contact for all Microsoft Security alerts allows us to easily adapt and expand our monitoring services as our customers\u2019 needs and tech stacks change, without requiring our customers to enable us with new permissions or API keys. The common alert schema of the Security API means we only have to adapt one schema to Expel\u2019s alert schema rather than one schema per Microsoft product offering. We already used Microsoft Graph Security API for our Azure Sentinel integration, so we were poised to take advantage of the extensibility of the API by simply adding ASC to the types of alerts we were retrieving \u2013 or so we thought. Our SOC analysts walked us through comparisons between ASC alerts in the Expel Workbench\u2122 and those same alerts within the ASC console. It quickly became apparent that the data we retrieved from the Graph Security API was missing key fields. We had previously used Azure Log Analytics (ALA) to enrich alerts for our Azure Sentinel integration and thought we might be able to do the same for ASC. I worked with the analysts to find different sources of data so we could fill in those data gaps from the Graph Security API. With this approach, we could find almost all of the alert details not provided by the Graph Security API. The downside \u2013 and eventual death knell \u2013 of this approach was that ASC alerts are not forwarded to ALA by default. Forwarding ASC alerts would require extra configuration steps for our customers as well as potentially increased ALA costs. The following chart compares which ASC fields were found via each API for a single ASC alert for anomalous data exfiltration. Note that each ASC alert type will have different fields, but this chart tracks closely with our general experience of data availability across these APIs. A table showing, for anomalous activity related to storage blobs, what fields in the alert are or aren\u2019t present based on how you access the alert As the saying goes: when one Azure door closes another always opens. We couldn\u2019t get the fields we needed from the Graph Security API and we couldn\u2019t reliably find those fields within Azure Log Analytics, but we still had the Azure Management API to welcome us with open arms. The ASC console uses the Azure Management API, so we knew we could get data parity using that API. The reason we avoided it initially was that the normalization would require a lot more effort. Each alert type had its own custom fields (see the properties.ExtendedProperties field) and there wasn\u2019t a set schema for these fields. Fortunately, we had enough examples of these alerts and could use those examples to drive our normalization of Azure alerts. In the end, data parity and SOC efficiency are a higher priority for us than some upfront normalization pain, so we went down the Azure Management API route (pun intended). Scaling our SOC If you\u2019ve ever worked with ASC, you probably also know that managing the alerts can feel a little overwhelming. Most of the alerts are based around detecting anomalous behavior or events (like unusual logins or unusual amounts of data extracted). Note that these alerts are generated from different Azure subsystems or resources, so as your environment changes, so do the types of alerts you\u2019ll see. Microsoft is also constantly improving and updating these alerts, so you might also find yourself handling \u201cpreview\u201d alerts. And how do I know all this? 
I didn\u2019t until we started to scale up our POC integration. As soon as our analysts started seeing ASC alerts coming into Expel Workbench\u2122, we immediately got feedback around the lack of context available in the alert. Who is the \u201csomeone\u201d that extracted data from the storage account? What are their usual interactions with that storage account? What other user activity was there outside of Azure Storage? These are all questions that our analysts would need to answer in order to act. The example below shows what little context we had around the ASC alert. Preview alert with missing storage operation data (ex. Extracted Blob Size) Without context, our analysts will pivot to the source technology to look for additional fields to help them make a decision. In this case, they log in to the Azure portal to get more info about the alert. This experience isn\u2019t ideal for our analysts. As a side note, pivots to console (when an analyst leaves Expel Workbench\u2122 to get more details on an alert) are a monthly metric we present to the whole executive team. We track how many times a day, week and month analysts are pivoting (per each vendor technology we support) because it\u2019s an easy indicator that there\u2019s room for improvement. My team works hard to provide our analysts with the information they need to quickly make good decisions and take action, rather than spending their time doing mundane tasks like logging into another portal. Any DI team member will tell you that their worst fear is writing an integration that creates extra work for (read as: annoys) the SOC. But most importantly, an efficient SOC helps us support more customers \u2013 and provide better service. For Azure in particular this meant addressing the noise inherent in having a large number of alert types and also adding more context around the anomaly-based alerts. Reducing the noise We continuously work to improve the signal of alerts with all of our integrations. ASC, however, was difficult because of the outsized impact configurations have on the variety of alerts you get. For instance, ASC alerts are not generated unless a paid service called Azure Defender is enabled. Azure Defender can be enabled per Azure subscription, per resource type such as Azure Defender for Servers and, in some cases, per individual resource. The configuration of Azure Defender along with the different underlying resources being monitored created a lot of variance in the alerts. As we transitioned from our test Azure environment to real cloud environments, we quickly found this out. Our D&R team generated plenty of ASC alerts, but in a live environment we received \u201cpreview\u201d (i.e. beta) alerts and duplicate alerts from Azure AD Identity Protection or Microsoft Cloud App Security, along with alerts from Azure resources that we couldn\u2019t set up in our environment. I was able to deduplicate the ASC alerts from other Azure Security Providers (one of the pros of the Security Graph API is that it will do this for you). The D&R team was able to update detections so that we can ignore known low-fidelity alert types and preview alerts. But, even with all of these improvements, we can still get new alert types. As with any tuning effort, the work is ongoing. But we at least solved the known issues. Adding moar context By far the biggest challenge with our ASC integration was getting enough context around an alert so that our analysts could quickly understand the cause of the alert and make triage decisions. 
After iterating over all three REST APIs to address the data gaps, we eventually got to data parity between Expel Workbench\u2122 and ASC\u2019s console. However, our analysts still didn\u2019t have the context they needed to understand ASC alerts based around anomaly detection. Enter the D&R team. With the help of our SOC analysts, they took the lead on deciphering not only the breadth of alerts ASC generated but also what types of log data were needed to understand each of these alerts. For instance, when we got an ASC alert warning that \u201can unusual amount of data was extracted from a storage account,\u201d D&R built automation in the Expel Workbench\u2122 that uses platform log data to show analysts exactly what the user\u2019s \u201cusual\u201d actions were. Helpful, right? You can see an example below. Example of automated decision support for an ASC alert in Expel Workbench\u2122 That not only bridged the context gap of the ASC alerts but also helped provide a framework around how our analysts triage ASC alerts. And as a bonus it didn\u2019t require them to perform any additional manual steps or pivot into the Azure portal. Is this thing on? Finding the right alert signal and making sure our SOC can triage that signal efficiently are the bread and butter of any integration. However, getting those right doesn\u2019t necessarily mean we\u2019ve created a great integration. Alongside these priorities, we\u2019re focused on the operational aspects of the integration: creating a good onboarding experience, ensuring we have optimal visibility (health monitoring) and reducing storage costs. Improving visibility When building the Azure integration, we added plenty of metrics to help us profile each environment. Some technologies we integrate with have a fairly narrow range of configuration options, but when it comes to monitoring an entire cloud environment that range becomes very large, very fast. As we onboarded customers, we were not only looking at performance metrics but also monitoring subscription totals, resource totals and configurations of each resource. Example customer Azure Subscription totals with Azure Defender configuration settings The image above shows a sampling of a few of our Azure customers, the number of subscriptions we\u2019re actively monitoring and the various Azure Defender configuration settings we detected. You can see there\u2019s a broad range of total subscriptions, and Azure Defender is in various states across customers and subscriptions. We knew these metrics would help us provide insight to customers on how to maximize our visibility; we just didn\u2019t realize how quickly that was going to occur. Right away we started catching misconfigurations \u2013 disabled logs, Azure Defender not being enabled for any resources, missing subscriptions, etc. We could do as much alert tuning or detection writing as we wanted, but without the proper visibility it wouldn\u2019t be much use. Example Expel Workbench\u2122 warning of a potential device misconfiguration You might be noticing a theme: the importance of feedback. And our feedback loop doesn\u2019t just include our internal teams. Ensuring our customers are on the same page and can share their thoughts is critical to making sure we\u2019re doing our job well. So, as we onboarded customers to the integration, our Customer Success team jumped in to work with customers to find ways to improve their configuration. 
They then ensured each of these customers understood the way our Azure monitoring works and the value of these configuration changes. As the Customer Success team worked, the Turn on and Monitoring team (this is Expel\u2019s internal name for our feature team focused on making onboarding simple, intuitive and scalable, along with proactively detecting problems with the fleet of onboarded devices Expel integrates with) used this feedback to build out a way for us to provide automatic notifications for common configuration issues. Example Ruxie notification for a misconfigured Azure Expel Workbench\u2122 device Did you forget to provide access for us to monitor that subscription? No problem. We automatically detect that and provide you a notification along with steps to fix the issue within minutes of creating the Azure device in Workbench\u2122. Keeping costs in check There are design decisions that have very real cost implications as you build out integrations with an IaaS provider. Azure was no different. Requiring customers to enable Azure Defender increases their Azure bill. Requiring customers to forward resource logs to Azure Log Analytics increases their Azure bill. If we only integrate with Azure Sentinel, that increases our customer\u2019s Azure bill. And so on\u2026 When it comes to these decisions, we lean towards reducing direct cost to customers. We\u2019ve already discussed how important log data is for providing context around ASC alerts. Azure Storage log data is particularly important. This log data is basically a bunch of csv files within Azure Storage itself. If you want to search this data, you have to forward it to a log management solution within the Azure ecosystem \u2013 that means Log Analytics. During the development of the integration, the best guidance from Microsoft for forwarding logs was to use a PowerShell script to pull storage log data, translate it into JSON format and upload it to a Log Analytics workspace where the data can then be searched or visualized. As of this writing, there is a preview Diagnostic Settings feature for Azure Storage accounts that allows automatic forwarding of logs to ALA via the Azure console. Even though forwarding the logs to ALA is becoming easier, storing these logs in ALA can be expensive. In some cases, our customers would have paid more than $300 a day, or over $100k a year, to store their Azure Storage logs within ALA. Instead of requiring customers to foot the bill for the storage and also adding yet another configuration step, we decided to directly ingest those logs into our own backend log management solution. This helped us solve the cost problem across all our customers with a single solution. A typical approach to solving this problem is to figure out which logs you don\u2019t need and then filter them out prior to ingestion. In the case of Azure Storage, each log entry is a storage operation, so the filter would be dropping benign operations during ingestion. This approach is difficult for two reasons. The first is that we\u2019re dealing with a large variety of Azure environments. Determining a set of benign operations may be possible for a single environment, but the odds aren\u2019t good for determining benign operations across all customer environments. The second is that these logs helped provide context around detections of anomalous behavior. Removing whole swaths of logs would make understanding what was normal versus abnormal more difficult. 
To get around this, I worked with D&R to create a log aggregation approach that would decrease the log volume without filtering out whole chunks of logs or reducing our context. The idea was that we could determine what log entries pertained to the \u201csame\u201d operation but at different points in time. If the operations were the \u201csame\u201d then we would combine them into a single log record with added operation counts and time windows. Based on the operation type we could loosen or tighten our definition of \u201csame\u201d in order to provide better aggregation performance. In the end, we were able to achieve a 93 percent reduction in volume across all of the storage accounts we were monitoring while still maintaining the contextual value of the logs themselves. This was no small feat considering the diversity of Azure Storage use cases, and thus log content, across our different Azure customers. Estimated costs for searchable Azure Storage logs:
Device | Azure Storage Accounts | Raw Volume (MB/day) | Aggregate Volume (MB/day) | Raw Est. ALA Cost ($/yr) | Aggregate Est. ALA Cost ($/yr) | Reduction (%)
cb7ebb31-c17f-4b73-9962-db585b94f58d | 68 | 173268 | 2917 | 138547 | 2332 | 98.32
6321c95f-b6c9-4e65-9a18-8760a0846387 | 24 | 54551 | 10047 | 43619 | 8033 | 81.58
0c839cc8-90ae-4733-b9f5-992f5461ed2c | 168 | 19287 | 626 | 15422 | 500 | 96.76
afe394af-609e-425f-a075-197047aa1875 | 5 | 15718 | 5027 | 12569 | 4019 | 68.02
f3b0a370-d1d0-4160-a3fc-06d5ed400797 | 7 | 1569 | 9 | 1254 | 7 | 99.45
8e7a3c33-09be-468b-beeb-b51bcc524c06 | 58 | 49 | 13 | 39 | 10 | 73.58
503f94e4-7322-42c2-8794-8cbc51494a2e | 21 | 40 | 17 | 32 | 14 | 56.90
3d77e130-23f9-4db7-a0aa-8212b2f513bd | 2 | 17 | 2 | 14 | 2 | 88.47
1bc7556a-1a54-45f6-979a-77ab57b2af0f | 1 | 16 | 2 | 13 | 1 | 88.68
Above is the table we built internally to track various customer storage costs as we worked to reduce their cost and still capture relevant logs to enable detection and response. Teamwork: Always Azure bet Our goal is to always provide high-quality alerts with as much context and information as possible to both our analysts and customers. The collective expertise of our teams and their ability to react and solve problems in real-time helped us not only replace the third-party application, but also create an entirely new detection strategy around ASC that improves visibility and coverage for our existing customers, and improves our analysts\u2019 experience \u2013 creating greater efficiency across the board. Remember the feedback loop I mentioned? Like all integrations we build, we don\u2019t consider the integrations to ever truly be complete. There\u2019s always another company behind the integrations that is making changes (hopefully improvements) that affect Expel. That\u2019s another reason communicating in real-time is key. Each of Expel\u2019s internal teams has the ability to drive changes to the integration or detection strategy. If you\u2019re considering building your own detections on top of Azure signal, I hope this post gave you a few ideas (and maybe even saved you some time AND money). Want to find out more about Azure signal and log sourcing? Check out our guidebook here."
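As an aside, if you want to kick the tires on the Azure Management API route described above, here is a minimal Python sketch of polling ASC alerts from the Microsoft.Security/alerts endpoint. This is not Expel's pipeline code: the subscription ID and bearer token are placeholders, the api-version shown is one example value, and property-name casing can vary across api-versions.

```python
import requests

# Placeholder inputs -- swap in your subscription ID and a valid Azure AD
# bearer token scoped to https://management.azure.com/.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
TOKEN = "<azure-ad-bearer-token>"

def list_asc_alerts(subscription_id, token):
    """Page through Azure Security Center alerts via the Azure Management API."""
    url = ("https://management.azure.com/subscriptions/"
           f"{subscription_id}/providers/Microsoft.Security/alerts")
    params = {"api-version": "2022-01-01"}  # assumption: use the version you've tested
    headers = {"Authorization": f"Bearer {token}"}
    alerts = []
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        alerts.extend(body.get("value", []))
        url, params = body.get("nextLink"), None  # follow pagination until exhausted
    return alerts

for alert in list_asc_alerts(SUBSCRIPTION_ID, TOKEN):
    props = alert.get("properties", {})
    # The per-alert-type custom fields discussed above live in this property bag
    # (casing varies by api-version: ExtendedProperties vs. extendedProperties).
    print(props.get("alertDisplayName"), props.get("extendedProperties"))
```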
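And since the aggregation trick is the part readers ask about most, here is a toy Python version of the idea: collapse storage-log entries that describe the "same" operation into one record that keeps a count and a time window. The grouping key below is an illustrative assumption; as the post notes, the real definition of "same" was tuned per operation type.

```python
from collections import defaultdict

def aggregate(entries):
    """Collapse raw storage-log entries into per-operation records.

    Each output record keeps an operation count plus first/last-seen times,
    so volume drops without losing the behavioral context.
    """
    buckets = defaultdict(list)
    for entry in entries:
        # Illustrative grouping key -- loosen or tighten per operation type.
        key = (entry["operation"], entry["principal"], entry["target"], entry["status"])
        buckets[key].append(entry["time"])
    return [
        {
            "operation": op, "principal": who, "target": target, "status": status,
            "count": len(times),       # how many raw entries collapsed into this one
            "first_seen": min(times),  # time window bounds preserve the
            "last_seen": max(times),   # "what was normal" context
        }
        for (op, who, target, status), times in buckets.items()
    ]

# 100 identical blob reads collapse into a single aggregated record.
raw = [{"operation": "GetBlob", "principal": "app1", "target": "reports/a.csv",
        "status": 200, "time": t} for t in range(100)]
print(len(aggregate(raw)))  # -> 1
```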
6
+ }
behind-the-scenes-in-the-expel-soc-alert-to-fix-in-aws.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Behind the scenes in the Expel SOC: Alert-to-fix in AWS",
3
+ "url": "https://expel.com/blog/behind-the-scenes-expel-soc-alert-aws/",
4
+ "date": "Jul 28, 2020",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Behind the scenes in the Expel SOC: Alert-to-fix in AWS Security operations \u00b7 8 MIN READ \u00b7 JON HENCINSKI, ANTHONY RANDAZZO, SAM LIPTON AND LORI EASTERLY \u00b7 JUL 28, 2020 \u00b7 TAGS: Cloud security / How to / Managed detection and response / Managed security / SOC Over the July 4th holiday weekend our SOC spotted a coin-mining attack in a customer\u2019s Amazon Web Services (AWS) environment. The attacker compromised the root IAM user access key and used it to enumerate the environment and spin up ten (10) c5.4xlarge EC2s to mine Monero . While this was just a coin miner, it was root key exposure. The situation could have easily gotten out of control pretty quickly. It took our SOC 37 minutes to go from alert-to-fix. That\u2019s 37 minutes to triage the initial lead (a custom AWS rule using CloudTrail logs ), declare an incident and tell our customer how to stop the attack. Jon\u2019s take: Alert-to-fix in 37 minutes is quite good. Recent industry reporting indicates that most incidents are contained on a time basis measured in days not minutes. Our target is that 75 percent of the time we go from alert-to-fix in less than 30 minutes. Anything above that automatically goes through a review process that we\u2019ll talk about more in a bit. How\u2019d we pull it off so quickly? Teamwork. We get a lot of questions about what detection and response looks like in AWS, so we thought this would be a great opportunity to take you behind the scenes. In this post we\u2019ll walk you through the process from alert-to-fix in AWS over a holiday weekend. You\u2019ll hear from the SOC analysts and Global Response Team who worked on the incident. Before we tell you how it went down, here\u2019s the high level play-by-play: Triage, investigation and remediation timeline Now we\u2019ll let the team tell the story. Saturday, July 4, 2020 Initial Lead: 12:19:37 AM ET By Sam Lipton and Lori Easterly \u2013 SOC analysts Our shift started at 08:45 pm ET on Friday, July 3. Like many organizations, we\u2019ve been working fully remotely since the middle of March . We jumped on the Zoom call for shift handoff, reviewed open investigations, weekly alert trending and general info for situational awareness. Things were (seemingly) calm. We anticipated a quieter shift. On a typical Friday night into Saturday morning, we\u2019ll handle about 100 alerts. It\u2019s not uncommon for us to spot an incident on Friday evening/Saturday morning, but it\u2019s not the norm. It\u2019s usually slower on the weekend; there are fewer active users and devices. Our shift started as we expected, slow and steady. Then suddenly, as is the case in security operations, that all changed. We spotted an AWS alert based on CloudTrail logs that told us that EC2 SSH access keys were generated for the root access key from a suspicious source IP address using the AWS Golang SDK: Initial lead into the AWS coin-mining incident The source IP address in question was allocated to a cloud hosting provider that we hadn\u2019t previously seen create SSH key pairs via the ImportKeyPair API in this customer\u2019s AWS environment (especially from the root account!). The SSH key pair alert was followed shortly thereafter by AWS GuardDuty alerts for an EC2 instance communicating with a cryptocurrency server (monerohash[.]com on TCP port 7777). 
We jumped into the SIEM, queried CloudTrail logs and quickly found that the EC2 instances communicating with monerohash[.]com were the same EC2 instances associated with the SSH key pairs that were just detected. Corroborating AWS GuardDuty alert As our CTO Peter Silberman says, it was time to buckle up and \u201cpour some Go Fast\u201d on this. We\u2019ve talked about our Expel robots in a previous post. As a quick refresher, our robot Ruxie (yes \u2013 we give our robots names) automates investigative workflows to surface more details to our analysts. In this event, Ruxie pulled up API calls made by the principal (interesting in this context is mostly anything that isn\u2019t Get*, List*, Describe* and Head*). AWS alert decision support \u2013 Tell me what other interesting API calls this AWS principal made This made it easy for us to understand what happened: The root AWS access key was potentially compromised. The root access key was used to access the AWS environment from a cloud hosting environment using the AWS Golang SDK. It was then used to create SSH keys, spin up EC2 instances via the RunInstances API call and create new security groups, likely to allow inbound access from the Internet. We inferred that the root access key was likely compromised and used to deploy coin miners. Yep, time to escalate this to an incident, take a deeper look, engage the customer and notify the on-call Global Response Team Incident Handler. PagerDuty escalation to Global Response Team: 12:37:00 AM ET Our Global Response Team (GRT) consists of senior and principal-level analysts who serve as incident responders for critical incidents. AWS root key exposure introduces a high level of risk for any customer, so we made the call to engage the GRT on call using PagerDuty. The escalation goes out to a Slack channel that\u2019s monitored by the management team to track utilization. PagerDuty escalation out to the GRT on-call Incident declaration: 12:39:21 AM ET A few minutes after the initial lead landed in Expel Workbench \u2013 19 minutes to be exact \u2013 we notified the customer that there was a critical security incident in their AWS environment involving the root access key. And that access key was used to spin up new EC2 instances to perform coin mining. Simultaneously, we jumped into our SIEM and queried CloudTrail logs to help answer: Did the attacker compromise any other AWS accounts? How long has the attacker had access? What did the attacker do with the access? How did the attacker compromise the root AWS access key? At 12:56:43 ET we provided the first remediation actions to our customer to help contain the incident in AWS based on what we knew. This included: Steps on how to delete and remove the stolen root access key; and Instructions on how to terminate EC2 instances spun up by the attacker. We felt pretty good at this point \u2013 we had a good understanding of what happened. The customer acknowledged the critical incident and started working on remediation, while the GRT Incident Handler was inbound to perform a risk assessment. Alert-to-fix in 37 minutes. Not a bad start to our shift. Global Response Team enters the chat: 12:42:00 AM ET By Anthony Randazzo \u2013 Global Response Team Lead I usually keep my phone on silent, but PagerDuty has a vCard that allows you to set an emergency contact. This bypasses your phone\u2019s notification settings so that if you receive a call from this contact, your phone rings (whether it\u2019s in silent mode or not). 
We call it the SOC \u201cbat phone.\u201d This wasn\u2019t the first time I was paged in the middle of the night. I grabbed my phone, saw the PagerDuty icon and answered. There\u2019s a lot of trust in our SOC. I knew immediately that if I was being paged, then the shift analysts were confident that there was something brewing that needed my attention. I made my way downstairs to my office and hopped on Zoom to get a quick debrief from the analysts about what alerts came in and what they were able to discover through their initial response. Now that I\u2019m finally awake, it\u2019s time to surgically determine the full extent of what happened. As the GRT incident handler, it\u2019s important to not only perform a proper technical response to the incident, but also understand the risk. That way, we can thoroughly communicate with our customer at any given time throughout the incident, and continue to do so until we\u2019re able to declare that the incident is fully contained. At this point, we have the answers to most of our investigative questions, courtesy of the SOC shift analysts: Did the attacker compromise any other AWS accounts? There is no evidence of this. How long has the attacker had access? This access key was not observed in use for the previous 30 days. What did the attacker do with the access? The attacker generated a bunch of EC2 instances and enabled an ingress rule to SSH in and install CoinMiner malware. How did the attacker compromise the root AWS access key? We don\u2019t know and may never know. My biggest concern at this point was communicating to the customer that the access key remediation needed to occur as soon as possible. While this attack was an automated coin miner bot, there was still an unauthorized attacker with an intent of financial gain lurking somewhere \u2013 one with root access to an AWS account containing proprietary and potentially sensitive information. There are a lot of \u201cwhat ifs\u201d floating around in my head. What if the attacker realizes they have a root access key? What if the attacker decides to start copying our customer\u2019s EBS volumes or RDS snapshots? Incident contained: 02:00:00 AM ET By 2:00 am ET we had the incident fully scoped, which meant we understood: When the attack started How many IAM principals the attacker compromised AWS EC2 instances compromised by the attacker IP addresses used by the attacker to access AWS (ASN: AS135629) Domain and IP address resolutions to the coin mining pool (monerohash[.]com:7777) And API calls made by the attacker using the root access key At this point I focused on using what we understood about the attack to deliver complete remediation steps to our customer. This included: A full list of all EC2 instances spun up by the attacker with details on how to terminate them AWS security groups created by the attacker and how to remove them Checking in on the status of the compromised root access key I provided a summary of everything we knew about the attack to our customer, did one last review of the remediation steps for accuracy and chatted with the SOC over Zoom to make sure we set the team up for success if the attacker came back. 
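For readers wondering what those remediation steps look like in code, here is a hedged boto3 sketch of the containment actions described above: terminating the mining instances and removing the attacker-created security groups. The IDs are placeholders, and note that the stolen root access key itself has to be handled while signed in as root.

```python
import boto3

# Placeholder IDs -- the instances and security groups the investigation
# attributed to the attacker.
ATTACKER_INSTANCE_IDS = ["i-0123456789abcdef0"]
ATTACKER_SECURITY_GROUP_IDS = ["sg-0123456789abcdef0"]

ec2 = boto3.client("ec2")

# 1. Terminate the coin-mining EC2 instances.
ec2.terminate_instances(InstanceIds=ATTACKER_INSTANCE_IDS)

# 2. A security group can't be deleted while attached to a running instance,
#    so wait for termination before removing the attacker-created groups.
ec2.get_waiter("instance_terminated").wait(InstanceIds=ATTACKER_INSTANCE_IDS)
for group_id in ATTACKER_SECURITY_GROUP_IDS:
    ec2.delete_security_group(GroupId=group_id)

# 3. The compromised root access key is deleted from the root account's own
#    security credentials page -- it can't be revoked through another
#    principal's IAM session.
```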
For reference, below are the MITRE ATT&CK Enterprise and Cloud tactics and techniques observed during Expel\u2019s response:
Initial Access: Valid Accounts
Execution: Scripting
Persistence: Valid Accounts, Redundant Access
Command and Control: Uncommonly Used Port
With the incident now under control, I resolved the PagerDuty escalation and called it a morning. PagerDuty escalation resolution at 2:07am ET Tuesday, July 7th By Jon Hencinski \u2013 Director of Global Security Operations Critical incident hot wash: 10:00:00 AM ET For every critical incident we\u2019ll perform a lightweight 15-minute \u201chot wash.\u201d We use this time to come together as a team to reflect and learn. NIST has some opinions on what you should ask; at Expel we mainly focus on asking ourselves: How quickly did we detect and respond? Was this within our internal target? Did we provide the right remediation actions to our customer? Did we follow the process and was it effective? Did we fully scope the incident? Is any training required? Were we effective? If not, what steps do we need to take to improve? If you\u2019re looking for an easy way to get started with a repeatable incident hot wash, steal this: Incident hot wash document template. Steal me! The bottom line: celebrate what went well and don\u2019t be afraid to talk about where you need to improve. Each incident is an opportunity to advance your skills and train your investigative muscle. Lessons Learned We were able to help our customer get the situation under control pretty quickly, but there were still some really interesting observations: It\u2019s entirely possible that the root access key was scraped and passed off to the bot to spin up miners right before this was detected. We didn\u2019t see any CLI, console or other interactive activity, fortunately. The attacker definitely wasn\u2019t worried about setting off any sort of billing or performance alarms given the size of these EC2s. This was the first time we saw an attacker bring their own SSH key pairs that were uniquely named. Usually we see these generated in the bot automation run via the CreateKeyPair API. The CoinMiner was likely installed via SSH remote access (as a part of the bot). We didn\u2019t have local EC2 visibility to confirm, but an ingress rule was created in the bot automation to allow SSH from the Internet. This was also the first time we\u2019d observed a bot written with the AWS Golang software development kit (SDK). This is interesting because as defenders, it\u2019s easy to suppress alerts based on user-agents, particularly SDKs we don\u2019t expect to be used in attacks. We\u2019ll apply these lessons learned, continue to improve our ability to spot evil quickly in AWS and mature our response procedures. While we felt good about taking 37 minutes to go from alert-to-fix in AWS in the early morning hours, especially during a holiday, we don\u2019t plan on letting it get to our heads. We hold that highly effective SOCs are the right combination of people, tech and process. Really great security is a process; there is no end state \u2013 the work to improve is never done! Did you find this behind-the-scenes look into our detection and response process helpful? If so, let us know and we\u2019ll plan to continue pulling the curtain back in the future!"
6
+ }
better-web-shell-detections-with-signal-sciences-waf.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Better web shell detections with Signal Sciences WAF",
3
+ "url": "https://expel.com/blog/better-web-shell-detections-with-signal-sciences-waf/",
4
+ "date": "Oct 9, 2019",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Better web shell detections with Signal Sciences WAF Security operations \u00b7 5 MIN READ \u00b7 ALEC RANDAZZO \u00b7 OCT 9, 2019 \u00b7 TAGS: Get technical / How to / Managed security / SOC If you work for an organization that has a web presence (and let\u2019s be real, they almost all do) and that presence is perfectly coded, has zero vulnerabilities nor any functions that could be misused \u2026 then you can stop reading. For everyone else, know that there\u2019s a real chance of your website being compromised at some point \u2014 leading to things like website defacement , website functionality modification or a broader compromise of the network . The common theme for these sorts of attacks are web vulnerabilities that lead to the upload of web shells, giving an attacker a foothold on the underlying server. In this blog post, I\u2019ll talk about what a web shell is, some of the typical ways of detecting them and the (vastly improved) detection method I discovered. What\u2019s a web shell? A web shell is a web page or web resource that abuses certain functions in web languages (like PHP, JavaScript, etc.) that give it backdoor-like capabilities to the underlying web server. Capabilities typically include things like file upload, file download, and arbitrary command execution. Web shells usually crop up after a threat actor exploited a website vulnerability and gives the attacker an initial foothold onto a network through the web server. Typical methods of detecting web shells Detection of web shells traditionally comes in two forms, both with downsides. The first detection method involves detection on the endpoint by file name, file hash, or file content. Unfortunately this is often CPU intensive which means business operations teams may not allow you to do it on production systems. The second method is passive and effective but it\u2019s a pain to set up and manage. It involves mirroring web traffic to a network traffic monitoring device that has built-in detections or supports custom Snort or Suricata rules. You\u2019ll also need to upload your web server SSL private keys to the network appliance(s) for SSL decryption or you won\u2019t be able to inspect encrypted web traffic. I don\u2019t know about you, but that\u2019s not a mess that I\u2019d want to manage or deal with. How Expel uses Signal Sciences WAF to detect web shells One of the commitments we\u2019ve made to our customers since Expel was founded is to support and integrate with the security technologies that our customers already use or plan to buy. Several of our customers use the Signal Sciences Web Application Firewall (WAF) , so we created an easy way to integrate those security signals into Expel Workbench. As we were developing our integration, I discovered that the Signal Science WAF has a great capability to detect web shells thanks to a complete application layer visibility into web traffic with a user-friendly rules engine bolted on top. That\u2019s right \u2014 a rules engine that allows you to key off of web content such as HTTP methods, any header keys and values (even custom headers), query parameter keys and values, post body keys and values, domain, URI, or any combination of the preceding items. 
This visibility and rules engine allows us to augment customers\u2019 Signal Sciences WAF deployments with granular rules that detect network traffic to popular web shell variants with high fidelity (meaning they\u2019ll only trigger on the traffic we\u2019re looking for). I\u2019ll pull back the curtain and show you how Expel develops web shell detection rules for its customers so you can try the process yourself with your own Signal Sciences WAF deployment. How Expel develops web shell detection rules using the Signal Sciences rules engine Here\u2019s a high-level overview of the web shell detection rule development process: Stand up a web server running whatever web language you want to develop rules for and install the Signal Sciences WAF agent. I started with an Ubuntu server running Apache and PHP. Find some web shells. Thankfully that\u2019s not very hard. Copy the web shells you want to write rules for to a directory the web service is serving resources from. Load up a packet capturing tool or my preferred tool, the Chrome browser\u2019s built-in developer console. Access the web shell and use its various functions, looking for unique indicators in the HTTP requests. Create the rule to detect the web shell in the Signal Sciences WAF rule editor and hook it up to a signal that would generate an alert. Test out your new rule by interacting with the web shell again, verifying that all the actions you intended to detect are being detected. Now I\u2019ll walk through the specifics of creating a rule for the WSO web shell version 4.0.5 (MD5: b4d3b9dbdd36cac0eba7a598877b6da1), starting at step 5 of the process I described above. The following screenshot series will show you how to take different actions through the WSO web shell while having Chrome\u2019s developer console open. You\u2019ll see me: Executing \u201cpwd\u201d to return my present working directory. Executing \u201cls\u201d to return a directory listing of my current working directory. Using the built-in function \u201cProcess status,\u201d which is a WSO execution wrapper around the shell command \u201cps aux\u201d and Navigating to the root of the server\u2019s file system. In each screenshot below, I added red boxes around the post-body parameters my browser sent to the web shell. Take a peek: Execution of \u201cpwd\u201d to return the present working directory. Execution of \u201cls\u201d to return a list of content in the current working directory. Use of \u201cProcess status\u201d which is a WSO execution wrapper around the shell command \u201cps aux\u201d Navigation to the root of the server\u2019s file system. Each request always had the parameters \u201ca\u201d, \u201cc\u201d, \u201cp1\u201d, \u201cp2\u201d, \u201cp3\u201d and \u201ccharset.\u201d It turns out that all actions taken while using this web shell will have those parameters. If you review other versions of the WSO web shell this\u2019ll also be true. So if you want to generically detect WSO web shell use regardless of version, all you need to do is look for all those parameters being present in a request. Before you write a rule, you need to prepare a few things in the Signal Sciences WAF: Create a \u201csite signal\u201d on each site where you want your rules monitoring; this is the signal your rules will point to. In my example, I called the signal \u201cexpel-alert\u201d. Create a \u201csite alert\u201d that takes in the new signal and set the threshold to one request in one minute. This is the lowest threshold that you can set. 
Since your WSO web shell rule will be high fidelity, you want an alert generated if that threshold is ever met. Signal Sciences WAF has a powerful feature called \u201cadvanced rules\u201d which Signal Sciences reps can turn on for you. There\u2019s an additional cost, but the feature greatly expands the WAF\u2019s capability. For each Expel customer that has a Signal Sciences WAF, we deploy an advanced rule. This rule turns on verbose logging that records post-body contents and query parameters. We only enable verbose logging on expel-alert signals. This gives us complete visibility into commands sent to a web shell so we can investigate alerts. Now onto the meat of the rule. In the \u201csite rule\u201d editor, you\u2019ll want to chain six \u201cPost Parameter exists where name equals <string>\u201d conditions, where \u201c<string>\u201d takes the values \u201ca\u201d, \u201cc\u201d, \u201cp1\u201d, \u201cp2\u201d, \u201cp3\u201d, and \u201ccharset\u201d. Set the rule action to add the signal \u201cexpel-alert\u201d. Take a look at the final rule configuration: The final step is to test the efficacy of your rule by using the web shell some more to see what gets tagged. Take a look at the screenshot below \u2014 every request we made to the web shell was tagged with \u201cexpel-alert\u201d and has its post-body contents logged. Success! Bonus: Free web shell detection rules As a reward for making it through this blog post, I\u2019ve got a prize for you: ten web shell detection rules that you can upload right into your Signal Sciences WAF. They\u2019ll detect WSO, r57, c99, c99 madnet, PAS, China Chopper, B374k, reGeorg and reDuh web shells. There\u2019s also a generic rule to detect some common commands that could be pushed to web shells we don\u2019t have explicit rules for. To download these web shell detection rules, submit your info below and we\u2019ll send it over in an email."
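If you'd like to approximate the same "all six parameters present" predicate outside the WAF, say for hunting through historical request logs, here is a small Python sketch. The sample request dict is hypothetical; adapt it to however your logs record post-body parameters.

```python
# The six post-body parameters every WSO request carried in the analysis above.
WSO_PARAMS = {"a", "c", "p1", "p2", "p3", "charset"}

def looks_like_wso(post_params):
    """True when every parameter WSO always sends is present in a request."""
    return WSO_PARAMS.issubset(post_params)

# Hypothetical parsed request -- in WSO traffic all six keys show up together.
request = {"a": "FilesMan", "c": "/var/www", "p1": "", "p2": "",
           "p3": "", "charset": "UTF-8"}
print(looks_like_wso(request))  # -> True, worth an alert
```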
6
+ }
blog.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Blog",
3
+ "url": "https://expel.com/blog/page/5/",
4
+ "date": null,
5
+ "contents": null
6
+ }
budget-planning-determining-your-security-spend.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Budget planning: determining your security spend",
3
+ "url": "https://expel.com/blog/budget-planning-determining-security-spend/",
4
+ "date": "Oct 16, 2017",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Budget planning: determining your security spend Security operations \u00b7 4 MIN READ \u00b7 BRUCE POTTER \u00b7 OCT 16, 2017 \u00b7 TAGS: Budget / Management / Planning It\u2019s a common question: \u201cHow much should I spend on cybersecurity?\u201d Looking at your peers, analyst guidance, and postings on random security companies\u2019 websites, it\u2019s a difficult question. And there\u2019s not a one-size-fits-all answer. It may seem counterintuitive, but how much you spend on security is really a trailing indicator of how your company views security. In corporate life, we\u2019re asked to set a budget long before we\u2019ll actually spend the money. So, we talk to our staff, we talk to company leadership and we attend conferences to figure out what we should be doing about cybersecurity and cyber risk management in our organization. Then we put together a budget, which gets kicked around for a while before it\u2019s eventually approved. A few months later we, start finally spending those budget dollars. But by that time we\u2019re really implementing our vision of security as it was 6 or even 12 months ago. What bucket are you in? What your vision is depends a lot on how your company views cybersecurity. I\u2019ve found most organizations fall into one of five buckets. Do any of these sound familiar? Security as an enabler ($$$$) \u2013 These are businesses that view cybersecurity as a differentiator to their service or product. They\u2019re implementing \u201cleading edge\u201d security solutions in an effort to set them apart from the pack. Risk based ($$$) \u2013 Organizations that have risk-based cybersecurity are constantly making tradeoffs between required security controls and their risk appetite. While spending in these organizations can be high, it\u2019s also organized and controlled. Security as a requirement ($$) \u2013 Some businesses use regulatory and industry requirements to guide their spend. This is often less expensive than a risk-based approach but it won\u2019t have the same coverage of controls. Yet another piece of IT ($) \u2013 In these organizations, security is managed like IT spend, which for the most part means minimizing cost and not pulling from the bottom line. Reactionary ($?*!$) \u2013 This is the \u201clet the winds blow us where they may\u201d strategy of cybersecurity. When things go badly, there\u2019s a large spend. When they go well, the spend is minimal. Real Dollars By now I\u2019m guessing you\u2019ve plotted what bucket your organization is in. But practically, how big are those dollar signs? According to Gartner , cybersecurity spend can vary from 1% to 13% of the overall IT budget. That\u2019s a pretty big range that doesn\u2019t speak well to the maturity of the state of the security profession. At the low end of that spend, you\u2019ll have organizations with minimal security controls and security incidents that go undetected and unaddressed for long periods of time. At the high end, you\u2019ve got armies of dedicated staff, heavy tolling and engaged executives sponsoring cybersecurity initiatives. Be aware, though, that absolute dollars are only one measurement. It\u2019s important to understand where this money is being spent\u2026 or more appropriately where it could be spent. Cybersecurity spend comes in many forms including staff, security software, hardware, contractor support, and outside services. 
Depending on your needs, you\u2019ll get different levels of value depending on which buckets you spend your dollars in. For instance, in a small organization that\u2019s sensitive to hiring more staff, contract support or outside services may be a better bet than ramping up staffing. In larger, more sophisticated organizations, spending on software and hardware that automates existing security controls and processes may be the best thing you can do. Each approach has a different price tag and will affect where you land on that 1 to 13 percent spectrum. Find your focus (aka it\u2019s all about outcomes) If you\u2019re struggling to figure out what type of security organization you\u2019re trying to be and what your long-term strategy is, my advice is to focus on your desired outcomes \u2013 both in proactive and reactive situations. Ask yourself: \u201cWhat outcomes do I want, and when do they need to be possible?\u201d Combine the answers to help focus your initial budget thinking\u2026 or at least rationalize your planned spend and set company expectations on realistic outcomes. If your budget and expectations don\u2019t match (typically the budget is too small to meet the desired expectations) you need to do one of three things: 1) get more budget, 2) right-size expectations, or 3) proactively find a new job, because this story won\u2019t end well and you will likely be the scapegoat. Avoiding the trap door when you\u2019re in the breach zone There will always be ebbs and flows when it comes to how much money there is to go around. Everyone has lived through a budget crunch at some point and had to tighten belts and live off less. On the flip side, if you\u2019ve suffered a major security event recently, your budget likely got a bump to help you deal with the breach, response activities and remediation. I call this the \u201cbreach zone.\u201d If you\u2019ve been there you\u2019ve probably also witnessed the \u201cpanic spending\u201d that typically follows. Spending that windfall quickly is often seen as a proxy for progress. But it can also be a trap that sets you up for failure down the line. Why? Panic spending often results in buying products and services you don\u2019t ultimately get value from. What\u2019s worse is that you\u2019re then stuck paying for those products out into the future \u2013 increasing your long-term budget needs even more with things you don\u2019t need. Not to mention the time it takes to maintain them. It\u2019s a bit like stretching to afford a sports car and then realizing you can\u2019t afford the expensive gas and insurance. A healthier approach is to use the specter of a breach to drive your budgeting process. If you\u2019re lucky enough to have escaped a breach so far, congrats. Now pretend you haven\u2019t been so lucky, and go back to that outcome-based approach I talked about earlier. What outcomes do you need? What would you want to change in your org to achieve them? What investments would you make and what would you do differently? Use those answers to guide your budget process. Scenario-based budget planning can help you build a budget for the security you\u2019re likely to need and ensure your spend is on target with what your organization requires in the future. Finding your spend Based on all this, the question still stands: \u201cHow much should I spend on cybersecurity?\u201d The answer to that question is unique to each organization. As I said at the start, there\u2019s no one-size-fits-all answer. 
It depends on your maturity, current capabilities, executive support, and threat model; you may have wildly different spending needs than your peers. But there are some things you can do to find the budget that\u2019s right for you. Review your past spend and do an assessment. Did you get the results you want? What would you have done differently? Tabletop some terrible events like breaches and insider attacks. What would you need to respond? What would you need to stop it from happening? Use these answers to drive your budget and spending decisions. And remember that your budget is your own. Just because another organization is spending more or less doesn\u2019t matter if you\u2019re getting the results you want."
6
+ }
cloud-attack-trends-what-you-need-to-know-and-how.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Cloud attack trends: What you need to know and how ...",
3
+ "url": "https://expel.com/blog/cloud-attack-trends-need-to-know/",
4
+ "date": "May 25, 2021",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Cloud attack trends: What you need to know and how to stay resilient Security operations \u00b7 7 MIN READ \u00b7 ANTHONY RANDAZZO \u00b7 MAY 25, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Well, 2020 is getting smaller in our rearview mirror as our journey into 2021 takes us closer to summer. Good riddance. We\u2019d be remiss, though, if we didn\u2019t take some time to reflect on the things we observed and learned over the last year at Expel. So, we decided to take a close look at the cloud threat landscape. While we can easily get hung up on the black swan events of the year, we took a more data-driven approach to find the greatest threats to the majority of orgs today. At Expel, we view the cloud as any infrastructure, platforms or applications living in some data center that your org doesn\u2019t wholly manage. This might be your Amazon Web Services (AWS) or Microsoft Azure cloud infrastructure; an O365 or G Suite tenant; your GitHub repositories or perhaps the Okta instance that manages identity to all of your end users. During the COVID-19 pandemic, our SOC saw that bad actors wasted no time thinking of more evil ways to attack in the cloud and take advantage of people using phishing tactics. See full \u201cTop cybersecurity attack trend during COVID: Phishing\u201d infographic And the IC3\u2019s 2020 Internet Crime Report echoes our findings. It\u2019s disheartening to see that attackers used a crisis to their advantage to infiltrate cloud apps and increase their phishing efforts. But it\u2019s also not surprising. Bad actors will continue to evolve their tactics, using health and economic crises to manipulate unsuspecting people into surrendering their credentials and other information. That doesn\u2019t mean hope is lost. There are ways to remediate and stay resilient against the inevitable attacks in your cloud and phishing ploys. Follow @amrandazz In this blog post, I\u2019ll cover the top three types of attacks we saw between March 2020 and March 2021, how to respond to an attack if it happens to you and share some steps you can take today to drastically reduce the chance of it happening to your business. Attack trend: Business email compromise If you\u2019ve taken a look at our \u201cTop cybersecurity attack trend during COVID: Phishing\u201d infographic, you\u2019ll know that business email compromise (BEC) is still public enemy number one. Here at Expel, the scale tips favorably toward BEC incidents in O365 versus G Suite. And there\u2019s one primary reason for that: O365 has some initial configurations that need to be changed by default, whereas G Suite\u2019s settings out-of-the-box are pretty straightforward. We previously covered these configurations but here\u2019s the TL;DR: With original deployments of O365 tenants, IMAP and POP3 were enabled by default in O365 Exchange as well as BasicAuthentication . IMAP and POP3 don\u2019t support multi-factor authentication (MFA), so even if you have MFA enabled, attackers can still access these mailboxes. BasicAuthentication allows attackers to authenticate with clients past any pre-authentication checks to the Identity Provider which could lead to unwanted account compromises or account lockouts from password spray or brute force attacks. Microsoft intended on doing away with BasicAuthentication by default but has postponed this rollout due to the COVID-19 pandemic. This is now expected to rollout before the end of 2021. 
Google, on the other hand, disables these configurations in G Suite by default but allows them to be enabled ex post facto. Remediation What should you do if you identify someone who shouldn\u2019t be in your O365 Exchange? Fortunately, it\u2019s pretty straightforward. Reset the user\u2019s credentials; Review the mailbox audit logs to determine if any unsavory activity occurred; and Remove any mail forwarding rules (if applicable). Resilience There are quite a few things you can do to prevent these BECs from becoming commonplace in your cloud email. First and foremost, ensure that you\u2019re using MFA wherever possible. While it\u2019s not a silver bullet, it\u2019s absolutely critical in today\u2019s cloud-first environments. Our data suggests that 35 percent of the BEC attempts we\u2019ve spotted could have been prevented by enabling MFA. Next, disable legacy protocols such as IMAP and POP3. Again, these don\u2019t support any sort of Modern Authentication (Modern Auth), which means an attacker can bypass MFA completely by using an IMAP/POP3 client. Once those are turned off, strongly consider disabling BasicAuthentication to prevent any pre-auth headaches on your O365 tenants. Seven percent of BEC attempts could have been stopped by enforcing modern authentication. If you\u2019re still not sleeping well at night, then consider implementing some extra layers of conditional access for your riskier user base. You can even create a conditional access policy to require MFA registration from a location marked as a trusted network. This prevents an attacker from registering MFA from an untrusted network. Lastly, don\u2019t neglect your secure mail gateway. We recently helped a customer make some configuration changes that ultimately led to a major drop in the volume of phishing emails they received each day \u2013 reducing their BEC incident count. Attack trend: Cloud access providers If we set aside the explicit BEC incidents, the next biggest targets we see are cloud identity providers like Okta or OneLogin. While some attackers might just want access to your email for fraud purposes, others have their eyes on a bigger prize: the data behind your applications. Many orgs have already migrated to SSO (SAML) authentication, especially in a post-2020 working environment where many employees work remotely. That means attackers have easy credential-harvesting targets well beyond mail providers. During 2020, we saw quite a few attacks on Okta, so we\u2019ll focus our remediation recommendations there. So, how are all of these Okta accounts getting compromised? A couple of ways. First, it\u2019s entirely possible to intercept session tokens for Okta after MFA has been established. We\u2019ve talked about this tactic a bit in the past (and yes, U2F will prevent this). These session tokens can then be used to maintain access indefinitely, depending on the refresh token and any limitations it might have. But there\u2019s an even simpler approach: hoping unsuspecting end users will click that push notification. You might be amazed at how frequently this occurs. And the results can be disastrous (we personally have over 50 published applications for certain users in Okta). Remediation Remediation after a confirmed Okta access compromise may be a bit more involved than a BEC limited to a single Exchange Online mailbox.
Here are the high-level tasks: Terminate the user\u2019s active sessions to disrupt existing authenticated entities; Reset the compromised credentials; and Determine if an attacker accessed any published applications (hopefully not, as this will require subsequent remediation and responses against those apps). We have a quick workflow here at Expel that will grab all of the associated SSO activity. Resilience Okta, in particular, has a feature called Adaptive MFA, which creates behavioral profiles of each of your users and introduces a little bit of friction when an anomalous login occurs. This friction might be the difference between a compromise and a near miss. If you\u2019re running sensitive applications in Okta, then you might consider applying application-level MFA . Lastly, while we have become more distributed in a post-pandemic world, you might also consider implementing Network Zones to effectively develop an allow list for access in your sign-on policies. Cloud attack trend: Cloud infrastructure When we started theorizing where to focus detection efforts in cloud infrastructure, it was apparent that most risk lay in access to the control (management) plane. It turns out that attackers are, in fact, interested in this sort of access . Excessive access to the control plane opens organizations up to a bunch of problems, and the reality is that all of the \u201cshift left\u201d security in the world doesn\u2019t prevent the use of compromised credentials. We know this access may be for financial gain or perhaps even persistent access . The good news is that there are a variety of ways to prevent this. Remediation Cloud infrastructure response can vary a bit, given that each provider has a completely different Identity and Access Management (IAM) implementation. In AWS, it\u2019s a little more straightforward. Identify all compromised access keys. Keys are sometimes exposed or compromised en masse, so it\u2019s best to make sure you\u2019ve found them all. This can be done by pivoting on attacker access indicators such as IP address. Snapshot and remove any new infrastructure created by the attackers. Determine if any data plane access occurred (e.g., SSH access to your EC2 instances) and respond as necessary. Resilience Inadvertently exposed secrets can exacerbate this problem, so it\u2019s important to get a handle on your public git repositories. There are commercially available products to identify exposed secrets, such as GitGuardian , or you can go at it yourself and use open source projects like truffleHog . The good news is that repositories like GitHub delay the public API by five minutes to give organizations a head start to remediate these sorts of exposures. Another thing to think about is subscribing to AWS Security Hub to develop your own use cases for automated incident response , or again, you can run at this alone via custom Lambda, CloudWatch or even your own SOAR platform. Another great AWS Organizations feature: develop least-privilege access control with Service Control Policies to limit the blast radius of compromised credentials. New attacks. New resources. So what\u2019s in store for us for the rest of 2021? Well, we wish we had a crystal ball to say for sure, but we can make some pretty educated guesses based on what we saw over the last 12 to 18 months.
Events like the SolarWinds breach reminded us that the cloud is absolutely a target (golden SAML in Azure) and that we need to stay vigilant \u2013 and prepare for what might be around the corner. While attacks in the cloud and phishing aren\u2019t new, we know that bad actors will continue to get creative. And one thing is for sure: we\u2019ll continue to see BEC attacks at the same or an even greater volume this year. Microsoft will hopefully roll out more proactive controls, such as deprecating support for BasicAuthentication for Azure Active Directory (AzureAD), in 2021. Although it seems like it\u2019s going to be at least a year before that comes to fruition for orgs that have mail clients actually using those authentication protocols with Exchange Online. Fortunately, we\u2019ll continue to see the development of resources and services that address new and changing security needs. At Expel, we\u2019ve been working on providing new products and services to help our existing and new customers endure the onslaught of 2020, and the new challenges it presented. When our customers let us know that they were drowning in phishing emails, we created the Expel Managed Phishing Service . So, in addition to our analysts providing 24\u00d77 managed security, they\u2019ll also have eyes on every single email someone at your org reports as a potential phishing attempt. While we can\u2019t stop attackers from being cunning, we can use our expertise (as a community) to help each other not only keep our heads above water but also prevent getting blindsided again. Check out Expel Managed Phishing"
+ }
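The AWS remediation steps in the post above lend themselves to automation. Here is a minimal, hypothetical boto3 sketch of the first step: pivoting on an attacker IP in CloudTrail and deactivating any IAM access keys used from it. The IP is a placeholder, and the sketch assumes relevant events name an IAM user; snapshotting attacker-created infrastructure and checking data-plane access would still be separate follow-ups.

```python
"""Sketch only: pivot on an attacker IP, then disable the keys it used."""
from datetime import datetime, timedelta, timezone
import json

import boto3

ATTACKER_IP = "203.0.113.10"  # placeholder indicator from the investigation

cloudtrail = boto3.client("cloudtrail")
iam = boto3.client("iam")

compromised = set()  # (user_name, access_key_id) pairs

# LookupEvents can't filter on source IP server-side, so filter client-side.
start = datetime.now(timezone.utc) - timedelta(days=7)
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        if detail.get("sourceIPAddress") != ATTACKER_IP:
            continue
        identity = detail.get("userIdentity", {})
        if identity.get("accessKeyId") and identity.get("userName"):
            compromised.add((identity["userName"], identity["accessKeyId"]))

for user, key_id in sorted(compromised):
    print(f"Deactivating {key_id} for {user}")
    iam.update_access_key(UserName=user, AccessKeyId=key_id,
                          Status="Inactive")
```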
cloud-security-archives.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Cloud security Archives",
+ "url": "https://expel.com/blog/resource_topic/cloud-security/",
+ "date": null,
+ "contents": null
+ }
come-sea-how-we-tackle-phishing.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Come sea how we tackle phishing",
+ "url": "https://expel.com/blog/expel-phishing-dashboard/",
+ "date": "Jun 8, 2021",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Come sea how we tackle phishing: Expel\u2019s Phishing dashboard Security operations \u00b7 7 MIN READ \u00b7 KELLY NAKAWATASE \u00b7 JUN 8, 2021 \u00b7 TAGS: Phishing / Tech tools It\u2019s tough to stay afloat when you\u2019re drowning in phishing emails. While it\u2019s great that users are submitting suspicious-looking emails, you need to be able to glean meaningful information from all the data in those suspicious submissions. But how? And with what time? Our crew wanted to find a way to quickly show our Expel managed phishing service customers helpful data like who is attacking them, how often they\u2019re being attacked and whether or not their phishing training program is effective. Let\u2019s connect And this is where I come in. (Hi, I\u2019m Kelly, one of Expel\u2019s senior UX designers. I designed the Phishing dashboard.) In this post, I\u2019m going to talk (type?) you through the UX process that went on behind the scenes in creating the Expel Phishing dashboard \u2013 from figuring out which metrics would be the most useful for our customers to determining the right visualization for any given set of data. If you\u2019re developing a measurement framework for your own phishing program \u2013 or are just interested in learning how I created a dashboard centered on the goals of our users \u2013 you\u2019ll want to keep reading. Whale, what does the Expel managed phishing service do? Perfect meme, courtesy of the internet TL;DR: We triage and investigate the emails customers of our managed phishing service report as potential phishing. At its base, users submit suspicious looking emails to us so our SOC analysts can triage the email and determine whether or not the submission is benign or malicious. If the email is deemed malicious, our analysts do the legwork to figure out if there was an actual compromise, and if there was a compromise, we inform you and provide instructions to remediate the situation. If the email had malicious intent but users didn\u2019t fall for it, then our analysts conclude their investigation and offer recommendations to help improve overall security to ensure no one does fall for it in the future. Casting a net for goals I joined the phishing team in its infancy, and as a UX designer here at Expel, my job is to ensure that we keep our customers\u2019 goals top of mind when we create products. So, I started by asking questions: What\u2019s the purpose of this dashboard? What would customers be most interested in seeing on the dashboard? How often would they use it? How would they use it? Also a lot more questions. I talked to a few of our phishing proof of concept customers to get answers to these questions. I also talked to a few of our Engagement Managers (EMs), who are very in tune with what customers as a whole are generally trying to accomplish. These conversations helped me discover what our customers wanted to be able to do with their phishing programs, what holes they saw in other services. After a number of informational interviews, I formed four goals for the Phishing dashboard. Help customers report up to their executives on the state of phishing at their organization. Help train users who report the most false positives, and reward users who are great at catching phish! Identify oppor-tuna-ties to improve overall security and prevent future phishing. Show customers what they can expect to see. 
It\u2019s likely that if they\u2019re interested in our phishing service, they\u2019ve used other phishing-related apps to bulk up their program. If they\u2019re used to getting certain kinds of metrics around phishing, I wanted to make sure that the first iteration of our Phishing dashboard met that baseline at the very least so customers would never feel like they\u2019re lacking by just working with us. Deep diving for metrics I wanted to see what other products in the phishing space were doing when it comes to serving metrics, in order to design effectively. So, I looked at the ocean of phishing apps and software, combed through public product documentation and YouTube videos, and took inventory of all the metrics these products were showing on their dashboards and reporting. I compared these metrics to the ones we were already collecting for our proof of concept customers. Before I condensed this list and got rid of the duplicates, there were 132 data points. But, like I said, that was before getting rid of duplicates. And there were actually a lot of duplicates. So, I did the classic UX method of a good ol\u2019 analog card sort. Basically, I wrote every single metric (even the duplicates) onto a Post-It Note and grouped them by category. I did this a few times to get different kinds of groups. Then I grouped these metrics based on the goals I mentioned above. Photo of my analog card sort and my shadow self These were some of the metric categories I came up with. But it\u2019s actually not my opinion that matters the most here. Remember, our customers are the ones I have to keep in mind when designing. After condensing the list of metrics down to a manageable number, I was able to run an unmoderated, completely remote card sort with a customer and EMs to see how they\u2019d use these metrics, and if there were any metrics they thought were unnecessary or missing. I\u2019m proud to say that the categories these users came up with were quite similar to my own. Reeling it in for feasibility and tackling visualizations Once I had a shorter list of metrics and categories that would meet the goals for the Phishing dashboard, I knew I\u2019d have to reel it in based on time and technical feasibility. So, I met with the phishing engineers to discuss which items on the metrics list were realistic for a first version, and which metrics we\u2019d have to revisit for a later version. I let go of more complicated metrics like susceptibility by department and phish category (it\u2019s bookmarked for a future version though\u2026 maybe don\u2019t quote me ). But capturing key baseline metrics \u2013 being able to collect data and list out most common subjects, attachments, users and user accuracy \u2013 was definitely feasible. The next step was figuring out how to most effectively visualize these metrics. I looked at popular dashboard designs, aesthetically pleasing dashboards and whatever showed up in \u2018best dashboards\u2019 searches. I blocked out their visualizations to understand ideal page layout, the kinds of metrics and visualizations that got prioritized, and what kind of visual weight is given to any particular graph. You can\u2019t really just take a metric category and throw it into a pie chart and call it done. So much of good design in dashboards is finding the right visualization for the right group of metrics to tell the story that your users need. 
For example, a group of metrics I knew we needed to show were: Total user submissions for a given timeframe, How many of those submissions were malicious; and How many of those submissions were benign. It seemed like the most obvious visualization for this group would be to put it in a pie chart that shows the quantities in each metrics group and how they make up the whole of total submissions. Or maybe the most obvious visualization is to just show the raw counts of these numbers, or in a funnel, like our Workbench\u2122 Alerts Analysis Dashboard funnel. Example of straight counts, and adapting these metrics into graphics on our Workbench Alerts Analysis Dashboard But in talking to customers, I already knew that the straight quantity of submissions and their subsequent outcomes wasn\u2019t the interesting part of this data. In fact, showing straight quantities for this might be the least informative way of expressing this data. The story is what\u2019s important here. Below is what ended up being the final version of this data visualization, and it offers so much more information than a pie chart could. Customers are more interested in looking at how the outcomes of their suspicious emails trend, and whether or not there\u2019s a spike. If there\u2019s a spike, then you can investigate why there was a spike. You can interact with the legend to turn on and off certain outcomes, compare the lines and easily screenshot this for reports. Example of Expel Phishing Dashboard line graph Once I did this for all of the metric groupings that would appear on the Phishing dashboard, I laid it out and started chumming for feedback from current customers. And, wahoo! The feedback was largely positive, and I made some adjustments to wording and changes to which graphs got to be the principal in the school of visualizations. All aboard the Phishing dashboard tour Let\u2019s walk through the Expel Phishing dashboard 1.0. Reminder: if you\u2019re already an Expel customer, don\u2019t be koi, you can preview and interact with this krill-iant dashboard in Workbench! The image below shows submissions by outcome over time, which is what customers first look for upon landing here. You can look for spikes and trends in the data. On the right, we have some information on malicious senders and how many emails are sent per sender. We also have the number of unique submitters so customers can see how many of their users are reporting emails as potentially phishy. This can be an indicator for how effective training or end user education is. Expel Phishing Dashboard top level metrics of submissions over time and unique senders and submitters Moving down the dashboard, second level on the left, we have a horizontal bar chart. This gives customers information about how many submissions we\u2019re receiving from their users, and how many of those submissions turn into actual security incidents. On the right, we have information on the frequent submitters of malicious, benign, and all email submissions to give customers insight into which users may need more training. Metrics displaying how submissions funnel down to incidents, and submitter leaderboards In the next image, on the third level on the left, we show customers the kinds of attachments that show up in malicious emails. This helps customers create custom rules in their secure email gateway (SEG) to limit similar incoming emails. On the right is how often we use customer integrated technology to assist in our phishing investigations. 
This is to give customers an idea of their return on investment in their security vendors. Information on malicious attachment quantity and how often our analysts leverage your tech in phishing investigations Lastly, along the same vein as malicious attachments, we have frequent domains, senders and sender domains. This can help customers not only create rules in their SEG to limit incoming emails, but can also help them see if there\u2019s a themed campaign against their org. The final metrics on the Phishing dashboard provide information about recurring themes in malicious emails Hook, line, and sinker Of course, that\u2019s not the end of my job, or the end of the Phishing dashboard. After all, this is only version one. Bird\u2019s eye view of the primary Phishing dashboard mockup The Expel Phishing dashboard is on its maiden voyage, and I hope you enjoyed swimming alongside me. I\u2019m excited to be on this journey with our Expel managed phishing customers and the rest of the Expel crew. Want to see where we take the dashboard next? Hop aboard!"
+ }
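For readers building their own phishing metrics, the "submissions by outcome over time" view described above boils down to a simple aggregation. A toy pandas sketch follows; the CSV file and its `submitted_at`/`outcome` columns are hypothetical stand-ins, not an Expel export format.

```python
"""Toy sketch of the aggregation behind a submissions-by-outcome trend line."""
import pandas as pd

df = pd.read_csv("phishing_submissions.csv", parse_dates=["submitted_at"])

# Weekly counts per outcome (e.g., malicious / benign), one column each.
trend = (
    df.set_index("submitted_at")
      .groupby("outcome")
      .resample("W")
      .size()
      .unstack(level=0, fill_value=0)
)

print(trend.tail())
# trend.plot()  # one line per outcome (requires matplotlib); spikes are easy
#               # to eyeball, mirroring the dashboard's legend toggles
```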
companies-with-250-1000-employees-suffer-high-security.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Companies with 250-1000 employees suffer high security ...",
+ "url": "https://expel.com/blog/companies-with-250-1000-employees-suffer-high-security-alert-fatigue/",
+ "date": "May 2, 2023",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Companies with 250-1,000 employees suffer high security alert fatigue Security operations \u00b7 3 MIN READ \u00b7 CHRIS WAYNFORTH \u00b7 MAY 2, 2023 \u00b7 TAGS: Careers / MDR In our recent report on cybersecurity in the United Kingdom (UK) , IT decision-makers (ITDMs) point to a corrosive dynamic threatening the effectiveness of their security operations centres (SOCs) and the well-being of their security and IT teams. In sum, fatigue stemming in large part from a barrage of alerts and false positives is disrupting workers\u2019 private lives, driving burnout and staff turnover at a time when there\u2019s a critical talent shortage in the industry. The effect is evident across the board, but companies with 250-1,000 employees (what Expel calls the commercial segment) are being hit especially hard. Let\u2019s review the findings and consider possible reasons why the 250/1k segment is suffering so badly. Regardless of these findings, we believe there\u2019s hope. At the end, we\u2019ll discuss strategies to help businesses not only survive, but thrive in this environment. Fatigue and burnout is worst for companies with 250-1,000 employees More than half of ITDMs say their SOCs spend too much time on alerts , with larger companies (250+) more likely to call it out as a particular concern. (Problem alerts include low-risk/low priority notifications and false positives.) Respondents in the 250/1k segment were most likely to say their teams spend too much time addressing alerts (60%). This segment also views the issue as more urgent, with a quarter saying they strongly agree. ITDMs in the 250/1k segment are also significantly more likely to cite alert fatigue as a problem for their security teams. The risk associated with fatigue is huge. As we noted in the UK report, an International Data Corporation (IDC) study found that a dizzying number of alerts are ignored\u201427% among companies with 500-1,499 employees (which includes a big chunk of the segment we\u2019re examining here). This revelation\u2014that more than a quarter of threat alerts hitting the SOC are being ignored\u2014should keep leaders and board members awake all night, every night. Alert fatigue and the 3CX hack In the recent 3CX attack, many of the platform\u2019s users had seen their endpoint protection software incorrectly flag known, good software as malicious in the past. Since 3CX\u2019s software was expected in their environment, many analysts assumed the endpoint protection software was incorrect, rather than suspecting the software had been the victim of a supply chain attack. \u2013 Greg Notch, Chief Information Security Officer, Expel Alert fatigue and burnout: the human toll Alert overload, alongside all the other challenges associated with running a 24/7 SOC (during an era plagued by a 3.4 million-person talent shortage ), represents an unsustainable infringement on security pros\u2019 personal lives. Ninety-three percent of ITDMs surveyed (and 95% in the 250/1k category) say their personal commitments are at least occasionally cancelled, delayed or interrupted because of work. But, as chart 3 indicates, the 250/1k group is affected significantly more often\u201451% of respondents say it happens all or most of the time, a stunning 15% more than the next highest segment. Unsurprisingly, then, ITDMs in this key segment say their groups experience substantially higher degrees of burnout \u201414% higher than the ITDM total. 
Staff turnover The upshot here is that burned-out workers make mistakes (like the missed alerts that happened in the 3CX supply chain attack) or leave (perhaps both). The potential for attrition is especially distressing, given the talent deficit noted above. Again, companies in the 250-1,000 employee range feel the crush worse than those in other segments. This cohort feels a greater intensity on this measure than other respondents. Its 27% positive response is eight points higher than the all-segment average. Why are companies with 250-1,000 employees having a harder time than other segments? Greg Notch, Expel\u2019s chief Information security officer (CISO), says these companies are \u201cbig enough to have big company problems, but lack the structure and funding to build a security program sufficient to defend their enterprise.\u201d The folks trying to keep those programs afloat are understaffed, so they\u2019re naturally burning out. Also, because they\u2019re stuck doing repetitive work just to keep the lights on, it\u2019s preventing their career growth into more strategic roles. So they leave to find those opportunities elsewhere. And it\u2019s easy for them to do that because of the talent shortage. He also says it \u201cdoesn\u2019t help that ransomware targeting is now going wider and down-market. As a result, these folks are in live-fire situations with bad business outcomes.\u201d The UK security report makes a couple of things clear. First, SOCs are under tremendous stress as they try to safeguard their organisations, and if CISOs and their teams feel overwhelmed the data illustrates why. Second, the pressure is substantially worse for IT/security teams in organisations with 250-1,000 employees. And now, the good news Given the dramatic worldwide talent shortage, it\u2019s na\u00efve to imagine that all organizations can find and afford the people needed to build and run their own SOCs. Managed detection and response (MDR) addresses these problems. MDRs are fully-managed, 24/7 services staffed by experts who specialise in detecting and responding to a wide range of cyberattacks, including phishing, ransomware, and threat hunting. By marrying human expertise to advanced technologies, MDR analysts can detect, investigate, neutralise, and remediate advanced attacks. This eliminates an organisation\u2019s need for a large staff. The best MDRs relentlessly research the latest hacker tactics and develop advanced tools to process massive amounts of data and automatically sort signal from noise\u2014meaning a company\u2019s analysts see the important alerts, not all the alerts. The list of benefits goes on, but the bottom line is that, for many organisations, MDR means broader, deeper, more sophisticated cyberdefense (and fewer headaches) for less money. If any of this sounds relevant for your business, we encourage you to review the full report and drop us a line ."
+ }
connect-hashicorp-vault-and-google-s-cloudsql-databases.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Connect Hashicorp Vault and Google's CloudSQL databases",
+ "url": "https://expel.com/blog/connect-hashicorp-vault-and-googles-cloudsql-databases-new-plugin/",
+ "date": "Aug 31, 2022",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Connect Hashicorp Vault and Google\u2019s CloudSQL databases: new plugin! Engineering \u00b7 3 MIN READ \u00b7 DAVID MONTOYA AND ISMAIL AHMAD \u00b7 AUG 31, 2022 \u00b7 TAGS: Cloud security / Tech tools We take protecting credentials seriously, and database (DB) credentials are no exception. They\u2019re juicy targets for attackers and often hold the keys to all your sensitive information. Making sure they\u2019re short-lived, rotated, scoped, auditable, and aligned with zero trust principles is central to boosting an organization\u2019s security posture. As you may know from our previous post, 5 best practices to get to production readiness with Hashicorp Vault in Kubernetes , we\u2019re long-time users of Vault, which specializes in credential management and offers a large plugin ecosystem for different databases. Sounds like a slam dunk right? Not so fast. As we began to explore using Vault to manage credentials for our Google-managed CloudSQL instances, we found ourselves stuck between two less-than-ideal out-of-the-box options, forcing us to compromise on operational complexity or, worse, security. Caught between a rock and a hard place, we dug deeper and built a new tool to meet our requirements. We think it\u2019s broadly useful for organizations using Vault and Google CloudSQL. And now, the good news: Expel is excited to open source a new Hashicorp Vault plugin. It brokers database credentials between Hashicorp Vault and Google\u2019s CloudSQL DBs and it doesn\u2019t require direct database access (via authorized networks ) or that you run Google\u2019s CloudSQL auth proxy. If you\u2019re wondering how that\u2019s possible, the plugin uses Google\u2019s best practice for authentication via IAM rather than a standard database protocol. Sound like something you could use? The plugin codebase can be found in GitHub . Why build a custom plugin? To better understand why we built this plugin, let\u2019s look at some of the challenges posed by using Vault\u2019s default database plugins to connect to CloudSQL instances. Per Google\u2019s documentation , there are two primary ways of authorizing database connections. Option 1: use CloudSQL authorized networks Google allows users to connect to CloudSQL databases using network-based authentication. To improve the security posture of your DB, Google recommends enabling SSL/TLS to add a layer of security. This requires users to manage an allowlist of IP CIDRs and SSL certificates on both the servers and clients for the databases they wish to connect to. As you can see, this gets tedious quickly. Imagine you have hundreds of CloudSQL databases\u2026 no one wants to manage that many firewall rules or certificates. Option 2: use CloudSQL Auth proxy Google\u2019s recommended approach for connecting to CloudSQL instances is to use the Auth proxy . Its benefits include: Uses IAM authorization instead of network-based access control (no more firewall rules!) Automatically wraps all DB connections with TLS 1.3 encryption regardless of the database protocol As we started exploring approaches for connecting our Vault instances to CloudSQL databases, we contemplated using the cloudsql-proxy (but shuddered at the operational complexity of running such a specialized sidecar along with our Vault servers). Developing a Hashicorp Vault plugin So, how exactly did we end up writing our own Vault plugin? As we researched options, we landed on a GitHub issue that referenced an interesting new Go connector for CloudSQL . 
The Google Cloud team had recently released a generalized Go library for authenticating to CloudSQL databases the same way their auth proxy does. Being Go developers, our interest was piqued: could we use this new library to get the best of both worlds (low operational complexity and security best practice)? By creating a new Vault plugin based on Google\u2019s Go connector, we were able to integrate Vault with CloudSQL databases while taking advantage of Vault\u2019s existing capability to create and manage database credentials. The plugin simply initiates the database connection using the new Go connector for CloudSQL instances and then delegates everything else to the community-supported Vault database plugin. How to use it OK, so you\u2019ve made it this far. You understand what problem the plugin is solving and how it\u2019s solving it. Now let\u2019s talk about how you use it. A step-by-step guide to building and deploying this plugin can be found here . Conclusion Although \u201cbuilding a new way\u201d often seems daunting, our journey with Vault and CloudSQL was rewarding, and we hope our plugin will be useful to others facing similar issues. As we continue our journey, watch this space for future posts describing how to employ Vault as a database credential broker for workloads and audit across the stack. Finally, have a look: we\u2019ve posted a step-by-step guide on GitHub detailing how to set this up in your environment."
+ }
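From a consuming application's side, the flow the post describes might look like the sketch below: fetch dynamic credentials from Vault, then connect through Google's Cloud SQL Python Connector (the Python sibling of the Go connector the plugin builds on). The mount point, role name and instance connection name are hypothetical; the plugin's GitHub guide is the authoritative setup reference.

```python
"""Sketch: consume short-lived CloudSQL credentials brokered by Vault.
Requires the `hvac` and `cloud-sql-python-connector[pg8000]` packages."""
import os

import hvac
from google.cloud.sql.connector import Connector

vault = hvac.Client(url="https://vault.example.com:8200",
                    token=os.environ["VAULT_TOKEN"])

# Ask Vault's database secrets engine for dynamic, short-lived credentials.
creds = vault.secrets.database.generate_credentials(
    name="my-app-role",        # hypothetical Vault role
    mount_point="database",    # hypothetical mount point
)["data"]

# Connect the way the auth proxy would: IAM-authorized and TLS-wrapped.
connector = Connector()
conn = connector.connect(
    "my-project:us-east1:my-instance",  # hypothetical instance
    "pg8000",
    user=creds["username"],
    password=creds["password"],
    db="postgres",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()
connector.close()
```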
containerizing-key-pipeline-with-zero-downtime.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Containerizing key pipeline with zero downtime",
+ "url": "https://expel.com/blog/containerizing-key-pipeline-with-zero-downtime/",
+ "date": "Feb 23, 2021",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Containerizing key pipeline with zero downtime Engineering \u00b7 8 MIN READ \u00b7 DAVID BLEWETT \u00b7 FEB 23, 2021 \u00b7 TAGS: Cloud security / MDR / Tech tools Running a 24\u00d77 managed detection and response (MDR) service means you don\u2019t have the luxury of scheduling downtime to upgrade or test pieces of critical infrastructure. If that doesn\u2019t sound challenging enough, we recently realized we needed to make some structural changes to one the most important components of our infrastructure \u2013 Expel\u2019s data pipeline, and the processing of that data pipeline. Our mission was to migrate from a virtual machine (VM)-based deployment to a container-based deployment. With zero downtime. Let\u2019s connect How did we pull it off? I\u2019m going to tell you in this blog post. (Hi, I\u2019m David, Expel\u2019s principal software engineer.) If you\u2019re interested in learning how to combine Kubernetes, feature flags and metric-driven deployments, keep reading. Background: Josie\u2122 and the Expel Workbench\u2122 In the past year at Expel, we\u2019ve migrated to Kubernetes as our core engineering platform (AKA the thing that enables us to run the Expel Workbench). What\u2019s the Expel Workbench? It\u2019s the platform we built so that our analysts can quickly get all the info they need about an alert and make quick decisions on what action to take next. In addition to some other very cool things. Want to see it in action? Get a free two-week trial of Expel Workbench for AWS Back to Kubernetes. While known for its complexity (who here likes YAML?), Kubernetes comes with a large amount of functionality that can, if used correctly, result in elegant solutions. Full disclosure: I\u2019m not going to dive into all the things we do with Kubernetes, or what is Kubernetes for that matter. Instead, I\u2019m going to focus specifically on our data pipeline and detection engine (we call her Josie). Our detection pipeline receives events (or logs) and alerts from our customer\u2019s devices and cloud environments. Then, our detection engine processes each alert and decides what to do with it. We have some fundamental beliefs about detection content and our pipeline: Never lose an alert; Quality and scale aren\u2019t mutually exclusive; The best ideas come from those closest to the problem; and Engineering builds frameworks for others to supply content. This means our detection pipeline is content-driven and can be updated by our SOC analysts here at Expel. We also hold the opinion that content should never take a framework down. If it does, that\u2019s on engineering, not the content authors. With these beliefs in mind, we were faced with the challenge of making structural changes to how we are running our detection engine, ensuring quality, not losing alerts and still enabling analysts to drive the content. Josie\u2019s journey to Kubernetes What we knew Ensuring this migration didn\u2019t disrupt the daily workflow of the SOC was key. Just as important was not polluting metrics used for tracking the performance of the SOC. That\u2019s why we wanted an iterative process. We wanted to run both pipelines in parallel and compare all the performance metrics and output to ensure parity. We also knew we wanted to be able to dynamically route traffic between pipelines, without the need for code-level changes requiring a build and deploy cycle. This would allow us to atomically re-route and have that change effective as quickly as possible. 
The final requirement was to retain the automated delivery of rule content. While the existing mechanism was error-prone, we didn\u2019t want to take a step backward here. Tech we chose We were already moving our production infrastructure to Kubernetes. So we took full advantage of several primitives in Kubernetes, including Deployments , ConfigMaps and controllers . We chose LaunchDarkly as a feature flag platform to solve both the testing in production and routing requirements. Their user interface (UI) is the icing on the cake \u2013 tracking changes in feature flag configuration as well as tracking flag usage over time. The real-time messaging built into their software development kit (SDK) enabled us to propagate flag changes on the order of hundreds of milliseconds. Preparing Josie for her journey If you\u2019ve read our other blogs, you\u2019ll know that Expel is data-driven when it comes to decision making. We rely on dashboards and monitors in DataDog to keep track of what\u2019s happening in our running systems on a real-time basis. Introducing a parallel pipeline carries the risk of polluting dashboards by artificially inflating counts. To mitigate this, we added tags to our custom metrics in DataDog . After the new tag was populated by the existing pipeline, we added a simple template variable , defaulting to filter to the current rule engine. This ensured that existing users\u2019 view of the world was scoped to the original engine. It also enabled the team to compare performance between the parallel pipelines in a very granular way. We then updated monitors to include the new tag, so they alerted separately from the old engine. The next step was to add gates to the application that would allow us to dynamically shift traffic between rule engines. To do this, we created two feature flags in LaunchDarkly: one to control data that is allowed into a rule engine and one to control what is output by each engine. Finally, we set up a custom targeting rule that considered the customer and the rule engine name. Initial: Kubernetes Once the instrumentation and feature flags were functional, we began setting up the necessary building blocks in Kubernetes. When setting up pipelines, I try to get all the pieces connected first and then iterate through the process of adding the necessary functionality. So, we set up a Deployment in Kubernetes. A Deployment encapsulates all of the necessary configuration to run a container. To simplify the initial setup, we had the application connect to the Detections API service on startup to retrieve detection content. This microservice abstracts our detection-content-as-code, giving programmatic access to the current tip of the main branch of development. Note that we configured the LaunchDarkly feature flags before turning on the deployment. The first flag controlled whether or not this instance of the detection engine would process an incoming event from Kafka. This flag allowed us to start with a trickle of data in the new environment, and gradually ramp up the volume to test processing load in Kubernetes. The second flag controlled whether this version of Josie would publish the results of the analysts\u2019 rules to the Expel Workbench. This allowed us to work through potential issues encountered while getting the application to function in the new environment, without fear of breaking the live pipeline and polluting analyst workflow. You can see the diagram I created to help visualize the workflow below. 
LaunchDarkly feature flags control flow Load Testing Once the new Deployment was functional inside Kubernetes, we began a round of load testing. This was critical to understand the base performance differences between the execution environments. We performed the load testing by first enabling ingress for all data into the new detection engine, but kept egress turned off. We then rewound the application\u2019s offset in Kafka. The data arrived in the rule engine and performed processing, but any output would be dropped on the floor. The processing generated the same level of metric data that the live system did, so we could compare key metrics such as overall evaluation time, CPU usage and memory usage. LaunchDarkly feature flags control flow Output Validation While we iterated through the load test, we also tested the data that was output by the new system. We pulled this off by tweaking the feature flag targeting rule to allow egress for the new detection engine for a specific customer. We chose an internal customer so that we could see the output in the Expel Workbench, but not disrupt our analysts. We triggered alerts for this customer then checked to see if each alert was duplicated, and if the content of each duplicated alert was identical. LaunchDarkly feature flags control flow Rule Delivery Once we were sure the new execution environment was capable of processing the load as well as generating the same output, we began to tackle the thorny problem of how to deliver the rule content. At Expel, our belief in infrastructure-as-code extends to the rules our SOC analysts write to detect malicious activity. The detection content is managed in GitHub, where changes go through a pull request and review cycle. Each detection has unit tests that run through CircleCI on every commit. Getting detection content from GitHub to the execution environment is tricky. The body of rules is constantly changing, and the running rule engine needs to respond to those changes as quickly as possible. Previously, when a pull request was merged, delivering the updated rule content involved kicking off an Ansible job that would perform a series of operations in the VM, and then restart processes to pick up the change. The entire process from pull request merge to going live could take as long as 15 minutes. Not only that, there wasn\u2019t much visibility into when those operations failed. That\u2019s when we asked: Could Kubernetes help us improve this process? The team wasn\u2019t happy with the direct network connection on startup behavior, mainly because it introduced a point of failure and rule changes weren\u2019t captured after startup. After talking with our site reliability engineering (SRE) team, we decided that the Detections API should store a copy of the rule\u2019s content in a Kubernetes configmap. We then updated the Kubernetes Deployment to read the ConfigMap contents on startup. This decoupled the application from the network so that service failures in Detections API would not break the rule engine. But this introduced the possibility of a few other failure modes. If the saved rule content was not getting updated correctly, the running engine could be stuck running stale versions of the rule definitions. One possible cause of this is the size limit on ConfigMaps. Fortunately, addressing these possible failure modes was fairly straight forward. We used monitors in DataDog. We made use of a reloader controller to react to changes in the ConfigMap. 
This controller listens for changes in the ConfigMap and triggers an update to the Deployment. When Kubernetes sees this change in the Deployment, it initiates a rolling update . This process ensures that the new pods start successfully, then spins down the old pods. With both of these changes in place, we arrived at a solution that simplified the operation of the system and allowed it to react to changes in rule content faster than the original implementation. Below is a diagram of the entire process. Expel containerized rule engine Live Migration With the new Deployment performing well and responding to rule changes, we were ready to shift live processing from the old system to the new. We decided to do a phased rollout. We started with a small subset of our customer base, turning egress off in the old implementation and on in the new. We allowed the system to run for a couple of days, and then slowly increased the number of customers routing to the new system. After a few more days, we shifted all customer egress to the new pipeline and turned off egress on the old one. We kept the old system running in parallel so that if we encountered any discrepancies or problems, we could easily flip back to it. After letting both run in parallel for a week, we decommissioned the legacy VM system. LaunchDarkly feature flags control flow What this means for developers Large-scale change to a critical business component is a daunting task. Throughout the process, we made sure to keep both the SOC and leadership in the loop. You\u2019ve probably seen us mention the importance of communication a few times. Regular communication during each phase, especially the planning phases, was critical. We needed to learn about the key dashboards and monitors in play. This also helped us mitigate the risk of having to answer to an angry SOC. Here are some tips based on the lessons we learned along the way: Lean on feature flags. LaunchDarkly offers a richer feature set than we took advantage of, but even so, feature flags let us deploy code live while controlling execution at a very granular level. Our main goal here was to know in advance which subset of customers would be processed by which engine so that their associated engagement managers could be prepared for questions. Adopt observability. Our investment in being driven by metrics paid dividends here. The existing DataDog dashboards were comprehensive and we easily compared both systems simultaneously. We also leveraged the existing corpus of monitors by adjusting their targets to take an additional label into account. Don\u2019t overlook the primitives available in Kubernetes. They gave us the flexibility to respond to content changes at a much faster pace, and with greater visibility. While Kubernetes does support live reloading of ConfigMap content, the current iteration of the engine doesn\u2019t take advantage of it. Our plan is to dynamically reload rule content in the running pod instead of restarting on change; that will alleviate hot spots around waiting for Kafka partition ownership to settle, further decreasing the time it takes for detection content to go live. I hope that this post helped give you some ideas and maybe even saved you some problem-solving time. Want to play around with some of the things we\u2019ve built? Check out the Expel Workbench\u2122 for AWS ."
+ }
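The two-flag ingress/egress gating pattern the post describes can be sketched with LaunchDarkly's Python SDK roughly as follows. The flag keys, the "customer" context kind and the helper functions are hypothetical stand-ins, not Expel's actual implementation.

```python
"""Sketch of per-customer ingress/egress gating with LaunchDarkly."""
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))
client = ldclient.get()

def run_detections(event):
    """Placeholder: evaluate analyst-authored rules against an event."""
    return {"customer_id": event["customer_id"], "rule": "example"}

def publish(alert):
    """Placeholder: publish results to the analyst workflow."""
    print("publishing", alert)

def handle_event(event):
    ctx = Context.builder(event["customer_id"]).kind("customer").build()

    # Flag 1: is this engine instance allowed to ingest the event at all?
    if not client.variation("rule-engine-ingress", ctx, False):
        return

    alert = run_detections(event)

    # Flag 2: publish results, or drop them silently while the new
    # environment is still being validated?
    if alert and client.variation("rule-engine-egress", ctx, False):
        publish(alert)

handle_event({"customer_id": "customer-123"})
```

Flipping either flag in the LaunchDarkly UI takes effect without a build-and-deploy cycle, which is the property the migration relied on.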
could-you-go-a-week-without-meetings-at-work.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Could you go a week without meetings at work?",
+ "url": "https://expel.com/blog/week-without-meetings/",
+ "date": "Dec 8, 2020",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Could you go a week without meetings at work? Talent \u00b7 3 MIN READ \u00b7 LAURA KOEHNE \u00b7 DEC 8, 2020 \u00b7 TAGS: Company news / Employee retention / Guide Wait\u2026what!? If you felt your stomach tighten in horror, followed quickly by a thrill up your spine at the idea of a whole week without meetings, you\u2019re not alone. Many Expletives felt the same as we prepared for our first Week Without Meetings in September. Eliminating all internal meetings for a week is a bold move, one designed to shock the system and make us more intentional about our meeting choices. The experiment paid off by increasing flexibility, and giving us the space and energy we needed. Want to try this at your company? Here are some lessons learned at Expel and tips for how your company can do it too. Why a week without meetings First, why\u2019d we do it? As school started, we\u2019d heard from parents and caregivers that what was needed most was flexibility to do work at a time when they personally had fewer distractions, along with fewer meetings. And we generally agreed that meeting-stuffed days, with hours on Zoom, were draining and left little time for individuals to do work and, even more important, work on strategic projects. We wanted to change Expel\u2019s meeting culture: Reducing the number of meetings (yes!) while improving the value of remaining meetings and encouraging more asynchronous collaboration. Pro tip: Before scheduling a week without meetings, define specific objectives for your program. It\u2019s not enough to just stop meeting for a week. You\u2019ll want to use the pause created by this event to support long-term behavior changes that meet your objectives. Expel focused on these behaviors: Being intentional about the decision to have a meeting Making the meetings we do have more productive Using asynchronous collaboration to work together more flexibly Getting feedback on our meetings for continuous improvement Here\u2019s a quick decision tree we created to help employees decide whether or not they needed to schedule a meeting: Meeting decision tree adapted from Real Life E Time Coaching and Training But why actually stop meetings? Eliminating all internal meetings for the whole week may seem drastic, but sometimes when you\u2019re after urgent, collective behavior change you need a big gesture. We wanted to catch attention immediately, to build awareness and have all Expletives experience the positive benefits of having fewer meetings first-hand, together. Plus, we couldn\u2019t very well schedule a meeting to talk about reducing meetings, could we? (Although, to be honest, those of us planning it met a lot while working out the details\u2026go figure!) How did you pull it off? A Week Without Meetings gave us the \u201cloud pause\u201d we needed to slow down and become more selective about our meeting habits. Here are the steps we took at Expel to prepare our people for a week without meetings (you can use these tips, too): Give several weeks advance notice so people can reorganize their schedules. Provide clear guidance and explicit permission for making decisions about which meetings to schedule and accept. (Our goal was to eliminate all internal meetings. Some meetings stayed: customers, of course, and a few managers met with job candidates or onboarded new hires. The point is to discern what can only be done in a meeting.) No meetings doesn\u2019t mean no work. 
Depending on what you\u2019re trying to do, there are plenty of ways to collaborate outside of meetings . Help your team use the tools available to them. Help managers prepare their teams for Week Without Meetings. Discuss strategies for communicating and maintaining forward momentum for the week. Is that all? Remember, the week itself is part of a behavior change initiative that started before the big event, and continues to this day. Some other keys to our success include giving every Expel manager a chance to weigh in on the idea before it launched; preparing managers with talking points and tools to use with their teams; providing all Expletives with learning resources to support the changes (see a selection on sidebar) and continued reinforcement of key concepts by executives who share their \u201cmeeting mojo\u201d with our company weekly. Would you do it again? Absolutely! In our first Week Without Meetings, many Expletives reported a noticeable increase in energy because they had more time to focus on getting work done. Interestingly, a good number said they became more engaged in their work. Overall we found the experience so beneficial, Expel just completed a second Week Without Meetings in November and plans to continue this tradition quarterly. Here are some key themes from the feedback in September: Week Without Meetings impact If you\u2019re going to implement your own Week Without Meetings, have a mechanism for gathering feedback asynchronously during that time. Share it at the start of the week and encourage people to post observations and ideas as they have them. Expel uses a \u201chotwash\u201d document that asks what\u2019s going well, \u201cmeh\u201d and badly. Derived from the hotwash, the bubbles above are keyed like a stoplight (green = good) and the size indicates the relative volume of comments by theme. A final word If an idea brings up a knee-jerk \u201cNo way!\u201d follow it up with a \u201cWhy not?\u201dWithout that approach, Expel\u2019s Week Without Meetings wouldn\u2019t have made it off the drawing board. Go ahead. Ask \u201cWhy not?\u201dand see what happens when you ditch meetings for a week. We can\u2019t wait to hear how it goes. If you try it, send us a note \u2013 we want to hear about your experience ."
+ }
creating-data-driven-detections-with-datadog-and.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "title": "Creating data-driven detections with DataDog and ...",
+ "url": "https://expel.com/blog/creating-data-driven-detections-datadog-jupyterhub/",
+ "date": "Feb 11, 2020",
+ "contents": "Subscribe \u00d7 EXPEL BLOG Creating data-driven detections with DataDog and JupyterHub Security operations \u00b7 5 MIN READ \u00b7 DAN WHALEN \u00b7 FEB 11, 2020 \u00b7 TAGS: Get technical / How to / SOC / Tools Ask a SOC analyst whether brute forcing alerts brings them joy and I\u2019ll bet you\u2019ll get a universal and emphatic \u201cno.\u201d If you pull on that thread, you\u2019ll likely hear things like \u201cThey\u2019re always false positives,\u201d \u201cWe get way too many of them\u201d and \u201cThey never actually result in any action.\u201d So what\u2019s the point? Should we bother looking at these kinds of alerts at all? Well, as it often turns out when you work in information security \u2026 it\u2019s complicated. Although detections for brute forcing, password spraying or anything based on a threshold are created with good intentions, there\u2019s always a common challenge: What\u2019s the right number to use as that threshold? More often than we\u2019d like to admit, we resort to hand waving and \u201cfollowing our gut\u201d to decide. The \u201cright\u201d threshold is hard to determine and as a result we end up becoming overly sensitive, or worse, our threshold is so high that it causes false negatives (which isn\u2019t a good look when a real attack occurs). At Expel, we\u2019ve been working since day one to achieve balance: ensuring we have the visibility we need into our customers\u2019 environments without annoying our analysts with useless alerts. How data and tooling can help As it turns out, security and DevOps challenges have quite a bit in common. For example, how many 500 errors should it take to page the on-call engineer? This is similar to a security use case like password spraying detection. These shared problems mean we can use a suite of tools that are shared between security and DevOps to help tackle security problems. Some of our go-to tools include: DataDog , which captures application metrics that are used for baselining and alerting; and JupyterHub , which provides a central place for us to create and share Jupyter Notebooks. Step 1: Gather the right data To arrive at detection thresholds that work for each customer (by the way, every customer is different \u2026 there\u2019s no \u201cone size fits all\u201d threshold), we need to collect the right data. To do this, we started sending metrics to DataDog reflecting how our threshold-based rules performed over time. This lets us monitor and adjust thresholds based on what\u2019s normal for each customer. For example, as our detection rule for password spraying processes events, it records metrics that include: Threshold Value , which is the value of the threshold at the time the event was processed; and Actual Value , which is how close we were to hitting the threshold when the event was processed. By charting these metrics,we can plot the performance of this detection over time to see how often we\u2019re exceeding the threshold and if there\u2019s an opportunity to fine tune (increase or decrease it): This data is already useful \u2013 it allows us to visualize whether a threshold is \u201cright\u201d or not based on historical data. However, doing this analysis for all thresholds (and customers) would require lots of manual work. That\u2019s where JupyterHub comes in. 
Step 2: Drive change with data Sure, we could build DataDog dashboards and manually review and update thresholds based on this data in our platform but there\u2019s still room to make this process easier and more intuitive. We want to democratize this data and enable our service delivery team (made up of SOC analysts, as well as our engagement management team) to make informed decisions without requiring DataDog-fu. Additionally, it should be easy for our engagement management team to discuss this data with our customers. This is exactly why we turned to JupyterHub \u2014 more specifically, Jupyter Notebooks. We\u2019ve talked all about how we use JupyterHub before , and this is another great use case for a notebook. We created a Jupyter Notebook that streamlined threshold analysis and tuning by: Querying DataDog metrics and plotting performance; Allowing the simulation of a new threshold value; and Recommending threshold updates automatically. As an example, a user can review a threshold like below, simulate a new threshold and decide on a new value that\u2019s informed by real-world data for that customer. This lets us have more transparent conversations with our customers about how our detection process works and is a great jumping off point to discuss how we can collaboratively fine tune our strategy. Additionally, we added a feature to automatically review historical performance data for all thresholds and recommend review for thresholds that appear to be too high or too low. There\u2019s room for improvement here but we\u2019ve already had luck with simply looking at how many standard deviations off we are from the threshold value on average. For example, here\u2019s what a threshold that is set way too high looks like: By automating data gathering and providing a user interface, we enabled our service delivery team to review and fine tune thresholds. JupyterHub was key to our success by allowing us to quickly build an intuitive interface and easily share it across the team. Step 3: Correlate with additional signals Arriving at the right threshold for the detection use case is one important part of the puzzle, but that doesn\u2019t completely eliminate the SOC pain. Correlation takes you that last (very important) mile to alerts that mean something. For example, we can improve the usefulness of brute force and password spraying alerting by correlating that data with additional signals like: Successful logins from the same IP , which may indicate a compromised account that needs to be remediated; Account lockouts from the same IP , which can cause business disruption; and Enrichment data from services like GreyNoise , that help you determine whether this is an internet-wide scan or something just targeted at your org. By focusing on the risks in play and correlating signals to identify when those risks are actually being realized, you\u2019ll significantly reduce noisy alerts. Every detection use case is a bit different, but we\u2019ve found that this is generally a repeatable exercise. Putting detection data to work Detection data \u2014 in particular, knowing what true negatives and true positives look like \u2014 gives us the capability to more effectively research and experiment with different ways to identify malicious activity. One example of this comes from our data science team. They\u2019ve been looking into ways to avoid threshold-based detection to identify authentication anomalies. 
For example, they used seasonal trends in security signals for a particular customer to identify potential authentication anomalies. By combining seasonal decomposition with the ESD (Extreme Studentized Deviate) test to look for extreme values, we can identify anomalous behavior that goes beyond the usual repetitive patterns we typically see. Thanks to these insights, we can automatically adjust our anomaly thresholds to account for those seasonal anomalies. We\u2019re lucky to have tools like DataDog and JupyterHub at our disposal at Expel, but improving detections is still possible without them. If you haven\u2019t yet invested in new tools, or are just getting started on continuously improving your detections, ask the following questions of the team and tools you already have: What does \u201cnormal\u201d look like in my environment? (ex: 10 failures per day) When is action required? (ex: when an account is locked) What other signals can we correlate with? (ex: login success) How many true positive versus false positive alerts are we seeing? Questions like these give you the ability to reason about detection in terms of your environment and its unique risks. Regardless of where the answers come from, this feedback loop is important to manage your signal-to-noise ratio and keep your analysts happy. Big thanks to Elisabeth Weber for contributing her data science genius to this post!"
6
+ }
customer-context-beware-the-homoglyph.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Customer context: beware the homoglyph",
3
+ "url": "https://expel.com/blog/customer-context-beware-the-homoglyph/",
4
+ "date": "1 day ago",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Customer context: beware the homoglyph Security operations \u00b7 3 MIN READ \u00b7 PAUL LAWRENCE AND ROGER STUDNER \u00b7 MAY 16, 2023 \u00b7 TAGS: MDR This type of phishing attack can be ridiculously sneaky We love when our customers run red team engagements. Aside from testing and validating current security controls, detections, and response capabilities, we see it as a great opportunity to partner with our customers on areas of improvement. Here\u2019s the story of how a red team helped Expel improve our phishing service and how we used our platform capabilities to detect some sneaky activity. So, what happened? Our client\u2014let\u2019s call them Acme Corp\u2014had an enterprising red teamer with a clever idea. For one of their exercises, the red team purchased a domain: \u1ea1cmehome[.]com. Notice anything odd? Let\u2019s look closer: \u1ea1cmehome[.]com vs acmehome[.]com If you missed it, don\u2019t feel bad. That\u2019s the point. A bit of background The problem is that the \u201ca\u201d isn\u2019t an \u201ca\u201d at all, but an \u201c\u1ea1.\u201d It\u2019s a homoglyph \u2014\u201done of two or more graphemes, characters, or glyphs with shapes that appear identical or very similar but may have differing meaning.\u201d This one specifically is a Vietnamese particle used \u201cat the end of the sentence to express respect.\u201d Fast Company called homoglyph attacks (aka homography or Punycode attacks) one of the four most intriguing cyberattacks of 2022 . [They\u2019re] a type of phishing scam where adversaries create fake domain names that look like legitimate names by abusing International Domain Names that contain one or more non-ASCII characters. In other words, hackers discovered at some point that a lot of alphabets, like the Cyrillic and Russian alphabets, have characters that look like English or what we call Latin English. So, a Cyrillic \u201ca\u201d will be different from a Latin English \u201ca,\u201d but when these characters are used in domain names, they are indistinguishable to the naked eye. This allows phishers to spoof brand names and create look-alike domains which can be displayed in browser address bars if IDN display is enabled. There are lots of homoglyphs and the potential for mischief is off the hook (which is why top-level domain registries and browser designers are exploring ways to minimize the risks of h\u00f5m\u00f2gI\u00ffph\u00ec\u010d ch\u00e4\u00f4s). There\u2019s even a homoglyph \u201cattack ge\u00f1erator.\u201d This app is meant to make it easier to generate homographs based on homoglyphs than having to search for a look-a-like character in Unicode, then copying and pasting. Please use only for legitimate pen-test purposes and user awareness training. [emphasis added] Back to Acme. The red team\u2019s fake domain used the Vietnamese homoglyph to trick users into thinking it\u2019s the actual domain\u2014in this case, acmehome[.]com\u2014when that itty-bitty dot under the \u201ca\u201d makes a huge difference. The tactic also relies on a security operations center (SOC) analyst who\u2019s been staring at mind-numbing alerts slipping up and not noticing the difference in domain names. In truth, for most SOCs and attackers, this isn\u2019t a bad strategy. What we did After meeting with the red teamers, we uncovered a need to better scrutinize unique domains within emails that could intentionally trick the naked eye. Technology to the rescue. 
Since we have a content-driven platform capability\u2014customer context (CCTX)\u2014Expel was easily able to change the platform behavior to recognize attacks using that homoglyph site in Acme\u2019s Workbench\u2122. Having a platform that\u2019s content-driven means Expel users can change how the platform operates without having to engage with engineering teams to release new features. NOTE: When you have a platform that allows users to drive content and configuration, it means that once you understand how a feature works, you can bring your own creativity to solving problems. It\u2019s really fun when you\u2019re able to adapt a feature (especially if it allows for rapid response to new or emerging techniques) to accomplish something unanticipated during the design of the feature\u2014which is what happened in this case. The result? Acme Corp\u2019s red team conducted a similar attack again, and this time the SOC caught it with CCTX. What does it all mean? Multiple things, possibly. First, homoglyphs represent a technique that SOCs need to account for. Second, there are branding reasons (as well as security ones) to sort out homoglyph usage. Most businesses with accents and other homoglyphs in their names (Soci\u00e9t\u00e9 G\u00e9n\u00e9rale, A.P. M\u00f8ller-M\u00e6rsk, and Nestl\u00e9 come to mind) typically use unaccented letters in their URLs. Would an analyst notice if a phishing attack used the homoglyph? Or, if the accented URL works (for example, lor\u00e9al.com), what if hackers put a different accent into play (\u00e8 vs \u00e9)? Third, this potentially matters even more for companies in nations whose languages employ extended iconography (this includes most non-English-speaking countries). Which means it matters more for cybersecurity firms serving them. Like us. Short version: homoglyph attacks are prevalent and sneaky. They pose particular challenges for human analysts, but as our Acme Corp case demonstrates, the combination of well-placed automation and humans leads to great results. If you have questions, or think your organization might be at risk, drop us a line."
6
+ }
cutting-through-the-noise-riot-enrichment-drives-soc.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Cutting Through the Noise: RIOT Enrichment Drives SOC ...",
3
+ "url": "https://expel.com/blog/cutting-through-the-noise/",
4
+ "date": "Jul 15, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Cutting Through the Noise: RIOT Enrichment Drives SOC Clarity Security operations \u00b7 2 MIN READ \u00b7 EVAN REICHARD AND IAN COOPER \u00b7 JUL 15, 2022 \u00b7 TAGS: MDR / Tech tools Flash back to your days in the SOC. An alert shows up and your investigative habits kick in ( OSCAR , anyone?). It takes a few minutes, but you eventually determine that this alert is benign network traffic and not, in fact, command and control (c2) traffic to attacker-controlled infrastructure. Can you remember what information you used to reach that conclusion? (Of course not, but maybe remembering a particular third-party open source intelligence (OSINT) tool or query is enough to generate a sense of nostalgia for you.) At Expel, we arm our analysts with the best OSINT available to quickly and accurately spot benign or false positive alerts. This creates space to tackle suspicious activity head-on. More signal. Less noise. Enter the Greynoise RIOT (Rule It Out) API. Greynoise RIOT API To paraphrase the Greynoise team, RIOT adds context to IPs observed in network traffic between common business applications like Microsoft Office 365, Google Workspace, and Slack or services like CDNs (content delivery networks) and public DNS (domain name system) servers. These business applications often use unpublished or dynamic IPs, making it difficult for security teams to keep track of expected IP ranges. Without context, this benign network traffic can distract the SOC from investigating higher priority security signals. We use the RIOT API, plus several other enrichment sources, to help our analysts quickly recognize IPs associated with business services and dispatch network security alerts that don\u2019t require further investigation. Ruxie\u2122, our ever-inquisitive security bot, uses these APIs to collect enrichment information and parse the results for human consumption. RIOT Destination IP Summary RIOT info guides analysts as they orient themselves with alerts. A color-coded enrichment workflow helps them identify noteworthy details. For example, RIOT recognizes the above IP as trust level 2 , but it\u2019s classified as a CDN. Attackers can use a CDN to obfuscate their true source via domain fronting. IPs tagged as trust level 1 are more likely to be associated with an IP that\u2019s managed by a business or service, rather than a CDN. \ufeff CSI: Cyber \u2013 \u201cAll I got is green code\u201d Ruxie also enriches other pieces of network evidence, like domains. Analysts can immediately see the date a domain was registered: a recently registered domain should be treated with additional scrutiny since they\u2019re often associated with recently built attacker infrastructure. Malicious domains tend to be promptly taken down, forcing attackers to start over from scratch. More advanced attackers are known to buy and hold useful domain names for extended periods prior to an attack. RIOT arms our analysts with a simple, colorized tool for surfacing enrichment details so the SOC can quickly spot and dispatch non-threat activity. This means that when Josie\u2122 (our detection engine) and Ruxie (our orchestration bot) have decided an alert is worthy of review, the SOC can get to work on a triage knowing they\u2019re not wasting their time."
6
+ }
dear-fellow-ceo-do-these-seven-things-to-improve-your-org-s.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Dear fellow CEO: do these seven things to improve your org's ...",
3
+ "url": "https://expel.com/blog/dear-fellow-ceo-do-these-seven-things-to-improve-orgs-security-posture/",
4
+ "date": null,
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Dear fellow CEO: do these seven things to improve your org\u2019s security posture Tips \u00b7 6 MIN READ \u00b7 DAVE MERKEL \u00b7 APR 17, 2019 \u00b7 TAGS: Managed security / Management / Overview / Planning You\u2019re at the helm of a fast-growing company. You\u2019re adding staff rapidly, and your team is starting to specialize. Hopefully most of your folks now have one job (or maybe two) instead of the five or six everyone had in the early days. Customers are flying at you left and right (not a bad thing!). Leading a fast-growing org has its perks. And yeah, it\u2019s exciting. But as you scale, you\u2019ll inevitably be breaking things as you stress the organization and look to add more capabilities and maturity everywhere you can. Oh, and did I mention that the \u201csnake that kills you today\u201d starts to change shape as you grow, too? It used to be that you were crossing your fingers to make the quarter. Now it\u2019s, \u201cDo we have mature enough finance and business processes to support Sarbanes Oxley?\u201d Another challenge that often pops up if it hasn\u2019t already: Do you have any clue what you\u2019re doing around information security? Maybe you started to care about that yourself. Maybe a well-traveled board member started asking some uncomfortable questions. I get that \u201cinformation security\u201d is probably toward the bottom of your list of \u201cthe snake(s) that\u2019ll kill you today.\u201d But here\u2019s the thing: a reckoning is coming and it usually shows up at a time that\u2019s least convenient. The good news: You can turn the (information security) ship around. Or get two hands back on the wheel if you\u2019ve been spending your time focusing on other things. Here are seven simple things you can do right now that\u2019ll get your org\u2019s security posture on track. 1. Hire an information security business executive, and have her or him report to you Yes, have this person report to you \u2014 the CEO. Don\u2019t be tempted to have him or her report the CIO, CTO or general counsel. You want a business executive that owns this domain as a close advisor, someone who can translate from security lingo to the language of your business and back again. This person should be a business executive . Someone that understands what your business does, its value proposition and the fact that their role isn\u2019t \u201csay no\u201d \u2014 it\u2019s \u201cfigure out how to say \u2018yes\u2019 while managing risk.\u201d Here\u2019s a litmus test on whether or not you have the right person \u2026 do the CIO and/or CTO respect the CISO\u2019s technical acumen? Would you hesitate to put this person in front of your board of directors so he or she can educate them on what they should care about and how they should hold the organization accountable for security risk? Do you respect this individual as an executive and can you see yourself proactively seeking his or her counsel? If you answered \u201cno\u201d to any of those questions, keep looking. 2. Identify the org\u2019s top information security risks and write them down As an executive, part of your job is to think about potential risks to the business and devise strategies to address them \u2014 like competitors, markets and external events that may impact your business. Security risks are as important to evaluate as any of the more \u201ctraditional\u201d business concerns that you\u2019ve historically considered. 
You have capable leaders to deal with risk in all parts of your business. They should all be at the table when you\u2019re talking about security because security impacts every part of your org. If you followed my advice above, you\u2019ll have a CISO \u2014 he or she can (and should) drive this process for you. Additionally, have your general counsel think about the potential legal ramifications of a security incident. And what about your CFO? How will a security-related misstep impact your bottom line? You get the idea. Bring all those brains to the table and work together to think through the various risks and the ripple effects they\u2019ll have on the broader org. Your execs need to be bought into that response plan, not victims of it. 3. Create your incident response \u201cbrain trust\u201d When something goes sideways (and trust me, it will), who will you call? Sure, the teams with technical expertise will be on the short list, but remember to think about all those potential ripple effects and make sure the right people are at the table when a bad thing happens. This includes legal counsel and even your corporate communications lead. Once again, your CISO will drive this process, but it needs to be sponsored by you so everyone knows it\u2019s important. The best way to prepare for a real security incident is to flex those muscles and practice responding as a group. A great way to do this is to orchestrate a tabletop incident response exercise. Your CISO can get started with your own by downloading our guide to tabletop exercises right here, which has everything you need to simulate a security incident: Oh Noes! A New Approach to IR Tabletop Exercises. When the CISO comes to you to get it scheduled, make sure you support the initiative and give it weight. 4. Build out a true security team Create a security team that\u2019s separate from IT. When security is fully subordinate to IT, you run the risk of thinking about security as a technology problem instead of a risk management capability. When security is part of IT, it can incentivize bad behavior. Security could be viewed as purely a cost instead of a necessity to manage risk. As a result, it could face significant budget pressures. Putting security under IT can also make it difficult to champion certain kinds of spends. For example, maybe buying security technology widgets is easy since IT is used to buying tech. But perhaps doing thoughtful risk assessments that span not just technology but business objectives, processes and functions becomes more challenging, if not outright impossible. Radical pro tip: consider having your IT team report to security \u2014 we did it and it works. Remarkably well, in fact. IT decisions almost always involve some aspect of cyber risk. By having your IT function report into security you enable security to be woven into your IT processes and decision making. This helps your organization build security into your systems and infrastructure from the get-go rather than \u201cbolting it on\u201d as an afterthought. 5. Put some quick security controls in place while you build a security program Conducting thorough assessments to understand security risks and technical control gaps is great, but the reality is that attackers aren\u2019t going to take a time out while you get your house in order. 
That\u2019s why it\u2019s essential that you and your CISO get (or keep) some basic security tools and processes in place quickly, while you simultaneously dive deep into a review of your security processes, programs and tools to figure out what needs fixing. As you work through your assessment, there are plenty of decisions you\u2019ll need to make as you figure out how you want to operate and lay a foundation that minimizes risk. For example, do you want to build your own SOC or use a vendor? What framework will you use to build and measure your new security program? Do you need new technology or are the tools you already have sufficient? 6. Pick a security framework that you\u2019ll use to assess your org Work with your CISO to pick a framework \u2014 there are plenty to choose from like the NIST Cybersecurity Framework, ISO 27001, COBIT or something more specialized like HITRUST \u2014 and stick with it. This will help your exec team communicate your position and plans in a consistent way among one another and with others (like your board, investors and outside counsel) who\u2019ll want those details. By using a framework to organize your planning and assessment activities, you\u2019ll be able to develop a coherent strategic plan, figure out where the gaps are and start to close them quickly. As a bonus, if you\u2019ve socialized the framework with your board, they\u2019ll be able to follow where you are on the journey and ask smarter questions. 7. Track your progress and learn from it Since you hired a CISO first, that person can drive this for you, and he or she will likely use the framework you picked above to backstop their conversations with you and your board about progress. As with so many things, your role is to give this weight. You need to care, ask questions and hold both your CISO and the rest of the organization accountable for delivering on initiatives to improve posture and manage risk. I know what you\u2019re thinking: \u201cThis sounds like any other aspect of my business \u2026 get a leader, listen to their counsel, assess business risks and initiatives in their area, take prompt action and posture for future success.\u201d BINGO. Security is not mystical, as long as you treat it as another function that\u2019s just as important as other key areas of your business, and hire a security leader who is a true peer to the rest of your exec team."
6
+ }
detecting-coin-miners-with-palo-alto-networks-ngfw.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Detecting Coin Miners with Palo Alto Networks NGFW",
3
+ "url": "https://expel.com/blog/detecting-coin-miners-with-palo-alto-networks-ngfw/",
4
+ "date": "Jun 30, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Detecting Coin Miners with Palo Alto Networks NGFW Security operations \u00b7 5 MIN READ \u00b7 MYLES SATTERFIELD, BRIAN BAHTIARIAN AND TUCKER MORAN \u00b7 JUN 30, 2022 \u00b7 TAGS: MDR / Tech tools TL;DR 35% of the web application compromise incidents we saw in 2021 resulted in deployment of cryptocurrency coin miners. The Palo Alto Networks next-generation firewall (PAN NGFW) helps detect and investigate coin miner C2. This post walks through a cryptojacking example and provides helpful advice on how to avoid it in your own environment. Cybercriminals are always looking for new ways to make money. These methods don\u2019t always include holding data for ransom (although this tactic is a popular one). In fact, bad actors don\u2019t necessarily have to elevate privileges or move laterally to make their coin. Q: How? A: Cryptojacking . Cryptojacking is when a cybercriminal steals an organization\u2019s computing resources to mine various crypto currency blockchains. As our end-of-year report indicated, 35% of the web application compromise incidents we saw in 2021 resulted in deployment of various cryptocurrency coin miners. It\u2019s a sweet gig for the bad guys, too: after the miner is deployed, they can sit back, relax, and watch the money pile up. So how can organizations spot cryptojacking? One of the answers is Palo Alto Networks next-generation firewall (PAN NGFW) series. In addition to affording visibility into network traffic, PAN NGFW embeds different types of command and control (C2) detections. As the use of cryptojacking increases, we\u2019ve noted how PAN NGFW has helped detect and investigate coin miner C2 activity in our customers\u2019 environments. Throughout these investigations, we\u2019ve used PAN NGFW\u2014specifically, firewalls and Cortex XDR \u2014to quickly identify and respond to coin miner infections. To be clear: we don\u2019t believe coin miners are inherently bad\u2014it\u2019s the groups that are exploiting vulnerable web-apps for cryptojacking that are the problem. In this post, we\u2019ll walk through why we\u2019ve found PAN NGFW is great at detecting cryptojacking, and some actions we\u2019ve integrated into Ruxie\u2122, our detection bot, to help. Detecting cryptojacking with PAN NGFW Over the past year, 40% of PAN NGFW \u201cCoinMiner\u201d alerts triaged by our SOC were true positive\u2014an extremely high-performance result. In fact, anytime we ingest a PAN NGFW \u201cCoinMiner\u201d alert into Expel Workbench\u2122 (our analyst platform) we create a high severity alert where we aim to have eyes on the activity within 15 minutes. Our response time for this class of alert? Six minutes. Bottom line: the fidelity of these alerts is quite good. In coin mining incidents detected by our SOC, PAN NGFW \u201cCoinMiner\u201d alerts typically detected network connections to known mining pools (for example, \u201c moneropool[.]com \u201d), use of the JSON-RPC protocol, methods (example: \u201c mining.subscribe \u201d) associated with coin mining, and algorithms used by the miner (example: \u201c nicehash \u201d). Let\u2019s consider an example PAN NGFW coin mining alert in Workbench, the investigative steps we take to determine if the activity is a true positive, and some Ruxie actions we use to boost our investigation. Let\u2019s walk through an example alert This is what a PAN NGFW \u201cCoinMiner\u201d alert would look like in Workbench. 
Initial Palo Alto next-generation firewall coin-miner alert First, let\u2019s take a look at the source and destination IP addresses and ports. We can see the source IP address starts with \u201c10.\u201d\u2014indicating the address is internal to the organization. Additionally, the source and destination ports reveal that the source IP address is likely the client and the destination is the server. (The source port is part of the ephemeral port range and the destination port is 80, likely HTTP traffic.) Therefore, if this is coin miner traffic, it\u2019s likely a miner installed on the internal machine reaching out to the mining server. Some quick research on the IP address indicates it\u2019s likely part of a hosting provider. Shodan suggests the IP address has port 80 open, but it\u2019s unclear what service is being offered. If we take a look at the application field, we see json-rpc is used. Some research shows crypto miners use json-rpc to communicate with their mining pools. Let\u2019s step through the communication flow: Diagram of json-rpc Stratum mining protocol (1) The miner sends a login request to the mining pool for authorization. (2) If the authorization is successful, the server sends back a job for the miner to do. (3) After the miner completes the job, it sends a submission back to the mining pool server. (4) The server sends a response to the miner indicating whether the submission was successful. The information from the alert and our research indicates this activity may align with coin mining. Now we can use information from Ruxie to get a better understanding of the traffic going back and forth. We have a Ruxie action that pulls netflow data involving the destination IP address, 45.9.148.21. In the screenshot below, the data shows consistent communication from the source internal IP address 10.1.2.3 to the destination 45.9.148.21. Additionally, there\u2019s consistency between the bytes being transferred each time the source IP connects to the destination. Netflow Ruxie action from source to destination IP addresses Finally, we have Ruxie download a packet capture (PCAP) file from the Palo Alto console (if available). Ruxie parses out readable strings as well as info from different layers in the packet. PCAP Ruxie Action What does this mean? The raw data from the packet above indicates active coin-mining activity. The json-rpc data suggests the server is giving the miner a job, specifying details such as the seed_hash and algorithm to use. This activity aligns with step 2 in the overview of mining communication traffic above. We can infer that a miner at or behind the source IP address performed the login process in step 1 because the server wouldn\u2019t have sent the job recorded in the PCAP if it didn\u2019t receive a successful login. At this point, we have enough evidence to conclude there\u2019s a coin miner installed on the host at or behind the IP address 10.1.2.3. If we have access to endpoint technology, we can use it to determine what process is generating this traffic. We got \u2019em\u2014now what? To improve resilience, we first ask, \u201cHow did the coin miner get here?\u201d If we don\u2019t have access to the source machine of the activity, we may never uncover the answer. However, we can think about some of the common ways coin miners are deployed: Public application exploitation Attackers can exploit public-facing software that\u2019s vulnerable to a remote code execution (RCE) vulnerability to deploy crypto miners. 
How to prevent: Keep public-facing applications and software up-to-date. As our end-of-year report indicated, we typically see cybercriminals exploit one- to three-year-old vulnerabilities. Access key compromise In the past, we\u2019ve watched attackers gain access to long-term Amazon Web Services (AWS) access keys\u2014access keys that start with AKIA\u2014and abuse access to deploy EC2 instances and run crypto miners on the deployed instances. How to prevent: Make sure you aren\u2019t exposing access keys in public repositories and implement least privilege for AWS users. Phishing emails/USB devices Coin miners can be deployed via phishing emails or infected USB devices. How to prevent: Disable autorun on Windows 10 machines and educate end users on the impact of phishing emails. Key takeaways While we understand it\u2019s next to impossible to completely prevent coin miners from being deployed in your environment, here are three key recs for detecting coin mining activity in your org: Look for internal-to-external connections over the json-rpc protocol or to known mining pools (Monerohash, c3pool, and minergate, among others). If you\u2019re using a Palo Alto firewall, investigate their CoinMiner Command and Control Traffic and XMRig Miner Command and Control Traffic alerts. Consider services like Shodan and Censys to see what the internet can see about your attack surface."
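As a closing illustration of the json-rpc signal discussed above, here's a simplified, hypothetical heuristic over extracted payload strings (a real deployment would lean on the firewall's signatures, not this sketch):

```python
import json

# JSON-RPC methods commonly seen in Stratum-style mining traffic.
MINING_METHODS = {"login", "mining.subscribe", "mining.authorize", "submit"}

def looks_like_stratum(payload: bytes) -> bool:
    """Heuristic: does this payload parse as JSON-RPC with a mining method?"""
    try:
        msg = json.loads(payload.decode("utf-8"))
    except (ValueError, UnicodeDecodeError):
        return False
    return isinstance(msg, dict) and msg.get("method") in MINING_METHODS

sample = b'{"id":1,"method":"mining.subscribe","params":{"agent":"XMRig/6.18.0"}}'
print(looks_like_stratum(sample))  # True
```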
6
+ }
detection-and-response-in-action-an-end-to-end-coverage.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Detection and response in action: an end-to-end coverage ...",
3
+ "url": "https://expel.com/blog/detection-and-response-in-action-an-end-to-end-coverage-story/",
4
+ "date": "Sep 8, 2022",
5
+ "contents": "Subscribe \u00d7 EXPEL BLOG Detection and response in action: an end-to-end coverage story Security operations \u00b7 12 MIN READ \u00b7 NATHAN SORREL \u00b7 SEP 8, 2022 \u00b7 TAGS: MDR What does a comprehensive detection, response and threat hunting strategy look like? Glad you asked. Expel provides three primary service offerings\u2014managed detection and response (MDR), phishing prevention, and threat hunting\u2014and we offer those in a few different flavors to customers around the world. One size doesn\u2019t fit all when it comes to service delivery. Each customer\u2019s distinct environment, risk, and security posture requires that tools work together, so we built Expel to connect all of those services into one coherent, unified experience. The whole really is greater than the sum of its parts. So how do our MDR, phishing, and threat hunting services work, and most importantly, how do they work together ? The following soup-to-nuts description of Expel\u2019s security process borrows details from several real-life detection situations, and the accounts illustrate how our team shut hackers down. While we\u2019ve changed some particulars for the sake of privacy, this story accurately represents how our teams go from triaging alerts all the way to threat hunting and back. We\u2019ll walk you through the entire incident to illustrate how different players on the team and our complementary services reinforce each other. Detection: alert and triage It\u2019s a Sunday at 7:17am EST. The day shift analysts have arrived and are catching up on last night\u2019s activity. Reading through customer communications and recent investigations, the analysts soak up the news. Tools are logged into, browser tabs are organized, and the day begins. Girish checks on a verification request for updates he sent to a customer yesterday. Jenni flips through alerts, looking for \u201cthe weird.\u201d Chris puts the finishing touches on an investigation that looked odd at first, but was quickly explained by some research and a little IP prevalence mapping. Let\u2019s meet our talented crew. Girish, a detection and response analyst, helps keep all the balls in the air. His gift for leadership, organization, and process comes in handy when ensuring 24\u00d77 coverage across three shifts and 25+ analysts. In a given week Expel analyzes hundreds of incidents and conducts dozens of investigations. Girish, and others like him, keep the trains running. Chris\u2019 superpower is level-headedness. In security, where a frantic response can lead to disaster, Chris doesn\u2019t react, he responds, by taking a few seconds to reflect on the facts of a case. He radiates calmness, making the whole team make better, smarter decisions. Jenni seems to have threat intel on speed dial. She can research and document activity better than almost anyone. Offering accurate understanding and attribution regarding attack type can be profoundly helpful during an investigation. All of these folks have spent thousands of hours reviewing suspicious activity and investigating the \u201creally bad\u201d stuff from our customers. At 7:48 am EST, an alert arrives \u2014 DNS queries originating from the process Regsvr32.exe. Windows Defender ATP detects a common Windows binary making unusual network connections. This alert arrives in our medium severity queue and is examined by an analyst within 10 minutes. With our automation-forward approach, raw alerts are analyzed immediately by our detection bot, Josie\u2122. 
It commonly takes less than five minutes for Josie to escalate an alert to a human analyst, and for that analyst to confirm the alert is a threat. We consistently triage our highest fidelity alerts in about two minutes. We track our response time in minutes and we like it that way. Jenni takes a look and quickly notes the processes involved. Its parent is Winword.exe and Jenni begins to comb through its command line arguments. Her experience, combined with open-source tools like Echotrail.io, tells her that the process Regsvr32.exe isn\u2019t commonly generated by the Microsoft Word process. Its network connections heighten her interest, so she digs deeper. Beyond the experience of seeing thousands of alerts a month, our analysts use in-house datasets and open source tools (like GreyNoise) to determine the prevalence and meaning of observed events. Asking questions like, \u201cIs this activity actually uncommon on a global scale?\u201d and \u201cDoes this IP address have a reputation?\u201d leads analysts to better understand what they\u2019re seeing. Her first step is to look for any highlighted text on the Expel Workbench\u2122 alert page, which may indicate this host was involved in a previously disclosed exercise. But the CCTX around the endpoint name shows no indication that this activity is known or expected. \u201cThe host is not known\u2026the user is \u2018mukhi\u2019\u2026wonder who that is? \u2026Where is the\u2026\u201d Jenni\u2019s voice trails off as she thinks aloud through the evidence in front of her. We call it customer context or \u201cCCTX.\u201d It\u2019s most commonly displayed in Workbench as highlighted text. CCTX can be any specific insight provided by the customer related to expected activity from users, endpoints, or network locations, and it helps us quickly assess a situation. Additionally, our analysts flag red team assets, previously compromised hosts, and other artifacts for future reference. Each piece of CCTX information saves our analysts minutes of research, keeping our alert-to-fix times low. After initial triage and lacking further context, Jenni creates an investigation within Workbench and sets about organizing her research. Response: investigation and context This one will require more time and digging. Jenni launches a \u201cPermaZoom\u201d 24\u00d77 video call with the rest of the team. \u201cAnyone else see that one in the medium queue? It doesn\u2019t look right.\u201d More analysts jump in to help. DeShawn, always eager to lend a hand, takes a look. \u201cI\u2019m gonna see if any other hosts are talking to that domain,\u201d Tucker chimes in. Chris offers to scope the environment for other instances of the Word document. The Expel security operations center (SOC) is very much a team. Analysts bring their own capabilities and knowledge sets to the table and investigations quickly take shape around the collective strengths of the group. One analyst examines the endpoint within Microsoft Defender for Endpoint while another looks at IP/domain prevalence. A third examines recent phishing activity. It\u2019s not uncommon to have three or more analysts collaborating on the same incident. The collaboration between our analysts also extends to you. The Expel Workbench lets our customers see everything we see in real time \u2014 not after the fact. Workbench gives them potent investigative and data collection tools to power their own daily SOC activities. 
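For readers who want the detection idea in concrete terms, here's a hypothetical sketch of the kind of parent/child process check behind an alert like this one (illustrative only, not Expel's actual rule logic):

```python
# Office apps that shouldn't normally spawn regsvr32.exe.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}

def office_spawned_regsvr32(event: dict) -> bool:
    """Flag regsvr32.exe launched by an Office process that then
    makes network connections (e.g., DNS queries)."""
    return (
        event.get("process", "").lower() == "regsvr32.exe"
        and event.get("parent", "").lower() in SUSPICIOUS_PARENTS
        and event.get("network_connections", 0) > 0
    )

alert = {"process": "Regsvr32.exe", "parent": "Winword.exe", "network_connections": 3}
print(office_spawned_regsvr32(alert))  # True
```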
Jose, an Expel phishing analyst, says he just saw an email submission containing a Word document similar to the \u201ctax help\u201d one identified in the alert. \u201cCan someone grab the Word doc off the host?\u201d he asks. Analysts on the phishing team are pros at triaging suspicious documents. The faster Jose can get that file, the faster he can provide the support Jenni and the team need. Jose gets Chris\u2019 help scoping for evidence of file execution while he compiles a list of users who received the email. While our services offer tremendous value individually, integrating them provides even more coverage against an attack \u2014 a benefit highlighted by this case. The root cause of most attacks? Phishing emails. MDR and phishing services together make up the Expel SOC, and they communicate extensively, maximizing effective response across our customer base. Since Jose and other phishing analysts are at the front edge of so many attacks, they can alert MDR analysts sooner about potential business email compromise (BEC). Attacker trends are commonly noted by phishing analysts, who pass the information on to their MDR counterparts. Overall, having both services in place means fuller coverage and quicker response. Back to the story. Thankfully this customer, Vandelay Industries, provides the Expel SOC with Live Response access via their EDR console, meaning Jose can directly acquire the file for fuller analysis. Detonating the document in our sandbox confirms that the document isn\u2019t, in fact, the \u201cTax Planning Help Guide\u201d its name suggests (we know \u2014 we\u2019re as shocked as you are). \u201cHey, Jenni,\u201d says Jose, \u201cthis sandbox execution looks bad.\u201d Jenni looks at the endpoint timeline (since the malicious document was first opened). \u201cI\u2019m guessing that JPEG isn\u2019t really a JPEG,\u201d she mumbles as she runs the hash through VirusTotal. Remediation: incident to fix \u201cI\u2019m gonna spin this up into an incident,\u201d Jenni says. \u201cThey need to isolate that host.\u201d For many incidents, automation baked into our process lets Jenni instantly both notify the customer about what we\u2019re seeing and suggest remediation steps. More hosts, hashes, and domains will be added to the list of suggested remediation steps as the SOC gathers indicators of compromise (IOCs). \u201cDear Vandelay Industries, Today at 5:47 UTC Windows Defender detected \u2018Regsvr32.exe\u2019 being spawned from \u2018Winword.exe\u2019 on host DESKTOP-3AB921 and making network connections to BadDomain.com\u201d\u2026 Contain the host \u201cDESKTOP-3AB921\u201d Block the malicious Word document \u201cTax Planning Help Guide.docx\u201d with SHA256 hash \u201cba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad\u201d Sinkhole the domain BadDomain.com Block emails from \u201cBadEmails.com\u201d with subject line \u201cDownload the Tax Planning Help Guide\u201d We will update you if we identify any other involved hosts. Within 20 seconds of the incident\u2019s creation, our customer has meaningful actions they can use to nip the attack in the bud. And yes, we\u2019re tooting our own horn here. We\u2019re good at what we do and we do it quickly. \u201cWhich customer was that incident for again, Jenni?\u201d asks DeShawn. \u201cI see two more alerts in the medium queue that look similar.\u201d \u201cVandelay Industries,\u201d Jenni replies. 
\u201cIs that the \u2018DESKTOP-3AB921\u2019 host you\u2019re seeing, or a new one?\u201d \u201cSame customer, but new hosts\u2026 both of them. I\u2019ll drop those in the incident and assign those remediations to Vandelay,\u201d DeShawn adds. \u201cThanks,\u201d she says, \u201cI\u2019m gonna make this incident \u2018critical\u2019 and update the customer in Slack. Would you mind scoping those hosts for anything new\u2026domains or otherwise? Whoa. Vandelay already yanked that first host off the network. That was quick!\u201d At this point, much of the heavy lifting is done. Jenni and another member of the Global Response Team (GRT) will continue to deep-dive into anything that\u2019s still not fully understood. They\u2019ll ask questions like: How many users received the email and how many clicked on the malicious attachment? What\u2019s the source of the email? How many hosts are involved? What network activity did we see? Was there any evidence of persistence or lateral movement? Did the malicious files successfully execute? Should the hosts be reimaged? New IOCs are added as they are discovered and any new alerts that come through are attached to the incident. The GRT is composed of senior and principal-level analysts who serve as incident responders for critical incidents. These are our most seasoned analysts and they help validate all aspects of the compromise. Next question: \u201cHow can we help the client avoid this next time?\u201d Resilience: prevention The team has shared remediation steps with Vandelay, and Jenni awaits confirmation. David has joined the Zoom call as a member of the GRT to help Jenni finalize things. Jenni tells David, \u201cSo far, we\u2019re seeing execution on three hosts from what appears to be \u2018click-through\u2019 by users into a phishing campaign. That led to a malicious Word file. I\u2019ve updated the customer but am still waiting for them to respond. Two of the hosts are still online. The incident is \u2018critical\u2019 because of the multiple hosts, so they should have received a notification by now, but still no word. I\u2019ll ping their account rep and have them reach out by phone.\u201d David thinks out loud. \u201cSo they don\u2019t have auto-containment in Workbench enabled. Let me get into the console and poke around.\u201d Elapsed time since we first issued customer recommendations: 40 minutes. This situation is tricky, as we\u2019re dealing with multiple hosts and decreased weekend staffing by the customer. What can we do when there\u2019s an active threat but the customer is out-of-pocket? Good news: clients can opt into our automated remediation service, which can automatically contain hosts as needed. Unfortunately, Vandelay isn\u2019t taking advantage of this feature. \u201cI think we\u2019ve added all the relevant artifacts to the remediation actions,\u201d David explains. \u201cI\u2019m checking to see if we can suggest anything that\u2019s helpful for the future. Looks like they\u2019ve previously been advised to turn off allowing \u2018wscript.exe\u2019 to open shell scripts. I\u2019m seeing that recommendation nine, ten\u202611 times total, over the past year. I\u2019ll add it to the Resilience section again.\u201d This particular customer had a total of 20 endpoint-related security incidents within its environment last year, more than half of which would have been avoided with the proper wscript.exe resilience policy in place. 
While resilience steps are not always easy to implement, they can make a substantive, positive impact on a customer\u2019s security posture. Expel SOC analysts are up early anyway and available 24/7, but most people don\u2019t want to be awakened on a Sunday morning by a critical incident. Your weekend on-call folks, not to mention your CISO, will thank you for preventing incidents like this. PagerDuty automation, enabling auto-containment and completing resilience recommendations are small investments that can be made to improve response times for future incidents. PagerDuty can wake you up if something goes wrong. Auto-contain authorization lets us isolate compromised hosts even if you don\u2019t wake up. Completed resilience actions can help you avoid these issues altogether. Let\u2019s say you want to take a deeper look into your environment. Are my remediation steps working as expected? What else is \u201cRegsvr32.exe\u201d doing on our endpoints? Do we have any coverage gaps? Threat hunting: validation and high-level understanding [The next day; the familiar <ding-dong> sound chimes as Bryan joins the Zoom call] \u201cHey gang, is Jenni on? She asked me to pop in\u2026something about a wscript.exe hunt?\u201d Bryan knows both the red and blue side of cyber and now gets to employ those years of experience in a threat hunting capacity. Our hunting service, a big step beyond detection and response, lets us dig deep into customer data not only to find detection gaps and suspicious events, but also to verify resilience. Our hunting catalog easily expands to scope for both confirmation of resilience and absence of emergent IOCs. We ask questions like: Was multi-factor authentication (MFA) really enabled for all users? Is the Server Message Block (SMB) protocol accessible on public-facing servers? What Amazon Web Services (AWS) region should we not see in this environment? Does Java.exe ever have any suspicious child processes? These questions are crucial. If you think you\u2019re hardening your infrastructure, don\u2019t you want to be sure? \u201cHey Bryan, I\u2019m here,\u201d Jenni chimes in. \u201cVandelay had a thing yesterday where \u2018wscript.exe\u2019 was involved. I wanted to see if we can do some hunting on how commonly that process is used in their environment. Also, I\u2019d love to be able to verify that shell scripts no longer get opened with wscript? We\u2019ve recommended that resilience action to them a bunch of times. It really helps if they\u2019re able to get a better picture across their systems. Is that something we can do?\u201d A lot of in-house security teams are so busy they rarely have time to baseline or research their own environments. Questions like, \u201cWhat parent process typically spawns wscript.exe?\u201d can slip down the priority list. And \u201cWhich users and domains are most commonly seen executing Okta impersonation events?\u201d Or \u201cWhat AWS users do we see commonly using long-term AccessKeyIDs?\u201d Expel threat hunting can provide some much-needed insight into these and other endpoint, SaaS, and cloud questions. \u201cHey Jenni, glad to jump on. Have they ever confirmed implementation of that resilience step?\u201d Bryan asks. \u201cI wonder if it\u2019s something they\u2019ve simply chosen not to do.\u201d Jenni says, \u201cI saw back in October they marked that action as complete. I\u2019m wondering if they pushed the policy but didn\u2019t quite get the protection they\u2019d intended. We\u2019re still seeing it run, obviously. 
Do we have a hunt we could employ to scope wscript activity across all their hosts?\u201d \u201cThe Historical Scripting Interpreter hunt would shed some light on that for them,\u201d suggests Bryan. \u201cThey\u2019re using Windows Defender, right? I\u2019ll ping their account rep to see if they want to get the process going. Thanks for bringing this up.\u201d \u201cYeah they are using Defender,\u201d she replies, \u201cand thanks for doing that. Let me know if you need anything from this end.\u201d \u201cThanks Jenni, I\u2019ll keep you posted on how it progresses. Might have you run the analysis when the hunt kicks off. Great catch on the incident, by the way.\u201d The Expel threat hunting service iterates around a historical POV and a broader range of detection complexity. We conduct regular monthly hunts on your tech and infrastructure, and we run periodic IOC hunts as new threats emerge. Even more fun: with Expel, you can even take advantage of evolving draft hunts for testing and development. We afford our hunting customers better visibility across their whole landscape. Whether it\u2019s cloud infra, SaaS applications, network, or endpoint-related hunts, our coverage includes a wide array of technologies. For example: AWS\u2019 EC2 modifications hunt Duo\u2019s Suspicious Duo Push activity hunt Cloud apps\u2019 data center logins hunt Cloud infra\u2019s Azure Successful Brute Force hunt We also provide additional insights and resilience recommendations to help reduce risk exposure in the future. Threat hunting allows you to validate that you\u2019re as secure as you\u2019re trying to be, and provides a path forward on things that still need some attention. What else can hunting do? And, where do we go from here? Completing the circle: better detection \u201cWe\u2019re definitely seeing it come through the queue,\u201d says Bryan, \u201cbut I want us to elevate its severity to high. We\u2019ve seen this technique spike this month in particular. The Vandelay incident really highlighted the recent uptick in this usage of a JPEG file as an obfuscated script file. OSINT calls it a Shorse Attack. I don\u2019t know where they get these names\u2026\u201d \u201cSo basically,\u201d Peter replies, \u201cif the command line contains \u2018wscript\u2019 plus \u2018.jpg\u2019 or \u2018.jpeg\u2019 we categorize it as a HIGH. Right?\u201d Peter, an Expel senior detection and response analyst, joins Bryan to make sure the activity gets categorized appropriately. If the detection logic produces higher-fidelity signal, we want to elevate the severity to get analysts\u2019 attention more quickly. \u201cExactly,\u201d Bryan says. \u201cWe ended up running that query across another five or six customers and found that it\u2019s a lot more prevalent than the months prior. This adjustment should surface these alerts to an analyst even quicker.\u201d Peter nods. \u201cSounds good. That change should be live within the hour. I\u2019ll holler if I have any more questions.\u201d \u201cThanks, Peter. I\u2019ll check back in a few days. This Shorse stuff makes me wonder if this might be a good long-term hunt for our catalog. Basically, wscript.exe being run containing any atypical file types in the command line. I\u2019ll let you know what I find.\u201d Whether it comes out of our threat hunting experience, a phishing campaign, or new threat intel, Expel constantly adjusts the dials on our detection capabilities. 
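A hypothetical sketch of the severity bump Peter and Bryan just agreed on (illustrative pseudologic, not Expel's rule engine):

```python
SUSPICIOUS_EXTS = (".jpg", ".jpeg")

def classify_wscript_alert(command_line: str) -> str:
    """Bump wscript alerts to HIGH when the 'script' is really an image file."""
    cmd = command_line.lower()
    if "wscript" in cmd and any(ext in cmd for ext in SUSPICIOUS_EXTS):
        return "HIGH"
    return "MEDIUM"

print(classify_wscript_alert(r"wscript.exe C:\Users\mukhi\photo.jpg"))  # HIGH
```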
We try to harness every ounce of analyst attention and brain power toward customer alerts, and we never want to waste a scrap of what we learn. Completing the feedback loop is critical to properly facing a rapidly evolving threat landscape. Tomorrow, even if attackers start using electric toothbrushes to launch attacks, we\u2019ll be able to respond. What end-to-end coverage means to us We dramatized the Vandelay incident for readability, but we see events like this all the time at Expel. Like, every single week. And each time we work through the alert \u2192 investigation \u2192 phishing \u2192 incident \u2192 hunting \u2192 better detection \u2192 alert cycle (and its various permutations), we get faster and better, to make you safer. Jenni, Girish, Tucker, Jose, Chris, DeShawn, David, Bryan, and Peter are just a few members of the team keeping eyes-on-glass all-day-every-day. This is 360\u00b0 security at its best. You\u2019re invited to test drive our comprehensive MDR, phishing and hunting services to experience the full benefits."
6
+ }