By: Josh Davies, Product Manager at Alert Logic
Security professionals don’t sleep that well. At least, this is the commonly presented trope. Perhaps it comes from being the antithesis of the stereotypical, ominous, hooded hacker with dark bags beneath the eyes, skin translucent from the radiation of LED displays, tirelessly testing and breaking into systems. Black hat and white hat security professionals (and yes, both sides are professional now) have near-identical skills, spanning both the red, offensive side of security and the blue, defensive side. Perhaps it is because we are cut from the same cloth, or because you must know your enemy to defeat them.
The sleepless hacker is such a recognisable image that when I was a SOC analyst and we had customers attend a SOC tour, our director would quickly pull all the blinds and dim the lights before the frosted glass was disabled to reveal teams of industrious security analysts, seemingly oblivious of time, working 24/7/365.
As impressive as I knew the sight was to those looking in, I always found something quite comical about the song and dance. Although I preferred a room full of natural light, I was perfectly able to continue my incident response, tuning and threat hunting activities in the dark. It got me thinking how much of cybersecurity is intertwined with theatre: the militaristic terms we repurpose, the multitude of dashboards, screens and visualisations we employ to give a sense of tangibility to the ethereal cyber world. I would sometimes even play along, opening pages like hackertyper.net or one of the many threat map visualisations available, when I knew what I was actually working on was not quite as glamorous. This theatre had always been something I’d enjoyed about my industry – walking in on a NASA-esque tiered arrangement of desks and screens inspired awe and gave a sense of purpose.
One SOC tour, I realised that these theatrics were just as important for our customers. Seeing a team of incident responders, seemingly suspended in a constant period of dusk, gave comfort that a team of security experts were reliably watching over organisational data and systems, reassuring them that we would be there to match the faceless hacker.
However, theatrics alone are easily exposed once you peel back the surface and dig into the substance. So with that in mind, let’s look at five key components that should be part of any successful cybersecurity strategy, to grant security professionals peace of mind and better sleep at night.
- Visibility
Visibility is the first element and is the foundation for security hygiene. You can’t secure what you can’t see – or more accurately, what you don’t know you have. Visibility begins with asset discovery: a regular or continuous process that identifies any asset added to your network and collects host metadata into some form of topological or inventory view.
This simple context is invaluable for security analysts when it comes to incident response. Being able to view data points – when the host was created, where it sits in the network and what is running on it – speeds analysis and allows the analyst to focus on the flagged activity, reducing the need to dig through historic data.
IT owners can also combine this data with their knowledge of current projects and strategy to identify whether new hosts are expected or unexpected. Any unexpected host is clearly grounds for further investigation, but having that additional asset information means teams can reach resolution quicker, without manual investigation, and with fewer false alarms when developers are asked to identify a host from a single data point.
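As a rough illustration of this idea, the core of asset discovery triage can be reduced to comparing scan output against a known inventory and flagging anything unexpected. Everything below – field names, addresses, the inventory shape – is a hypothetical sketch, not a real tool's data model:

```python
# Hypothetical sketch: compare discovered hosts against a known inventory
# to surface unexpected assets for investigation. All data is illustrative.

known_inventory = {
    "10.0.1.10": {"owner": "web team", "role": "nginx frontend"},
    "10.0.1.20": {"owner": "data team", "role": "postgres primary"},
}

# Output of a discovery scan (e.g. parsed from an agent or network scan report).
discovered_hosts = [
    {"ip": "10.0.1.10", "os": "Ubuntu 22.04", "open_ports": [80, 443]},
    {"ip": "10.0.1.20", "os": "Ubuntu 22.04", "open_ports": [5432]},
    {"ip": "10.0.1.99", "os": "Windows 10", "open_ports": [3389]},
]

def triage(discovered, inventory):
    """Split discovered hosts into expected and unexpected."""
    expected, unexpected = [], []
    for host in discovered:
        (expected if host["ip"] in inventory else unexpected).append(host)
    return expected, unexpected

expected, unexpected = triage(discovered_hosts, known_inventory)
for host in unexpected:
    print(f"UNEXPECTED host {host['ip']} ({host['os']}), ports {host['open_ports']}")
```

The value is the enrichment: an analyst chasing the unexpected host already knows its OS and exposed ports, rather than starting from a bare IP address.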
Once you know what your assets are, you need visibility into what they are doing. Understanding the actions of an asset begins with log collection. Different log sources require different collection methodologies, ranging from agents and forwarding to remote collectors, through to API integrations.
Log collection is essential to any security strategy; however, relying on logs alone does not provide enough telemetry, and means you’re less likely to catch a breach in time. Network IDS (NIDS) and host-based IDS (HIDS) will increase your visibility, complementing and enriching log data.
NIDS performs deep packet inspection and can see the entirety of requests and responses traversing a network. In my experience, NIDS events have been the first to alert analysts of a breach. Encryption limitations caused some to proclaim NIDS to be ‘dead’, however, recent data shows that NIDS has shot up to fourth place as a breach discovery method (behind actor disclosure, monitoring partner/service and security researchers) which is the highest position NIDS has been in 10 years of data. Long live NIDS.
HIDS is usually performed by EDR tools, which create their own telemetry based on the processes and applications running on servers and workstations, beyond standard logs. Similar outcomes can be achieved by enhancing your logging levels and using tools like file integrity monitoring (FIM) to watch security-sensitive directories.
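To make the FIM concept concrete, here is a minimal sketch of what such tools do at their core: hash every file in a sensitive directory into a baseline, then diff later snapshots against it. Real EDR/FIM products add kernel hooks, real-time alerting and tamper resistance; this is purely illustrative:

```python
# Minimal file integrity monitoring (FIM) sketch: hash files under a
# security-sensitive directory and report anything changed since baseline.
import hashlib
from pathlib import Path

def snapshot(directory):
    """Return {path: sha256 hex digest} for every regular file under `directory`."""
    state = {}
    for path in Path(directory).rglob("*"):
        if path.is_file():
            state[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return state

def diff(baseline, current):
    """Report created, deleted, and modified files between two snapshots."""
    created = set(current) - set(baseline)
    deleted = set(baseline) - set(current)
    modified = {p for p in set(baseline) & set(current)
                if baseline[p] != current[p]}
    return created, deleted, modified
```

Run `snapshot()` on a schedule against a directory like `/etc` and any entry in `created`, `deleted` or `modified` becomes a candidate security event to investigate.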
Harnessing the capabilities of all three of these data sources is optimum, but if this is not practical, then relying on logs plus a form of IDS allows security professionals to rest knowing they have visibility into more activity than logs alone.
It is important that these disparate data sources are unified into a single console. Having separate management consoles for different environments and sources makes cross-correlation incredibly difficult and is an obstacle to efficient analysis. The recent shift to hybrid architectures means that the majority of organisations operate on premises, in data centres, and even across multiple private and public clouds. Organisations must resist the urge to buy “best of breed” point products for monitoring each, and instead identify all the sources and integrations that are important to address risk, then select an appropriate tool that can unify them all.
Holistic visibility is likely more important than some specialist or targeted features; however, teams may have grown used to the features of existing products and invested a lot of time in their configuration and deployment. If your network strategy has changed recently (which it likely has), then security professionals and decision makers need the courage to re-evaluate the market and resist falling victim to the sunk cost fallacy. Even with the appropriate tool in place, visibility needs to be revisited periodically to ensure that new and existing assets are covered and in good health. Visibility is key: fail to keep on top of it and your security strategy is built on shifting foundations.
- Exposure Management
Once you know ‘what’ you have, you need to understand any associated risks. Exposure management, in the form of vulnerability and misconfiguration checks, will allow you to understand the risks present in your network. Technology moves quickly and organisations want to capitalise on the new efficiencies it offers, but this rapid evolution means new vulnerabilities and configuration problems are a constant in IT.
Vulnerability scans and API checks help to identify your exposures and should be conducted frequently. Once you are aware of exposures in your environment, they must be addressed. That is the ideal, but it is unrealistic to stay on top of every exposure – so many exist that exposure management tools must update multiple times a day just to remain current. Since you can’t resolve them all, it is important to have a strategy that tackles the exposures presenting the largest risk and prioritises mission-critical assets.
Intelligent exposure management tools will apply some additional context, like the criticality of a production server versus a dev/test/staging server, and grant ‘risk scores’ to prioritise your patching/hardening actions accordingly.
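One common way to apply that context – shown here as a hedged sketch, with entirely made-up weights, hostnames and field names rather than any specific vendor's scoring model – is to weight a finding's CVSS score by the criticality of the asset it sits on:

```python
# Hypothetical risk-scoring sketch: weight each finding's CVSS score by the
# criticality of the environment it sits in, then sort so the riskiest
# exposures are patched first. All weights and data are illustrative.

CRITICALITY_WEIGHT = {"production": 1.0, "staging": 0.5, "dev": 0.25}

findings = [
    {"host": "web-prod-01", "env": "production", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"host": "build-dev-03", "env": "dev", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"host": "app-prod-02", "env": "production", "cve": "CVE-2020-1472", "cvss": 5.5},
]

def risk_score(finding):
    """Contextual risk = raw severity scaled by asset criticality."""
    return finding["cvss"] * CRITICALITY_WEIGHT[finding["env"]]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.2f}  {f['host']}  {f['cve']}")
```

Note how the same critical CVE scores 10.0 on the production web server but only 2.5 on the dev build box – the patching queue reflects business risk, not just raw severity.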
- Prevention Control
If you can stop a threat before it ever lands, why wouldn’t you? This is why security professionals strategically place preventative controls across multiple layers of their security strategy. Firewalls can enforce access policies at the perimeter, EDR/NGAV can block threats at the endpoint, and a web application firewall is ideal for protecting your most targeted assets – public web applications.
But building your walls as high as possible to stop all threats has long been accepted as an outdated and unsuccessful model. Security professionals who put too much emphasis on preventative controls, likely need to wake up from whatever deep sleep they’ve managed to achieve by burying their heads in the sand.
Security controls are not meant to hinder daily operations – they are meant to prevent unauthorised actions that would negatively impact daily operations. Preventative controls must tread a fine line: blocking genuine threats without blocking legitimate activity. The result is a difficult balancing act between false positives (unnecessary blocks) and false negatives (actual threats that are allowed through), and as a consequence blocking technologies have a narrower scope than detection.
Attackers are continuously trying to circumvent preventative controls by operating in the grey area between known bad and known good. Novel exploits may be used, or they may hide amongst legitimate activity, utilising existing processes that are legitimately used to update or manage a machine and instructing them to perform malicious actions in ‘living off the land’ attacks. In addition to circumvention, attackers will try to disable preventative controls before launching an attack. This is especially common in ransomware, where attackers attempt to kill processes which could block their next move.
For all these reasons, it is essential to assume compromise and have a way of detecting actions which may slip through the preventative nets. Prevention is necessary, but it’s only a start.
- Detection
Successful exploitation allows an adversary entry to a system. Many more steps then occur to facilitate the installation and command-and-control phases of the kill chain, such as privilege escalation and lateral movement. This gives defenders a window to reduce the impact of unauthorised entry. Consider ransomware: encryption is the very noisy, final action an attacker deploys once they have gained access to as many systems as possible, disabled preventative controls and located any backups, ultimately crippling an organisation to extort the largest possible ransom.
Having a successful detection program in place means security professionals will be warned of any successful exploit or subsequent activity, allowing them to disrupt the attack before the attacker achieves their objectives.
All of the data sources discussed in the visibility section need to be collated into a single analytics tool. Automated detection capabilities such as correlation, anomaly detection, UBAD and beyond will generate incidents for further investigation.
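As a toy illustration of what "correlation" means in practice, consider the classic brute-force pattern: several failed logins from one source followed by a success. The log shape and threshold below are assumptions for the sketch, not any particular product's rule syntax:

```python
# Illustrative correlation rule: flag an incident when a source IP records
# several failed logins and then a success - a brute-force indicator.
# Event shape and threshold are assumptions for this sketch.
from collections import defaultdict

FAILURE_THRESHOLD = 3

def correlate(events):
    """events: time-ordered dicts with 'src_ip' and 'outcome'.
    Returns source IPs that logged in successfully after crossing
    the failure threshold."""
    failures = defaultdict(int)
    incidents = []
    for e in events:
        if e["outcome"] == "failure":
            failures[e["src_ip"]] += 1
        elif e["outcome"] == "success":
            if failures[e["src_ip"]] >= FAILURE_THRESHOLD:
                incidents.append(e["src_ip"])
            failures[e["src_ip"]] = 0  # reset after any success
    return incidents
```

A single failed login or a single success is noise on its own; it is the correlation across events that turns raw telemetry into an incident worth an analyst's time.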
To realise their full potential, detection tools must be supported by appropriate human expertise and defined processes. Expert analysts should be working 24/7/365, as most incidents occur outside of business hours: attackers know they have a greater chance of carrying out the entirety of their actions when networks are most vulnerable – when no one is monitoring them. It is no coincidence that most emerging threats I have been involved in were discovered on weekends. Organisations that instead wake up each morning and sift through overnight alerts not only increase their time to respond, but also struggle with the sheer volume and become susceptible to alert fatigue – a “boy who cried wolf” scenario where the true positive incident is missed amongst the noise of false positives.
Beyond automated detection, human-led threat hunting can serve as a backstop for anything that is either too new or too noisy to be automated. 2021 has already broken the previous year’s record for the number of zero-day threats identified, and without the support of specialist threat hunters, you are unlikely to be aware of such a threat until it has already advanced within the environment. An effective threat hunting program provides early warning of emerging threats and zero-days.
Implementing layers of detection and prevention as part of a defence in depth strategy will allow security professionals to rest easy in confidence that they will be made aware if/when their environment is breached – before it’s too late.
- Response
The primary metric for response is speed: how quickly can you respond to, and remediate, a validated security incident?
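Two measurements commonly used to answer that question are mean time to detect (MTTD) and mean time to respond/remediate (MTTR). The sketch below computes both from hypothetical incident timestamps; the incident records and field names are illustrative assumptions:

```python
# Sketch of two common response metrics: mean time to detect (alert raised
# minus compromise began) and mean time to respond (incident closed minus
# alert raised). Timestamps below are illustrative.
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": datetime(2021, 9, 4, 2, 10),
     "detected":    datetime(2021, 9, 4, 2, 25),
     "remediated":  datetime(2021, 9, 4, 4, 0)},
    {"compromised": datetime(2021, 9, 11, 23, 5),
     "detected":    datetime(2021, 9, 11, 23, 50),
     "remediated":  datetime(2021, 9, 12, 1, 20)},
]

def mttd_minutes(incs):
    """Mean time to detect, in minutes."""
    return mean((i["detected"] - i["compromised"]).total_seconds() / 60 for i in incs)

def mttr_minutes(incs):
    """Mean time to respond/remediate, in minutes."""
    return mean((i["remediated"] - i["detected"]).total_seconds() / 60 for i in incs)

print(f"MTTD: {mttd_minutes(incidents):.1f} min, MTTR: {mttr_minutes(incidents):.1f} min")
```

Tracking these per month makes the response program measurable: improvements to playbooks, access and automation should show up as the numbers trending down.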
Speed is irrelevant if it is applied to the wrong tasks, so defining incident response policies is critical. These policies should outline the steps to take in a multitude of likely scenarios. All relevant stakeholders should be involved in defining them, both to ensure confidence in the defined steps and to secure buy-in across the business. Collaboration at this early stage means the policies can be scrutinised without pressure. Once you know what you must do, you can focus on speedy response times.
Responders should be empowered to carry out response actions. Where possible, just in time access controls should be implemented so responders can use these privileges when they need them, and only when they need them. An overly segmented structure, where responders are required to coordinate via multiple teams and individuals who hold the metaphoric keys to the castle, opens the process to inefficiencies.
SOAR platforms, which allow you to automagically integrate your detection capabilities with your response controls, let analysts define automated playbooks that trigger under the right conditions, or that can be quickly grabbed off the shelf and run ad hoc. Time-consuming and common tasks are the best candidates for automated response. Approval mechanisms allow you to test the suitability of a fully automated response, enabling full automation once you are confident in the process.
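The approval-gate pattern described above can be sketched in a few lines. This is a toy illustration, not any SOAR product's API: the function names, severity values and the `isolate_host` stand-in are all hypothetical:

```python
# Toy SOAR-style playbook with an approval gate: the containment step runs
# automatically only when pre-approved; otherwise it is queued for a human
# to review. All names and values here are hypothetical.

def isolate_host(host):
    """Stand-in for a real EDR host-isolation API call."""
    return f"isolated {host}"

def playbook_contain(incident, auto_approved=False):
    """Contain a validated incident; queue for approval if not pre-approved."""
    if incident["severity"] == "critical" and auto_approved:
        return {"status": "executed", "action": isolate_host(incident["host"])}
    return {"status": "pending_approval", "action": f"isolate {incident['host']}"}

incident = {"host": "web-prod-01", "severity": "critical"}
print(playbook_contain(incident))                      # queued for a human
print(playbook_contain(incident, auto_approved=True))  # runs automatically
```

Running the playbook in `pending_approval` mode first lets the team audit what it would have done; once those queued actions consistently look right, the approval flag can be flipped and the response becomes fully automated.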
Security is a journey, not a destination. Everything outlined above needs to be regularly revisited to ensure it fits the current threat landscape, challenges and strategy of the organisation. Depending on where your security maturity needle currently sits, this may seem like a lot of work and added pressure on those sleepless nights, but understanding of the need for good security hygiene is at an all-time high, even amongst non-security professionals. Driving this culture shift in your organisation will provide the momentum and buy-in needed to address holistic security hygiene, and the effort will quickly pay dividends over time. Additionally, managed security partnerships can be considered to fill any immediate gaps you are currently unable to address. If a 24/7 detection program is not possible in-house, security professionals can benefit from partnering with specialist MDR organisations who will handle the initial triage and escalate only incidents that have been validated and enriched.
When security professionals take steps to reduce the likelihood of a successful attack, understand that they must be prepared for when an attack does succeed, and ensure that they can detect and disrupt attacks, everyone will be able to sleep better at night.
About the Author
Josh Davies is a Product Manager at Alert Logic. Formerly a Security Analyst and Solutions Architect, Josh has extensive experience working with mid-market and enterprise organisations; conducting incident response and threat hunting activities as an analyst before working with organisations to identify appropriate security solutions for challenges across cloud, on-premises and hybrid environments.