Mark Kazemier

My ramblings and experiences about Cyber Security

Author: Mark

  • Last day at RSA ending with a concert from Alicia Keys

    Last day at RSA ending with a concert from Alicia Keys

    Today was the last day of RSA 2024. I have been to four sessions and finished the day with a live concert by Alicia Keys. The sessions weren’t great, it seems as if the least experienced speakers are left for the last day. Alicia Keys made more than up for it however, she was awesome.

    The sessions I’ve gone to today were:

    Avoiding Common Design and Security Mistakes in Cloud AI / ML Environments

    In this session, Natalia Semenova explained how to set up your AI and ML environments, common mistakes people make, and how to prevent them. Although Natalia appears to be very knowledgeable on the topic, her speaking skills are not great, which made the session a bit dull. However, I do think it’s worth taking a look at her slides and reviewing the session if you have the opportunity.

    Natalia began with some intriguing points:

    • Cloud providers provide guidelines for setting up your environment. However, you’ll need to tailor these to suit your company and the specific use of the environment.
    • Companies often only think about the best-case scenario for their AI/ML pipelines. However, it’s crucial to remember that an AI/ML pipeline includes both your code and your data. This means that all code pipeline standards apply, but you also need to protect your training data!

    Types of GenAI usage

    Natalia used the Generative AI Security Scoping Matrix by AWS. This model defines the same scopes as the Forrester analyst in her presentation about DLP used on day one of RSA.

    Generative AI Security Scoping Matrix. Defining 5 scopes for AI use cases:
Scope 1: Consumer app, Scope 2: Enterprise App, Scope 3: Pre-trained models, Scope 4: Fine-tuned Models and Scope 5: Self-trained Models

    From scope 1 to 5 what you as a company can influence increases, but remember: with great power comes great responsibility.

    Define what you control

    Diagram showing the relationships between customer data, generative AI application, Training Data, the AI Model and the consumer and where you can put protections.

    The diagram above gives a broad view of how an AI application might look, regardless of its scope. Depending on the scope, you will have control over specific blocks and data flows. For example, in Scope 5, which involves an in-house developed AI model, all blocks and flows are within your control. However, for a pretrained model, you won’t have control over the training data (as the model was already trained) and the AI model itself, but you will have control over all other flows and blocks in the diagram. Ultimately, you should strive to implement security controls on every block and flow.

    Critical components to secure

    The presentation mentioned six critical components to secure:

    • Identity and Access Management
      Restrict permissions for Jupyter notebooks (previously running under the equivalent of a root account). Organize data into buckets based on usage patterns or departments. Provide access to these buckets on a need-to-know basis. Finally, establish a pre-provisioned environment for the quick and secure deployment of new models and experiments.
    • ML Models
      Treat an ML model like any other algorithm. Store the code and data in a private, secure Git repository, conduct code scanning, ensure that containers originate from a trusted repository, utilize prebuilt containers, and encrypt the containers at rest.
    • Software composition
      Review software packages, and make sure they come from a repository with approved software packages. Review privacy policies of packages and opt-out of data collecting services
    • Licensing
      Evaluate the licenses for packages and software you are using.
    • Compute resources
      Enable encryption in transit, enable internode encryption and set limits on resource usage.
    • (Training) Data
      Use only trusted data sources for training data. Verify if the intended data can actually be used or if consent is necessary. Have a plan for data subjects that revoke their consent at a later stage. Tag data and buckets that contain sensitive data.

    Logging and Monitoring

    In the slide deck, Natalia presents an example architecture for monitoring in both AWS and Azure environments. She recommends collaborating with your data scientists to assist in setting up effective monitoring rules. Key considerations for monitoring include:

    1. Endpoint
    2. Model accuracy over time
    3. Model biases
    4. Data drift

    To Patch or not to Patch – A Risk Management Decision

    Ahmik Hindman from Rockwell Automation, a US-based OT manufacturer, presented a strategy for implementing a patch management program in Operational Technology (OT) environments. Last year, he delivered a presentation on securing OT, and this presentation served as a continuation of that topic.

    I found this presentation very valuable, as Ahmik demonstrated extensive experience with OT and highlighted specific challenges relevant to this domain. It became evident from the talk that a generic patching strategy cannot be applied to OT systems.

    Understand compliance and standards

    Some industries, like energy, oil and gas, are part of critical infrastructure and have industry specific compliance standards that prescribe your OT patching standards. For other industries you can have a look at NIST SP 800-82 Rev. 3, a guideline on how to secure and patch your OT environment.

    Detailed install base

    Before initiating the patching process, it is essential to have a clear understanding of the environment. In the case of OT, an OT-specific scanning tool is necessary, as standard tools may struggle to accurately interpret firmware versions, hardware manufacturers, and may generate numerous false positives.

    Furthermore, a comprehensive scan is required to identify the backplane networks, which are typically managed from the PLC and do not utilize the TCP/IP stack.

    Initial vulnerability prioritization

    Get an OT specific tool that can correlate your OT inventory with the CVE database.

    Tailor CVEs to the application

    Use CVSS scores and tweak the scores with the CVSS calculator to your environment. Based on controls and specific apps this can drastically change the scoring and urgency of certain patches.

    Prioritize critical CVEs

    Begin with legacy or obsolete Industrial Automation and Control Systems (IACS) lacking security controls. Then, address your critical assets and vulnerabilities (CVEs) that are actively exploited or have exploit kits readily available.

    Utilize SSVC process

    The CISA has published an automated SSVC Decision Tree, which Ahmik advises using for making decisions regarding vulnerabilities. Following the decision tree can assign one of the following four statuses to your vulnerability:

    • Track: the vulnerability doesn’t need to be remediated immediately, but the organisation should keep tracking the vulnerability when more information becomes available
    • Track*: these vulnerbailities have something that asks for them to be monitored a bit closer
    • Attend: these vulnerabilities need immediate attention and should be remediated with priority
    • Act: these vulnerabilities need to immediate remediation with the highest priority.

    Assemble a Change Management Team

    The change management team validates vulnerability prioritization, determining whether a patch is necessary or if other mitigating controls can be deployed. In the event that a patch is deemed necessary, they will verify with the vendor if the patch is included on the vendor’s approved list. This list is crucial, as vendors only support specific versions of software to run on their OT devices, meaning that updating to the latest software version may not always be feasible.

    The change management team ultimately decides whether to deploy a mitigation or accept the risk.

    Testing and validation

    Replicate the environment, preferably with hardware. Some vendors do provide software emulators that you can use to verify patches and OT software. Ensure that you create a backup plan, as not all OT devices have a rollback option, making it impossible to revert to an earlier version once a patch is deployed.

    Deploy patch / mitigating control

    Ensure that you have proper backups of IACS software and configuration so that you can rollback if necessary. Keep in mind that you cannot always rollback a patch with OT devices. Additionally, make sure to only roll out patches during dedicated maintenance windows to minimize impact on the environment.

    Documentation and Change Management

    Document your latest deployments and make sure your latest configurations and scripts are added to the backups.

    Hacking the Mind: How Adversaries use Manipulation and Magic to Mislead

    This session was entertaining. The presenter, Robert Willis, is an amateur magician, and the session was more of a magic show. However, it lacked substance, making it less useful from a content perspective. Unfortunately, there are no slides or recordings available for this session.

    The premise of this session was that magicians and adversaries use similar techniques to deceive: misdirection, urgency, chaos, etc. The difference is that magicians use these techniques for entertainment, while adversaries use them for malicious purposes.

    A few insightful takeaways from this session:

    1. “Wind your watch”: When faced with urgency, refrain from acting immediately. Take a moment to think before proceeding. This is the most effective way to avoid falling for social engineering tricks.
    2. “When you look for the gorilla, you’re going to miss other unexpected events”: In other words, when you are focused on one unexpected event, you may overlook other unexpected events.
    3. Trust is easily exploited and is a vulnerability that cannot be patched. However, you can implement a mitigating control by adopting a “trust but verify” approach.

    5th annual RSAC SOC Report

    This is the conclusion of the conference presentation by the RSA SOC, which monitors the public WIFI at the conference. They presented their observations compared to previous years and highlighted some insecure practices that are still prevalent, despite RSA being a Cyber Security Conference.

    The SOC is sponsored by CISCO, which means that from an architectural perspective, the SOC is equipped with CISCO tools. They use Umbrella to monitor DNS, Netwitness for NDR, Splunk Enterprise Security as SIEM, and CISCO XDR to integrate these tools.

    The number of people connecting to the public WIFI and the volume of data transmitted over the network has decreased since the onset of COVID-19. The assumption is that more people are choosing not to connect to open WIFI and are using their own hotspots.

    The percentage of SSL traffic has increased from 70% to 80% this year. Interestingly, this percentage was much higher but sharply declined after COVID-19. In comparison to other regions, the US has the highest percentage of unencrypted traffic, followed by the UK, Asia, and then Europe (which is the opposite of what was mentioned by the speaker yesterday, who stated that Europe was less advanced).

    Despite the increase in SSL traffic, the SOC still detected 20,000 credentials sent in clear text over the network. These credentials were related to 99 unique accounts.

    In terms of operating systems, Apple is the most popular. The most popular chat app since 2023 is Mchat, taking over from WhatsApp.

    During the week, they discovered a major vendor leaking a significant amount of data. This issue was previously identified last year, shared with the vendor, who refused to address the data leak and claimed it was working as expected. They refrained from disclosing the vendor’s name due to legal reasons, but it is a major sponsor of RSA and is installed on many people’s PCs.

    Other instances of clear text information included the visibility of purchase orders and contracts of a Korean security vendor on the network, reading emails of individuals still using POP3 for email, and finding unrestricted access to someone’s home camera system.

    The key takeaways from this session are:

    1. Use a VPN when connecting to an open network. Many apps do not properly encrypt data, allowing anyone on the network to read the content of your communication.
    2. Enable the OS firewall.
    3. Keep your systems patched, preferably before attending the conference.
    4. Check your configuration settings to ensure they are secure.

    Closing Celebration

    And that concludes RSA 2024. As a parting note, I leave you all with a 20-second recording of the Closing Celebration of RSA featuring a live concert by Alicia Keys.

  • RSA day 3

    Day three turned out to be great. I had planned four sessions, two in the morning and two in the afternoon, with only one focusing on the topic of AI. The sessions proved to be interesting, and the content is easily applicable to life back home.

    Today, I attended the following sessions:

    State of the CISO 2024: Doing More with Less

    Based on the talk’s description and title, I expected the presenters to discuss strategies and tactics for working with a small budget. However, the session focused on Nicholas Kakolowski and Steve Martano presenting the State of the CISO 2024 report published by their company. The report examines data and trends related to the role of the CISO in the US without drawing real conclusions. Data comes from their own American consulting business and when asked about plans to include other continents, the presenters stated that Europe is less advanced than the US, indicating a narrow American perspective. I found Steve’s delivery of this message quite arrogant and my expectation is that they just don’t do business outside of the US. Nonetheless the presented statistics might be interesting (to be honest I have my doubts on some of the methodologies used to obtain the statistics).

    In the US, only 20% of CISOs are at the C-level. Most CISOs are at the executive, director, or VP level. More responsibilities are getting added to the role though. While topics like privacy and fraud are often shared responsibilities with other departments. In the case of GenAI, most CISOs have the full responsibility to make the risk decisions for the business.

    Not all CISOs are even reporting to the board. The report concludes that when CISOs aren’t often reporting or not-at-all reporting to the board, they are less satisfied with the board’s budget decisions and their mandate (neutral responders were removed from the dataset, however, so the question is how statistically significant this is).

    Budget trends

    The growth in the budget for Cyber Security has declined, as has the money available for staff. However, we must consider this in the context of the overall market, where budgets have been declining. The fact that the budgets on average still grow is a good sign. It indicates that businesses consider Cyber Security important, but there are limitations on how much they can spend on it.

    The budgets have made it harder to hire. Around 38% of the budget can be spent on hiring, and given the salaries for security personnel in the US, it is difficult to fill vacancies. Often, more than 1000 people apply, but only 10 are qualified, and they are too expensive for the budget.

    Retaining your team

    A large number of people don’t feel valued and supported, or are unsatisfied with career opportunities. The issue arises after 4-7 years. In the first 2 years, the job is still interesting and exciting. By the third year, people start to get bored, and by year 4, they leave. As hiring is difficult, it is important to ensure that your team feels valued and to create career paths within the organization.

    Data-driven Cyber Indicators of Malicious Insider Threat

    The MITRE Institute conducted applied behavior science research to better understand the behavior and potential risk indicators for insider threats. In a track session, James Doodson explained the methodology used and shared the conclusions from this research. Unfortunately, time was limited, and James had to rush through the slides, which meant he wasn’t able to discuss most of the details of the research. As this is a very interesting research, I would recommend taking a look at the slides for more details.

    Research questions

    1. How can malicious insiders exfiltrate sensitive information and evade detection?
    2. Are the Potential Risk Indicators (PRIs) different between remote and on-premises environments?
    3. Are there specific sequences of behavior which are useful PRIs for insider risk?

    Methodology

    MITRE assigned 150 individuals to perform a series of sensitive tasks on the live corporate network. These individuals had to carry out these tasks over a span of a week while also performing their regular duties. The tasks involved searching for sensitive information, collecting it, and exfiltrating it. The organization divided the 150 people into three groups:

    1. Benign users: these users had the task of sending the sensitive information to their client
    2. Malicious users: these users had the task of sending the sensitive information to a competitor to improve their job prospects
    3. Super malicious users: these users had the same task as the malicious users, but these individuals were highly cyber-savvy employees with a technical background

    MITRE collected data using network and host-based sensors over a period of three weeks: one week before exfiltration, the week of exfiltration, and the week after exfiltration. The participants kept a logbook of when they performed their tasks, so that the researchers could disregard data about the participants’ regular work.

    Additionally, the researchers used questionnaires and interviews to capture activities that were not visible in the network and endpoint sensor data.

    1. Identifying exfiltration and evasion techniques

    The researchers identified 13 exfiltration techniques and 11 sub-techniques that the participants used. Users with the benign user role used 6 of these techniques as well, such as sending attachments over email or uploading files to cloud storage. These indicators are useless for identifying a malicious insider.

    Malicious insiders used 9 unique techniques and sub-techniques, including printing hard copies (often done with personal printers at home), using removable media, and creating burner email accounts. These techniques might be useful potential risk indicators (PRIs).

    Super malicious insiders also used 9 techniques and sub-techniques unique to their group, such as adding the data to source code they would normally write and then uploading that to Github. Most of these techniques make use of techniques these people also use during their normal work, making it difficult to use them as individual PRIs.

    Participants used 76 different exfiltration techniques. Interestingly, regular malicious users use similar techniques as the tech-savvy ones. The tech-savvy insiders know the controls in place and use simple techniques when they think that is good enough. Also, don’t underestimate untrained people; they can find out a lot through Google.

    MITRE draws three conclusions based on these results:

    • There is no difference in the amount of time spent for the task between a benign vs malicious user.
    • Odd working hours are NOT a good PRI.
    • Focus on early behavior in the searching and gathering phases rather than on the exfiltration.

    2. Remote vs On-premise

    Remote and on-premise users show no difference. The same potential risk indicators (PRIs) can be used to detect insiders in both cases.

    3. Sequence of steps

    In previous research, MITRE focused on analyzing steps used together. This time, MITRE attempted to determine if the order of these techniques makes a difference. In total, the researchers found 5 million permutations between PRIs. The final set only contains 5 sequences for malicious users and 7 sequences for super malicious users.

    Interesting sequences that the researchers found:

    • Malicious insiders tend to persistently attempt to access a file in Sharepoint after receiving an access denied message, while benign users do not exhibit this behavior.
    • Malicious insiders frequently move from one file to another, exfiltrating each file before moving on.

    From this research, MITRE identified that External Threat Frameworks are ineffective for addressing insider threats. In response, MITRE is working on developing new strategies specifically tailored to combat insider threats.

    You’re doing it wrong! Common Security Anti Patterns

    This was a surprisingly interesting talk. Initially, I didn’t have high hopes for this session as both speakers are from Microsoft. However, they did a good job and effectively introduced the topic of Security Anti Patterns without promoting Microsoft. They shared some interesting patterns that I could definitely recognize happening in real life. I won’t be discussing all patterns in this summary, and I highly recommend taking a look at the slides yourself and watching the recording if you have the opportunity.

    First, what is an anti pattern? An anti-pattern is a common response to a common problem that is either counterproductive or ineffective. The idea of collecting anti-patterns is to create a common language to recognize these patterns and define best practices for avoiding them.

    Groundhog day incidents

    Groundhog day incidents are recurring incidents that require the SOC to conduct the same investigation repeatedly. The root causes for these incidents are two anti-patterns:

    1. Technology-Centric Thinking: This involves viewing Cyber Security as solely a technology problem and relying on technology, rather than focusing on processes and people.
    2. Silver Bullet Mindset: This refers to the belief that a single solution will completely solve a complex or ongoing problem. Silver Bullet thinking is more common than one might expect. For instance, assuming that patch management is the solution simply because the incident started with CVE-xxx is an example of the silver bullet mindset. It’s important to be mindful of these hidden moments of silver bullet thinking!

    The paradox of blame

    Comic showing the business not taking action and then blame the CISO when everything goes wrong.

    Blame leads to fear, which in turn leads to a “cover your ass” mentality. The solution is to adopt a blameless approach. It’s important to be kind to your colleagues and communicate in a direct and honest manner.

    Downwards spiral

    There is a downwards spiral between two anti-patterns:

    1. Department of No: The CISO department doesn’t align with the business and focuses solely on achieving 100% security, resulting in frequent rejections of business initiatives.
    2. Delay Security Until the End: Due to the constant rejections from the security department, the business either delays or bypasses security considerations.

    These two anti-patterns reinforce each other. By delaying security until the end, the CISO department loses the ability to influence the approach and ends up frequently rejecting proposals. This, in turn, hinders the business, leading them to avoid involving security early on.

    The way to break this cycle is similar to addressing the Paradox of Blame: be kind and try to understand each other. From the cybersecurity side, ask questions such as “How can I help you achieve your objectives securely?” and encourage open communication.

    Bizarre risk exceptions

    Granting permanent risk exceptions for business-critical workloads for political reasons, such as fulfilling a request from the CEO, can be detrimental. Just as fire can burn down a factory regardless of where it starts, the same is true for cybersecurity. It’s important to prioritize security measures to protect against potential risks, regardless of political pressures.

    Continuous simplicity and create clarity

    If we cannot create simplicity and clarity, we have already lost the battle. Clarity begins at the policy level. Policies should be written for people; if people don’t understand the policy, how can we implement it? It always starts with a clear and simple policy, and the complexity arises from implementing that policy into technology.

    Overcoming anti patterns

    Focus on security outcomes. The goal is to thwart attackers and increase resistance. Increasing resistance can be achieved by thwarting simple attacks and ensuring swift responses. Change your language: you are not just protecting a network, you are safeguarding business assets.

    Invest wisely. When an attacker gains privileged access, the situation can deteriorate rapidly. The same applies when critical assets are compromised. This is where your protections should begin. Prioritize Identity, Endpoint, and Email security before focusing on Network security.

    Enhance collaboration between teams. This can be achieved by establishing mutually beneficial arrangements. Provide engineering and Ops teams with technical summaries of attacks to help them prevent future attacks. This will help the SecOps team avoid recurring incidents. For the business, provide summaries of attacks to provide leadership with insights into real-world attacks, while also increasing the SecOps team’s awareness to fulfill their mission.

    Beyond the Hype: Research on how cyber criminals are really using GenAI

    This was the last track session of the day and I found it quite a refreshing one. Yes the topic is AI, but this session also brings everything back in perspective. What are cyber criminals actually using and what is just speculation?

    To be fair, malicious AI already existed for a long time. Mostly being used in poker bots and other bots. In this talk the focus was mostly on Generative AI and the use by criminals.

    Malicious code generation

    This doesn’t work that well yet. Large Language Models (LLMs) are capable of generating basic code samples and exploits. However, cyber criminals already possess these capabilities, so there is no need for that. LLMs are not yet proficient at creating more complex code, and therefore there appears to be limited utility in using GenAI to generate malicious code.

    Malicious tools with AI capabilities

    One thing that we do see is the addition of AI capabilities to existing criminal tools. These additional capabilities generally focus on generating phishing messages and using them in conversations with victims.

    Criminal LLMs: Worm GPT

    Only one true attempt at a criminal LLM was created. This information is known because the author publicly wrote about it on underground forums. However, after being put up for sale in June 2023, the creator discontinued it in August 2023.

    Scams

    More seemingly criminal LLMs are being sold on underground forums. Upon further inspection, most of these appear to be scams. They quickly disappear after a week or make outrageous claims about their capabilities. It is still one of the favorite activities of criminals: scamming, even among thieves.

    Jailbreak as a service

    Jailbreaks are methods used to deceive a LLM into bypassing its protections, enabling a user to exploit it for malicious purposes. LLM creators are prompt to patch jailbreaks when they become aware of them, leading to the continuous development of new jailbreaks. Keeping track of these jailbreaks can be tedious for a criminal, hence the emergence of jailbreak-as-a-service. This service monitors jailbreaks and enables attackers to seamlessly utilize the latest jailbreaks.

    Jailbreaks also render criminal LLMs unnecessary. When needed, regular LLMs can be used for malicious purposes. Moreover, most lucrative cyber attacks involve social engineering, which can be fully executed using commercially available legitimate LLMs.

    Deep Fakes

    The market has recently seen an increase in offers. Criminals are now offering ways to use deep fake technology to bypass KYC processes that require a selfie with a photo from your password. GenAI is proficient at replacing your face with that of somebody else, making such attacks easy.

    GenAI is used for extortion in relation to deep fakes. For instance, criminals use AI to create a clone of a child’s voice and use it to fake a kidnapping scenario. Another common use, which was mentioned during the main stage keynote, is sextortion, where fake nude pictures are created of mostly young boys to extort them.

  • RSA Day 2 – More talk about AI

    It is the second day of RSA and we continue to hear everyone rehash the same Generative AI statements. There is basically three type of comments that almost every session, and keynote speaker follows:

    1. I have a talk about something completely different, but AI is popular so here is my mandatory AI statement
    2. I had ChatGPT generate this bad joke and I must share this with the audience (even the central keynote speakers make this mistake)
    3. My talk has generative AI as a topic, but I am not an expert so I stay with generic, handwavy statements

    Despite of the AI annoyance it was a great day. I joined three track sessions and all three of them were great.

    Gartner’s top predictions for Cyber Security 2023 / 2024

    Leigh McMullen from Gartner presented seven predictions for the coming years. Before he presented these he showed a few previous predictions as well. Conclusion from these examples is that predictions are a bit hit or miss. So even though these predictions are based on data and Gartner analyst experience, you have to take them with a grain of salt.

    Prediction 1: By 2024 modern privacy regulation will cover the majority of consumer data

    Of course, we’ve had GDPR in Europe for quite some time now. Gartner predicts that the rest of the world, including the US, will catch up this year. This implies that privacy will become even more important, and consumers will start looking at companies for their respect for privacy.

    An interesting statement from Gartner is that only 10% of organizations “weaponize” privacy to gain a competitive advantage. They gave examples of this “weaponization” with Google and Apple. Google, for instance, had the “don’t do evil” statement in their strategy for a long time, which led to a lot of trust in the company. Despite this, they still sell customer data, but people still love Google. Another example is Apple, which has a strong focus on privacy and uses it in much of their marketing. Their market share has been increasing based on this privacy directive.

    Prediction 2: By 2025 nearly half of the people in the cyber security space will change jobs. 25% of these change to different roles entirely

    The Cyber Security world has been experiencing high levels of stress and pressure. The shift to hybrid working has further intensified this pressure, as the boundary between work and private life becomes more vague, making it difficult to disengage. To prevent this prediction from becoming a reality, we need to change the rules of engagement and prioritize the mental health of our people.

    Gartner recommends focusing on a culture shift that leans towards empowering autonomous, risk-aware decision-making, allowing the security team to concentrate on what truly matters.

    Tracking Human Error is a valuable KPI. An increase in human errors indicates that the team is overloaded, prompting the need to consider hiring more people or reducing the workload of the security team.

    Prediction 3: By 2025 50% of cyber security leaders has tried unsuccessfully risk quantification to drive decision maker

    Leigh was very clear about this prediction: Risk Quantification is a fool’s errand. Measuring probability and outcome is impossible. His recommendation is to focus on Outside-in threat quantification, using business assets instead of scenarios as the basis for quantification. This approach aligns with how businesses have traditionally operated, leveraging existing business cases.

    You can assess Denial of Service, Tampering, Exfiltration, and Theft, and map these against your value chain and assets. It’s much easier for the business to articulate the impact of downtime on a critical value chain, as these values have already been calculated in the past.

    Prediction 4: By 2025 50% of cyber leaders will attempt zero trust, but only 25% will realize its value

    Implementing Zero Trust is challenging, and realizing its value is even more so. Gartner recommends focusing your Zero Trust implementation on three zones—your systems of innovation, systems of record, and systems of differentiation—instead of attempting to cover everything.

    Typically, systems within the same zone work closely with each other. By establishing these zones, you can ensure greater flexibility for innovation without impacting your business systems.

    Following the zoning, Zero Trust becomes a continuous effort. It’s important to recognize that Zero Trust is not a one-time investment or a product that can be purchased; rather, it is a vision for Cyber Defense.

    Prediction 5: By 2025 60% of cyber defense tools will leverage Exposure Management data to validate, prioritize and detect threats

    Traditional detection solutions will struggle to keep pace with this trend, and the transition to modern solutions like XDR (eXtended Detection and Response) will accelerate. Leigh recommends exploring the Cyber Security Mesh Architecture.

    Prediction 6: By 2026 70% of boards will include one member with cyber security knowledge

    It’s important to remember that there are “imposters”—individuals currently serving on company boards who believe they understand Cyber Security, but in reality do not. Gartner recommends identifying these individuals and providing them with training to deepen their understanding of Cyber Security.

    Currently, there are very few CISOs in the world (Leigh mentioned he doesn’t know a single CISO) who are Board Qualified and capable of making independent decisions about the company without facing termination. Typically, only the CEO and CFO hold this level of authority. Therefore, it is advisable to clearly define who is truly responsible and therefore legally liable in the event of a breach.

    Prediction 7: By 2027 75% of employees will create, acquire or modify technology outside of ITs visibility

    This is a trend that is already underway. While it may be unstoppable, there are ways to embrace it. For instance, rather than providing a fully capable computer with maximum access, we could restrict access. Leigh humorously mentioned that the Xbox is the most secure Windows computer because it has restricted access. Most end users don’t require the ability to directly interact with a socket or encrypt a disk. By limiting this access and offering users Low Code solutions to build their own apps, we can still manage the threat landscape while adapting to this trend.

    Prediction 8: By 2027 50% of CISO’s have a Human Centric security practice

    A control that is bypassed is worse than having no control at all. Therefore, a human approach to cybersecurity is essential. Security teams will need to be more actively involved in solution design and development, enabling them to create controls that align with the business and user needs, ultimately reducing friction.

    Leigh has a typical American, over-the-top presentation style, which can be a bit overwhelming for us Europeans. However, the content of this talk was very interesting, and I would definitely recommend watching the recording if you can.

    Lessons Learned: General Motors Road to Modern Consumer Identity

    Andrew Cameron, an Enterprise Architect at General Motors, and Razi Rais, a Product Manager at Microsoft, led this session. They divided the session into two parts, with Andrew sharing General Motors’ experiences in implementing a modern identity for consumers, followed by Razi delivering a sales pitch for Microsoft.

    Andrew, from General Motors, shared some interesting insights. Firstly, he highlighted that consumer identities differ from corporate identities in four key ways:

    • Creation: The process for creating a consumer identity differs from that of a corporate user. For consumer identities, it’s important to reduce friction to keep the user engaged, while for corporate identities, roles within the organization need to be defined.
    • Proofing: The methods for verifying a consumer identity differ from those for a corporate user.
    • Business focus: The business focus of a consumer account is different from that of a corporate user.
    • Branding needs: Consumer-facing identities have greater branding needs compared to corporate accounts.

    Security is a delicate balance for consumer identities. It’s important to ensure security while minimizing friction. This is crucial for mandatory MFA and the types of logins allowed.

    General Motors shifted from a product-first to a customer-first approach. Instead of focusing on people who own a car, they now view individuals as customers who can potentially use multiple products.

    This shift also impacted their approach to identity. They now establish a centralized identity for a user but separate the user profile for specific apps the user is using.

    Microsoft initiated the conversation by outlining how an identity pipeline should be structured:

    1. Establish Intent: what does the identity want and why?
    2. Establish Proof: is the user who they say they are?
    3. Observe
    4. Orchestrate.

    For General Motors, this concept looked like the diagram below:

    Identity Pipeline

    The intent is determined at the edge, where the WAF helps filter out bots and clearly malicious requests. The Security Platform is responsible for establishing proof of whether the user is who they claim to be. The concept of Observe and Orchestrate involves using SIEM + SOAR to track user behavior throughout the process and respond later, such as by enforcing MFA or removing a user account entirely.

    I appreciate the diagram and the architecture used by General Motors. However, the original slides heavily focused on Microsoft. While the start of the session was very interesting, it eventually turned into a demo session of Microsoft tools that can achieve this architecture. It should be feasible to achieve a similar architecture with other tools, and in the final diagram, I changed the names to more generic terms.

    Despite the Microsoft sales pitch, I would recommend watching at least the first 30 minutes of this session if you have the chance.

    Unveiling the 2024 Data Breach Report

    Chris Novak presented the key findings of the Verizon Data Breach report for 2024,  which was released last week. Even outside of the RSA session, this report comes highly recommended.

    The Verizon Data Breach report provides an analysis of over 30,000 cybersecurity incidents and 10,000 data breaches. Verizon gathers its data from its own incident response team and over 100 partners, providing a solid foundation for drawing conclusions.

    In the past year, the exploitation of vulnerabilities increased by 180%, and breaches involving a non-malicious human element increased by 68%.

    The median loss for breaches involving ransomware or extortion was $46,000, while the median loss for Business Email Compromise (BEC) was $50,000. Although these figures may seem low, the dataset includes many small and medium businesses that cannot afford large sums of money, leading cybercriminals to adjust their demands. This also implies that the cost of a breach for large corporations will be significantly higher.

    The top three vulnerabilities identified were:

    1. Credentials: Primarily affecting web applications. Despite organizations implementing MFA, they often overlook “non-critical” assets, leading to their involvement in data breaches due to flat networks and lack of segmentation.
    2. Phishing: Surprisingly, the majority of phishing attacks still occur through email.
    3. Exploits: As previously mentioned, these incidents increased by 180% over the past year.

    Speed in addressing these vulnerabilities is crucial. Attackers are becoming faster at scaling and exploiting newly discovered vulnerabilities, while organizations are still slow in resolving them. On average, it takes 55 days to address 50% of all new vulnerabilities. To prioritize, Verizon recommends using risk quantification to determine which vulnerabilities to address first. For example, Internet-facing medium-risk vulnerabilities might be more important to resolve than critical vulnerabilities on systems deep within the network that are difficult to access.

    Verizon also presents a graph showing reported phishing attacks. The total percentage of people who did not click and reported, combined with those who clicked and reported, is around 20% of all phishing emails. While this figure should be higher, the positive aspect is that the trend is moving in the right direction.

    This year, Verizon introduced a differentiation between Ransomware and Extortion. Ransomware is defined as the traditional form where data is encrypted, while extortion involves the direct extortion of individuals or the threat to leak confidential information of the organization. Interestingly, the trend shows a decrease in traditional ransomware (indicating that controls are effective), but a sharp increase in extortion.

    Lastly, I’d like to share the joke at the end of the report. Although Verizon did not observe significant involvement of Generative AI in the breaches, they felt compelled to include something due to the current hype, leading to the following picture.

    Overall, this was another excellent session. While it served as a summary of the Verizon Breach Report, I would recommend reading the actual report. However, if you’re short on time or prefer not to read through a lengthy document of 100 pages, the session provided a well-presented summary.

  • RSA Day 1 – AI and Threat Modeling

    The first day of RSA reveals that the AI hype continues. Last year, ChatGPT and XDR were the most mentioned topics. This year, there’s no mention of XDR (or Next Gen SIEM), but about 60% of all track sessions have the word AI in the title, including 3 out of the 5 sessions I attended today.

    My focus at this RSA is to learn more about Data Loss Prevention, 3rd party risk, and threat intel. When selecting the track sessions, I chose based on whether the session would be directly usable in my work. Here are the sessions I joined today:

    Overall, the start of the conference was great. Most sessions met the expectations, and some even exceeded them.

    Use Generative AI to End Your Love/Hate Relationship with AI

    The first talk today was given by Heidi Shey, an analyst at Forester specialized in DLP. The topic was how to apply DLP (Data Loss Prevention) on AI, not as I initially expected based on the title how to use generative AI in your DLP program.

    Heidi defines four ways companies are using GenAI:

    1. Bring Your Own AI (BYOAI), people use their own tools such as ChatGPT in their work
    2. Embedded in other software
    3. Tuning and engineering a pre-trained model
    4. A completely custom model

    All of these uses need a different approach from DLP perspective. For example, many companies initially block BYOAI, however it is impossible to keep people from using it, and it would be better to put controls in place. Proposed controls are: creating visibility in the frequency of use and popularity of different GPT tools and create policy and training for employees around that. In addition, you could control block certain prompts using traditional DLP tooling.

    For AI embedded in software you will need to rely more on third party risk management tools, access control and data classification. Where for the last two approaches you will need that with in addition measures around your training data, pipelines, etc.

    Overall, this talk provided some good insights. If you have the opportunity, I would definitely recommend watching the recording of this session.

    Redefining Threat Modeling: Security Team goes on Vacation

    Jeevan Sigh, an expert in implementing Threat Modeling within organizations, delivered this talk. In this session, he shared an approach on how to shift left and train engineering teams to perform their own threat modeling with minimal involvement needed from the security team.

    Before delving into his approach, he defined four levels of maturity for threat modeling:

    1. No threat modeling
    2. Foundational: threat modeling is performed, but mostly on ad-hoc basis
    3. Scaled maturity: threat modeling is done on a systematic basis
    4. Bleeding edge: most of the work is automated

    Attempting to shift left only makes sense if you have a higher maturity in your threat modeling approach. You will first need to ensure that threat modeling is performed in your organization and that people understand the benefits before moving to a self-service approach.

    Jeevan outlined the steps to set up a self-service threat modeling program in your organization. He defined the following steps:

    1. Training phase: engage in interactive sessions with small groups of engineers (around 6 to 10 people). Over a period of 6 weeks, conduct multiple sessions to introduce the concept of threat modeling, perform hands-on exercises, and conduct the first threat modeling session with them on a system they know.
    2. Observation phase: take a step back and let the engineers lead a threat modeling session. The security team joins to observe and coach. The engineers’ ability to identify critical and high-risk vulnerabilities is crucial.
    3. Review phase: if the engineers were able to perform a threat model in the observation phase without missing any major vulnerabilities, you can move to this phase. Here, the engineers will perform the threat modeling sessions on their own, and the security team will review the created threat model after the sessions to verify that no major risks were missed.
    4. Security optional phase: this is the ideal scenario where everything is done by engineering, and the security team is optional.

    For more information, you can read the Blog Post by Jeevan on his approach and find the training materials he uses to train engineering teams to perform their own threat modeling:

    1. Redefining Threat Modeling (Blog Post)
    2. Training documents (PDFs)

    This was a great session, and Jeevan shared some valuable lessons learned, wins, and challenges.

    Can you spot a fake? AI and Cyber Risks in Luxury Retail

    This panel session featured Joseph Szczerba from the FBI interviewing Nicole Darden Fort, CISO of Nordstrom, and Lauren Heyndrickx, CISO of Ralph Lauren. I anticipated a discussion on counterfeit products, fake ecommerce stores, and the use of AI to detect and prevent these risks based on the session’s title and description. However, the actual session focused on the general topic of leveraging AI in business and implementing proper governance around these tools. Frankly, I found this session disappointing as it did not offer any new or interesting insights. It simply reiterated the same generic advice that is commonly given. From that perspective, any CISO could have participated in the panel, and no specific threats or approaches for the (luxury) retail industry were discussed.

    Based on this, I would not recommend viewing the recording of this session.

    What Hacking the Planet Taught us about defending Supply Chain Attacks

    Douglass McKee and Ismael Valenzuela, both instructors at SANS and experienced in offensive security, presented this session. They discussed an approach for managing third-party vendors and libraries to reduce associated risks.

    Douglass and Ismael began by defining two types of supply chain risks:

    1. External vendor: when a product you purchase is compromised due to a breach of the vendor
    2. Library or Software Development Kit used by one of your own products that contains a vulnerability

    They introduced the Overall Product Security Assessment process defined by SANS, which details the exact process for determining the functionality of a library or product and creating a threat model based on that. The threat model can then be used to define actions to mitigate risk.

    While the session was informative and engaging, I have reservations about the practicality of the approach, especially for less advanced organizations. Although the approach is ideal, it may be infeasible to implement for more than just your most critical applications. Nonetheless, I believe it is worthwhile to watch the recording of the session if you have the opportunity.

    How AI is Changing the Malware Landscape

    Vicente Diaz, a Google employee working with Virus Total, presented the last session of the day. During the session, he demonstrated the use of GenAI by VirusTotal to detect malicious scripts and highlighted the (in)effectiveness of GenAI in creating malware.

    VirusTotal introduced a Large Language Model (LLM) in a feature called Code Insight. This feature provides a brief description generated by an LLM to explain the functionality of the uploaded file. Code Insight offers users additional information beyond the outcomes of traditional antivirus tools that VirusTotal checks. This can be particularly useful in cases where a script, for example, downloads and executes a file. Such scripts can be malicious, but legitimate installers may exhibit similar behavior.

    Building on the experience with Code Insight, VirusTotal also began experimenting with LLMs to detect malware in scripts. From these experiments, they drew the following conclusions:

    1. LLMs are highly effective at identifying the programming language and file type based on a clear text file.
    2. LLMs are better at analyzing obfuscated files than traditional antivirus tools. For instance, they found that LLMs performed as well as antivirus tools for PowerShell and Office Macros, but significantly better for other scripting languages such as PHP.

    However, they encountered some challenges. Since LLMs analyze patterns in a script, as well as variable names and comments, attackers can influence the prompts and outcomes generated by the LLM by adding specifically crafted comments and variable names.

    Overall, I found this talk moderately interesting. There were some intriguing insights into how VirusTotal uses LLMs to analyze scripts. However, I have doubts about the research methodology used. For example, the speaker mainly focused on true positives but did not report on false positives. Additionally they only compared LLMs to traditional anti-virus tooling and I wonder how it would compare to more modern EDR solutions such as Crowdstrike, Microsoft Defender and SentinelOne.