Where Traditional TPRM Falls Short
Many of the principles behind traditional supply chain risk management and TPRM were established decades ago.
For instance, in 2008, the FDIC issued Financial Institution Letter 44-2008 (FIL-44-2008): Guidance for Managing Third-Party Risk, which set many precedents we still follow today, including the use of questionnaires. A few years later, in the 2010s, a new market emerged that used new technology to gather and analyze massive amounts of data about organizations’ security performance. These solutions applied a scoring methodology to that data to form security rating services (SRS), with the idea of providing a simple score to represent a snapshot of an organization’s cybersecurity posture.
However, today’s threat and business environments look very different from those of two decades ago. A growing number of third-party vendors expands an organization’s attack surface and provides an attractive pathway to sensitive data. Increasingly complex supply chains, a lack of standardized security processes and limited resources (especially at smaller vendors) also contribute to more third-party breaches that compromise data, privacy and operations. In addition, we’re facing a massive skills shortage in cybersecurity, meaning that many security teams lack the expertise to stay ahead of third-party risk detection and response.
Today’s TPRM landscape
- Increase in the number of third-party vendors
- Complex supply chains
- Lack of standardized security protocols
- Increasingly limited internal resources
- Massive cybersecurity skills shortage
Key Problems with SRS Scores
- Frequent false positives impact grades
- Black box scoring methodologies
- Variations in scores across SRS companies
- Scores lack the context of each vendor’s specific services and relationship
- Scores only reflect a single point in time and become outdated quickly
All of those factors make it hard to get value from traditional TPRM strategies, such as questionnaires. Not only are questionnaires time-consuming for vendors to complete and for security teams to review and analyze, but they also only provide a point-in-time snapshot of a vendor’s posture. As companies scale to hundreds or even thousands of vendors, questionnaires become highly resource-intensive exercises with little long-term value.
Historically, SRS hasn’t been an effective or trustworthy way to gauge risk either. There are countless instances of false positives in SRS scores, leading cyber teams to waste valuable time trying to mitigate inconsequential or nonexistent issues. And while a letter- or number-grade security rating via SRS might provide a general idea of an organization’s cyber hygiene, the nuances between grades (e.g., C versus D) can be hard to decipher. It doesn’t help that the SRS organizations offering these ratings often provide little insight into how they actually determine grades. In many cases, organizations employ more than one SRS, and each may deliver a vastly different score, further complicating any attempt to determine a vendor’s true level of risk.
A letter grade also doesn’t contextualize risk in relation to a vendor’s services. If two vendors both have a C grade, for example, it could mean something very different depending on the operational impact your organization would suffer if each vendor were to experience a breach or attack. Consider the November 2023 ransomware attack on port operator DP World Australia. The company manages about 40% of the goods that flow in and out of Australia, and the attack crippled those critical operations. A subpar SRS grade for a company like that, whose outage would significantly impact your own business continuity, requires more critical evaluation than the same grade for a vendor that manages your organization’s marketing website.
Finally, grades only offer a point-in-time snapshot and are heavily weighted by past events, like a previous breach. Reporting on incidents after they happen isn’t a good indicator of future risk; a past breach may actually be a good sign if it illuminated a previously unknown vulnerability and led to a patch. Likewise, a score based on a vendor’s posture 30 or 60 days ago doesn’t fully represent its current position. The vendor may have implemented changes in its environment this month that could dramatically change its score, such as introducing new features or partners that incur additional risk.