By Chad Boeckmann & Adam Stone
May 16th, 2016
Let’s talk about benchmarking. It’s a topic our team hears about from clients both large and small. When discussing our information security-focused MAPP methodology and our TrustMAPP (formerly Accliviti) platform, our clients (and their board members) want to know how they stack up, maturity-wise, against their peers. The common belief among this audience is that benchmarking data will help answer the question “how much information security is enough?”
This approach makes sense at a certain level; knowing how mature your organization’s security program is relative to your peers seems, on its face, to tell you whether you are meeting the standard defined by your industry. As with many statistics, however, context plays an important role in deciphering benchmarking data. It probably won’t surprise the reader that without that context, the value of benchmark data diminishes. This is especially true for a topic such as information security program maturity.
How much information security is enough? When can we be confident that the organization has invested the right amount of time and resources to reasonably safeguard its information assets? Fair questions. Based on our experience over the last ten years, knowing your maturity benchmark provides an incomplete answer. The reason is simple: each organization is different. The diversity of organizational cultures and risk appetites within a given industry – even in highly regulated sectors – virtually guarantees that the results of a maturity assessment will yield an interpretation unique to the organization under review.
Looking at this from a practical view, consider Bank A and Bank B (organization size doesn’t matter in this case). Let’s say that Bank A conducted an assessment that yielded an average information security program maturity score of 3 (on a scale of 1, low, to 5, high). Bank B underwent a similar assessment that returned a score of 3.75. Benchmarking one against the other, it appears that Bank B is more mature than Bank A. Does this mean that Bank A needs to invest more in security to catch up to Bank B?
Not necessarily. Though these two scores provide some information about the effectiveness of each organization’s information security program, they offer little insight into the culture and capacity that drive process maturity. This leads us to the core question: does knowing the maturity of your peers provide meaningful, actionable information that a security leader can leverage? Our answer is a resounding “maybe.”
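One way to see why a single benchmark number can mislead: two organizations can earn similar, or even identical, average maturity scores while having very different weak spots. A minimal sketch of that arithmetic, with entirely hypothetical domain names and scores (not drawn from any actual assessment):

```python
# Hypothetical per-domain maturity scores (1 = low, 5 = high).
# Domain names and numbers are illustrative only.
bank_a = {"governance": 3, "incident_response": 3,
          "access_control": 3, "vendor_risk": 3}
bank_b = {"governance": 5, "incident_response": 1,
          "access_control": 5, "vendor_risk": 1}

def average(scores):
    """Overall maturity score as a simple mean of domain scores."""
    return sum(scores.values()) / len(scores)

print(average(bank_a))  # 3.0 -- uniformly moderate maturity
print(average(bank_b))  # 3.0 -- same average, very uneven profile

# The average hides where the real exposure sits:
print(min(bank_b, key=bank_b.get))  # weakest domain in Bank B
```

Both banks would benchmark identically at 3.0, yet the second carries significant gaps that the headline number never reveals, which is exactly why the score alone says little about culture and capacity.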
We recommend that companies look inward rather than outward. Instead of focusing on how your security maturity compares to your peers’, consider a more introspective approach. What are your company’s security program maturity goals? What drives these goals? How does your organization’s culture affect your ability to achieve them? What does it mean to score a maturity level of 3 versus 4 or 2?
Since many organizations have yet to conduct an information security program maturity assessment, we suggest that you use the results of your first assessment to set a baseline for your organization. Communicate the baseline to your executives and board members. Ask this audience to draw a line in the sand based, of course, on an understanding of organizational culture and capacity. Then work to improve information security program maturity toward the goals defined by these key stakeholders. By doing so, security leaders will find that, despite constantly shifting business priorities, focusing on your own maturity goals produces far greater dividends than worrying about your peers’ security maturity.
To learn more about information security program maturity, you can request a copy of our popular white paper on MAPP (Maturity Assessment, Profile and Plan).