As an audit practitioner, you know better than most the need to ensure the effectiveness of risk management, control and governance processes in your organization. This need is only amplified by the rapid deployment of new technology solutions, each of which adds layers that make ongoing compliance even more challenging.
But what about your current environment? A hidden challenge for many audit and compliance professionals has been a 20-year-old “tool” known as the Secure Shell (SSH), which grants elevated or privileged access to all types of production environments. Awareness of this access gap has been rising, driven primarily by practitioner guidance and industry events discussing the protocol and, unfortunately, by large security breaches (such as the Sony breach) that resulted from poorly managed SSH environments.
Security, audit and compliance professionals engaged in business-as-usual daily events struggle to maintain control and oversight. Add real-world threat events to unknown access controls, and a disaster is simply waiting to happen if all access is not properly accounted for and assessed.
This has been highlighted by ISACA’s new guidance, “SSH: Practitioner Considerations.” In collaboration with industry experts, practitioners and ISACA subject matter experts, the guidance provides an excellent overview of what SSH is, its background, assurance considerations and practitioner impacts and suggested controls.
The guidance helps educate the audit community about the SSH protocol and the steps auditors need to take to ensure proper governance and continuous compliance of SSH key environments. It is imperative that organizations follow the outline in the guide, which walks through SSH key life cycle management, including usage procedures, configuration management, ownership and accountability, deployment, provisioning and governance.
Let us ask ourselves, “Why should we implement these controls?” It is simply to secure and protect our critical and sensitive data, and also to ensure the confidentiality, integrity and availability of our systems and data at all times.
SSH keys have been deployed for years without detection, ownership, provisioning or governance. In many cases, managing SSH keys has become a mundane task due to the sheer volume of keys that exist on every server, database, network appliance, etc. All organizations must adopt best practices, leverage automation, establish ongoing monitoring and auditing, and govern all access equally to ensure SSH access is authorized and that access falls within governance guidelines used for privileged access.
Given the pervasiveness and type of access granted by SSH, all audit professionals need to consider the insights from SSH: Practitioner Considerations: “Attesting to the state of access compliance is potentially incomplete without incorporating SSH keys. Ramifications of poorly managed SSH keys environments may lead to audit infractions and possibly a security breach.”
Secure Shell (SSH) is everywhere. Regardless of the size, industry, location, operating systems in use or any other factor, chances are near certain (whether you know about it or not) that it exists and is in active use somewhere in your environment. Not only does SSH come “stock” with almost every modern version of UNIX and Linux, but it is a normative mechanism for systems administration, a ubiquitous method for secure file transfer and batch processing, and a standard methodology for accessing virtual hosts, whether they are on premises or in the cloud.
Because of this ubiquity, SSH is important for assurance, security and risk practitioners to pay attention to. There are a few reasons why this is the case.
First, configuration. SSH can be complicated to configure, and incorrect or inappropriate configuration translates directly to technical risk and potential security issues. Why is it complicated? The configuration parameters border on the arcane, and they require knowledge of the underlying protocol operation to make sure strong selections are made. These configuration choices are highly dependent on both environment and usage, so what might be robust enough for one use case might be insufficient for another. Likewise, the client and the server (e.g., the sshd daemon) have separate configuration options, and each option directly impacts the security properties of the usage.
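To make the configuration point concrete, here is a minimal, hypothetical sketch of a review script for a server-side sshd_config. The directive names are real OpenSSH options, but the “expected” values are illustrative only; as noted above, appropriate values depend heavily on environment and usage.

```python
# Hypothetical sketch: flag potentially weak sshd_config directives.
# The expected values below are illustrative, not a vetted hardening baseline.

EXPECTED = {
    "permitrootlogin": "no",          # root should not log in directly
    "passwordauthentication": "no",   # prefer key-based authentication
    "protocol": "2",                  # SSH protocol version 1 is broken
}

def audit_sshd_config(text):
    """Return a list of (directive, found_value, expected_value) issues."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip()
    issues = []
    for directive, expected in EXPECTED.items():
        found = settings.get(directive, "(unset)")
        if found.lower() != expected:
            issues.append((directive, found, expected))
    return issues

sample = """
# sample sshd_config fragment
PermitRootLogin yes
PasswordAuthentication no
"""
print(audit_sshd_config(sample))
```

A real assessment would of course cover far more directives (ciphers, MACs, key exchange algorithms and so on) and would check them against the organization’s own baseline rather than a hard-coded table.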
Second, usage. Usage tends to be niche and tends to grow organically over time rather than (usually) being “deployed” in a planned-out, systematic way. It is natural that this happens because the number of SSH users in the organization is relatively small (most being operations folks), the tool itself is ubiquitous (coming as it does “stock” on multiple platforms), and (because it is a security-focused tool) it is sometimes viewed with reduced skepticism by assurance and security teams. These factors serve to make it less “visible” from a management point of view, meaning very often, organizations do not systematically analyze potential risk areas associated with SSH, evaluate the security properties of their existing usage or otherwise systematically examine configuration and other parameters.
Finally, cryptography. By virtue of how the protocol operates, cryptographic keys are integral to SSH, and choices are available about how the cryptography operates, how keys are managed and distributed, and numerous other considerations. As we all know, managing cryptographic keys is challenging, getting it right is critical if the security of the usage is to be preserved, and cryptography in general is a difficult subject area to get right.
For these reasons, it is important that organizations pay attention to their SSH usage the same way that they would any other technology that they use. There are some specific practical considerations that organizations should address and important questions to ask themselves around usage, configuration and maintenance of SSH. ISACA’s recent guidance Assessing Cryptographic Systems lays out the general considerations for assessing a cryptographic system, but SSH-specific considerations remain, for example, SSH’s configuration options and its key management issues.
To help practitioners work through these issues, ISACA has published SSH: Practitioner Considerations. The goal of the publication is to give security, audit, risk and governance practitioners more detailed guidance about how to approach and evaluate SSH usage in their environments.
Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.
Editor’s note: Matt Olsen, national security expert and co-founder of IronNet Cybersecurity, will deliver the opening keynote address at CSX North America, which will take place 2-4 October in Washington, D.C., USA. Olsen, who says ‘no company should go it alone in cyber space,’ visited with ISACA Now about the role of cyber professionals in counterterrorism, evolving forms of attacks and sharing of threat information. The following is an edited transcript:
ISACA Now: How would you characterize your experience with the National Counterterrorism Center? What components of your work did you find most fulfilling?
The National Counterterrorism Center, or NCTC, was established after the terrorist attacks of 9/11. The mission of NCTC is to integrate and analyze all sources of terrorism intelligence, and then to share that information with partners across the federal government and with state and local law enforcement. The creation of NCTC was one of the primary recommendations of the 9/11 Commission and has fundamentally reformed the way the federal government approaches counterterrorism.
I was fortunate to lead NCTC at a time when its role was firmly established in the nation’s counterterrorism efforts. The most rewarding aspect of working at NCTC was its exemplary workforce of analysts, operators and policy experts. All of the officers at NCTC were committed to protecting the country and, despite many career options, had chosen to dedicate their professional lives to national security.
ISACA Now: What are the most impactful ways that cyber security professionals can make their mark on counterterrorism?
There is a close relationship between cyber security and counterterrorism. We have seen terrorist groups seek to obtain sophisticated cyber tools to carry out destructive attacks against the United States. Cyber security professionals can help defend against these efforts generally by hardening their networks and by adopting industry best practices for cyber security.
ISACA Now: What type of attacks do counterterrorism professionals need to be best prepared for going forward?
Counterterrorism professionals need to be prepared for a wide range of attacks from terrorists today. The most likely type of international terrorist attack here in the United States is an assault by a lone wolf who has been radicalized by terrorist groups overseas, such as ISIS. Such an attack is likely to be unsophisticated, but difficult to prevent.
We also need to be concerned about more sophisticated attacks, such as the deadly assaults in Paris and Belgium, which were extensively planned and coordinated by ISIS. Finally, we suspect that al-Qaida remains interested in aviation targets, and has plotted repeatedly to plant a bomb on a plane headed for the United States.
ISACA Now: From your personal experience, how challenging is it to find qualified cyber security professionals who can handle complex threats?
In my experience, we face a significant challenge in finding cyber security experts to fill positions across the private and public sectors. By some estimates, there will be more than one million unfilled cyber security jobs in the United States by 2020. Meanwhile, there’s no shortage of adversaries and bad guys trying to hack into our networks. We need to work hard to ensure that educational opportunities exist to train the next generation of cyber security experts.
ISACA Now: What steps can or should governments take to be more effective at sharing threat intelligence information?
The effective sharing of cyber intelligence and threat information is essential to improving our cyber security. The government should take the lead in ensuring that our laws, regulations, and policies promote and facilitate the sharing of information among companies, and between companies and the government. For their part, companies should take advantage of legal changes over the past few years to enter into sharing arrangements with other companies.
Today, it is feasible for companies to share threat information and gain situational awareness in real time across economic sectors. No company should go it alone in cyber space. Only through a “common defense” approach to cyber security can companies gain the visibility and access to expertise on a widespread basis. This will lead to better cyber security for all.
If only neurologist Oliver Sacks, who wrote “The Man Who Mistook His Wife for a Hat,” were still alive! He would find today’s neural networks (the hot new trend from the artificial intelligence community) extremely amusing.
His book describes a man whose brain damage results in him mistaking his wife’s head for a hat. Maybe there are more parallels between the brain and artificial neural networks than meets the eye (no pun intended).
Neural networks are increasingly being leveraged in information security to provide a higher level of protection, including against zero-day attacks. However, what if the adversary targeted the neural network/machine learning algorithm itself?
In a recent article, Adam Geitgey describes an algorithm, and even provides code, for tricking a neural network-based image recognition system into identifying a photo of a cat as a toaster.
Note that knowledge of the neural networks is required in order to leverage back propagation. However, this approach is not new and other examples of misleading input causing machine learning to fail are known, such as the case of defacing a stop sign resulting in autonomous vehicles not recognizing the sign.
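The back-propagation idea can be illustrated with a deliberately tiny, invented model: for a linear classifier, the gradient of the score with respect to the input is just the weight vector, so stepping the input against that gradient flips the prediction while changing the input only slightly. All weights and inputs below are made up for illustration; real attacks apply the same idea to deep networks.

```python
# Toy two-class linear "model": positive score => "cat", otherwise "toaster".
# Weights and input are invented for illustration only.
w = [1.5, -2.0, 0.5]          # model weights (known to the attacker)
x = [1.0, -0.5, 0.2]          # input currently classified as "cat"

def score(v):
    """Linear classifier score: dot product of weights and input."""
    return sum(wi * vi for wi, vi in zip(w, v))

def sign(t):
    return (t > 0) - (t < 0)

# For a linear model, the gradient of the score w.r.t. the input is w
# itself, so a small step against sign(w) lowers the "cat" score most.
epsilon = 0.9
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(score(x))      # positive: "cat"
print(score(x_adv))  # negative: misclassified as "toaster"
```

This is why knowledge of the model matters: computing the perturbation requires the gradient, which in a deep network is obtained through back propagation.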
Let us make the algorithm more generic so that it can apply to a Data Loss Prevention (DLP) system. Assume we use a simple, well-defined example: DLP via Domain Name System (DNS) queries. Instead of a photo being analyzed, individual fields in protocol messages are analyzed to determine when malicious actors are trying to exfiltrate sensitive data, so in the algorithm we replace “photo” with “set of DNS queries.”
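As a hedged sketch of that substitution, the defender’s side of such a DLP might score DNS query names, since long or high-entropy leading labels are a common sign of data being smuggled out through DNS. The thresholds and example names below are invented for illustration.

```python
# Illustrative sketch: score DNS query names for likely exfiltration.
# The length and entropy thresholds are invented, not tuned values.
import math

def label_entropy(label):
    """Shannon entropy (bits per character) of a DNS label."""
    if not label:
        return 0.0
    n = len(label)
    counts = {c: label.count(c) for c in set(label)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def looks_like_exfiltration(qname, max_len=30, max_entropy=3.5):
    """Flag a query name whose first label is unusually long or random."""
    first = qname.split(".", 1)[0]
    return len(first) > max_len or label_entropy(first) > max_entropy

print(looks_like_exfiltration("www.example.com"))
print(looks_like_exfiltration("4d617373697665426173653634457866696c.evil.example"))
```

An adversary with knowledge of this scoring function could apply exactly the gradient-free analogue of the attack above: iteratively reshape the encoded payload (shorter labels, lower-entropy encodings) until the queries slide under the thresholds.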
With this methodology, the adversary can successfully bypass such a DLP system; one can even imagine tampering with valid data (e.g., an organization’s legitimate traffic) to cause the DLP to trigger false positives.
What can security vendors do to prevent such hacks? Obviously, the more the adversary knows about the neural network algorithm, the more quickly they can generate hacked input that will cause the system to fail, so algorithm details must be protected. Geitgey recommends the use of “adversarial training”: generate lots of hacked images or data using back propagation and include them in your training data set.
So, the question arises: are we building enough security into our security systems?
Editor’s note: ISACA’s recent tech brief on artificial intelligence is available as a free download.
Transitioning into an IT audit or assurance role can be daunting, overwhelming and outright scary at first. As with many roles these days, individual performance expectations are high, your engagement results are heavily scrutinized by the client, and senior management constantly expects your efforts to provide a high level of value. This blog post focuses mainly on overcoming some of these challenges for individuals new to the IT audit or assurance profession, but it may be useful for others as well. Here’s what I’ve learned over the past two years; hopefully it serves you well.
1. Stick to the objectives. It can be easy at times during an audit/assurance engagement to start drifting off into additional areas of risk or concern that are not part of your current engagement. Avoid making unilateral decisions that introduce the potential for significant increases in workload or that jeopardize the objectives of your current engagement. Your manager should be the first person you talk to when an interview with an auditee or the testing of evidence uncovers additional potential risk not originally in the scope of the engagement. Write down your concerns, state the potential risk in quantifiable or objective terms, and present your case to your management. Their support and guidance can take you further than what you could hope to achieve alone, and may allow future engagements or discussions to be scheduled to address these potential risks. You initially set out to gain specific assurances on a particular area under review, right? Focus on doing just that.
2. Keep it simple. While IT auditors/assurance professionals typically focus on the inner workings and configuration of complex technology, the final result of your work is typically a report digested by a group that is non-technical in nature, such as an audit committee, your manager or senior management. Creating complex technical testing plans and working papers, and developing reports containing overly technical content that only you can understand, doesn’t produce a net positive benefit for you, and can actually be detrimental to your career. Ensure your work papers record the name and title of the individual you received each piece of evidence from, the date it was received, a short and specific purpose or reason for reviewing the evidence, and the key attributes supporting the purpose and the conclusion of your review. Practice report writing early and often, and accept feedback from management on ways to simplify the reports and other communications you produce.
3. Utilize repeatable testing frameworks. It is ideal to spend less time and effort building a custom audit/assurance program from scratch each time you have an engagement. While every audit/assurance organization operates differently and has different ways of completing and documenting its engagements, each organization typically pursues similar information. Tools like Excel offer a mechanism for conducting engagements in a uniform manner regardless of the focus area, by use of their inherent table and cell structure. Excel tabs can be used to define the particular technology your testing is focusing on, such as Exchange, Active Directory, NetApp, etc. Columns can be used to define the areas that repeat for every engagement, such as the perceived risks, identified controls, test methods, testing attributes, and results of your testing (i.e., effective or not effective). Rows can be used to provide focus for the items you have defined in the columns and allow the results of testing to be easily reviewed by others.
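The same row/column structure can be sketched without Excel; the following hypothetical fragment uses Python’s stdlib csv module, with column names mirroring the repeating areas just described. The Active Directory rows are invented examples.

```python
# Hypothetical repeatable-testing work paper: columns are the repeating
# areas described above; rows are invented Active Directory examples.
import csv
import io

COLUMNS = ["area", "perceived_risk", "identified_control",
           "test_method", "test_attribute", "result"]

rows = [
    {"area": "Active Directory",
     "perceived_risk": "Stale privileged accounts",
     "identified_control": "Quarterly access review",
     "test_method": "Inspect last review sign-off",
     "test_attribute": "Review completed within 90 days",
     "result": "effective"},
    {"area": "Active Directory",
     "perceived_risk": "Weak password policy",
     "identified_control": "Domain password GPO",
     "test_method": "Inspect GPO settings",
     "test_attribute": "Minimum length >= 14",
     "result": "not effective"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
workpaper = buf.getvalue()
print(workpaper)
```

Because the columns are fixed for every engagement, the same template can be reused whether the tab (or file) covers Exchange, Active Directory or a storage appliance; only the rows change.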
4. Network, network, network. Your ability to build and maintain positive working relationships with various internal departments and senior management can be a major deciding factor between a smooth, stress-free engagement and one where you wish you had never accepted the role. It’s easy for auditors/assurance professionals to be viewed in an adversarial light due to the nature of the work they perform. Networking with past or future engagement clients provides an opportunity for you to be perceived as something other than that person who just stressed them out in an engagement. It may seem scary or uncomfortable at first, but email can be your friend here; for example, randomly select a group of people you want to know better, Bcc them, and craft a simple “I’d be interested in networking with you and knowing more about your interests or who you are as a person.” Another approach is sending direct emails to particular individuals, or to individuals on a team you have an interest in. I’ve used this approach effectively to network with my CEO, chief security officer, project managers, and even my own team members. Networking is one of your most effective career tools, so use it!
5. Be humble. Your ability to grow and the speed at which you advance as an IT audit/assurance professional depend on your ability to consider and integrate constructive feedback provided by others. In my experience, it can be incredibly difficult and psychologically challenging for human beings to accept criticism, because it may be subconsciously perceived as a social or personal attack, even when it isn’t. While not all feedback is useful and constructive, it’s key to identify, accept and improve on your strengths and weaknesses when they are communicated to you in feedback from others. A generally positive attitude, an open mind and a belief that people are attempting to help you get better will serve you well in the long run of your career.
6. Research early and often. A convenient aspect of the IT audit/assurance profession is that yours is more than likely not the first organization, nor you the first individual, to have audited or gained assurance on your targeted subject area. A simple practice is performing Google searches for related publications, white papers and audit programs that will help narrow the risks and testing objectives on which your engagement should focus. If you are auditing Active Directory, TechNet is a highly helpful resource. If you’re auditing Oracle databases, Oracle provides security guides that help you determine the database views and parameter files you may want to focus on. Audit/assurance program services such as KnowledgeLeader, the IIA and ISACA also provide a wealth of information that you can use to research related risks and focus areas for your engagement.
7. Do not assess risk in a vacuum. You should consider involving, if not outright integrating, business area management and stakeholders in your risk assessment process for each engagement. Some organizations perform an organization-wide risk assessment to scope their annual engagement plan, and this is often where stakeholder involvement in identifying risk ends. Considering what the business areas perceive as risky provides at least two immediate benefits to you and the success of your engagement: it serves to validate whether your assessment of risk is aligned with the client’s, and the client can also see the value you provide in addressing concerns regarding relevant risks. Clients should not be allowed to influence the final direction of the engagement, but their input is invaluable to delivering a quality final work product.
As an IT auditor at a software company, I discovered that fixes for security vulnerabilities in our bespoke product had not been released to clients on a timely basis. We had been doing penetration tests for years, but obtaining the penetration test report had not translated into the fixes being released to users. Our clients remained exposed to known vulnerabilities, a situation that meant my employer was assuming all potential liability.
There were, it turned out, many things that slowed delivery of the fixes. Some factors were organizational and some were technical. I address the organizational challenges of client resistance and lack of internal commitment in my recent Journal article. But I will offer an additional insight for readers of Practically Speaking on overcoming technical complexities in patching a bespoke software product.
The technical complexities in the environment were nearly endless. Some penetration test findings applied only to certain versions of the software. Some fixes were beyond the capabilities of the development platform and required extensive software workarounds. Other fixes required a minimum browser level that was beyond a client’s reach. Sometimes a peculiarity of the client environment prevented one fix or another from working; e.g., a homebrewed single sign-on or a bespoke antivirus configuration could hamper the rollout of bug fixes. These complexities prevented the patch bundle from each year’s penetration test report from being deployed into the production environment.
The problem turned out to be both the vendor and the client believing each year’s findings could be resolved via a monolithic patch. By bundling the fixes, we greatly increased the likelihood that some complexity would render the patch unsuitable or undeliverable to a given client. Working with the developers, we devised a matrix that broke down each year’s penetration test results, with a row for each distinct finding in each report. We worked with the product owners to understand which finding applied to which version of the software. When a software fix was created, we recorded the version control branch number for the fix against the relevant finding. When a release was scheduled for a client, we worked with the project management office to ensure that the relevant fixes got scheduled.
It was messy and labor-intensive, but it worked. Supported by a strong version control system and a documented package release program, a reliable program for tracking penetration test fixes to production was put into effect. In time, we eradicated client exposure to known vulnerabilities, resolved our employer’s potential liability and were ready for each year’s fresh batch of findings.
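A hypothetical sketch of that matrix in code form: one record per distinct finding, linking affected versions, the fix’s version control branch and per-client delivery status. All identifiers below are invented for illustration.

```python
# Invented example of the penetration-test tracking matrix: one record
# per finding, with affected versions, fix branch and delivery status.
findings = [
    {"report_year": 2016,
     "finding_id": "2016-03",
     "affected_versions": ["4.1", "4.2"],
     "fix_branch": "bugfix/2016-03-xss",
     "delivered_to": {"client_a": True, "client_b": False}},
]

def undelivered(findings, client, version):
    """Findings affecting `version` not yet delivered to `client`."""
    return [f["finding_id"] for f in findings
            if version in f["affected_versions"]
            and not f["delivered_to"].get(client, False)]

print(undelivered(findings, "client_b", "4.1"))
```

Queried this way before each scheduled release, the matrix answers exactly the project management office’s question: which known fixes still need to go out to this client, on this version.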
Read Michael Werneburg’s recent Journal article:
“Addressing Shared Risk in Product Application Vulnerability Assessments,” ISACA Journal, volume 5, 2017.
In the early days of computing, use of private networks was more prevalent than it is now. Given that, the use of a network protocol (such as Telnet) that transmitted data in plain text was not cause for much concern. As the use of public networks increased, however, a more secure network protocol was needed. Offering encryption, authentication, and other security mechanisms, the Secure Shell (SSH) protocol has been adopted by organizations as a more secure means to connect remote servers to clients.
The security mechanisms offered by SSH are worthy of this widespread adoption. The use of SSH, however, has an element that requires consideration. For the typical Fortune 500 enterprise, which has several million SSH keys granting access to its production servers, a substantial portion of those keys is unused. This large number of keys can be attributed to the ability of those holding SSH keys to generate additional keys outside of the enterprise’s access management process. Weaknesses in an enterprise’s process for disabling SSH keys when administrators or developers leave the enterprise or move into new roles can also contribute to unneeded SSH keys. The bottom line is that an environment may exist where new keys are being generated while existing keys are not being disabled.
As suspected, this proliferation of SSH keys introduces the risk that access is held by those who no longer need it and, perhaps more concerning, by those who should not have the access at all. Given this potential risk of SSH key proliferation, why isn’t SSH a more prominent subject of discussion? A possible reason is that the rights granted through SSH keys may not be fully appreciated or understood by those other than administrators and developers. Another possibility is that the enterprise knows that the access granted by SSH keys is required for administrators and developers to perform their duties and believes that there is little, if anything, that can be done to manage SSH keys.
SSH keys present a unique risk to manage. After all, how common are access management practices in which users can grant themselves access without approval? That is risky enough, but add the attraction factor of SSH keys: SSH is widely used to manage servers, routers, firewalls, security appliances and other devices through accounts with elevated privileges, which makes SSH keys a particularly attractive target for malicious actors. A single SSH key (one that was never approved in the first place) can give a malicious actor access to a server, where elevated privileges can be used to create a new key that allows access to another server (transitive trust). So, the risk is real. But the solutions are, too.
One of the first elements of the solution is to identify an owner of the enterprise’s SSH strategy and protocol. The lack of a person or group with the authority and the responsibility to manage SSH can leave the enterprise in a position where policies and practices either are not created or are created but not enforced.
The second element of the solution is to develop and enforce a strong key management program. If an enterprise has not addressed key management, it would be worthwhile first to perform an inventory of SSH keys, determine how many keys are enabled and who the users are. Using the results of the inventory, disable unneeded keys and assess whether the access that should remain is at the appropriate level. Lastly, implement and enforce a continuous monitoring program to ensure that the SSH key management practices remain aligned with the enterprise’s strategy.
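As a minimal sketch of that inventory step (assuming OpenSSH-style authorized_keys files; the file content below is fabricated), one could parse each line into a key record and then reconcile the results against the access management system.

```python
# Minimal sketch of an SSH key inventory over an authorized_keys file.
# The sample content is fabricated; a real inventory would sweep every
# server and correlate key comments/owners against the IAM system.
def inventory_authorized_keys(text):
    """Return one record per key line: its type and trailing comment."""
    keys = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        # options (e.g., command="...") may precede the key type field
        for i, f in enumerate(fields):
            if f.startswith("ssh-") or f.startswith("ecdsa-"):
                comment = " ".join(fields[i + 2:]) or "(no comment)"
                keys.append({"type": f, "comment": comment})
                break
    return keys

sample = """\
# deploy keys
ssh-ed25519 AAAAC3Nza...key1 alice@laptop
command="backup.sh" ssh-rsa AAAAB3Nza...key2
"""
print(inventory_authorized_keys(sample))
```

Keys with no comment, or comments that do not map to a current employee or approved service account, are exactly the candidates for the “disable unneeded keys” step described above.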
So, yes, the risks associated with SSH keys are real. An enterprise’s performance of an SSH key inventory and implementation of consistent monitoring of its SSH key program, however, will go a long way toward mitigating that risk.
Editor’s note: ISACA’s new SSH protocol audit/assurance program is free to members and available for download.
Editor’s note: Ade McCormack is keenly interested in the anthropological factors that drive digital innovation. McCormack, who will deliver the opening keynote at CSX Europe 2017, to take place 30 October-1 November in London, UK, visited with ISACA Now about the main drivers that have set digital innovation in motion, why some CEOs are hesitant to invest in digital transformation and more. The following is a transcript, edited for length and clarity:
ISACA Now: How is our attention being ‘hijacked’ these days, and what can we do about it?
There is money in capturing people’s attention, so it has become a professional pursuit. The brain conceptually is a muscle, so we need to exercise it to improve our ability to maintain our attention.
ISACA Now: You’ve written about trading privacy for convenience. Are you surprised at how often people make that choice?
The Facebook generation does not place the same emphasis on privacy as older generations. The older generation has perhaps been remiss in not spelling out the associated risks, but then there was no Digital Age parenting playbook at the time. In any case, we all make this trade-off. The question is how informed is one’s decision-making in this respect.
ISACA Now: From a management perspective, what are some key aspects of digital transformation that often are overlooked?
The problem with digital transformation is that it is an investment in the organization’s future – a future where it will likely be a future CEO who takes the credit. Very few CEOs are this selfless. They are not true stewards of the organization, hence the lack of transformational activity today.
ISACA Now: What have been your main takeaways from how the business community has been preparing for next year’s GDPR deadline?
Given its mandatory nature, only foolish organizations would not categorize this as a strategic priority.
ISACA Now: What are some of the anthropological factors that have set the digital age in motion?
There are a number of these, including our need to be mobile, social and creative. I look forward to exploring them in more detail during my keynote.
There has been a lot written over the past year or so about the EU General Data Protection Regulation (GDPR) – what is required, and what needs to be accomplished sooner rather than later in order to meet the May 25, 2018 compliance date. And with the GDPR’s 99 articles containing hundreds of requirements, there are certainly many topics that must be addressed.
While seven to eight months may seem like a long time to address them all, it is important for those responsible for GDPR compliance activities to realize that some of those activities will necessarily take many weeks of planning and preparation, and then most likely many additional weeks of actual implementation.
One case in point is performing a GDPR-compliant data protection impact assessment (DPIA). I’ve heard and read a variety of statements made about DPIAs over the past several months, and I want to correct and clarify a few that have been especially concerning.
A significant purpose of requiring organizations to conduct DPIAs is to identify and reduce the data protection risks within projects, networks and systems; to reduce the likelihood of privacy harms to data subjects; and to determine the extent to which each of the applicable 99 GDPR articles has been implemented by the organization. Traditional PIAs have not fully addressed harms to data subjects (though that is important to address whether or not a DPIA is required), and they certainly did not cover the DPIA requirements that are unique from traditional PIA topics.
To the specific point of performing a DPIA, I recommend that organizations use a framework that not only addresses and meets the GDPR requirements, but can also meet the requirements for performing other types of privacy impact assessments. I’ve created a PIA framework, based upon the ISACA Privacy Principles, that consolidates similar privacy requirements and topics under the 14 ISACA Privacy Principles and maps all of the DPIA requirements to them; those DPIA questions also map to other standards, frameworks and regulatory data protection requirements.
I will go over the associated methodology on 28 September at the “How to Perform GDPR Data Protection Impact Assessments” ISACA webinar (www.isaca.org/Education/Online-Learning/Pages/Webinar-How-to-Perform-GDPR-Data-Protection-Impact-Assessments.aspx), and will also point to a spreadsheet I created for ISACA members to use for performing DPIAs, as well as a new version of an automated DPIA tool I created for ISACA to make available to members.
I hope you can join me!

Category: Privacy Published: 9/14/2017 3:09 PM
It is a terrible time for privacy in the United States. There are very few institutions that we entrust to hold nearly all our financial records, and one of them, Equifax, admits to losing them.
The full impact of the breach will be felt over time, and right now nothing has changed in our lives besides a new worry and uncertainty. Perhaps, as with other breaches such as Anthem and Yahoo, we will live in fear for decades without ever feeling the direct impact.
However, I would argue that Equifax has a potential to be the most impactful breach to its victims. The repositories of data that include personal, financial and confidential information will not dissipate easily over time. Unlike with many medical conditions, or simply stolen passwords, victims of financial and personal information theft do not get better. We can’t escape our credit history and financial situation, so the abusers of the stolen data will be able to pursue us through the years.
How did it happen? We do not have all the details, but one would expect that an organization charged with holding this type of data would not fall to an attack vector that had been a known problem for half a year. And even given that vulnerability, a breach of a single website should not lead to any stolen data. There must be safeguards.
Let’s say a web server was not patched, but it is the job of intrusion prevention systems to detect an exploit. When hackers were roaming free within a compromised server and its databases, where were the security safeguards identifying the abuse? Further, consider that hackers reportedly stole gigabytes, if not terabytes of data. This type of unusual activity should be noticed by the network traffic monitors, and defensive tools.
Yet, Equifax infrastructure allowed for the data theft without much of an alert. And for us, the victims, what’s the recourse? One year of free credit protection from TrustedID service, owned by Equifax? There is something to be said about a company offering protection against identity theft that could not protect its own data. And what happens after one year? Would hackers delete the stolen data, or would they keep abusing the accounts while the victims resort to paying Equifax for a protection service from the loss that it caused?
There is a lot of angst, and confusion, and too many questions that we do not know how to answer. The scariest thing is that we do not know what is coming and how badly this will impact the victims.
The big question still remains: who do we trust with our data? Do we, the consumers, have any say or choice? Should there be government sanctions for these types of events?
As a security professional, I see another lesson in not-so-good security practice. What could have been done to prevent this? What could have been done during the incident response and investigation?
Time will tell if this is the most impactful breach for us, or if this is a scary event from which the stolen data never sees large-scale abuse. Stay tuned.
Editor’s note: Alex Holden will be presenting on optimizing defenses against invisible threats at CSX North America, to be held 2-4 October in Washington, D.C.

Category: Security Published: 9/12/2017 11:42 AM
In the wake of major disasters, companies often retrench to their board rooms and ask questions about the state of their own resilience. These questions follow one of two tracks: First is a retrospective post-mortem of their own company, or preferably an affected competitor. It starts with a question like, “How would we be affected or react if this happened to us?”
In the wake of the Equifax consumer data breach, many of the stories in the past days share well-articulated insights that are nonetheless written with that full 20/20 hindsight in play. There even is evidence that Equifax itself took this path two years ago in the wake of the Experian data breach. While well-intentioned, this hindsight-driven approach is fundamentally flawed.
When penning an article or responding to a board question post-mortem, we are afforded luxuries that our disaster-distressed selves would not be afforded in a real scenario. For example, compare your mental state now versus in a true disaster – the shock and suddenness (then) vs. the quiet reflectiveness (now). One of the greatest underestimations of post-mortems is the effect of imperfect and often conflicting information during a live, unfolding crisis. To illustrate this, consider the following three fictitiously timed statements:
If your blood pressure progressively elevated from “slight nuisance” to “we may lose our company,” then you’re likely in good company with Equifax’s executives as they gleaned more information about the incident from the initial discovery until today. It is much easier to think about your actions for Day 0 when you know what Day 20 looks like, but we almost never do.
Anyone who has read a post-mortem report, though, will attest that it is unlikely that the report captures the nuances of timing and progressive urgency. Instead, the report highlights the diseased final state and what the company should have done to protect itself in the first place, often forgetting about all the other possible infections that could be acquired.
So, if studying Equifax and gleaning lessons learned, even in light of the little we know, is an easy but relatively unproductive sport for our own resilience, what is the alternative?
The second track is the one that we advocate for in our trainings with executives and boards. This track makes a much more natural supposition about the state of risk in cyber security. Instead of assuming perfect hindsight about random one-off events, let’s instead suppose that Equifax treated cyber risk much the way weather risk is accounted for by a large farming cooperative. Our assumptions regarding cyber threats would instantly shift from being unknown and one-off to mitigatable risks.
Figure 1—Cyber risk is an influencer to traditional enterprise risk categories
Farmers understand that crops are their most important assets. They understand and monitor any threats, from weather to insects to hungry predators that might affect those crop assets. They also know the vulnerabilities that their particular crops, in their particular locations, have compared to those of other farmers in other locations, and they have people at the ready to mitigate the impact to their farms, should disaster strike.
The Equifax breach will likely change many upcoming boardroom agendas and spur more communications about cyber breaches among senior executives. Executives, security professionals, and the public at large should then take this opportunity to think about what their most important crops are, what true vulnerabilities exist in them, and learn better how to mitigate against those risks.
I am sure most practitioners by now have probably heard about the Equifax breach. If you have not yet, get ready to hear about it nonstop—probably for the next year or two at least. Why? Because it eclipses even the 2013 Target breach (which people are still talking about) both in the number of individuals potentially impacted (143 million) and the potential sensitivity of the records involved (which include social security numbers, dates of birth, addresses, credit card numbers and driver's license information).
The details of this are still unfolding, so we do not have the full picture yet. It will probably be a few months before we do. But in the meantime, I think we know enough to highlight at least a few lessons learned. Specifically, things that it behooves organizations to have in mind as they plan (and ideally exercise) their own incident response strategies. We can use what happened with Equifax as an illustration of why these principles are a good idea.
Since it is early in the cycle, it bears noting that a grain of salt should be applied as we go through these. After all, emerging details might change our understanding of specifically what transpired. But in the meantime, there are a few items that, based on what we know right now, are useful takeaways for other organizations planning their own response processes.
Lesson 1: Application Security
The first takeaway is based on what we know about the root cause. We do not fully understand what happened, but we do know that it was caused by (per Equifax) a “website application vulnerability.” It is unclear whether they mean an issue in the underlying web server (or supporting software) or an issue in the application running on it, but we know that organizations tend to struggle with application security—so I do not think anyone would be surprised if it is the latter. This event should, therefore, serve to underscore the importance of application security generally, i.e., having a robust development and release life cycle, “building security into” production applications, and the importance of robust testing of both applications and the underlying technical substrate upon which they reside.
Lesson 2: Optics
The second thing we can learn is about management of the optics during the incident response and breach notification processes. Equifax is taking a bit of flak in the press and on social media because three key executives (including the chief financial officer and the president of US operations) sold more than US $2 million worth of stock in August. That was after the breach was discovered internally, but before it was disclosed to the public. Equifax told the press that these executives had no knowledge of the breach at the time (trading on that knowledge would have been illegal), but with better internal communication about the incident, Equifax could have avoided what is bound to be a source of serious bad press in the coming months.
This is why it is a good idea to foster and maintain clear, open and timely communication channels between all areas (including executives and legal counsel) as incident response events unfold. Additionally, the point has been made in the press that the free identity theft protection offered by Equifax requires those accepting that offer to give up their right to sue or participate in a class action. Those are not great optics either. Consumers are likely angry about this, so hanging out an offer “with strings attached” is potentially caustic.
Lesson 3: Encryption
It probably stands to reason that, had the compromised information been encrypted, that fact would have been included in the information Equifax made public about the breach. To the previous point about optics, stating that the information was encrypted would have defused quite a bit of Equifax’s current public relations nightmare. Based on this, we can probably assume for now (though later facts might certainly indicate otherwise) that it was not encrypted.
Encryption of data at rest is not difficult to deploy nowadays; that is true whether the data are structured or unstructured. So, if you have a database of millions of social security numbers, bulk storage of files containing financial information or any other situation that could be explosive from a privacy standpoint, asking why those data are not encrypted is probably worthwhile. There are absolutely situations where encryption can be challenging to implement, but balance that against the explosive potential consequences of a large-scale breach.
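As an illustration only (nothing here reflects Equifax’s actual architecture), here is a minimal sketch of field-level encryption at rest using the widely used Python `cryptography` package; the sample value and key handling are hypothetical, and in production the key would come from a key management system, never sit alongside the data it protects:

```python
from cryptography.fernet import Fernet

# Hypothetical key handling: in production, retrieve the key from a KMS/HSM;
# never store it next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive field before writing it to bulk storage.
ssn = b"078-05-1120"  # a well-known sample SSN, not a real person's
token = fernet.encrypt(ssn)

# The stored token is opaque without the key; decryption is explicit.
assert fernet.decrypt(token) == ssn
assert token != ssn
```

With something like this in place, a bulk copy of the datastore yields only ciphertext, which changes the headline of a breach considerably.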
Lesson 4: Be Alert to Deadlines
Equifax is also taking criticism in the press about the time that it took to notify impacted individuals. They discovered the breach on 29 July, but we are only learning about it now. Keep in mind that some jurisdictions have a specific “clock” by which you must notify customers (e.g., Florida, USA, is 30 days—or 45 with an extension). Of course, law enforcement may direct an organization to delay that notification (we do not yet know if that was the case in this situation or not), but it is helpful to take these deadlines into account, include them in incident response planning and put mechanisms in place to make sure they are followed during an incident when stress levels are high.
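To make the deadline arithmetic concrete, here is a tiny hypothetical helper (the 30- and 45-day figures are Florida’s clock as described above; the function itself is our own sketch, not any statute’s official calculation):

```python
from datetime import date, timedelta

def notification_deadline(discovered: date, clock_days: int = 30) -> date:
    """Last permissible notification date under a simple N-day breach clock."""
    return discovered + timedelta(days=clock_days)

# Equifax reportedly discovered the breach on 29 July 2017; under a plain
# 30-day clock, notification would have been due by 28 August 2017.
print(notification_deadline(date(2017, 7, 29)))                 # 2017-08-28
print(notification_deadline(date(2017, 7, 29), clock_days=45))  # 2017-09-12
```

Wiring a calculation like this into the incident response runbook makes the deadline visible on Day 0 rather than Day 20.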
There are likely numerous other lessons that will surface as events unfold. We will just need to watch, wait and analyze them as they come.
Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.

Category: Security Published: 9/8/2017 2:54 PM
ISACA’s IT Audit Leaders Forums, conducted this year at North America CACS and EuroCACS, fostered productive dialogue about real-world challenges impacting IT audit directors.
I was fortunate enough to participate at EuroCACS, and I was especially pleased with the insights gained from listening to my peer IT audit directors. The opportunities and risks discussed were a confirmation that our audit teams are focusing on a similar set of emerging risks.
On top of this, the forum facilitated an honest and open discussion amongst peers about the short- and long-term challenges we are facing to continue to perform high-quality audits.
Going forward, special emphasis needs to be placed on raising our joint understanding of:
Altogether, it was an inspiring and insightful forum that provided support for selecting the proper audits while shaping my 2018 audit plan and growing my European network.
Editor’s note: For additional insights from the 2017 IT Audit Leaders Forums, see ISACA’s IT Audit Leaders Forum recap.

Category: Audit-Assurance Published: 9/11/2017 3:03 PM
Most of us have heard the phrase “What you don’t know can’t hurt you.” While this may hold true for some circumstances, in the case of an audit, the opposite is true.
A large part of an auditor’s job is to discover and know about exposures and gaps that could hurt the organizations for which they work. An auditor’s remit includes finding, analyzing and documenting an ever-increasing list of things that organizations don’t know about but have the potential to cause damage.
This task can be harder than it sounds, particularly when it comes to an organization’s use of technology. Why? One reason is that auditors need to be alert to the specific risks, threats, issues and other problem areas that can arise related to the specific technologies in use. One area that is particularly challenging is the assessment of cryptographic systems: modules, software, and application components that employ cryptography, and the use of cryptography generally throughout the organization.
Several factors make assessing cryptographic systems more difficult than other technologies. First, it’s ubiquitous – almost every organization (whether it’s known or not) makes extensive use of cryptography to secure everything from data transmissions to employee remote access. Cryptography is used for authentication, to securely store data, and to prove the integrity of that stored data. But despite its ubiquity, it’s a little like the plumbing in our homes: there when we need it, but not something we stop to think about unless something goes terribly, terribly wrong.
Second, cryptographic assessment is not a skill set in which all auditors have extensive experience. Many seasoned auditors know the fundamentals of how cryptography works, but implementation details, i.e., the mathematics underpinning its operation and the engineering aspects of authoring a library, toolkit, or component, aren’t generally at the top of an auditor’s tool box.
Because many auditors aren’t deep crypto experts and there are few general assessment guides for auditing these systems, cryptographic assessment may get short shrift during audits. This is a potential security concern, because poorly implemented, misused, broken or otherwise operationally deficient cryptography can represent significant risk to an organization.
Now, this doesn’t mean that every auditor needs to be the next Alan Turing – just like they don’t need to be Brian Kernighan to assess a business application written in C! But many could benefit from having a guide that explains the basics of cryptographic system assessment to help them find and identify potential risk areas; for example, potential implementation issues, best practices, known weak configurations, etc.
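To make “known weak configurations” concrete, here is a deliberately simplified sketch of what such a check might look like in practice; the inventory format and the (non-exhaustive) weak-algorithm list are our own assumptions for illustration, not taken from the ISACA guide:

```python
# Non-exhaustive examples of algorithms/parameters widely considered weak.
WEAK_ALGORITHMS = {"md5", "sha1", "des", "3des", "rc4"}
MIN_RSA_BITS = 2048

def assess_crypto_inventory(inventory: list) -> list:
    """Flag obviously risky entries in a (hypothetical) crypto inventory."""
    findings = []
    for item in inventory:
        name = item["system"]
        algorithm = item.get("algorithm", "").lower()
        if algorithm in WEAK_ALGORITHMS:
            findings.append(f"{name}: weak algorithm {item['algorithm']}")
        if algorithm == "rsa" and item.get("key_bits", 0) < MIN_RSA_BITS:
            findings.append(f"{name}: RSA key below {MIN_RSA_BITS} bits")
    return findings

findings = assess_crypto_inventory([
    {"system": "legacy-vpn", "algorithm": "3DES"},
    {"system": "app-signing", "algorithm": "RSA", "key_bits": 1024},
    {"system": "backups", "algorithm": "AES", "key_bits": 256},
])
print(findings)
# → ['legacy-vpn: weak algorithm 3DES', 'app-signing: RSA key below 2048 bits']
```

A real assessment would, of course, go well beyond a lookup table, but even a simple inventory-and-flag pass surfaces the low-hanging risk.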
To help address this, ISACA has authored Assessing Cryptographic Systems. This free resource provides information to the IT audit community about commonly occurring issues in cryptographic systems as well as one possible methodology to assess the use of cryptography in an organization. As a companion piece, ISACA released a sample security policy, “Sample Policy on the Use of Cryptographic Controls,” that can be adapted by an organization to supplement or refine its existing policy on this important topic.
Please take a look at these resources, and let us know if they helped you with your audit work by leaving a comment on this post.
When growing up, many of us probably heard warnings from our parents to be careful in certain environments—the local woods, a busy side street, or at the beach. Our parents cautioned us out of concern for our well-being, and it served a purpose.
Their warnings were meant to raise our awareness of our surroundings, and ensure we would exercise care when appropriate. They reminded us that the safety of our environment depended upon the decisions we made. Today, we would be well-served to add one more domain to those danger areas drilled into us: the world of cyber.
Like the woods and the beach where we played when we were young, cyber offers a great amount of reward, tempered with significant risk if we’re not prepared.
How do we evolve to a CyberCulture, though? How do we convince people that, for all the positive potential of technology, there is a dark side as well? How do we especially reach today’s digital natives, who have grown up largely responsible for their own security in cyberspace, and take security somewhat for granted?
It starts with an initial decision: at what level should cyber security be a part of our daily lives? For a CyberCulture, in which security is a top-of-mind concern, the answer is simple—cyber security should be as prevalent in our lives as possible. There is one security measure that comes to mind that is prevalent everywhere we look, from shopping carts, to cars, to airplanes, regardless of whether we are in Kenya, Kolkata, or Kentucky.
Cyber security needs to become the modern-day equivalent of seatbelts that can keep us protected when we are navigating down new roads at high speeds. Yes, cyber security is a ‘security’ issue—but it’s a safety issue as well, for all of us. Nations, enterprises and individuals need strong cyber security—and all these entities need it for both safety and security. Most significantly, cyber security needs to become pervasive at all of those levels, and no one level is more important than another. To create a safe, secure CyberCulture, people, enterprises and nations needs to function in as complementary and synergistic a manner as possible.
For nations and governments, cyber security must be a prime concern, across the breadth of government, at all levels, and in all functions of government. Last month’s DefCon 2017 gave us an object lesson in protecting the entirety of governmental operations, when conference attendees hacked various election equipment in a matter of hours. Assessing the capabilities—and vulnerabilities—of that equipment should be as regular an activity in government as ordering office supplies. It should be part of a CyberCulture.
For individuals, the journey towards a CyberCulture should begin as early as possible. We need to make cyber security and good ‘online hygiene’ part of core curricula at the pre-university level, to embed the concept of security online at the earliest possible levels, and ensure that tomorrow’s digital (and eventually cognitive) natives don’t make cyber security an afterthought. Much like many universities already include humanities or similar courses as graduation requirements, we need to give similar importance to cyber security courses at the university level.
And, just like we would subject potential candidates for a cyber security post to an evaluation of their abilities, maybe it’s time to start evaluating all potential hires—regardless of where they will work in the enterprise—on their abilities to assist in securing the enterprise through sound personal security habits. Likewise, the enterprise should be evaluated on a regular basis for how cybersecure its operations are, not merely from a technical standpoint, but from a cultural standpoint as well. In today’s digital economy, everything is connected; a hack of the cyber infrastructure of one enterprise imperils all with whom they work.
Creating a CyberCulture in which cyber security is as pervasive and commonplace as seatbelts isn’t a ‘nice goal’—it’s a necessity. We are all part of the digital economy now; our digital footprints span continents, borders and time zones. We’ve all helped to make cyberspace what it is today, contributing to its awe-inspiring power and frightening vulnerabilities. It’s up to all of us to make cyber security what it can be, tomorrow, and to ensure that future digital natives continue to enjoy the positive potential of technology.
Buckle up… it promises to be a thrilling ride!
Editor’s note: This blog post by ISACA CEO Matt Loeb originally appeared in CSO.

Category: ISACA Published: 9/7/2017 3:07 PM
Editor’s note: Siphiwe Moyo, author and motivational speaker, will deliver the closing keynote address at Africa CACS 2017, which will take place 11-12 September in Accra, Ghana. Moyo, an expert on developing human capital and strategically managing change, recently visited with ISACA Now about what he terms an ‘entitlement culture’ and how the financial markets produce important life lessons. The following is an edited transcript:
ISACA Now: What is the biggest key to an organization developing a healthy culture that leads to strong morale among its workforce?
Having the correct job and organizational fit, including placing people in jobs that are in line with their strengths.
ISACA Now: How do you define an ‘entitlement culture,’ and why is it problematic?
It’s a culture where people feel that someone else owes them something. It’s problematic because without taking responsibility for their own lives and progress, people cannot perform fully.
ISACA Now: One of your books is Bulls & Bears: Life Lessons from The Financial Markets. In what ways do the markets produce important life lessons?
If you study the markets long enough, you see how they really teach us about life. Markets go up and down, but if you have your fundamentals right, over the long term you will succeed. Success is about doing those small things every day that eventually lead to big results.
ISACA Now: What are some unique opportunities for enterprises in Africa?
Infrastructure – railroads, and regional integration in terms of roads, energy and water.
ISACA Now: Today’s business technology professionals are dealing with a rapidly evolving technology landscape. What is some advice you will give Africa CACS attendees about how to ensure their organizations are embracing these changes constructively?
It’s about mindsets. We know in our heads that we need to embrace change; as Jack Welch says, if the external environment changes faster than our organizations, the end is near. People often feel change fatigue, and I will be helping the delegates overcome that.
I find working as an IT auditor a fulfilling and enjoyable job; however, as with any profession, there are times when it can be hard. There are certainly days when I feel that there are “clowns to the left of me, jokers to the right.”1 The clowns are auditees who are always pushing back on audit recommendations or, if they do accept them, never seem to implement them. The jokers are the audit committee members who seem to have never-ending requirements for more and more assurance without allocating any additional resource.
However, I know it is unfair to categorize auditees as clowns. They, too, are short of resources and are constantly trying to juggle implementing new systems with keeping the lights on. Responding to and dealing with audit takes time. I, therefore, believe that it is vital to agree on defined standards and benchmarks with the auditee that will be used by audit to evaluate the subject matter. In other words, agree on and establish the criteria.
Likewise, the audit committee members are not jokers. They are trying to meet stakeholders’ needs while worrying about the cyber security of the enterprise and a constant stream of new legislation, such as the EU General Data Protection Regulation (GDPR). However, not all services or their supporting applications are equally important to the enterprise; therefore, it makes sense to categorize the applications and provide greater assurance for those deemed more critical.
I discuss both concepts together with the idea of using control self-assessment in my recent ISACA Journal column, “Doing More with Less.” I would be delighted to hear your thoughts. Do you agree? How can it be improved?
Nevertheless, this is just one suggested approach for getting more assurance. I believe that collaboration among ISACA members is key to getting the most from our time and resources. Why reinvent the wheel? I would, therefore, also like to hear how you do more with less. If there are clowns to the left of you and jokers to the right, be assured that ISACA is “stuck in the middle with you.”2
Read Ian Cooke’s recent Journal column:
“Doing More With Less,” ISACA Journal, volume 5, 2017.
1 “Stuck in the Middle with You,” Stealers Wheel, a band from Paisley, Scotland, released this popular song in 1972. https://genius.com/Stealers-wheel-stuck-in-the-middle-with-you-lyrics
Analyst firm Gartner projects that worldwide spending on IT security products and services will grow 7 percent, year over year, to reach a total of US $86.4 billion in 2017.
Historically, organizations have had a tough time allocating security expense budgets because:
In addition, in the absence of established norms on security spending metrics, many organizations adopted a magical figure of 4% of the total IT budget as the acceptable amount to spend on information security.
Later, in line with the changing times, ISACA rightly clarified that security is a business enabler, and any spend on it needs to be monitored as an investment in line with the tenets of IT governance.
Now, with the current technological tsunami, accelerated business initiatives struggling to keep pace, and regulatory pressure on top of it all, information security – unsurprisingly – has become the number one priority. Gartner’s analysis further substantiates this. The firm’s significant points include:
All in all, this is great news for the security profession. However, why should any organization spend millions of dollars on anything without a solid cost justification? Security costs, like any other costs, should be justified; after all, more funding does not necessarily mean better security.
Investments in security controls do not directly contribute to revenue, but they prevent losses and safeguard reputation. Hence, security professionals should be able to help their organizations by using suitable security ROI metrics to choose the most economical and technically acceptable solution.
This will surely set in motion a strong, win-win relationship between the security profession and business leaders for the coming years, and establish security practitioners as trustworthy partners to clients worldwide.

Category: Security Published: 9/5/2017 8:52 AM
Privacy has had its Chernobyl moment.
Maybe it was when a foreign power stole everything every American had submitted for a clearance form from the Office of Personnel Management. Maybe it was when an insurer lost control of the health records of millions of Americans. Maybe it was when the United Kingdom spilled its child benefit data. Maybe it was when India created a biometric ID system and sort of forgot about controls.
However you want to define a privacy Chernobyl, it, or something like it, has happened.
We exist in a world where our expectation of privacy has been shattered, diminished and demeaned, and yet privacy invasions still outrage us. What we haven’t done is built a cap, and certainly not a sarcophagus that’s designed to protect the radioactive slag for an appropriately long time.
Privacy failures still make the news. Failures on the part of firms who have promised to take it seriously still result in 20-year consent decrees. (Recall that 20 years ago, in 1997, Alta Vista was still the dominant search engine, the Motorola flip phone was dominant amongst those weirdos who bothered with a cellphone, and 56k was pretty good internet connectivity through your phone line. Will word choices that seem agreeable today be sensible after 20 more years of technological acceleration?)
I want to encourage you to use Implementing a Privacy Protection Program: Using COBIT 5 Enablers With the ISACA Privacy Principles as a way for you to realize that personal data is radioactive, and you want to start treating it as such. If you accumulate too much, you risk a meltdown, but even when you have it in small doses, you want to be intentional about it. You want to know why it’s here, how you’re protecting it, and how to get rid of it when the risk exceeds the reward.
You should be thinking of ISACA’s new privacy protection guidance as an important move forward in your privacy journey. It’s a necessary step, and going through the steps will help you understand if there’s more that you need to do.
Editor’s note: Additional privacy-related guidance can be found in ISACA’s new white paper, Adopting GDPR Using COBIT 5.
About Adam Shostack: Adam is a consultant, entrepreneur, technologist, author and game designer. He's a member of the BlackHat Review Board, and helped found the CVE and many other things. He's currently helping a variety of organizations improve their security, and advising and mentoring startups as a Mach37 Star Mentor. While at Microsoft, he drove the Autorun fix into Windows Update, was the lead designer of the SDL Threat Modeling Tool v3 and created the "Elevation of Privilege" game. Adam is the author of "Threat Modeling: Designing for Security," and the co-author of "The New School of Information Security."
Many of us are creatures of habit, and changing our ways can be difficult. It is much easier to do so, however, when the new way is more convenient – not to mention more secure – than the old method.
That’s just the case with the new password guidance from NIST, released in June. The guidance calls for longer phrases that are easier to remember, as opposed to the use of special characters, blends of uppercase and lowercase letters, and frequent password resets – all hallmarks of NIST’s previous, well-entrenched password guidance.
This move toward improved usability was not done at the cost of sound security. In fact, the creator of NIST’s previous direction on passwords, Bill Burr, acknowledged to The Wall Street Journal that the older guidance was “barking up the wrong tree,” and not based on the caliber of data that he would have preferred. The new password guidance will make for passwords that are actually more difficult to hack.
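To make the shift concrete, here is a minimal sketch of what a password check aligned with the new NIST direction (SP 800-63B) might look like: enforce a minimum length, skip composition rules entirely, and screen candidates against a list of known-compromised passwords. The `COMPROMISED` set and the 64-character ceiling used here are illustrative assumptions; a real verifier would consult a large breached-password corpus and could accept even longer passphrases.

```python
import unicodedata

# Illustrative stand-in for a breached-password corpus (assumption, not a real list).
COMPROMISED = {"password", "123456", "qwerty", "letmein"}

def check_password(candidate: str) -> tuple[bool, str]:
    """Sketch of an SP 800-63B-style check: length and blocklist only,
    no special-character or mixed-case composition rules."""
    # Normalize Unicode so visually identical passphrases compare equal.
    pw = unicodedata.normalize("NFKC", candidate)
    if len(pw) < 8:
        return False, "too short: require at least 8 characters"
    if len(pw) > 64:
        return False, "over this sketch's 64-character ceiling"
    if pw.lower() in COMPROMISED:
        return False, "appears in compromised-password list"
    return True, "ok"
```

Note what is absent: there is no check for digits, symbols or uppercase letters, and nothing here forces periodic resets. A long, memorable passphrase such as `correct horse battery staple` passes, while a short, commonly breached string fails.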
While NIST’s new guidance figures to be well-received, raising awareness is the short-term challenge.
An ISACA micro-poll, conducted just after NIST’s announcement, showed that the majority of the respondents – audit and security professionals at organizations with more than 5,000 employees – were unaware of the new guidance, and consequently unsure how quickly it could be implemented. While those results are no surprise given how fresh the guidance is, they reinforce that there is much awareness-spreading to be done – including at ISACA. We have a range of opportunities to support NIST’s guidance by updating the training and education materials we offer our professional community, as well as reinforcing the change at ISACA conferences and through our exam procedures.
At the enterprise level, changing password policies is a necessary first step before implementation. Otherwise, enterprises will be implementing password procedures that may contradict existing policies, which could cause headaches when external auditors flag the disconnect.
Emphasizing multifactor authentication is another important piece of the puzzle. The majority of respondents to ISACA’s poll indicated that less than half of their applications require two-factor or multifactor authentication – a practice that should be adopted more widely and is strongly advocated by NIST. Multifactor authentication should be more accessible than ever given the advancement of fingerprint and facial recognition technology. Even when multifactor authentication is in use, NIST’s new password guidance remains relevant, since passwords often are among the factors being used.
We are in the early stages of what will be a major course correction on passwords. NIST’s previous guidance is heavily entrenched, with 95% of respondents to ISACA’s poll indicating their enterprise adheres to practices such as frequent password expiration and requiring passwords to contain lowercase and uppercase letters, numbers or special symbols. Users, on the other hand, have frequently complained about the difficulty of remembering complex passwords and having to cope with expired ones. Chances are they will welcome this more user-friendly NIST guidance.
The level of buy-in for the previous NIST password guidance did not happen overnight, and adoption of the new guidance will not, either. But given the opportunity to simultaneously improve security and alleviate the password frustrations of the status quo, it is only a matter of time before NIST’s new guidance gains widespread momentum.