The partial US government shutdown is the longest in modern history and continues to drag on as both political parties remain entrenched, refusing to budge from their respective positions. The inability to reach an agreement, or at least to open the government, may have lasting impacts on the effectiveness of cybersecurity in the federal government.
The near-term effects of the shutdown are more apparent than some of the downstream impacts. We regularly see or hear about furloughed staff not receiving a paycheck, the growing list of .gov websites with expired Transport Layer Security (TLS) certificates, unavailable National Institute of Standards and Technology (NIST) content, or the bare-bones staff left to perform system monitoring. By contrast, the adverse long-term impact of a prolonged government shutdown is much harder to quantify. Let’s take a closer look at some affected elements, though the extent of the consequences will only be known at a later date.
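The expired-certificate problem, at least, is easy to observe directly. Below is a minimal sketch (Python, standard library only; any hostname passed in is illustrative) that reports how many days remain on a site’s TLS certificate:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(cert, now=None):
    """Days until a certificate expires, given the dict returned by
    SSLSocket.getpeercert(). A negative result means already expired."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    if now is None:
        now = datetime.now(timezone.utc).timestamp()
    return (expires - now) / 86400.0

def fetch_cert(host, port=443, timeout=10.0):
    """Fetch a server's certificate via a TLS handshake. Note that a
    certificate that has *already* expired fails the handshake with
    SSLCertVerificationError -- the same failure visitors to the
    affected .gov sites see in their browsers."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

A monitoring job could call `days_until_expiry(fetch_cert(host))` for each site and alert when the result drops below, say, 14 days, so certificates get renewed before browsers start throwing warnings.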
NIST resources being affected by the shutdown hurts both the public and private sectors. Its guidance is heavily relied upon for compliance and security, regardless of industry. NIST is expected to release updates to major Special Publications in 2019, including SP 800-53 Rev. 5, SP 800-53A Rev. 5, SP 800-160 Rev. 2, and SP 800-171 Rev. 2. Updates to FIPS 199 and FIPS 200 are also on the horizon. The shutdown may delay the completion of these efforts and thus push back adoption by the government and private industry.
The government already faces an incredible cybersecurity skills and resources gap. The shutdown is surely going to exacerbate this problem by making it more difficult to attract talented new employees and fill critical needs. University graduates are going to think twice before taking a job with the government over the private sector. It may even reach the point where existing government employees with in-demand skills start seeking new employment opportunities.
DHS’s new Cybersecurity and Infrastructure Security Agency currently has a large percentage of its staff furloughed. The new agency “leads the national effort to defend critical infrastructure against the threats of today, while working with partners across all levels of government and in the private sector to secure against the evolving risks of tomorrow.” But with such a significant portion of its staff not working, the agency’s ability to meet its goals and objectives will be affected.
Some government projects that are not currently on hold may soon be reaching the point where they run out of funding and have to be stopped. This not only results in more furloughs, but will also cause delays to implementation schedules. An increase in contractor furloughs may cause them to seek new employment opportunities, leaving the government project short-staffed when the shutdown ends. The lost time will have to be made up through scope reduction or sliding the schedule to the right. Unfortunately, the end result is likely to be increased spending by the government and a final product delivered later than originally planned.
We are all hopeful that the government shutdown will conclude in the near future and agencies can get back on track quickly. Regardless of when it ends, the extent of the lasting impact on cybersecurity is daunting.
Author’s note: Jason's views are his own and do not necessarily represent IBM's positions, strategies or opinions.

Category: Security Published: 1/16/2019 9:00 AM
The cybersecurity profession is facing a shortage of qualified talent to fill an increasing demand for positions, as so many reports inform us. What I find self-fulfilling about our “talent dilemma” is the acknowledged rapid rate of technology change, yet the ongoing quest for specific technical experience and expertise. We seek plug-and-play people to match technology components, rather than individuals with foundational skills and an aptitude and desire to learn changing technology.
As processes and people internal and external to our organizations continually adapt to ongoing technology changes, our profession needs individuals with skills in systems thinking, problem-solving, innovation, and collaboration. Cybersecurity professionals also need strong business proficiency, including communications skills and the ability to manage risk in support of desired business outcomes and risk tolerance levels of our organizations. We need a workforce that reflects the diversity of customers we serve, going beyond external traits of gender and race, to a robust variety of experiences and ways of thinking.
Yet, when we look at job postings for information security positions, we see traditional male-dominant language, a long list of specific technical infrastructure and coding experience, and a preference for technical or information science degrees, particularly computer science. Do those elements yield the applicants with broad skills and perspectives we need, or is that the CV customary for our current homogenous information security workforce?
The most common trait across the cybersecurity industry is the absence of a common path to a cybersecurity career. According to the 2017 Global Information Security Workforce Study that surveyed 19,000 cybersecurity professionals worldwide, 87 percent of us started in a career path outside of cybersecurity. Of those, 30 percent came from non-IT, non-engineering backgrounds, including business, marketing, finance, accounting, military and defense.
I looked at the “non-traditional” education of strong performers on our past and present information security team at Harvard, and I found the following degrees: German, English, Philosophy, Fine Arts, Comparative Literature, and International Relations, among others. I also found some didn’t have college degrees at all. In addition to a desire for ongoing learning, we all have strong communication, analytic and risk management skills. Those are specifically the top three skills sought by hiring managers within information security, according to the 2017 Workforce Study. Another report, the ISACA/RSA Conference Survey for the State of Cybersecurity: Implications for 2015, identified the most common deficiency for cybersecurity professionals as the ability to understand the business, with 72.33 percent of respondents citing that gap. Sufficient technical skills came in second at a distant 46.32 percent, followed closely by communication skills at 42.16 percent.
How do we improve our recruiting – and retention – practices to attract and develop the enduring combination of skills we need for successful cybersecurity professionals? Follow these five steps as a start:
Editor’s note: Silk will be presenting on this topic in the session, “A New Rubric for IT Recruiting and Retention” at the 2019 North America CACS conference, to take place 13-15 May in Anaheim, California, USA.

Category: Security Published: 1/15/2019 3:24 PM
Entrepreneurs and IT leaders frequently underestimate the true power that slow technology has to negatively impact a business. It’s tempting to wait as long as possible to upgrade or replace your team’s devices; after all, every additional month you get out of a device results in measurable cost savings for the business. But all those slow, aging devices are probably interfering with your business more than you realize.
The roots of slow technology
Slow technology comes in many forms, but it always shares a common set of characteristics. Processing becomes slower, making it harder for employees to complete their tasks in a timely manner, and productivity occasionally stalls altogether (such as when those devices crash).
Generally speaking, there are three main influencing factors that can negatively impact a device’s speed:
The effects of slow tech
As for how that speed affects productivity, there are several areas of impact to consider:
Fixing the problem
So, what can you do to fix the problem?
Correcting, upgrading, or replacing your slow technology can be both costly and time-consuming, but it’s almost always worth the effort. Not only will your team be able to use more resources and work faster; they’ll also be happier – and that morale will almost certainly have a positive impact on your business’s profitability. Stay proactive, and take action on slow devices before they have a chance to interfere with your work.
Many presentations by information security managers for stakeholders within their organizations include the depiction of a lifecycle in one form or another to underline that information security is not a one-off project, but a continuous activity. However, often these depictions focus on what you do (such as NIST Cybersecurity Framework: Identify – Protect – Detect – Respond – Recover) or how you do it (such as Deming cycle: Plan – Do – Check – Act).
As useful as these lifecycle models are, they often do not resonate as well as expected with the audience, because they do not give the reason why we do information security. Marketing professionals will tell you that you need to start with the why to get your message across. Only the why gives stakeholders purpose and motivates them to take action.
Below, I will present a strategic lifecycle for information security that focuses on the why. This cycle provides generic goals that can easily be adapted to the needs of any organization. It consists of the following five steps:
At this point, the cycle starts again from the beginning. For example, new and enhanced security controls are likely to further increase visibility, thereby revealing new risk information, which in turn will shift the optimal balance between risk and reward. Needless to say, the individual steps do not follow a strict chronological order, but often overlap.
This strategic lifecycle – the why of your information security program – will hopefully serve as a valuable addition to your communication toolset.

Category: Security Published: 1/10/2019 3:02 PM
The new white paper, Auditing Artificial Intelligence, provides an overview of what AI is, why auditors need to be aware of AI, and how the COBIT 2019 framework relates to AI auditing.
The guidance addresses the somewhat nebulous definition of AI; there is no agreed-upon definition even in the research community, since AI covers a wide swath of territory, including machine learning, deep learning (a subset of machine learning), and some types of rule-based systems. ISACA wisely takes a neutral stance regarding definitions and capabilities of AI, as the line between fact and fiction is still under active investigation.
As AI implementations are still in embryonic stages of deployment for the vast majority of companies outside of Silicon Valley, and there is a lack of regulatory requirements for assuring AI, there is still no definitive and comprehensive set of auditing standards for AI. Research is progressing, however – with ISACA at the forefront, as this white paper and its cited papers suggest.
The shortage of specialized tech talent for implementations, the “black box” nature of AI, and the deficiency of research regarding the holistic impact of AI on organizations are just some of the challenges confronting IT auditors tasked with auditing AI. Approaches for addressing the black-box nature of algorithms exist, such as sensitivity analysis, but they are often time-consuming and best left to modeling specialists for technical evaluation. The paper recommends bifurcating an audit of AI between model specialists and IT auditors, with IT auditors looking at the holistic process and how technology stacks integrate. The authors highlight that in small-to-medium-sized enterprises that implement AI, third-party vendor management may be one of the critical aspects of an audit. The use of vendors allows less technical users to access an AI solution; however, keep in mind that many vendor products cannot be customized.
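To make the “sensitivity analysis” idea concrete, here is a minimal one-at-a-time sketch (illustrative only; it is not taken from the white paper): the model is treated as a pure black box, one input feature is jittered, and the average change in the output is measured.

```python
import random

def sensitivity(model, sample, feature_idx, delta=0.1, trials=1000):
    """One-at-a-time sensitivity of a black-box model: perturb a single
    input feature by a small random amount and return the mean absolute
    change in the model's output. Larger values mean the feature has
    more influence on this particular prediction."""
    base = model(sample)
    total = 0.0
    for _ in range(trials):
        perturbed = list(sample)
        perturbed[feature_idx] += random.uniform(-delta, delta)
        total += abs(model(perturbed) - base)
    return total / trials
```

Ranking features by this score gives an auditor a rough, model-agnostic picture of what drives a given prediction; richer perturbation-based variants of the same idea underpin popular explanation tools such as LIME and SHAP.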
The paper states that auditors should look at the holistic risks and integration of AI into the organization, and approach AI as they approached cybersecurity and cloud computing: with an iterative, adaptive approach focused on the implications. Auditors commonly and mistakenly believe that they need to know the low-level details of how algorithms work before conducting an AI audit. This is not the case; it may actually be more beneficial when auditors do not know the intricate details of how AI works, as they can then take a holistic, 40,000-foot view of how AI makes sense in the enterprise instead of getting caught in the weeds.
I believe that this is the area that is currently missing the most in enterprises: a holistic view of AI. Technologies now rule the roost, but no matter how impressive the technical capabilities, an AI system needs to make sense in the grand mission of the company. The technological sophistication takes a backseat, and sometimes a less technical system that is more controllable is better for organizations. Let the technologists take care of the technical details, and empower auditors to think big picture, which is where they can provide tremendous value and shine.
The paper concludes with context on how COBIT 2019 can be used to create an audit plan for AI, along with an enumeration of the nine main challenges to an effective AI audit that ISACA has identified and best practice approaches to tackle them.
Auditing Artificial Intelligence is arguably the most comprehensive analysis of the current state of AI auditing, governance and assurance. It is the ideal stepping-off point for beginning your governance analysis and planning an AI audit for your enterprise.

Category: Audit-Assurance Published: 1/9/2019 3:05 PM
Have you ever wondered just how many ways there are to hack the human mind, and just how effective each technique is? I did, so I set about collating all of the techniques for human control and influence:
I also wondered if the techniques we use in the field of cybersecurity to defend computer systems could be used to analyze and defend against the tactics designed to deceive the human mind. Was it possible to create a human hacking kill chain?
What raised my interest in this project was that I had started to notice that the techniques I learned many years ago when studying hypnotherapy – methods for planting suggestions in patients – were becoming increasingly noticeable in standard web pages.
According to the experts, 90 percent of what guides our decisions is based on something called implicit memory. This is composed of the subconscious and unconscious patterns driven by past experiences, our environment and other factors that we do not even realize we may be referencing when we make a decision. It seemed to me as though many business-savvy organizations had woken up to the power of PsyOps (psychological operations) and were now looking to use those skills to help sell as much product and advertising as possible.
The project took me much longer than I anticipated. What was supposed to be a three-month project turned into nine months of thought-provoking revelations.
Those irritating cookie permission boxes might look harmless enough, but as I collated and analyzed the tactics in use, I came to realize that most of the permission boxes were using 10 or more separate techniques just to persuade us that it was easier to click “Accept all” rather than take any other course of action:
Fuzzing as a human hacking technique was an interesting discovery. Fuzzing began as a technique for pushing excessive and unexpected data into computer systems to check for vulnerabilities. Because of the way the human mind operates, however, it is now also a social engineering technique in regular use: overwhelm the mind with the impression that a reasonable, preferable option that should be within easy reach will instead take a huge and unsatisfying amount of time to achieve. After all, there is rarely any option on the cookie permission boxes to “Proceed with minimum cookies” or “Reject all” – and continue to read the page.
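In its original, computer-facing sense, fuzzing is simple enough to sketch in a few lines (a toy illustration, not a production fuzzer): mutate a known-good input at random and record every variant that makes the target fall over.

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Overwrite a few random bytes with random values -- the
    'excessive and unexpected data' of classic fuzzing.
    (Assumes a non-empty seed input.)"""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 1000):
    """Feed mutated inputs to `target`; return the inputs (and the
    exceptions they triggered) that crashed it, for later triage."""
    crashes = []
    for _ in range(iterations):
        data = mutate(seed_input)
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes
```

The human-facing analogue swaps malformed bytes for walls of toggles and sub-menus, but the goal is the same: push the target past the inputs it was designed to handle.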
The more I collated and understood about the techniques, the more I noticed how many of them had fallen into mainstream usage. They had become standard tactics for most large and successful organizations.
Subliminal imagery, the subtle use of particular language to slip suggestions straight into the reader’s subconscious, selective social proof, reverse psychology, the illusion of choice and even outright bullying … I thought I had some idea of how these tactics were in use to hack the human mind, especially through the technologies we use. But it turned out that even I had vastly underestimated the degree to which PsyOps have become the backbone of trillions of dollars of income.
Due to the amount of psychology I had to explore – and on the recommendation of my copy editor – I also enlisted the help of a psychologist to ensure my exploration of how the human mind could be exploited (and defended) would not seem too egregious to those who work in that field.
So where did I end up with all that research? Was I able to identify indicators of human compromise and a human hacking kill chain? In short, yes.
It turns out that hacking humans, just like hacking computers, is indeed a process, or to be more precise, many different process options – all of which share some common components.
What all human hacking techniques have in common is that each needs to gain access to its human targets. But what was a real eye-opener was that, just like the advanced persistent threat, the most effective human hacking seeks to embed its techniques into our everyday lives and to go unnoticed for as long as possible.
I no longer look at content delivered through technology in the same way. I sit and pull apart the vast array of techniques packed into web pages and even emails, and I have reduced the number of organizations I subscribe to and increased my efforts to protect my identity.
This book has changed my life. It forced me to analyze and improve what I knew about making effective, persuasive arguments, and to recognize that the things we assume make no difference to our life choices (but do) are exactly the items used to hack the human mind.
Editor’s note: Raef Meeuwisse’s new book, How to Hack a Human: Cybersecurity for the Mind, will be released on 9 January 2019.

Category: Security Published: 1/7/2019 3:08 PM
In today’s world, all companies, large and small, are required to demonstrate constant compliance in order to do and sustain business. The task may be somewhat easier for large companies, which can hire more employees; however, small businesses typically do not have the luxury of hiring more people at rates competitive with large companies.
Having worked for several small businesses over the past decade in addition to helping non-profits, I have seen several compliance challenges, pains and disruptions to business – and even fear! Simplifying work items is a big step in the right direction. Below are five practical methods that can be effective for small businesses in their quest to achieve compliance:
1. Establish a common language. Whether it is GDPR, SOX, HIPAA, or some other regulatory requirement, establishing a common language is critical. In an ideal world, a privacy and security program is unified, though that is usually not the case. Work with your Chief Privacy Officer and Chief Security Officer together to establish standard language when describing goals, action items, and writing policies and procedures. Different terminology increases confusion. Strike an appropriate balance between technical and business speak.
2. Prioritize training and education. Most practitioners will mention training and education as one of the key elements of a successful privacy or security program. Small businesses often conduct an annual training session for all employees to check the box on their training requirements. In today’s environment, however, that isn’t going to cut it. Training employees – including vendors and contractors – needs to be a continuous program, and it also has to be focused. Establish focused group trainings for management, committees, developers, quality assurance team members, and business managers. Email blasts, posters and news flashes highlighting relevant incidents are helpful.
3. Do due diligence on documentation. Everyone hates documentation, whether they are business teams or developers. All businesses have something valuable to protect, including personal information, proprietary product information, employee data and more. Documenting business processes to show how that valuable information flows, what happens to it, and who has access to that information will assist in identifying compliance action items. Most of the time the information is not even required but is collected nonetheless, adding liability. Several tools can build documentation from product code and the comments in the code.
4. Don’t forget internal reviews and audits. Once a process is established, policing it is important. Audit the documentation, processes, data and information collected to ensure established controls are implemented correctly and are working, and to identify gaps that need to be remedied.
5. Continuous evaluation is needed. Once the above steps are in place, keep the loop open to allow employees to provide feedback and allow the documentation to work instead of being treated as overhead. Then, implement solutions and address gaps noticed in the audits.
All of these steps allow the controls implementation to become efficient and lean. In order to get to that point, these steps need continuous repetition to become part of the organization’s DNA.
I have shared similar thoughts in an article on LinkedIn. Nothing here is a sure thing or a quick fix; it takes a lot of disciplined work, training and re-evaluation to succeed.

Category: Government-Regulatory Published: 1/4/2019 3:01 PM
When ISACA – then known as the Electronic Data Processing Auditors Association – was incorporated by seven Los Angeles area professionals in 1969, “there was no authoritative source of information,” according to ISACA’s first president, the late Stuart Tyrnauer. There was “no cohesive force, no place to turn to for guidance.”
Back then, Tyrnauer and his colleagues figured their grassroots association, focused on the emerging profession of electronic data processing auditing, was just of local interest. As it turned out, the interest extended well beyond southern California, unexpectedly allowing ISACA to blossom into a national organization, then an international organization, to what it has become today – a thriving global professional association with more than 220 chapters and 140,000 members. Best of all, ISACA’s remarkable story is far from complete.
The calendar has now flipped to 2019, the much-anticipated year of ISACA’s 50th anniversary celebration. We have chosen Honoring Our Past, Innovating Our Future as our 50th anniversary theme because we are committed to doing both, especially considering ISACA always has been a future-minded organization, helping its professional community navigate change and the remarkable advancements on the technology landscape. When the EDPAA conducted its first conference in 1973 – under the theme “EDP Auditing: A Coming of Age” – the focus might have been narrower than the topics explored today, but the intention of sharing knowledge and best practices with respected colleagues, and the desire to help others achieve the positive potential of technology, have always been hard-wired into our organization’s culture. Today, as effectively and securely deploying technology has increasingly become a central driver of enterprise performance, the work performed by ISACA members – and ISACA’s enduring commitment to be a trusted resource for our professional community – is all the more critical, providing an inspiring context to the 50th anniversary celebration.
This anniversary celebration is for all members of the ISACA professional community – past, present and future – to enjoy together, and the more of you that are engaged, the more meaningful it will be. There are several great ways you can participate in ISACA’s anniversary celebration throughout the year, including:
Visit ISACA50.org (and keep coming back)! Our 50th anniversary website includes a wealth of content that will both entertain you and leave you feeling even more inspired by your relationship with ISACA. From an overarching story of ISACA’s proud 50-year history to a wide array of other anniversary resources – including videos, podcasts, an ISACA history timeline and perspective on the impact of ISACA’s global chapters network – the site offers a robust platform to enjoy and participate in ISACA’s anniversary celebration. Be sure to keep coming back; content will continually be updated each week throughout the anniversary year.
Join the celebration on social media. Social media provides a terrific platform for our global professional community to celebrate together. Show us where in the world you’re celebrating by printing our anniversary celebration sign from the Participate page at isaca50.org and posting your photo, using #ISACA50.
Consider attending an ISACA conference in 2019. ISACA’s anniversary will be prominently celebrated at our 2019 conferences, including at the North America CACS conference, which will take place in May in the backyard of where the organization was founded 50 years ago – the Los Angeles area. Regardless of whether you have attended several ISACA conferences before or have yet to attend, what better time to learn from and connect with your ISACA colleagues than during our 50th anniversary year?
I am deeply appreciative of all the staff, volunteers and sponsors who have come together to create such a dynamic, year-long anniversary program. With the anniversary celebration serving as the backdrop, there is so much to anticipate for ISACA in 2019, including, but certainly not limited to, a new ISACA-Infosecurity conference partnership slated for November in New York, a Future of IT Audit research report and a Transforming IT Audit website, an update to the CISA certification exam content, growing global adoption of the recently introduced COBIT 2019 framework, the anticipated arrival of a new ISACA CEO, and a further build-out of the CMMI Cybermaturity Platform.
As a longtime ISACA member and the current board chair, I have never been more energized to be part of the organization than I am right now, and I’m even more excited to see what comes next in 2019 and beyond. We celebrate our past to inspire, motivate and propel us into the future. Our 50th anniversary year is a wonderful milestone in ISACA history, but I truly believe that, together, we can ensure ISACA’s best moments are still to come.

Category: ISACA Published: 1/3/2019 8:59 AM
Transformation offers many key benefits, and any enterprise that wants to sustain and grow in this ever-changing, fast-paced world will inevitably deploy new systems. In my recent ISACA Journal article, I discuss various challenges that any enterprise might experience and how the intensity of those challenges will differ based on organizational dynamics and economic variables.
Here are some key points that any enterprise should consider in the deployment process:
In all the transitions and deployment projects that I have been associated with, proactive steps on these considerations have been the recipe not only for success but also for continuous improvement.
Read Rajul Kambli’s recent Journal article:
“Identifying Challenges and Mitigating Risk During Deployment,” ISACA Journal, volume 6, 2018.
As we continue the end-of-year review on all things tech, digital ethics and the progress of artificial intelligence (AI) in people-related technologies spring to mind. People tech affects HR, recruitment and other areas that enable businesses to hire, manage and plan their key asset – people. With new suppliers entering the market constantly, it is very difficult for businesses to distinguish technology that is ethical with regard to data, code and algorithms from technology that is not.
The first thing to highlight is that AI is a huge buzzword in people tech these days. However, the term is abused more often than it should be, resulting in confusion for businesses that simply may not have the time to keep on top of tech or to research it before buying – a gap that typically costs them huge resources. To clarify, AI has several strands, two of which are machine learning and automation. These two see by far the heaviest use in people tech at the moment, whereas other forms of AI are more relevant in other sectors. As an example, autonomous cars use robotics and other relevant strands of AI.
Now, regardless of the strand of AI in use – especially at the algorithm-building stages – it is extremely important for every developer and tech business not only to think about “ethics” and “biases,” but to actually implement practices that help them tackle their own challenges with ethics and bias, as well as those of their employees and users. This truly allows them to build and code purpose-driven, value-add commercial products. A lot of experts, individuals and organizations are constantly talking about this important topic, from the TechUK committees I participate in to the IEEE guidelines I am part of globally.
However, very little has been seen in terms of action, and so, for my part, I am “practicing what I preach.” While we are a startup, and reviewing the code for new features does add a couple of hours to my workload, it is very satisfying to know that this work comes from a place of supporting users. In addition, we prioritize careful data use and management; we will strictly use only the data that helps our users with analytics (based on what our platform offers) and provides a better experience.
How can larger tech companies and software houses implement this? I believe that the larger the business, the easier it should be to establish processes and resources that deliver the outputs the business vision calls for and support customers, while also serving as an in-house ethics and bias reviewer. This gives businesses a lot of power internally to follow guidelines drawn up by governments and other organizations working actively to support this framework-building.
There is no doubt that 2019 will be a key year for growth in digitization, automation, augmented analytics and blockchain. So, I really hope that businesses stop talking about the fundamental challenges of digital and AI ethics, and start building tools and frameworks to monitor them.
About the author: Bhumika Zhaveri is a non-conventional, solutions-driven technology entrepreneur and businesswoman. An experienced HR technologist, she has expertise in HR and recruitment technology and in programme management for change and transformation. Privileged to look at challenges differently than most thanks to a varied set of life, personal and professional experiences, she is actively involved with TechUK, IEEE data ethics, AI and digital committees, and the Tech She Can charter with PwC, Girls Who Code and similar organizations supporting women in STEM. Currently, she is also the tech advisor for Resume Foundation and Bridge of Hope, as well as a founding member of Digital Anthropology.

Category: Risk Management Published: 12/28/2018 3:00 PM
The offshoring industry is at a turning point. There is a growing demand to further saturate offshoring hubs with a view to increasing profits. The true value of offshoring is realized when it is viewed as a relationship amongst parties rather than a mere delivery model.
Success of this relationship can be seen when:
However, in the real world, it seems companies struggle to manage this relationship, with security and privacy considerations becoming all the more challenging to manage.
So, the question is, offshoring: how to get it right? Or do we plan to offshore this task as well?
Below are key considerations that, when consciously applied by the onshore and the offshore teams, will help companies achieve talent utilization, value creation and profit realization.
Key considerations for the ONSHORE team
1. Change in mindset
The current mindset of onshore professionals, in which offshore teams are simply flooded with work requests, needs an update. Onshore professionals must mature their thinking in the pursuit of low costs and high quality. The offshore team must be viewed as an extension of the team, and team members should be encouraged to ask questions and build their expertise. The vision of the firm and the engagement should unite the teams with a shared purpose even when geographic distance separates them.
Change is the only constant in technology. As laws and regulations change, the onshore team must be aware of the information being handled at onshore locations. Under chapter 5 of the General Data Protection Regulation (GDPR), which governs transfers of personal data to third countries or international organizations, certain conditions must be satisfied when processing or intending to process personal data. Given this global impact, it is vital for onshore professionals to view their work through a security/privacy lens and carefully scan the information that can or cannot be offshored.
2. Collaborate and share knowledge with offshore teams
Onshore professionals should be encouraged to share knowledge with offshore teams to help them understand the objectives of the deliverables. Structured, periodic calls and updates help achieve efficiency on both sides of the table. Training the onshore team on how to collaborate efficiently with offshore professionals, understanding the culture of communication and work management at the offshore site, and holding periodic checkpoints on technical learnings will meet these goals.
A strong relationship requires both parties to complement each other. To that end, it is important to train offshore teams on the technical aspects of security and privacy. Training can be based on a framework (such as NIST or ISO) or focused on areas such as access control, information risk assessments, network security, and system development. Collaborating and sharing such knowledge keeps offshore teams informed, enabling them to make sound decisions.
3. Invest in the right technology
Large firms that embrace offshoring usually have a file-sharing/instruction-sharing mechanism connecting the onshore and offshore teams. Over time, however, these tools often prove ineffective in terms of time, usage, and perhaps intent. Long emails and Excel trackers should be a thing of the past; firms must invest smartly in the research and development of proprietary tools and automation techniques.
From a security/privacy lens, companies need to consider:
1. Technology being used to share the data
2. Actual content or data being shared
Automation brings its own risks, especially related to data security and access security. Wise implementation of automation, backed by constant monitoring of security measures, helps mitigate risks. When actual content or data is being shared, special care needs to be taken when dealing with personal data.
Key considerations for the OFFSHORE team
1. Build the right team
With cheaper costs at offshoring locations, the easy option would be to hire as many professionals as possible and then distribute the work among them. However, building the right team, one whose skillsets, educational backgrounds, and professional interests align with the services provided by the firm, is critical. The hiring process at offshore locations should be based on standards that align with the quality the firm represents.
The issue of data sent offshore and the risk to its privacy has shown that current laws (HIPAA, GLBA) do not adequately cover or protect US customers when information is sent abroad for processing. Offshore teams must have subject matter experts who engage in opportunities focused on regulations and are able to drive teams with their experience. Offshore teams execute best when they are led and trained by experienced leaders within the group. Industry certifications and periodic internal workshops on information security and risk management go a long way in building the right team.
2. Invest in quality and project management
With contractual metrics established between onshore and offshore teams, rushing deliverables back to the onshore team highlights a gap in quality and project management practices. Offshore teams must check their deliverables for quality, voice opinions when they differ from those of the onshore teams, suggest innovative ways of accomplishing tasks and streamline quality processes. Offshore leadership must work with their teams to identify any gaps in project management techniques that affect resources or onshore stakeholders.
Low cost and high quality are traditional labels that sell offshoring. Achieving high quality with offshore teams is an investment of patience and continuous good practice. Techniques such as Six Sigma have been instrumental in streamlining quality requirements, and some companies have aligned Six Sigma to their security framework to derive security-driven return on investments. Offshore teams should define, evaluate, and monitor their quality metrics, and present how they add value to onshore teams and customers.
For the modern business, there are few topics more important than data security. Without a proper appreciation for data security and all that it entails, you’ll find your business falling behind. But getting all of your employees and company stakeholders on board can prove to be a major challenge.
The importance of buy-in
Let’s say you have a big 10-gallon bucket sitting in your garage. It’s a thick, sturdy bucket that’s brand new – never been used before. And while the bucket looks like it’s in great shape, there’s a tiny hole at the very bottom. It isn’t any bigger than a pinhead, but it’s there. Guess what happens when you pour water in? Though it might take a few minutes, the water is eventually going to completely drain out of the bucket. Despite the difference in size, 10 gallons of water is no match for a tiny hole.
The same could be said of your company’s approach to data security. No matter how strong your strategy is or how many various safeguards you have in place, all it takes is one uncooperative employee or uninformed stakeholder to compromise the entire thing.
Your data security strategy is only as good as your organization’s weakest link. Looked at in this context, the importance of stakeholder buy-in becomes clear.
How to encourage total buy-in
As with anything else, getting people to take data security seriously requires a purposeful and concerted effort. Here are some things to consider:
1. Employees are often to blame.
According to the Online Trust Alliance (OTA), roughly 91 percent of data breaches can be prevented. And though there are four major ways in which data breaches occur, employees are often to blame. They account for 30 percent of breaches (whether accidental or malicious).
“By educating on the dangers of phishing, companies can prevent these embarrassing situations from happening,” Point Park University explains. “The OTA reports that insiders can be a threat when they are feeling unhappy, moving to another company or having financial problems. Companies must realize that insider threats to data protection are a reality.”
2. Education is key.
While there are instances in which employees knowingly put the business in harm’s way, most of the time their actions are the result of a lack of education on the topic of data security. The more you commit to educating your employees, the fewer costly mistakes there will be.
You can send out emails until you’re blue in the face, but the only way to ensure employees take your instructions seriously is to hold informative presentations and meetings where you’re able to talk with everyone in a face-to-face manner.
In addition to delivering a compelling message, it’s smart to give employees something to reference. Printed booklets or brochures that explain various policies and recap different rules can serve as a nice complementary resource.
3. Give decision-makers the numbers.
With employees, you’re telling them how to act so that they can be in compliance with your data security protocol. With stakeholders that are higher up in the organization – including decision-makers and gatekeepers – you may actually have to convince them to buy into what you’re doing. And the best way to do this is by giving them the cold, hard numbers.
According to this year’s Cost of a Data Breach Study conducted by Ponemon Institute, the global average cost of a data breach is up 6.4 percent from 2017 to $3.86 million. The average cost for each lost or stolen record containing confidential or sensitive information is up 4.8 percent year-over-year to $148.
Honestly, the numbers do the talking. When you use data points like these as your basis, it’s hard for stakeholders not to buy in. For even better results, tell a story around these statistics. In doing so, you appeal to both the analytical and subjective modes of decision-making.
Adding it all up
The time for taking data security lightly and tinkering with different techniques is over. There are 230,000 new pieces of malware produced every single day, while hacks occur every 39 seconds in the United States alone. You need total buy-in from all key stakeholders. If you aren’t confident that you have it, dig in and make a plan.
Transport Layer Security (TLS) is a cryptographic protocol for protecting the privacy and integrity of information (logins, passwords, credit card numbers, personal correspondence, etc.) exchanged between two communicating applications. It encrypts data traveling between internet hosts, including mail servers, VPNs, and SIP-based voice, video and messaging applications. Its current version is 1.3, which supersedes version 1.2. Without TLS, browsing habits, emails and online chats can be monitored by third parties.
TLS is normally implemented on top of the Transmission Control Protocol (TCP) to encrypt application layer protocols such as HTTP, FTP, SMTP and IMAP, though it can also be implemented over UDP, DCCP, and SCTP (for example, for SIP-based applications and VPNs). TLS also underpins other standard protocols, such as FTPS and DNS over TLS, for securing connections. To ensure the communicating parties are authenticated, TLS is typically used with an X.509 Public Key Infrastructure (PKI): a certificate issued by a trusted third party called a certificate authority (CA) asserts the authenticity of the public key. DNSSEC can also be used for this purpose.
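As a rough illustration of TLS layered over TCP, the sketch below uses Python's standard-library `ssl` module; the hostname in the commented-out connection is a placeholder, and the secure-default behavior shown is that of `ssl.create_default_context()`:

```python
import ssl

# Build a client-side TLS context with secure defaults: the server must
# present a certificate chaining to a trusted CA, and that certificate is
# checked against the expected hostname.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED  # certificate required
assert context.check_hostname                    # hostname verification on

# To protect an application protocol, the context wraps an ordinary TCP
# socket; the TLS handshake runs before any application data is exchanged:
#
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version())  # e.g. "TLSv1.3"
```

The same context can be reused across many connections; only the hostname passed to `wrap_socket` changes per server.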
How TLS works
TLS uses both symmetric and asymmetric cryptography. In symmetric cryptography, a secret key known to both sender and receiver is used for encryption and decryption; 128/256-bit keys are generally used in industry. Asymmetric cryptography uses a private/public key pair: data encrypted with the receiver's public key can be decrypted only with the receiver's private key. This is advantageous over symmetric encryption in that the public key can be shared openly, so no secure channel is needed to distribute it. In practice, asymmetric cryptography is used to generate and exchange a session key securely; that session key then encrypts and decrypts the data and is discarded when the session ends. Asymmetric keys should be at least 1024 bits long, and because of the computation over such large keys, asymmetric encryption is too slow for many purposes, which is why bulk data is encrypted symmetrically.
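The session-key idea can be sketched with a toy Diffie-Hellman exchange. The prime here is deliberately tiny and the exchange is unauthenticated; real TLS uses large standardized groups or elliptic curves and authenticates the peers, so treat this purely as an illustration of the math:

```python
import secrets

# Toy Diffie-Hellman over a tiny prime field. Each side combines its own
# private value with the other's public value, and both arrive at the same
# shared secret, which never crosses the wire. Illustrative only -- never
# use toy parameters like these for real cryptography.
p, g = 23, 5                       # public parameters (toy-sized!)

a = secrets.randbelow(p - 2) + 1   # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private value, kept secret

A = pow(g, a, p)                   # Alice sends A in the clear
B = pow(g, b, p)                   # Bob sends B in the clear

alice_secret = pow(B, a, p)        # Alice computes g^(ab) mod p
bob_secret = pow(A, b, p)          # Bob computes g^(ab) mod p

assert alice_secret == bob_secret  # both now hold the same session key material
```

An eavesdropper who sees only `p`, `g`, `A` and `B` cannot cheaply recover the shared value once real-world key sizes are used.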
The TLS protocol has two layers: the TLS record protocol and the TLS handshake protocol. The record protocol secures connections and has two properties: the connection is private (symmetric encryption is used, although the record protocol can also operate without encryption), and the connection is reliable. Various higher-level protocols are encapsulated within the record protocol.
In the TLS handshake protocol, before the first byte of data is transmitted or received by the application protocol, the server (and optionally the client) is authenticated and the encryption algorithms and cryptographic keys are negotiated. The handshake has three properties: first, the peer's identity is authenticated using asymmetric (public-key) cryptography; second, the shared secret is negotiated securely; third, the integrity of the negotiation is assured. Connections can be terminated due to handshake failure or protocol error. Designers and implementers who build TLS handshaking and certificate interpretation into applications should ensure authentication of at least the server side, plus the confidentiality and integrity of the communication channel.
Three basic key exchange modes are available in TLS 1.3:
1. (EC)DHE: ephemeral Diffie-Hellman over finite fields or elliptic curves
2. PSK-only: a pre-shared key alone
3. PSK with (EC)DHE: a pre-shared key combined with ephemeral Diffie-Hellman
Some of the advantages of TLS 1.3 are the simplified handshake for secure connection, and fast resumption of sessions with servers, which decreases setup latency and the number of failed connections. It does not support outdated/insecure encryption algorithms.
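Where the environment supports it (Python 3.7+ built against OpenSSL 1.1.1 or later), a client can be pinned to TLS 1.3 so that older protocol versions are never negotiated; a minimal sketch:

```python
import ssl

# Pin a client context to TLS 1.3: any server that cannot speak TLS 1.3
# will fail the handshake instead of silently falling back to TLS 1.2
# or an earlier, weaker protocol version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

assert context.minimum_version == ssl.TLSVersion.TLSv1_3
```

For broader compatibility, a deployment might instead set `minimum_version` to `TLSv1_2` and let newer peers negotiate 1.3.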
All US government servers should support TLS 1.3 by 1 January 2024.
Comparison of TLS 1.2 and TLS 1.3
1. TLS 1.2 supports legacy algorithms; TLS 1.3 allows only Authenticated Encryption with Associated Data (AEAD) algorithms.
2. In TLS 1.2, not all handshake messages are encrypted; in TLS 1.3, all handshake messages after the ServerHello are encrypted.
3. TLS 1.2 contains superfluous messages; TLS 1.3 removes them and makes the handshake consistent.
4. TLS 1.2 needs two round trips to complete the handshake; TLS 1.3 needs only one.
5. TLS 1.2 has higher encryption latency; TLS 1.3 roughly halves it (handshake times of about 300 ms versus 200 ms).
6. TLS 1.2 has no zero round trip mode; TLS 1.3 supports zero round trip (0-RTT) resumption, sending data in the first message to a previously visited server, so pages load faster.
7. TLS 1.2 retains obsolete and insecure features; TLS 1.3 eliminates them.
8. In TLS 1.2, not all public-key exchange mechanisms provide forward secrecy; TLS 1.3 removes the static RSA and Diffie-Hellman cipher suites, so all remaining public-key based key exchange mechanisms provide forward secrecy.
Category: Security Published: 12/20/2018 3:02 PM
Just as there are no limits to the technological advancements that our professions, and society, will embrace, the impact ISACA’s professional community can make in the coming years has boundless potential. With a 200-plus chapter network, a passionate volunteer base, the collective expertise of our 450,000-plus global professional community, the knowledge and credentialing portfolio — to name just a few of our assets — ISACA is ideally positioned for 2019, our 50th anniversary year, and beyond.
At this time of year, we can find ourselves reflecting on the past. Yet at this moment, I find myself more drawn to consider ISACA’s promising future, as our association has opened the search and application process for a new ISACA chief executive officer. This effort is being carefully managed by a Selection Committee of the Board, with committee chair Tracey Dedrick and members Chris Dimitriadis, Greg Touhill, Gabriela Reynaga, Leonard Ong and R.V. Raghu. ISACA has engaged Egon Zehnder Inc., a Zurich, Switzerland-based executive search firm, to aid in this important work.
While there are many characteristics that will be important for our new CEO to possess, at the core, we seek a strategic visionary with a deep understanding of technology and its role as a business enabler — a leader who not only understands the rapidly evolving technology landscape, but also brings a strong perspective of what our global professional community needs in this environment. This new leader will have a strong understanding of digital transformation, demonstrated ability to collaborate with a range of global stakeholders and the ability to serve as a respected external voice for the organization.
Further, the ISACA CEO also must drive operational excellence throughout the organization, and therefore have the proven ability to build, motivate and manage a high-performing professional team. One of ISACA’s great attributes is the shared sense of purpose and partnership among the Board of Directors, volunteers, ISACA staff and the entirety of our professional community. A CEO capable of strengthening those connections will ensure that ISACA and the CMMI Institute achieve, and even exceed, organizational goals, continue our growth curve, and capitalize on promising new opportunities that surface ever more often around the globe.
The ISACA Board is excited about the search; we are confident the opportunity will attract a bevy of excellent candidates. We take very seriously our responsibility in this unique occasion: selecting a new CEO whom we will entrust as our essential, highly qualified executive leader. This individual will be charged with bringing the strategic vision to life, inspiring a world-class professional staff, and serving ISACA’s current and future professional community in the most dynamic industry: information technology.
Where once we were known for electronic data processing audit and controls, today’s ISACA is the professional home and hub for risk, governance, privacy, information and cyber security — still audit and controls — and more. Our professionals are at the forefront of artificial intelligence, blockchain, Internet of Things, quantum computing, and many other current and future technologies that recalibrate our personal and work lives daily. The everyday work performed by ISACA’s professional community is critical to move organizations, institutions, businesses and society forward securely and responsibly.
As we begin the search for ISACA’s next CEO in earnest, we know our rigorous selection process will lead to the arrival of an engaging, vibrant leader who will embrace our challenges, seize high-impact new opportunities, and take ISACA to new heights. Our new CEO will come aboard at a truly special juncture for ISACA, a time when we are honoring our past, celebrating our 50th anniversary, and innovating for that future 50 years, and more, to come.
Interested parties may supply credentials for consideration via electronic mail to firstname.lastname@example.org.
Category: ISACA Published: 12/19/2018 9:02 AM
In the wake of the high-profile information security breaches that have made headlines over the past few years, leaders in the security field have been coaching organizations to make two fundamental changes in the way they have traditionally handled breaches. First, instead of focusing solely on impenetrability, organizations should accept that breaches are going to happen and place greater focus on detection and management. Second, organizations should be prompt and transparent when it comes to notifying impacted stakeholders about the impact of a breach instead of, well, doing the opposite.
These two pieces of organization-level advice can, and should, also be applied to individuals in the context of security awareness training, which was the topic of our recent Journal article.
In a 2017 blog post for NS Tech, Steven J. Murdoch and Angela Sasse write, “Companies often tell employees not to click on links or open attachments in suspicious emails. The problem with this advice is that…for many employees their job consists almost entirely of opening attachments from strangers, and clicking on links in emails. Even a moderately well targeted phishing email will almost certainly succeed in getting some employees to click on it.”
From a training perspective, of course organizations should educate their employees to help them avoid risky behaviors that could threaten security. But organizations should also reassure their employees that they understand that employees cannot do their jobs without encountering some type of security risk. Assuring employees that the organization expects them to encounter threats sooner or later empowers employees to take the appropriate action when that time comes.
Read Randy Pierson, Kevin Alvero, and Wade Cassels’ recent Journal article:
“A Heightened Sense of Awareness: What the Internal Auditor Should Know About Information Security Awareness Training,” ISACA Journal, volume 6, 2018.
The European Union’s General Data Protection Regulation (GDPR) commanded the attention of the business community throughout 2018. Thought leadership gatherings such as ISACA conferences and webinars attempted to answer questions like, “What does it take to comply?” and “What will enforcement look like?”
Answers were largely speculative, and the actual enforcement processes associated with the regulation are only now taking shape. We can, however, look back at 2018 and make some observations about what has been accomplished, the drivers of compliance activities, and the work left to be done.
At six months past the implementation deadline, many organizations have harvested the low-hanging GDPR fruit. Privacy policies have been updated, cookie notices added to websites, and mechanisms deployed to support opt-in, opt-out, and data subject requests. Those using third parties to process data, and those who are the third party, have defined commitments and expectations regarding personal information. Training programs have been rolled out to educate staff about GDPR-related issues. Accomplishing these items has allowed organizations to mark a significant part of their GDPR checklist complete and to have a reasonable story to tell in case of an incident.
The desire to comply with GDPR and avoid any potential fines motivated much of this activity. Since GDPR, the regulatory landscape has continued to change and evolve. A proliferation of privacy and data breach regulations (such as the California Consumer Privacy Act, Brazil’s new data privacy regulation, etc.) has refocused the discussion from a single regulation to an overall issue of data privacy and business process. As recently explained by a business executive, “There is no way we can fund a new project to comply with each privacy and security regulation that comes along, so we must address these issues at a higher, more efficient level.” These conversations about compliance costs and efficiencies are driving the next wave of privacy-related projects.
Having addressed the basics, many of our clients now seek to reduce costs and lower their overall compliance risks. This often involves a deeper look at the role of data within business processes. Good information governance requires such things as accurate data and process maps, defined data lifecycles, security protections for data, and incident response plans. The ever-increasing risks related to compliance in a complex regulatory environment, and the standard benefits of good data governance, are causing many organizations to revisit some of these governance program elements. While 2018 saw a heavy focus on GDPR, 2019 may be a year of transformational governance projects as companies seek to reduce costs and compliance complexity by more precisely directing their use, management and protection of data.
The impact of GDPR has been significant, with more official guidance and enforcement decisions on the horizon. But the bigger story may be the pressures exerted on business processes by the combination of multiple data privacy and breach regulations, changing consumer expectations, and related B-to-B obligations. The next year may demonstrate how organizations are choosing to comply with GDPR while addressing these additional pressures.
Category: Privacy Published: 12/17/2018 3:00 PM
Members of ISACA’s US Public Policy Working Group recently gathered on Capitol Hill in Washington, D.C., to listen to inspiring speakers and to advocate for issues important to ISACA constituents, drawing from their personal experiences and professional backgrounds.
Over the course of a productive day, these ISACA volunteers met with Congressional members and staff leaders from seven districts from California, Illinois, New York, Texas and Virginia—states from where ISACA’s participants hailed. Key topics discussed included the National Institute of Standards and Technology (NIST) Reauthorization Bill (H.R. 6229), the value of authoring and introducing legislation focused on the future of IT audit, and the importance of certifications in preparing the workforce for cybersecurity jobs and closing the skills gap.
The participants expressed the importance of supporting H.R. 6229, as it would not only reauthorize NIST, but also strengthen research and development programs related to cybersecurity, artificial intelligence (AI), internet of things (IoT), and quantum computing and increase opportunities within the cybersecurity profession.
ISACA’s US Public Policy Working Group recently came together from across the country to engage in advocacy efforts on Capitol Hill.
Additionally, as some of the Public Policy Working Group had worked or currently work within government, they could also personally speak to the challenges of managing several audits throughout any given year in addition to the rest of their workload. They emphasized that improving and streamlining standards for audits would not only help make the process more efficient and deliver more meaningful results, but also incorporate emerging technologies such as AI that are currently not factored into most audits.
“As a member of the ISACA US Public Policy Working Group, I appreciated the opportunity to visit Capitol Hill to discuss legislative initiatives that impact my profession,” said Howard Duck, CISSP, CISM, CISA, PCIP, past president of the ISACA Sacramento chapter. “Joining other ISACA members in these discussions was interesting and informative for me.”
Another ISACA volunteer, Kyle Foley, CISA, CGEIT, CRISC, PMP, agreed. “Meeting with Congressional staff in the House and Senate to discuss ISACA's mission and information security issues, such as the NIST reauthorization legislation and our ‘One-Audit’ initiative, was fun, interesting, and rewarding.”
Joel Creswell, Ph.D., Legislative Assistant to Congressman Daniel Lipinski (IL-03), kicked off the advocacy day by speaking to the group about Rep. Lipinski’s work in the research and development and science and engineering spaces, as well as initiatives related to AI, quantum computing and cybersecurity education. He noted that IT audits had been a focal point of the roundtable discussion with ISACA the day before.
Another common issue that causes concern to both ISACA members and Congressional staff was the challenge in building a strong cybersecurity workforce and addressing existing skills gaps.
Nick Leiserson, Legislative Director for Congressman Jim Langevin (RI-02), spoke to the group mid-day and provided highlights from this year, such as the creation of the Cybersecurity and Infrastructure Security Agency, as well as a preview of what ISACA’s professional community might expect to see come out of the work of the 116th Congress.
During ISACA’s advocacy day, participants discussed key issues such as supporting the NIST Reauthorization Bill, envisioning legislation around the future of IT audit and closing the skills gap with certifications.
The experience was not only an opportunity to raise important issues, but also ended up being a milestone for the ISACA volunteers who participated. It was the first time each of them had been involved in such an advocacy day—and it was an experience they found to be very positive.
"ISACA continues to exceed my expectations, and today’s advocacy event was no exception,” said Angel Contreras, CISA, CDFM, senior manager, technology risk at EY. “Being able to meet with policymakers—having open discussions on the key cyber and audit challenges with the common goal of making progress to secure our enterprises—was a memorable experience that embodies what ISACA is all about."
Added ISACA volunteer Kevin McDonald, CISSP, CISA, CRISC, CBCP, PMP, senior program manager at Copper River Enterprise Services, “This is a prime example of ISACA’s support for the industry and proactive approach to supporting the next generation challenges in audit and technology.”
Category: ISACA Published: 12/14/2018 3:02 PM
In October 2018, Bloomberg Businessweek sent shivers through the business and intelligence community when it published an astonishing report that claimed that Chinese spies had exploited vulnerabilities in the US technology supply chain, infiltrating computer networks of almost 30 prominent US companies, including Apple, Amazon.com Inc., a major bank, and government contractors.
These claims were indeed alarming, but not surprising. Since the infamous 2013 Target hack, in which hackers exploited security weaknesses at one of its little-known suppliers and exfiltrated millions of payment card details, cybersecurity experts have been warning that expanding supplier networks would exponentially increase digital touch points, providing several softer avenues for threat actors to exploit and access high-value systems.
There is no dearth of high-profile examples. For instance, back in 2017, cyber threat actors compromised the Ukrainian software firm MeDoc and implanted NotPetya, a highly destructive malware, deep within a software update. Like the mythical Trojan Horse, NotPetya rode the trusted software package past layers of security defenses and crippled critical operations of high-profile enterprises such as pharmaceutical giant Merck, shipping firm Maersk, and Ukrainian electric utility Kyivenergo, to name but a few.
It’s certainly hard to argue with the benefits of business partnering, given the decades of studies demonstrating that well-thought-out alliances can enable an enterprise to focus on its competitive advantages, as well as measurably boost its bottom line. But at the same time, the raging demand for transfer of utilities, goods and data, combined with the rapid intersection of cyber espionage and geopolitics, has substantially complicated the cyber risk equation. Cyber threats exploiting weak supply chains are on the rise, like sea levels. The stakes are also invariably higher, threatening global peace and undermining the benefits of globalization and open markets.
While tightening cyber risk assurance within complex supply chains is certainly challenging, it’s not impossible. In the section below, we provide three practical recommendations for business leaders to maximize the value of outsource relationships, while minimizing associated risks.
Have the right security clauses
Underpinning any robust supplier security assurance program is formally documented and legally enforceable security contractual clauses. During the contract negotiation phase, business leaders must have a clear understanding of cyber risks associated with each relationship, and ensure appropriate clauses are agreed upon from the outset and baked into contracts. At a minimum, high-risk suppliers must:
The significance of getting this right from the outset is hard to overstate. Requesting security assurance reports later in a relationship is complex, and without legally enforceable clauses, suppliers will likely push back, leaving an enterprise with no recourse in the event of disputes or systemic control breakdowns. Even this has its challenges: a large cloud service provider is unlikely to agree to a “right to audit” clause with a medium-sized corporate customer. It comes down to leverage. Hence, it’s important to set realistic expectations upfront, and to ensure that security contractual requirements are reviewed and signed off by the legal team and business owners.
Limit vendor remote access to the network
As we learned from the Target breach, suppliers with remote access to the enterprise network can present soft avenues for threat actors to exploit to gain access to the network, escalate privileges and cause substantial harm. To manage this risk, the enterprise must adopt the least privilege principle, granting remote access only when there is no other cost-effective way for the vendor to deliver its services. Such access must be restricted to specifically segmented zones, channelled via secure virtual private networks and protected via multi-factor authentication. Furthermore, an up-to-date list of all vendors with access to the network, including their respective access rights, must be maintained and validated frequently, at least quarterly.
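The quarterly validation described above can be partly automated. The sketch below, a minimal illustration with hypothetical inventory fields (the vendor names, zone labels and review window are assumptions, not from any specific tool), flags vendor access entries that are overdue for review or lack multi-factor authentication:

```python
from datetime import date, timedelta

# Hypothetical vendor access inventory; field names are illustrative.
VENDOR_ACCESS = [
    {"vendor": "hvac-contractor", "zone": "facilities-segment",
     "mfa": True, "last_validated": date(2018, 9, 1)},
    {"vendor": "payroll-provider", "zone": "finance-segment",
     "mfa": False, "last_validated": date(2018, 12, 1)},
]

REVALIDATION_WINDOW = timedelta(days=90)  # "at least quarterly"

def stale_or_noncompliant(inventory, today):
    """Flag entries overdue for review or lacking multi-factor auth."""
    flagged = []
    for entry in inventory:
        overdue = today - entry["last_validated"] > REVALIDATION_WINDOW
        if overdue or not entry["mfa"]:
            flagged.append(entry["vendor"])
    return flagged

print(stale_or_noncompliant(VENDOR_ACCESS, date(2019, 1, 15)))
# Both entries are flagged: one is past the 90-day window, the other lacks MFA.
```

In practice the inventory would come from an identity or access management system rather than a hard-coded list, but the check itself stays this simple.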
Segment suppliers based on risk
The basic principles of risk management also apply to supplier-related cyber risk: the rigor of the assurance process should be commensurate with the criticality of the outsourced business process and the potential impact should that process be compromised. For instance, suppliers that handle high-value payment processes, hold large volumes of customers’ personally identifiable information, manage critical infrastructure or underpin the most profitable business lines require tighter governance than those that provide ancillary services, such as administrative tasks. Taking a risk-based approach maximizes the value of the security assurance budget and reduces needless audits of suppliers. It also reduces noise, enabling limited security resources to focus on the supplier arrangements that present the highest risk instead of being spread thinly across all arrangements, each of varying significance.
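One way to operationalize this segmentation is a coarse scoring rubric that maps the risk attributes above to assurance tiers. The weights, thresholds and tier descriptions in this sketch are illustrative assumptions, not a prescription from the article:

```python
def supplier_tier(handles_payments, holds_customer_pii,
                  supports_critical_infrastructure, revenue_impact):
    """Assign an assurance tier from coarse risk attributes.

    Scoring weights and cut-offs are illustrative assumptions.
    revenue_impact is one of "high", "medium" or "low".
    """
    score = 0
    score += 3 if handles_payments else 0
    score += 3 if holds_customer_pii else 0
    score += 3 if supports_critical_infrastructure else 0
    score += {"high": 3, "medium": 1, "low": 0}[revenue_impact]
    if score >= 6:
        return "tier-1: annual on-site audit, right-to-audit clause"
    if score >= 3:
        return "tier-2: annual questionnaire plus independent assurance report"
    return "tier-3: self-attestation at contract renewal"

# A payment processor holding customer PII lands in the top tier:
print(supplier_tier(True, True, False, "high"))
```

The point is not the specific numbers but that the output of such a rubric, applied consistently across the supplier portfolio, determines how much assurance effort each arrangement receives.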
The benefits of outsourcing are vast, but business leaders can no longer afford to enter into these alliances blindly. Cyber resilience is no longer a nice-to-have, but a top business imperative with far-reaching consequences on brand perception, customer retention, margin, regulatory compliance, and more importantly, business survival.
About the authors
Phil Zongo is the author of The Five Anchors of Cyber Resilience, an Amazon best-selling book that strips away the complexity of cyber security and provides practical guidance to business executives. He is also the 2016–17 winner of ISACA’s Michael Cangemi Best Book/Article Award. Zongo is the founder and CEO of CISO Advisory, a consultancy firm that helps enterprises build high-impact and cost-effective cyber resilience strategies.
Rohini Kuttysankaran Nair is an experienced project manager with more than a decade of experience helping large enterprises deliver complex digital transformation programs. She is now leveraging her strong technical background and project governance skills to help enterprises deliver business-aligned cyber resilience uplift programs. She is based in Sydney, Australia.

Category: Audit-Assurance Published: 12/13/2018 3:05 PM
Gartner’s recent list of top tech trends for 2019 included immersive experiences, which they described as follows:
“Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience."
Below, I explore some of the anticipated themes related to VR/AR that will play a role in the coming year and beyond:
• Global AR & VR product revenues are expected to grow from US $3.8 billion in 2017 to US $56.4 billion in 2022, a 71 percent compound annual growth rate. This includes enterprise and consumer segments (ARtillry Intelligence).
• The patterns of investment and development in the different sectors in which VR/AR are applicable – or potentially applicable – show the increasing applicability of this technology beyond the games and entertainment fields that saw its birth in the 1990s. For example, 38 percent of respondents believe VR growth in the enterprise sector has been “strong” or “very strong,” with an equivalent figure of 43 percent for AR (The XR Industry Survey 2018).
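The compound annual growth rate cited in the first bullet can be checked directly from the revenue endpoints, taking 2017 to 2022 as five annual compounding steps:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# ARtillry Intelligence figures from the bullet above (US $ billions):
growth = cagr(3.8, 56.4, 5)
print(f"{growth:.1%}")  # ≈ 71.5%, consistent with the cited 71 percent
```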
I love COBIT. Why? To begin with, COBIT is useful and usable. Secondly, the newly updated framework combines community knowledge and flexibility.
The What Is COBIT and What Is It Not section from COBIT 2019 Framework: Introduction and Methodology is very clear, and demonstrates how useful and usable the updated version of COBIT will be.
COBIT users know that COBIT in its last two versions utilized the components (formerly enablers) to plan, build and maintain a governance system. They were and are principles, policies and procedures, processes, organizational structures, information flows, culture and behaviors, skills, and infrastructure.
We can find these components in all organizations, and work with them to fix some problems or weaknesses in order to improve the current and future maturity of their governance system and, thus, create value for relevant stakeholders. These “magic resources” that create an appropriate solution are the first element to confirm that COBIT is usable and useful.
Design factors, the subject of the new Design Guide published this week, are the second element. They should be considered by the enterprise to build a best-fit governance system. Not all organizations need the same solution with the same kind and quantity of resources. It is all about the best combination of resources to achieve the expected or required benefits with an acceptable level of risk.
Not all organizations have the same strategy, goals, risk profile, or I&T-related issues and threats. Compliance requirements, enterprise size, the role of IT, technology adoption strategy, sourcing model and IT implementation methods are further design factors that must also be taken into account.
Design factors influence the tailoring of an enterprise’s governance system in different ways; COBIT 2019 distinguishes three different types of impact.
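To make the tailoring idea concrete, here is a minimal sketch of how design-factor ratings might be combined to prioritize governance and management objectives. The objective names follow COBIT 2019, but the weights and scoring scheme are illustrative assumptions, not the Design Guide’s actual algorithm:

```python
# How strongly each objective responds to each design factor (assumed weights).
OBJECTIVE_WEIGHTS = {
    "APO12 Managed Risk":     {"risk_profile": 3, "compliance": 2},
    "APO13 Managed Security": {"risk_profile": 2, "threat_landscape": 3},
}

def prioritize(factor_ratings):
    """Rank objectives by the weighted sum of applicable factor ratings."""
    scores = {}
    for objective, weights in OBJECTIVE_WEIGHTS.items():
        scores[objective] = sum(
            weight * factor_ratings.get(factor, 0)
            for factor, weight in weights.items()
        )
    return sorted(scores, key=scores.get, reverse=True)

# An enterprise facing an elevated threat landscape sees security rise
# above risk management in priority:
print(prioritize({"risk_profile": 1, "threat_landscape": 3, "compliance": 1}))
```

Two enterprises with different factor ratings thus arrive at differently prioritized governance systems, which is exactly the tailoring effect the design factors are meant to produce.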
The new COBIT 2019 Framework: Governance and Management Objectives publication is free for members and non-members. I believe this is a remarkable step toward increasing the number of COBIT followers and professional community engagement. How many students and professionals will benefit from these complimentary publications? How many of them will be influenced by COBIT 2019 and decide to initiate an IT career, or to advance one through certification?
Will these new followers influence COBIT’s future design? I am sure of it.
Editor’s note: For more information about COBIT 2019 guidance, products and training, visit www.isaca.org/cobit, or view a webinar on the COBIT framework here or the Design Guide and Implementation Guide here.

Category: COBIT-Governance of Enterprise IT Published: 12/11/2018 9:58 AM