British science fiction writer Arthur C. Clarke famously said, "Any sufficiently advanced technology is indistinguishable from magic." This seems to apply today like never before, especially with the rise of the Internet of Things (IoT). The day is not far off when your appliances are talking to your automobile, and your car is picking up your groceries after it has dropped off your kids at school and you at the office. While this may sound like science fiction (probably something Clarke might have written) or something you see on the silver screen, the underlying technology, referred to as the Internet of Things, has been around for a while and is already being used by players across industry sectors, with scores of startups and established players betting big on it. Research predicts that there will be 31 billion connected devices by 2020, with spending on IoT expected to be close to US $3 trillion. Economic impacts are expected to be in the trillions, with benefits, opportunities, threats and disruption galore.
Enterprises are betting big on IoT, with use cases expected to deliver value across the supply chain and to customers like never before. This large-scale investment is not without its downsides, with experts predicting Himalayan challenges in securing devices, powering the billions of sensors and handling the resulting e-waste. Forrester research found that IoT security will remain a top issue, and we should expect to see more attacks like the Mirai botnet attack, which looks like it came straight out of a B-grade Hollywood flick ("Attack of the Killer Camera," or something like that). Data from a Symantec survey points to a 600% increase in IoT attacks in 2017 over the previous year. One way to evaluate and possibly secure IoT is to carry out robust audits, and I was fortunate enough to collaborate with Ian Cooke, CISA, CRISC, CGEIT, COBIT Assessor and Implementer, CFE, CPTE, DipFM, ITIL Foundation, Six Sigma Green Belt, who authors a great column on IS audit basics for ISACA. In our volume 5 column, we apply ISACA’s IT audit framework to auditing IoT.
Read Ian Cooke and R. V. Raghu’s recent Journal article:
“IS Auditing Basics: Auditing the IoT,” ISACA Journal, volume 5, 2018.
The last two years have taught us that conventional wisdom and knowledge around privacy and security need a makeover, particularly as they relate to the EU’s GDPR and the California Consumer Privacy Act (CCPA). Data controllers and businesses, the entities responsible for what happens to personal data under GDPR and CCPA, respectively, are subject to new obligations that place significant organizational risk squarely on their shoulders. Though compliance issues can come from many places, one often-overlooked impact is managing processor/third-party risk.
Third parties (aka processors in the GDPR or information recipients in California law) are critical to organizational operations, from cloud hosting to payroll administration and processing. They hold customer, partner, employee, and confidential data that is the lifeblood of organizations, and we can’t run without them. While many third parties strive to be good stewards of their customers’ data, we find ourselves in a time where trust and good-faith efforts aren’t going to pass muster anymore.
Under the GDPR, CCPA and other regulations, controllers need to hold their vendors contractually responsible for specific obligations governing how data is handled, through data processing agreements and other measures, and, as always, “trust but verify” that the vendor is acting accordingly. By extension, this includes our vendors’ partners as well, when fourth parties are involved.
Along with contractual measures, controllers need to assess, test and review a vendor’s ability to adequately safeguard the data they are transferring through product, personnel, and organizational protection mechanisms. This also requires that they pass the same data protection expectations downstream.
All of this due diligence should, at all times, be centrally documented and maintained. In the event of an incident or breach, controllers must be able to demonstrate a reasonable and defensible process for vetting third parties, including providing results of their assessments of vendors' practices and commitments to data protection, to help mitigate risks of liability. This also includes identifying potential risks of doing business with a particular vendor, taking actions to mitigate those risks, and continually managing vendors based on the scope and sensitivity of the data they process.
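To make that central documentation concrete, here is a minimal sketch (the record fields and vendor name are illustrative, not drawn from any specific framework or regulation) of how vendor due-diligence evidence might be captured so a controller can later demonstrate a defensible vetting process:

```python
# Illustrative vendor due-diligence record; field names are assumptions,
# not terms from the GDPR or CCPA.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorAssessment:
    vendor: str
    data_categories: list          # e.g. ["employee PII", "bank details"]
    dpa_signed: bool               # data processing agreement in place?
    last_reviewed: date
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def needs_review(self, today: date, cycle_days: int = 365) -> bool:
        # Review cadence should scale with the scope and sensitivity of the
        # data processed; a fixed annual cycle is just a placeholder default.
        return (today - self.last_reviewed).days >= cycle_days

# Hypothetical payroll processor record
payroll = VendorAssessment(
    vendor="ExamplePay Ltd",
    data_categories=["employee PII", "bank details"],
    dpa_signed=True,
    last_reviewed=date(2018, 1, 15),
)
print(payroll.needs_review(date(2018, 9, 17)))  # False: reviewed within the year
```

A real register would live in a GRC tool rather than in code, but the shape is the point: identified risks, mitigations and review dates tied to each vendor, ready to produce on demand after an incident.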
Now, chances are your organization has already taken steps to ensure proper actions are taken. For organizations looking for continual process improvement (CPI) and formal action plans, consider adopting a Vendor Risk Management lifecycle as a roadmap to operational Vendor Risk Management.
Organizations should also incorporate privacy/security by design into vendor onboarding practices by integrating with procurement processes to take advantage of work being done today. This could include an early screening, based on what services are being rendered and how they are delivered, to determine whether further privacy and security due diligence will be required.
Editor’s note: For more resources related to GDPR, visit www.isaca.org/gdpr.

Category: Privacy
Published: 9/17/2018 3:08 PM
The stakes are increasing when it comes to leveraging technology to define and deliver new value. The CEO and executive team leaders are grappling with the challenges of identifying and implementing new digital business models while also wrestling with making smart capital investments to develop and mature organizational capabilities that enable agility and rapid response to new market opportunities. At the same time, board directors are in a quandary, attempting to make sense of the digital landscape and to obtain assurance that their CEO and executive team leaders are enabling the right culture, acquiring and nurturing the right talent, validating that technology investments are prudent and reasonable, and effectively capitalizing on business opportunities while mitigating security concerns that pose significant risks to the company’s financial position and reputation.
Many refer to this point in time as the era of “digital disruption” or “digital transformation.” For me, these phrases seem somewhat of a misnomer. Taking a more macro and holistic look at this period, and reflecting on history as a means to understand where we are and where we are headed, perhaps what we’re really witnessing is a revival of classic laissez-faire economics. Market forces are being reshaped by technology in ways never previously imaginable. The pace of technology-driven innovation far exceeds the ability of government and regulatory entities to put corresponding consumer protections in place, even as organizations struggle to recalibrate their information and technology governance and security to adjust to business opportunities appearing and vanishing in much shorter cycles. What’s really at stake today is the longer-term survivability of enterprises as we know them, coupled with the coming of inconceivable shifts in jobs and how people will work. And we find ourselves merely at the tip of the digital economy iceberg.
Dr. Peter Weill, director of MIT’s Center for Information Systems Research in Cambridge, Massachusetts, says that, “in a digital economy, the whole company is responsible for generating value from digital investments.” To address this challenge, his research identified three key components on which enterprises must focus. First, there is the strategic, which is envisioning how the company will operate in the future. Second, there is oversight, which is making sure major investments and organizational changes are on track. Third, and of critical importance, is the defensive, which is effectively meeting the challenges of security, privacy and compliance on an ongoing basis.
Key to meeting the aforementioned challenges? People, of course. No wonder that in Gartner’s recently released list of barriers to becoming a successful digital business, talent emerges as among the most significant. Not surprisingly, many organizations still follow the same hiring protocols they did 10 years ago. While arguably some criteria for new hires haven’t changed, such as having a strong work ethic, a knack for problem-solving, good time management skills, and a thirst for continuous learning, there needs to be increased focus on recruiting those who demonstrate that they are digitally savvy or grasp the need to prioritize growing their skills in this area. This means understanding how new and emerging technologies can be deployed, how to harness big data and statistical analysis to shape new approaches to product development and deployment, and applied knowledge of technologies that are or will be shaping the future of business, including the likes of cloud computing, AI and machine learning, blockchain, augmented reality, and perhaps even the promise of quantum computing. Most importantly, these attributes, along with a propensity to be comfortable with risk and uncertainty, should enable hiring managers to see whether candidates exhibit the right chemistry to fit the corporate culture. Simply stated, traditional organizational hiring practices must be modernized to cultivate the right talent in order to successfully meet the challenges of the digital economy.
So, let’s not be fooled into thinking we’re okay because our company ship has yet to hit that digital economy iceberg. This iceberg runs long and spikes just beneath the surface. Navigating around it calls for “all hands on deck.” Traversing these choppy seas without incident means establishing and maturing the capabilities our organizations will need to turn on a dime when things matter most. The only way the CEO and executive teams can become confident is if the right talent is in place. Similarly, the only way for boards to obtain assurance that the corporate ships are in good hands is to be convinced that the CEO and executive teams have established the right culture with the right people, and that they are effectively addressing the strategic, oversight and defensive components necessary to generate value from digital investments. As Peter Weill notes, “How good are you at each of these will predict your likely success in the digital economy.” I could not agree more. We find ourselves in exciting times, perhaps just as exciting as those experienced by the people paving the way for laissez-faire economics back in the 18th century.
Editor’s note: This article was originally published in CSO.

Category: Security
Published: 9/13/2018 3:06 PM
Governance, risk and compliance (GRC) professionals shared ideas and gathered insights on how their roles are evolving, in light of enterprises’ digital transformation efforts, evolving trends in innovation, and growing regulatory and security risks, at the recent sold-out 2018 GRC Conference in Nashville, Tennessee, USA.
It’s time to challenge conventions
Keynote speaker Luke Williams, author, professor of marketing at the NYU Stern School of Business and founder of the W.R. Berkley Innovation Labs, told a packed opening session audience that organizations seldom take the time to question the underlying reasons why existing practices and procedures were put in place, stifling opportunities for innovation.
Williams said enterprises are often “paralyzed by possibility” with an abundance of incremental ideas for improvement, but tend to lack the unconventional, bold strategy options capable of delivering a major impact. Eventually, he said, organizations that lack a forward-looking openness to change will be overtaken by competitors.
Artificial intelligence brings great potential – and risks
While artificial intelligence and machine learning are gaining traction – and generating plenty of buzz along the way – organizations face difficult decisions in knowing where and when to introduce AI. In a session on the ethical considerations related to AI, co-presenters Kirsten Lloyd and Josh Elliot highlighted an extensive list of powerfully compelling uses for AI, such as advancing new medical treatments, preventing cyberattacks, improving energy efficiency and increasing crop yields. They also encouraged organizations to create an ethical review board and the position of chief ethics officer to deal with the related risks.
ISACA board Chair and closing day keynote presenter Rob Clyde implored the audience to focus on safeguards to prevent unintentional harm from AI projects and services.
Audit and governance professionals must actively address cyber risk
The volume and complexity of today’s cyber threats demand that GRC professionals, along with internal auditors, support their colleagues in cybersecurity roles and provide assurance that their organizations are prepared to navigate cyber threats.
In a session on advancing IT audit capabilities in cybersecurity, co-presenters David Dunn and Jon Coughlin noted that the traditional belief that a good internal auditor can audit anything is being challenged by the growing cyber threat landscape, and that standard controls might be insufficient. Internal audit functions must deepen their skills across a range of cybersecurity frameworks.
In the conference’s final keynote, Deloitte Managing Director Theresa Grafenstine called cyber risk a top priority for GRC professionals. When organizations fail to adequately address the risk, said the former Inspector General for the US House of Representatives, it is generally due to a lack of knowledge and resources, rather than not recognizing its importance.
Compliance must become more adaptive
A combination of new regulatory requirements, such as the General Data Protection Regulation (GDPR), and a flurry of emerging technologies being deployed to enable digital transformation call for the recalibration of compliance policies and procedures.
Session presenter Ralph Villanueva encouraged compliance professionals to understand – rather than memorize – the intent of frameworks they are implementing to have a more strategic understanding of how those frameworks best align with enterprise goals. He said compliance professionals also must anticipate how emerging technologies might impact the organization’s compliance protocols going forward.
Security measurement must be improved
While more organizations are recognizing the importance of areas such as risk management and information and cyber security, it can be difficult to quantify the effectiveness of the related investments – a major concern for the C-Suite. Session presenter Brian Contos said organizations need to develop more sophisticated security metrics beyond performing vulnerability scans and patching. Contos addressed several platforms capable of removing guesswork and assumptions from the security equation, while potentially freeing up resources by phasing out outdated tools that no longer serve their intended purpose.
The next GRC Conference will take place 12-14 August 2019 in Fort Lauderdale, Florida.

Category: COBIT-Governance of Enterprise IT
Published: 9/12/2018 3:04 PM
As digital business hastens the speed of application development and gives way to complex, interconnected software systems (think Internet of Things, microservices and APIs), we must confront the fact that penetration testing, although thorough, is slow and expensive. On average, it takes eight months to identify and understand the cyber and regulatory risks associated with any new software, according to research from security company Sonatype.
Software development trends are compounding the issue in that software is being built and released faster (see the “Agile Manifesto”), but the tools and people resources to address security risk are not keeping pace.
Trends such as DevOps, which require security teams to deliver deep integration and automation of security tooling, drove us, in conjunction with the Centre for Secure Information Technologies at Queen’s University Belfast, to ask the question, “What is the path to self-securing software?”
Penetration testers and tools will only scan the parts of a website they can observe; many aspects may be missing from the testing scope. What is really interesting, however, is that the code itself contains everything that the website can do (functionality, data, etc.).
We were interested to discover whether there was a way to scan code to automatically understand what it is. For example, is it a website or a desktop application? Does it allow the user to enter financial info or personal details? If it does, where is that info stored? This information can be used to drive other testing tools or penetration testing by informing them of what the code is, the associated functionality, data types, etc. In essence, this information can automatically inform the scope and focus of security testing.
We looked at source code parsing technology and how, by using it, we could determine what a web application actually is and does. ANTLR, a popular tool in this area, allowed us to build a tool that scanned website source code and provided us with a digital understanding of the website. We could then use that data to drive automated security tools.
The result? We were able to automatically understand the attack surface of a website by scanning its code. We could then use that intelligence to drive further manual, commercial or open-source testing, facilitating continuous, automated security testing of developing code. Since the orchestration and execution of security testing was automated, it could easily be wrapped into development teams’ daily (or weekly) processes, flagging security issues long before the system was deployed externally.
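As a rough illustration of the principle (the real tool parses website source with ANTLR grammars; this sketch uses Python’s standard-library ast module and made-up keyword hints to show how a test scope could be derived from code rather than from a crawl):

```python
# Sketch only: classify what a piece of code handles by inspecting its
# identifiers. SENSITIVE_HINTS is an invented mapping, not part of any tool.
import ast

SENSITIVE_HINTS = {
    "card": "financial",
    "payment": "financial",
    "ssn": "personal",
    "email": "personal",
    "password": "credential",
}

def classify_attack_surface(source: str) -> dict:
    """Return data categories hinted at by identifiers in the source."""
    tree = ast.parse(source)
    findings = {}
    for node in ast.walk(tree):
        # Names appear as .id (variables), .attr (attributes) or .arg (parameters)
        name = (getattr(node, "id", None)
                or getattr(node, "attr", None)
                or getattr(node, "arg", None))
        if not isinstance(name, str):
            continue
        for hint, category in SENSITIVE_HINTS.items():
            if hint in name.lower():
                findings.setdefault(category, set()).add(name)
    return findings

sample = "def checkout(card_number, email):\n    store(card_number)\n"
print(classify_attack_surface(sample))
# {'financial': {'card_number'}, 'personal': {'email'}}
```

The output can then seed the scope of downstream scanners: a “financial” finding, for instance, would flag the checkout path for payment-focused test cases before any crawler ever sees the page.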
We believe that the tool we created (and have further developed at Uleska) is addressing the “pressing need to orchestrate tools and automate testing in a continuous delivery pipeline and facilitate AST at scale, as well as improve context and prioritization for remediation efforts” that Gartner has identified for so-called ASTO (Application Security Testing Orchestration) tools that are coming onto the market.

Category: Security
Published: 9/11/2018 3:00 PM
One of my favorite novels, if not my favorite, is Let the Great World Spin by Colum McCann. The book centers on Philippe Petit's 1974 high-wire walk between the Twin Towers of the World Trade Center. There is a poignant scene in the book when a mother who has just lost her son, a soldier, looks out her window, sees the walk in progress and reacts with disgust—how dare he risk his life in that manner—my son is dead! From Petit’s point of view, however, this is what makes him feel alive (as his TED Talk demonstrates). This is his passion; this is what he values. Value means different things to different people, depending on their perspective.
Similarly, according to James Roth, the definition of "value added" can vary considerably from one audit department to the next. For many practitioners, this phrase describes audit work that helps management improve the business, rather than assignments that simply verify compliance with policies and procedures. For others, the opposite meaning may apply.
However, despite the significant diversity in their specific practices, Roth has observed remarkable similarities in certain key areas among best practice audit departments. These audit shops form a collective profile of five value-adding characteristics.
I discuss two of these characteristics in my recent IS Audit Basics column in the ISACA Journal, “Add Value to What Is Valued.” Specifically: (a) achieving organizational alignment by following the COBIT 5 goals cascade or, where this is not in place, mapping upward from processes to generic IT and enterprise goals that the organization can then review from a value perspective; and (b) auditing the processes that add this value horizontally across the enterprise using the generic COBIT 5-based assurance engagement approach.
Read Ian Cooke’s recent Journal article:
“Add Value to What Is Valued,” ISACA Journal, volume 4, 2018.
Healthcare has experienced significant modernization and is now closely intertwined with IT. But as the industry changes and marketplace demands evolve, new challenges emerge. Understanding how to address these challenges is paramount to the future success of healthcare organizations and their stakeholders.
Five healthcare IT challenges the industry is facing
What used to be a small intersection is now a fully developed relationship. It’s nearly impossible to understand the current or future state of healthcare without looking at IT and the role it is playing.
Even with all of the good things that are happening, there are some challenges, hurdles, and points of friction that must be dealt with and overcome. Let’s highlight a few of the more significant ones you should know about.
1. Data security
Data breaches are, unfortunately, a part of modern life. As more and more data is created and stored online, hackers will continue to go for valuable information. Because of the privacy associated with patient data, healthcare providers are often primary targets.
The challenge moving forward is for organizations to be more protective of their data, without adding unnecessary layers of bureaucracy. Better access control and simplified reporting will play a key role.
2. Network integration issues
On the business side of healthcare, there are plenty of mergers and acquisitions. Unfortunately, they often lead to network integration issues. The biggest challenge involves blind spots.
“Blind spots are areas where IT does not have complete visibility into what is happening on the network or how applications are behaving,” explains Keith Bromley of Ixia. “Mergers between IT systems for any organization, especially healthcare systems, take time. The problem is that patients and doctors do not have time to wait. Electronic medical records (EMR) must be available at all times, for all patients.”
Figuring out a way to smooth over these transitional points and prevent blind spots from occurring will be a key focus in the months and years ahead.
3. Remote patient care
The latest research suggests that 71 percent of all healthcare providers use telehealth or telemedicine tools to connect with patients. Considering that just half of healthcare providers were using telemedicine solutions and services in 2014, this represents a rather steep increase in adoption. The expectation is that close to 100 percent of providers will be using solutions like these by as early as 2021.
But there are still some distinct challenges. One such challenge is the issue of helping patients get the care they need after leaving the direct care of the healthcare provider.
“As a physician, I know that medicine is important to people’s health, but the vast majority of what determines a person’s health is not medicine, it’s the ability to take care of themselves, live well, manage disease, and give care to others outside the doctor’s office,” says Stacy Lindau, MD, who has worked closely with Rush University Medical Center to incorporate the NowPow platform to help them connect with patients after they leave.
The more sophisticated platforms like these become, the more well-rounded patient care will become.
4. HIPAA compliance
Whereas cybersecurity and strict BYOD policies are important for businesses in every industry, issues like these are even more challenging in healthcare. HIPAA laws are very strict on issues like unlawful disclosure of private patient information, and any unintentional mishaps can result in huge fines and significant reputational damage.
Having a plan in place for dealing with ransomware is crucial for healthcare organizations of all sizes. While encryption and backup storage are important, they may not be enough. Organizations that consult with cybersecurity experts specializing in HIPAA laws will see the biggest benefits.
5. Consumerization of medicine
“A big area of interest for healthcare institutions is the consumerization trend in which information is being collected and made available to mobile and web-based devices. For instance, hospitals are now embracing bring your own device (BYOD) for healthcare professionals and support the use of patient accessible Wi-Fi,” Bromley explains.
As consumerization increases, it’ll be important for healthcare organizations to choose the right technologies and use them in the appropriate ways. A failure to invest in the best solutions for the application will bog organizations down and create additional friction that hurts the patient experience (not to mention the practitioner’s experience).
Putting it all together
Healthcare innovation happens at a startling pace. From pharmaceuticals to health procedures, changes are occurring around the clock. From an administrative perspective, however, few areas are more important than successfully managing and governing the technology that enables the innovation. As IT progresses, so will the healthcare industry.
For IT professionals, understanding this relationship will help you get a firmer grasp of why certain developments are taking place and where the industry is headed.

Category: Risk Management
Published: 9/7/2018 2:58 PM
On 25 May 2018, the world did not stop simply because the General Data Protection Regulation (GDPR) became enforceable. For many organizations, however, the enforcement date became a distraction, an unofficial deadline. In reality, there was no finish line.
We all recall the panic-driven deluge of marketing consent emails from companies this past summer – some we engaged with, many we forgot about and others we never saw. That deluge has now slowed to a trickle.
Also, noticeably quieter are the salespeople peddling “GDPR-compliant” and “one-size-fits-all” solutions. Foreboding news headlines no longer scream about fines of up to 20 million EUR or 4% of total worldwide annual turnover for the slightest misdemeanor.
Three-plus months on from the enforcement deadline, here are a few observations and reflections on how organizations are adjusting to life under the new European privacy and data protection regime.
#1: Business as usual for some?
It would be inaccurate to say that organizations have quickly thrown off the restraints placed on them by the GDPR regarding the processing of personal data. However, it would be equally inaccurate to claim that poor data protection practices have been fully discarded and that we are now living in an era where organizations treat our personal data appropriately.
For Europeans at least, there is evidence of some change in behavior from large technology and global marketing companies, some of whom are already under scrutiny by regulators. For some other organizations, however, GDPR fatigue has begun to set in and organizational priorities are shifting from expensive programs to other hot-button enterprise risk issues.
GDPR compliance initiated a rush of activity that led to the creation of (or updates to) policies, procedures, system inventories and contracts. Some organizations brandished these new shiny documents as their evidence of being “GDPR-ready.”
However, having controls without a plan to assure that their design and operating effectiveness achieve the desired control objectives is a half-hearted approach. Weak governance and the absence of privacy assurance programs increase the risk of a return to the past.
In reality, control effectiveness cannot be fully determined until after a designated cycle of operation. It may take at least one year before we start to see true changes in organizational attitudes toward data protection.
#2: Integrating privacy into enterprise risk management
Forward-thinking organizations saw GDPR compliance as an opportunity to return to the drawing board and, in some cases, revisit their approach toward enterprise risk management.
Far from simply fulfilling a checklist of requirements, some organizations used their GDPR compliance programs to test the alignment between their operational risk, information security, IT governance and privacy functions.
This also was an opportunity to embed privacy risk into enterprise risk management frameworks, check the health of three-lines-of-defense models, adjust risk tolerance levels and develop new key risk indicators (KRIs) to provide end-to-end assurance.
Where new privacy risk management processes (such as steering committees) have been implemented, they will need time to develop traction. In the long term, the right approach could see organizations improving the maturity of their data protection controls while also improving their overall enterprise risk posture.
#3: The “SAR-pocalypse” did not happen
It just didn’t.
Depending on who you spoke to, the increased public awareness of privacy rights enshrined in the GDPR would unleash an avalanche of data subject access requests (SARs) from incentivized or incensed data subjects.
Executives feared that customers, disgruntled employees and coordinated activists flexing their new regulation-enabled muscles would bombard their service desks with requests seeking to enforce rights of access, erasure and others.
The term 'SAR-pocalypse' (a hypothetical denial-of-service scenario caused by an organization’s inability to manage an excessive volume of SARs) was whispered in hushed tones with real concerns that failing to deal with requests within the required period could attract penalties.
In the weeks just before and after the enforcement deadline, many organizations did in fact see a sharp rise in the number of data subjects requests they received. However, many of those requests originated from people annoyed with the panic mass mailing campaigns in the weeks prior to the enforcement date. Understandably, many of the requests were for erasure and account deletion.
A retail organization I spoke with noted a higher-than-usual volume of requests in the weeks leading up to 25 May. Requests to be erased reached an all-time peak in the weeks following. However, by mid-June, those numbers had begun to drop. By the end of August, request volumes had returned to pre-25 May levels.
I have yet to hear of any organization admitting that its service desk toppled over due to a flood of SARs. However, organizations should not trivialize the need to keep their personal data flows up to date and to keep testing the effectiveness of their processes for responding to SARs and other GDPR-related queries.
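One small, automatable piece of that testing is deadline tracking. The sketch below computes the GDPR Article 12(3) response window (one month from receipt, extendable by two further months for complex or numerous requests); the calendar arithmetic is a deliberate simplification, and month-end edge cases deserve legal review, not just code:

```python
# Hedged sketch of SAR deadline tracking; function names are illustrative.
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last valid day of the month."""
    month0 = d.month - 1 + months
    year = d.year + month0 // 12
    month = month0 % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def sar_deadline(received: date, extended: bool = False) -> date:
    """One month to respond under Art. 12(3); three months if extended."""
    return add_months(received, 3 if extended else 1)

print(sar_deadline(date(2018, 5, 25)))                 # 2018-06-25
print(sar_deadline(date(2018, 5, 25), extended=True))  # 2018-08-25
```

Feeding a register of open requests through a check like this, and alerting well before each deadline, is one way to exercise the response process continuously instead of discovering a missed window after the fact.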
#4: Waiting to see what the regulators will do with penalties
‘Data Breach Scapegoats Wanted!’, wrote one satirical industry commentator on social media.
While Europe’s regulators adjust their oversight machinery to be able to effectively police the GDPR, there is a collective holding of breath by organizations waiting to see what precedents will be set with post-25 May financial penalties.
Perhaps the most high-profile data privacy-related incident to hit the headlines since the GDPR enforcement deadline was the one involving the infamous Cambridge Analytica. For its part in the scandal (which preceded the 25 May enforcement date), the UK Information Commissioner’s Office (ICO) fined Facebook £500,000 (the maximum fine under the old UK Data Protection Act 1998).
Data privacy breaches continue to be reported, and post-25 May, the UK regulator has continued to take enforcement action against erring organizations. For example, British Telecommunications plc (BT) was fined £77,000 (hardly 4% of their global annual turnover) for sending nuisance emails to customers.
When scrutinized through the lens of Article 83 (“Each supervisory authority shall ensure that the imposition of administrative fines...in respect of infringements...shall in each individual case be effective, proportionate and dissuasive”), it might be a while before a “GDPR-scale” maximum penalty is imposed on any organization.
The absence of scapegoats may be because Europe’s regulators are either overwhelmed with data subject complaints or simply biding their time until they find the right opportunity to set a dissuasive precedent.
Rather than waiting for precedents and second-guessing regulators, organizations should continue to improve their incident prevention, detection and response procedures while maintaining a state of readiness for potential data breaches.
#5: After the hype, what comes next?
As the GDPR hype starts to wane, organizations should not lose sight of the wider benefits that can be derived from an improved attitude toward data protection.
For example, if organizations maintain their discipline around personal data collection and processing, there will continue to be opportunities to improve data governance and unlock business insights from the personal data they lawfully process.
As informed consumers continue to exercise their enhanced consent rights under the GDPR, available inventories of user data are likely to come under pressure. By focusing on data quality (including processing data that is “adequate, relevant and limited to what is necessary”) rather than scale, organizations can improve engagement at different points within the customer journey.
The Privacy & Electronic Communications Regulations (soon to be replaced by the ePrivacy Regulation) remain a hot topic and the next keenly anticipated regulation from Europe. Correctly implementing GDPR requirements should have placed most organizations in a good position to adopt the requirements within the ePrivacy Regulation.
While senior executive support for GDPR remains warm, Data Protection Officers need to test their newly minted powers and ensure that their independence (including avoiding conflicts of interest with other tasks and duties) goes beyond qualities and responsibilities listed in a job description.
There is no turning back
The reality for many organizations is that GDPR program funding and resources will move elsewhere. Data privacy champions will change roles. Vendors will come and go. Applications will be developed and retired. Meanwhile, more countries and jurisdictions (like California) are likely to strengthen their own data privacy laws. The journey never ends.
Somewhere in all of this, care must be taken to avoid the slow erosion of data protection controls arising from negligence and poor governance and a return to the old ways. Seeing the GDPR not as a checklist but as an opportunity to transform corporate attitudes and embed good data protection practices will help organizations thrive under the new privacy regime in the long-term.
Editor’s note: For more GDPR insights and resources, visit www.isaca.org/gdpr.

Category: Privacy Published: 9/6/2018 2:51 PM
This weekend, all of ISACA lost a dedicated leader, an engaged board member, a passionate colleague and, most notably, a dear friend. Robert E Stroud, CGEIT, CRISC, 2014-2015 ISACA Board Chair and 2015-2018 Board Director, will be deeply missed.
Only 55 years old, Rob passed away Monday, 3 September 2018, after being struck by a vehicle while jogging on Long Island, New York, USA. He is survived by his devoted family: his wife of 35 years, Connie, sons Josh and Kyle, daughter-in-law Allie Elizabeth, and grandchildren Ayden, Haylee and Jeremy.
Rob brought boundless energy and enthusiasm into everything he did for ISACA—and those contributions were many. He was board chair for the 2014-2015 term, and was a driving force in the launch of ISACA’s Cybersecurity Nexus (CSX). Prior to that, he was international vice president of ISACA, member of the Strategic Advisory Council and Governance Committee, and chair of ISACA’s ISO Liaison Subcommittee. He was a COBIT champion and contributed to COBIT 4.0, 4.1 and 5, as well as numerous COBIT mapping documents. Additionally, he was involved in the creation of ISACA’s Basel II, Risk IT and Val IT guidance.
His excitement about emerging technologies and extensive knowledge of assurance, governance, cloud security and DevOps made him a highly sought-after speaker at events around the world—including ISACA’s. Rob’s technical expertise, his excitement to travel and share his knowledge around the world, and his humor and wit in delivering remarks will be greatly missed.
Rob’s dedication to the profession extended beyond ISACA. He previously served on the itSMF International Board, the board of the itSMF USA and multiple itSMF local chapters.
Additionally, he served as a member of the ITIL Update Project Board for ITIL 2011 and in various roles in the development of ITIL v3.
Rob’s high-impact career in assurance, governance and innovation leaves a lasting legacy. Rob was Chief Product Officer at XebiaLabs, where in the last year he primarily focused on DevOps scalability in the enterprise. Prior to that role, he was Principal Analyst for Forrester Research Inc., where he helped large enterprises successfully drive their DevOps transformations and guided them through organizational change.
He spent more than 15 years in multiple roles at CA Technologies, including Vice President of Strategy and Innovation, where he predicted changing trends in the domains of assurance, cybersecurity, governance security and risk. He also advised organizations on strategies to ensure maximum business value from their investments in IT-enabled business governance.
On a personal note, Rob has been my good friend and mentor. It was his inspiration and support that led me to serve on the ISACA board of directors. I have had the privilege of co-presenting with Rob many times, and frequently we have had lively discussions about new technology, cloud, DevOps and how we can help ISACA have even greater impact. The day before his passing, I was working on a DevOps presentation using slides that Rob had put together and just shared with me to use. Having collaborated with him for so many years, enjoying his advice, company, humor and zest for life, I feel like I have lost a part of me. I’m sure many of you feel the same, and we will explore a fitting way to honor his contributions and legacy. I will let you know of those opportunities as they are decided by the board in a timely fashion.
Rob was always looking forward to new trends, new challenges and new opportunities, so he could best serve his clients, his colleagues, and his friends, whether bonds were just formed or existed for decades. His exuberance lit up the room wherever he went, and he was truly a guiding light and progressive proponent for the association and our professional community.
Rob’s enduring spirit of innovation will continue to influence ISACA and our global family for years to come.
Thank you, Rob. You are gone too soon. We miss you.

Category: ISACA Published: 9/4/2018 12:22 PM
The EU General Data Protection Regulation (GDPR) outlines measures required to protect personal data and how an enterprise moves, uses and stores that data. My recent ISACA Journal article, “Protection From GDPR Penalties With an MFT Strategy,” discusses why a robust managed file transfer (MFT) and integration platform is useful for organizations looking to comply with GDPR and other data protection measures.
Here are some key steps for implementing an MFT solution to meet increasingly stringent data demands:
Read Dave Brunswick’s recent Journal article:
“Protection From GDPR Penalties With an MFT Strategy,” ISACA Journal, volume 4, 2018.
Of all the certifications represented annually in the Global Knowledge IT Skills and Salary Report, ISACA is more prominent in our top-paying certifications list than any other certification body. This year, ISACA occupies five spots in the top 20, including three in the top six worldwide.
ISACA is associated with two important truths for business technology professionals:
ISACA’s certifications in cybersecurity and governance produce the highest salaries. This is in line with our overall salary data, as governance ranks second and security fifth in average global salaries by category.
Here’s a list of the five top-paying ISACA certifications for 2018 (average salaries are for North America):
1. CGEIT (Certified in the Governance of Enterprise IT)
Average salary: $117,544
CGEIT is the top-paying certification in the United States and ranks third worldwide ($92,821). Its North American salary is 34% higher than the average for all certified professionals. This certification is designed for individuals who manage, advise or provide assurance services around enterprise IT governance.
Tenure is among the reasons CGEIT-certified professionals typically have higher salaries. To take the exam, an individual needs at least five years of experience in at least three of the five domains the certification covers, including at least one year in the IT governance framework area.
2. CRISC (Certified in Risk and Information Systems Control)
Average salary: $107,968
CRISC ranks sixth in North America and second worldwide in average salary. Its average salary is 23% higher than the average for certified professionals. CRISC is a risk management and security credential designed for IT professionals, project managers and others whose job it is to identify and manage IT and business risks through information systems controls.
Globally, six security certifications made our top-20 list, with CRISC trailing only CISSP in average salary. Cybersecurity positions in general pay well, with the average among North American respondents at $101,083, which is more than $13,000 above the average.
Related training: CRISC - Certified in Risk and Information Systems Control Prep Course
3. CISM (Certified Information Security Manager)
Average salary: $105,926
CISM ranks seventh in North American salary and sixth globally. It’s aimed at information security management professionals, focusing on security strategy and assessing the systems and policies in place. To take the exam, certification candidates are required to have at least five years of experience in IS, with at least three as a security manager.
Many government agencies now require their IS and IT professionals to hold a CISM certification.
Related training: CISM - Certified Information Security Manager Prep Course
4. COBIT 5 Foundation
Average salary: $102,112
This premier governance credential has a North American salary that tops $100,000 and a worldwide salary that ranks 11th overall ($77,300). COBIT 5 provides a comprehensive framework that assists enterprises in achieving their objectives for the governance and management of enterprise IT.
ISACA’s governance credentials (COBIT 5 Foundation and CGEIT) are two main reasons why governance certifications have the second highest average salary globally ($84,420).
Related training: COBIT 5 Foundation
5. CISA (Certified Information Systems Auditor)
Average salary: $97,117
CISA ranks 13th in the US and globally in average salary. It’s also the most popular certification among our survey respondents, with 1,923 CISA-certified professionals. The CISA is ideal for individuals whose job responsibilities include auditing, monitoring, controlling and assessing IT and business systems. The exam tests the ability to manage vulnerabilities.
Originating in 1978 and now in its 40th year, CISA is ISACA’s oldest certification. It requires at least five years of experience in information systems auditing, control or security.
Check out these additional Global Knowledge resources to learn more:
GDPR: An acronym and a buzzword that has set many of us into “alert mode.” Since it was set in motion more than two years ago, thousands of people have worked hard to ensure their organizations were prepared by the enforcement deadline of 25 May 2018, and they continue doing so. But among the good guys and gals, there were also some “louche” characters (a French adjective meaning “shady,” used in CNIL’s video on GDPR): people who had no ethical qualms about providing misleading guidance and wrong answers to the many questions concerning GDPR.
Unfortunately, Poland was among those countries where this phenomenon grew to be a danger to the whole idea of protection of personal data. Here are just a few examples of the consequences of the created havoc:
These situations were widely described and discussed on the internet in Poland, raising concern. To counteract this, in June this year, the Minister of Digital Affairs empowered Mr. Maciej Kawecki, the Director of the Department of Data Management at the Ministry, to create a special task force to deal with the worst absurdities. Mr. Kawecki is a top data protection specialist who is coordinating the work done in Poland to adapt Polish law to GDPR. The mission is very challenging; there are about 800 regulations that need to be revised. In the next few weeks, the Polish Parliament will debate the first package of legislative changes.
Mr. Kawecki posted a call for volunteers to work in the group. This proved to be a very sought-after, widely appreciated initiative, and the response was huge. From the several hundred candidates, 93 people were picked to work in five groups on issues concerning specific topics: health, education, finance/telecomms, public administration and general issues.
I had the pleasure to be selected to be a member of the education team. We come from a mix of different professions and different involvement in day-to-day school activities. This creates additional value as we have different perspectives and experience that enable us as a team to take a much broader look at GDPR issues.
In the first stage, we were asked to compile replies to seven especially pressing questions concerning schools. We came to the conclusion that each question should have two answers:
We already have noted our first success. Part of our work has been used in the GDPR guide for schools, just published by the Ministry of Education together with the Polish supervisory authority.
Creating a GDPR task force by the Ministry of Digital Affairs is a highly recommended approach. It gives the opportunity for data protection professionals to get involved in supporting GDPR compliance at the national level. It also creates opportunities for an exchange of knowledge and experience between practitioners and government officials in charge of developing regulations and recommendations. The Ministry intends to continue using our group to obtain practical and up-to-date information on issues and problems concerning GDPR implementation and to develop appropriate guidelines. This also gives us the opportunity to share our ideas and thoughts with our peers and to disseminate best GDPR practices to stakeholders both in the public and private sectors.
A good example of the usefulness of guidelines developed by official organizations are the “Guidelines on the protection of personal data in IT governance and IT management of EU institutions” published by the European Data Protection Supervisor (EDPS). These good practices are based on ISACA’s COBIT 5 and describe the data protection aspects related to the processing of personal data. With just a few minor changes that basically come down to replacing “EU institutions” with “data controllers,” this document can easily serve large and small organizations from the public and private sector in the European Union and outside in their efforts to achieve GDPR compliance.

Category: Privacy Published: 8/31/2018 3:07 PM
Cyberincidents involving ransomware are a common occurrence lately. Hardly a week goes by without hearing about an incident in the news. Some involve an organization paying a ransom to get access to files, and others involve enterprises deciding not to pay and dealing with sometimes costly and protracted recovery processes. Paying a ransom, as tempting as it might be to regain access to files, creates a societal negative externality.
Negative externality is a term used by economists to describe a condition in which a third party suffers a cost as a result of a transaction. One common example is a factory dumping toxic waste into a river: a third party, the people who live downstream, is harmed by the economic exchange between the factory owners and those who buy the goods the factory produces. A technology example, and the primary focus of my ISACA Journal article, titled “The Downstream Effects of Cyberextortion,” is the payment of ransomware demands. There are two parties in the transaction, the cybercriminal and the victim, and every time a victim pays a ransomware demand, cybercriminals are emboldened, enriched and encouraged. Paying the ransom creates more future victims, thereby creating a negative externality.
Common advice is often “Never pay!” This might be good guidance if one wishes to improve the overall computer security ecosystem, but is this good advice for the small community hospital that does not have good backups and where lives may be at stake? This is the question—and decision—that I analyze in the article. Thinking about this problem as a series of decisions helps frame the problem, identify risk and identify opportunities in which cybersecurity professionals can disrupt or influence the decision. If one is faced with this kind of problem, the decision flow can be broken down into these 3 high-level steps:
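Framed as a toy expected-cost comparison, the pay/refuse decision might be sketched as follows. This is a hedged illustration only; all figures, probabilities and function names are hypothetical assumptions, not data from the article:

```python
# Illustrative only: a toy expected-cost comparison for the pay/don't-pay
# ransomware decision. All figures below are hypothetical assumptions.

def expected_cost_pay(ransom, p_key_works, recovery_cost_if_key_fails):
    """Expected cost of paying: the ransom, plus the chance the
    decryption key never arrives or fails to work."""
    return ransom + (1 - p_key_works) * recovery_cost_if_key_fails

def expected_cost_refuse(restore_cost, p_backup_ok, rebuild_cost):
    """Expected cost of refusing: restore from backup if it works,
    otherwise rebuild from scratch."""
    return p_backup_ok * restore_cost + (1 - p_backup_ok) * rebuild_cost

# Hypothetical scenario: an organization with unreliable backups
pay = expected_cost_pay(ransom=50_000, p_key_works=0.7,
                        recovery_cost_if_key_fails=400_000)
refuse = expected_cost_refuse(restore_cost=80_000, p_backup_ok=0.5,
                              rebuild_cost=400_000)
print(f"Expected cost if we pay:    ${pay:,.0f}")
print(f"Expected cost if we refuse: ${refuse:,.0f}")
```

Note that this naive calculation can favor paying precisely because the negative externality (the cost to future victims) falls on third parties and never appears in the victim’s own equation, which is the crux of the article’s argument.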
I also briefly touch on nudge theory. Nudge theory has been explored in the field of behavioral economics and describes ways that actors can be nudged into good decisions without government interference, coercion, etc. I believe nudge theory can be very effective in helping solve the ransomware problem. Some possibilities are:
Let us continue the discussion in the comments section. Do you find this type of decision analysis useful? Can it help solve common cybersecurity problems? How would you nudge people to make better decisions?
Read Tony Martin-Vegue’s recent Journal article:
“The Downstream Effects of Cyberextortion,” ISACA Journal, volume 4, 2018
If you are a netizen, you have probably noticed how certain ads pop up while you are watching videos on YouTube. Most of the time, these advertisements are closely connected to the products and brands you have searched for recently. However, this is not always the case! Finding fake ads for reputed brands like Mercedes-Benz and Waitrose is not uncommon at all. According to reports from The Times of London, several reputed brands have found their advertisements among objectionable and explicit content.
Why should you care about online ad fraud?
If you are an advertiser, this should be a cause of concern for you. According to a recent study, over 20% of the clicks you are getting on your ads can be from bots and tricksters. Censoring the internet and running the entire web without advertisement is impossible. In short, good content and commendable user experience require sponsorship.
Sadly, advertisers are pouring money into digital ads, but they are not receiving the returns they expect. The advent of various smart devices may have expanded the scope of viewing content, but they have done little to ensure that the content is genuine.
According to the Association of National Advertisers, advertisers are wasting over $7 billion on online adverts people do not see, and experts expect digital ad spending to grow beyond $335.5 billion in the next two years. When companies are ready to spend billions on online advertisements, it is understandable why malicious activities are always around the corner, waiting.
We have seen the likes of Methbot, which cost the ad industry around $5 million per day. Its operators used bots to mimic human activity and created over 250,000 individual domains, new sites that resembled big fish like ESPN and Vogue.
Digital ad fraud is a serious concern for advertisers and users, too. While the fraudsters use bots to mimic human behavior, trace cursor movements, and hack social media accounts, they fake their geo-location data to avoid detection. As a result, along with regular display ads, the premium online video advertisements are also taking a hit. Digital fraudsters are messing up analytical data, upturning the KPIs and disrupting online campaigns of many of the more reputable brands in the world.
Blockchain as a potential solution to online fraud
Is there any current technology that can prevent pixel stuffing, ad stacking, search ad frauds and affiliate ad frauds? Experts say that it’s possible. They believe that advertisers can prevent similar frauds by turning to blockchain. We are not talking about cryptocurrencies, but the decentralized open-source ledgers.
A fusion of existing ad technology and blockchain can give advertisers the power to keep an eye on each impression and eliminate the fear of fraud. Leading advertising research firms like Interactive Advertising Bureau’s Tech Lab and Data & Marketing Associations already are working on creating a blockchain solution that can help advertisers detect and prevent fraudulent activities. However, the wide variety of online ad frauds make the task of developing a uniform system difficult.
Below are the major use cases of blockchain that can be implemented to prevent online ad frauds:
Ethereum-based ready solutions – Several startups and advertising research companies have been working on blockchain systems that can stop bots and impostors. Ethereum is the best-known blockchain right after Bitcoin. Instead of a central ad server, it offers a decentralized system to advertisers to monitor the activity of their partners. Google, Amazon, Twitter, YouTube, Facebook, and Snapchat have adopted similar history-proof, decentralized ledgers.
Blockchain counterattack – This mechanism adopted by the Ads.txt DApp allows publishers and content owners to list the authorized sellers of their inventory in a .txt file. This file is served from within the root path of their domain’s web server.
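For context, a conventional ads.txt file (the plain-text IAB format that such a mechanism builds on) simply lists one authorized seller per line, served from the domain root. The publisher domains and account IDs below are made-up placeholders:

```text
# ads.txt for example-publisher.com
# served at https://example-publisher.com/ads.txt
# Format: <ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <cert authority ID>]
examplessp.com, pub-1234567890, DIRECT, abcd1234efgh5678
exampleexchange.com, 98765, RESELLER
```

An ad buyer (or a blockchain-anchored variant of this mechanism) can then verify before bidding that the party offering the inventory actually appears in the publisher’s authorized-seller list.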
Blockchain-based exchange for traders – A combination of the financial matching engine and the latest blockchain technology allows advertisers to enable transparent transactions. It is a NASDAQ Inc. initiative that aims to provide advertisers and publishers a completely secure platform that supports buying, selling and re-trading advertising contracts.
In the digital era, online ads are an important channel for brands to use to reach out to their target audience. Ad fraud not only puts a hole in the pocket of the brands but also harms the end users, who need reliable information to make the right decisions. With the ability to impart transparency to the system and trace an online asset, blockchain can surely help reduce, if not completely stop, digital fraud.

Category: Risk Management Published: 8/29/2018 3:01 PM
Performance evaluation of an organization’s risk management system ensures that the risk management process remains continually relevant to the organization’s business strategies and objectives. Organizations should adopt a risk metrics program to formally carry out performance evaluation. An effective risk metrics program helps in setting risk management goals (also known as benchmarks), identifying weaknesses, determining trends to better utilize resources and determining progress against the benchmarks.
My recent ISACA Journal article is “Integrating KRIs and KPIs for Effective Technology Risk Management.”
The key steps in the risk management metrics program are:
Start the metrics program with a small number of metrics (e.g., six) and add new metrics progressively as the risk management and information security maturity of your organization improves.
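As a minimal sketch of what tracking a small set of risk metrics against benchmarks might look like in practice (the metric names, targets and readings below are hypothetical illustrations, not from the article):

```python
# Toy sketch: track a handful of risk metrics against benchmark targets
# and flag metrics that breach their target or are trending worse.
# All metric names and numbers are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class RiskMetric:
    name: str
    target: float                      # benchmark: stay at or below this
    readings: list = field(default_factory=list)  # most recent value last

    def current(self) -> float:
        return self.readings[-1]

    def breaches_target(self) -> bool:
        return self.current() > self.target

    def trending_worse(self) -> bool:
        # naive trend check: latest reading higher than the previous one
        return len(self.readings) >= 2 and self.readings[-1] > self.readings[-2]

metrics = [
    RiskMetric("% servers missing critical patches", 5.0, [9.0, 7.5, 6.0]),
    RiskMetric("Mean days to close high-risk findings", 30.0, [28.0, 33.0]),
]

for m in metrics:
    status = "BREACH" if m.breaches_target() else "ok"
    trend = "worsening" if m.trending_worse() else "improving/flat"
    print(f"{m.name}: {m.current()} (target {m.target}) -> {status}, {trend}")
```

Even a toy structure like this illustrates the article’s point: benchmarks make weaknesses visible, and trend data shows progress (or regression) against them over time.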
Rama Lingeswara Satyanarayana Tammineedi is currently working with Tata Consultancy Services.
Read Rama Lingeswara Satyanarayana Tammineedi’s recent Journal article:
“Integrating KRIs and KPIs for Effective Technology Risk Management,” ISACA Journal, volume 4, 2018.
After decades of presentations and prayers, security has finally become a business imperative for executives and boards alike. Business leaders are speaking publicly about championing security investments, as it’s important for shareholder value and future expectations. In fact, evidence-based security effectiveness measures are finding their way into annual reports (10-Ks), committee charters, and corporate governance documents.
Because of the spotlight that is on security, your business leaders are demanding security effectiveness evidence from you. This evidence is similar to the data-driven measurements and KPIs seen in other strategic business units such as shareholder return, client assets, financial performance, client satisfaction, and loss-absorbing resources.
Your leaders are making decisions predicated on these non-security measures every day to increase value for their shareholders, address stakeholder requirements, and mitigate business risks. Security is simply another variable in the business risk equation. In fact, your security program isn’t about security risk in and of itself, but rather, the financial, brand, and operational risk from security incidents.
One area where the need for security effectiveness evidence is profusely obvious is around rationalization. For example, many auditors no longer ask, “Do you have security tools in place to mitigate risk?” because the answer is always, “Yes, but we need more tools, training, and people anyhow.” Now auditors are asking for rationalization in terms of, “Can you prove, with quantitative measures, that our security tools are adding value? And can you supply proof regarding the necessity for future security investment?”
This evidence-based, rationalization methodology, often characterized as security instrumentation, aligns with the reality that your organization has finite resources to invest in security and that all investments need to be prioritized. Every dollar invested in security is a dollar not applied to other imperatives.
Measuring your security effectiveness: where you’ve been
The sad truth is that most security effectiveness measures are assumption-based instead of evidence-based. Because of a lack of ongoing security instrumentation, you assume your tools and configurations are doing what is needed and incident response capabilities are a well-choreographed integration of people, processes, and technologies. You know that assumption-based security is flawed. But historically, you haven’t had a way to empirically measure security effectiveness. You get some value from penetration testing, the endless march of scan-patch-scan, surveys, and return on security investment calculations, but these approaches don’t truly measure your security effectiveness. As a result, your business leaders are relying on incomplete and/or inaccurate data to make their decisions.
Where you need to be
You need to know if your security tools are working as intended. Once they are, you can optimize those tools to get the most value, rationalize, and prioritize where greater investment is required, and retire tools no longer needed. Then you can monitor for environmental drift so that when a tool is no longer working as needed, you are alerted to the drift and how to fix it. Finally, from a leadership perspective, your team can consider security effectiveness measures when calculating the business risks.
How to get there
Security instrumentation means safely testing your actual, production security tools: not scanning for vulnerabilities, not looking for unpatched systems and not launching exploits on target assets, but actually testing the efficacy of the security tools protecting your assets. This allows you to start measuring the security effectiveness of individual tools as well as security effectiveness overall. When gaps are discovered, you can use prescriptive instrumentation recommendations to address those gaps. Then you can apply configuration assurance to retest the security tools and validate that the prescriptive changes implemented resulted in the desired outcome. Once you have your security tools in a known good state, automated testing can continue validation in perpetuity, alerting you when there is environmental drift.
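A highly simplified sketch of this validate-then-monitor-for-drift loop follows. The control names and test stubs are invented placeholders; a real security instrumentation platform would safely exercise live production tools rather than call lambdas:

```python
# Toy sketch of continuous control validation and drift detection.
# Each "test" stands in for a safe, benign check that a security
# control behaves as expected; names and results are placeholders.

def run_control_tests(controls):
    """Return the set of control names that failed their effectiveness test."""
    return {name for name, test in controls.items() if not test()}

# Hypothetical control tests. In a real deployment these would, e.g.,
# send a benign simulated attack and confirm it is blocked and logged.
baseline_controls = {
    "firewall_blocks_outbound_c2": lambda: True,
    "edr_detects_credential_dump": lambda: True,
}

# Step 1: establish a known-good baseline state.
assert not run_control_tests(baseline_controls), "fix gaps before baselining"

# Step 2: later, retest on a schedule and alert on environmental drift.
drifted_controls = dict(baseline_controls,
                        edr_detects_credential_dump=lambda: False)
failures = run_control_tests(drifted_controls)
for name in failures:
    print(f"DRIFT ALERT: control '{name}' is no longer effective")
```

The design point is the two-phase loop: gaps are fixed until the suite passes (a known good state), after which the same suite reruns automatically so any regression surfaces as evidence rather than assumption.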
The end result of security instrumentation is security effectiveness that can be measured, managed, improved and communicated in an automated way. Your security teams are armed with evidence-based data that can be used to instrument security tools, prioritize future investments and retire redundant tools. This newfound ability to communicate security effectiveness and trends based on actual proof allows your decision-makers to incorporate security effectiveness measures when making business decisions.

Category: Security Published: 8/27/2018 2:52 PM
Emerging technologies and the pace of innovation are reshaping the banking/financial industry and operating models, while influencing the shape and dynamics of the broader financial services ecosystem.
Banks have adopted new technologies to varying degrees. Most banks use elements of cloud computing, a key technology that reduces the costs of rolling out and scaling the online and mobile banking capabilities that digital era consumers expect. Many institutions also are gradually implementing elements of big data and analytics as well as robotic process automation (RPA) to strengthen controls and reduce costs. Other technologies, such as distributed ledger technology and the Internet of Things (IoT), are only in the early stages of commercialization by banks.
Respondents to ISACA’s 2018 Digital Transformation Barometer identify financial/banking as the industry showing the most leadership in adopting emerging technologies. Banks are undergoing a fundamental transformation resulting from a range of technological innovations. Six technologies are currently most prominent in financial innovation: cloud computing, big data and analytics, artificial intelligence (AI)/machine learning, RPA, distributed ledger technology and the IoT. These technologies are at different stages of maturity, and some have the potential to significantly change the industry in the coming years.
The questions that pop up are: How rapidly is the pace of change accelerating for financial services industry firms, and how are leaders planning to navigate their firms into the future?
To answer these questions, it’s important to first consider that there are some regional and national differences in competitive market structure, regulatory environments, and the global scale of the industry that influence outcomes. Even though the larger G7 economies (Canada, France, Germany, Italy, Japan, the United States and the United Kingdom) are still dominant, in terms of size (assets) and number of transactions, other countries, especially from the large emerging markets, are catching up steadily as well. The growing, emerging economies have been able to more easily implement modern core technology platforms because of the relative absence of legacy investment and integration with 40-year-old systems often found in firms in the G7.
New technologies are allowing banks to re-examine their business and operating models, and determine which functions and capabilities should be retained internally vs. obtained externally. Banks are able to benefit from technological advances made by other organizations in several key areas (such as customer reporting, risk analytics as a service, blockchain) by entering into strategic partnerships with these entities.
Technological innovations also are enabling banks to virtualize more of their banking operations and shift non-critical functions (for example, managed treasury and cash services, white label call centers) to business partners — allowing firms to increase their focus on core services and improve efficiency, while maintaining robust oversight and controls.
We also need to understand that there is a growing customer expectation of what “great” service looks like that often is shaped by “single best user experiences.” The optionality, transparency and affordability of products and services offered by prominent digital era companies have set a new baseline for banking customers’ expectations of convenience, simplicity and customer engagement.
Further, machine learning and advanced analytics are enhancing risk monitoring, controls and risk mitigation across the banking industry. Banks are able to leverage expanded internal and market data and advanced analytics to better understand key customer and financial transaction related-risk factors.
The shift toward digital platforms allows banks to interact more closely with customers, and quickly design and deliver relevant services. Digitizing end-to-end business processes further enables banks to achieve scale and become more efficient, resilient and transparent. As a result, banks are better able to quickly respond to changing customer needs, market dynamics and regulatory expectations.
Maintaining an appropriate balance in regulating and supervising banks as they innovate is not a new challenge. Key examples of impactful, organic incorporation of technological innovations into banking include, among others, the advent of call centers and the shift from paper to electronic/digital books and records. Banks determine the precise design and use of each technological innovation based on customer needs, opportunities to enhance customer value, compliance with regulatory requirements and supervisory expectations, their business models, risk tolerances and other market factors. Banks rely on their first (business), second (risk management) and third (internal audit) lines of defense to maintain compliance. The banking industry’s long and successful track record of safely implementing technological innovations speaks to the effectiveness of its regulatory engagement model.
Policymakers and regulators continue to actively monitor developments within the banking sector, including those that are technology-related, so that emerging, potential risks are appropriately addressed.
To date, banks have safely implemented many beneficial technologies without adverse repercussions to institutions or the broader financial system. Nevertheless, implementing technological innovations, particularly emerging technologies, will always carry some element of risk, given the inherently experimental nature of innovation and of new activities and services.
Going forward, digital transformation has the potential to continue to significantly transform the financial services industry and benefit society. It can replace individual banks’ legacy systems, enhance processes, improve efficiencies and strengthen controls. Digital transformation also can provide opportunities for the creation of new products and services that benefit customers. Ultimately, technological innovations hold great promise for the identification of new customers and the provision of financial services to the unbanked or underbanked community in a safe and sound manner.
Category: Risk Management Published: 9/19/2018 7:30 AM
The second installment of ISACA’s Digital Transformation Barometer research underscores the ascent of artificial intelligence as a technology with growing potential – and how urgently enterprises must rise to the occasion of addressing the related risk and security implications.
In the 2018 Digital Transformation Barometer, global respondents rank AI/machine learning/cognitive technology as the second-most transformative technology for organizations, finishing just behind big data. While big data also was the top choice in the 2017 version of this annual research, the gap between big data and AI shrank from 18 points to 3, reflecting a growing realization that AI technology is on the verge of profoundly reshaping many aspects of society.
Already, AI and machine learning hold significant sway in our daily lives, ranging from the way our flights are piloted to matters of simple convenience, such as how photographs are tagged on Facebook. Larger impact is on the way. AI and machine learning are being explored to set medical breakthroughs in motion, improve farmers’ crop yields and help law enforcement identify missing people, among a wide range of promising applications on the horizon. As new uses continue to be developed and refined, there will be increased need for enterprises to safely and securely deploy AI. On this front, there is much work to be done.
Only 40 percent of Digital Transformation Barometer respondents express confidence that their organizations can accurately assess the security of systems based on AI and machine learning. That statistic is concerning enough today, but it will grow considerably more problematic in the near future if enterprises don’t invest in well-trained staff capable of putting the needed safeguards in place. As AI evolves – consider the likely proliferation of self-driving vehicles, or AI systems designed to reduce urban traffic – it will become imperative that enterprises can provide assurance that the AI will not take action that puts people in harm’s way.
Contending with malicious uses of AI will be one of the central challenges for our professional community, as a concerning report from a range of global researchers has highlighted. The Digital Transformation Barometer research shows that social engineering, data poisoning and political propaganda are among the malicious AI attacks that need to be accounted for in the short term, but even more concerning possibilities loom, such as the activation of autonomous weapons, driving home the urgency of bolstering AI security capabilities. In many cases, the solution to keeping AI in check will be tapping into AI technology that enables security innovations.
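To make the data-poisoning threat concrete, here is a minimal toy sketch, assuming nothing beyond the Python standard library. The nearest-centroid “model,” the sample values and the function names are all illustrative, not drawn from any real system: flipping the label on a single training point is enough to drag a class centroid across the decision boundary.

```python
# Toy illustration of a label-flipping data-poisoning attack.
# Model, data and names are hypothetical; real attacks target far
# larger training pipelines, but the mechanism is the same.
from statistics import mean

def train_centroids(samples):
    """Compute one centroid per class from (value, label) pairs."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: mean(values) for label, values in by_label.items()}

def classify(centroids, value):
    """Assign the label of the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]

# Poisoned copy: an attacker flips the label on one class-1 sample.
poisoned = list(clean)
poisoned[3] = (4.8, 0)  # 4.8 relabelled from class 1 to class 0

clean_model = train_centroids(clean)      # centroids: {0: 1.0, 1: 5.0}
bad_model = train_centroids(poisoned)     # centroids: {0: 1.95, 1: 5.1}

# The poisoned class-0 centroid is dragged toward class 1,
# so an input the clean model handles correctly is now misclassified.
print(classify(clean_model, 3.2))  # clean model: class 1
print(classify(bad_model, 3.2))    # poisoned model: class 0
```

The point of the sketch is that the attacker never touches the model itself, only its training data, which is why integrity controls over data pipelines matter as much as controls over the deployed system.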
Whether thinking about AI or other emerging technologies, practitioners should look for opportunities to expand their knowledge base and explore ways for their enterprises to leverage new technology to connect with customers in new and potentially more impactful ways. More than 4 in 5 respondents (83%) indicate their organizations have no plans to accept cryptocurrency in the future, while the majority of respondents (53%) consider public cloud to be high risk, reflecting mindsets more tethered to the status quo than embracing opportunities to fuel innovation. Not every new technology is the right fit for every organization, but enterprise leaders owe it to their stakeholders to ensure they are actively exploring promising technologies and determining how technology can be securely leveraged to drive the innovation needed to compete in today’s digital economy.
Change is difficult for organizations, which traditionally are structured with stability, rather than innovation, in mind. However, as technology plays an increasingly prominent role in our daily lives, customers increasingly are expecting dynamic, swift-to-market, technology-driven solutions. To be able to deliver, organizations must prioritize investing in the security capabilities needed to enable effective and responsible digital transformation.
Category: Risk Management Published: 9/19/2018 7:30 AM
Cloud security is on everyone’s minds these days. You can’t go a day without reading about an organization either planning its move to the cloud or actively deploying a cloud-based architecture. A great example is the latest news about the US Department of Defense and its ongoing move to the cloud.
The US government is leading the charge by encouraging the private sector to provide secure cloud service offerings that enable federal agencies to adopt the cloud-first policy (established by the Office of Management and Budget in 2016) using FedRAMP. FedRAMP is a US government-wide approach for security assessment, authorization and continuous monitoring for cloud products and services. It sets a high bar for compliance with standards that ensure effective risk management of cloud systems used by the federal government.
There is even some chatter now about efforts to establish FedRAMP in law, in an effort to encourage agencies to adopt the cloud at a more rapid pace. The delay in adoption is in no small measure related to the complexity and intensive resource requirements of the current FedRAMP processes, and to the difficulty of finding providers that are FedRAMP-certified.
One of the main obstacles to the wider adoption of FedRAMP is the difficulty for industry, including Third Party Assessment Organizations (3PAOs) and Cloud Service Providers (CSPs), in determining the profitability model for engaging in the FedRAMP program.
Establishing such metrics can offer key drivers for industry adoption, perhaps by allowing CSPs to determine whether offering FedRAMP-accredited IaaS/SaaS/PaaS is truly beneficial and profitable for the company’s bottom line, while allowing agencies to determine the cost-effectiveness of a move to the cloud.
While achieving FedRAMP accreditation has many challenges (as TalaTek learned over the past 18 months during deployment of its own cloud-based solution), there are clear benefits for federal agencies and industry in working with FedRAMP-authorized service providers. At a high level, these include established trust in the effectiveness of implemented controls and improved data protection measures.
Despite the many challenges to adoption, I am a big believer that the benefits of the FedRAMP program outweigh its challenges, especially in the long run, after the kinks are ironed out and the program’s maturity improves through increased adoption by both government and private industry.
The FedRAMP program provides significant value by increasing protection of data in a consistent and efficient manner – a key need among government organizations and especially among information sharing agencies – by providing these key benefits:
By providing a unified, government-wide framework for managing risk, FedRAMP eliminates the duplication of effort and inefficiency associated with existing federal assessment and authorization processes.
When considering a move to the cloud and the level of security that is necessary, we should all take risk management seriously and invest in skill development and knowledge, as well as in adapting the processes for the 21st century and getting ready for the reality of the dominance of the cloud in our near future. FedRAMP provides the roadmap for any organization to achieve these goals.
Category: Cloud Computing Published: 8/24/2018 2:43 PM
Editor’s note: James Lyne, a cybersecurity expert and global head of security research at Sophos, will deliver the opening keynote address at the 2018 CSX Europe conference, to take place 29-31 October in London, UK. Lyne visited with ISACA Now to discuss major challenges faced by the cybersecurity industry as well as which characteristics best position cybersecurity practitioners for success. The following is a transcript of the interview, edited for length and clarity.
ISACA Now: You describe yourself as a “massive geek.” What are some characteristics that earn you that esteemed distinction?
I’ve always loved technology and breaking things apart to understand how they work. From a young age, I was dangerous with a soldering iron and enjoyed meddling, or building neat devices, like my first FM bug transmitter at 13! I take great joy in my geeky pursuits, from programming to malware reversing and gaming. Geek culture is fantastic fun!
ISACA Now: What has surprised you most about the way the threat landscape has evolved over the past year or two?
Some trends continue for a very long time, and certain tactics are undeniably a staple of cybercriminals – for example, the old but eminently practical use of phishing. I think the most interesting shift over the last couple of years is cybercriminals’ diversification in how they monetize you. Stealing data for fraud is not news, but the transition to ransomware, which allows cybercriminals to profit from the fact that you care about your data, was quite a change – even more so when they started using coin-mining malware to leverage your hardware to make money. That was an interesting shift from simply stealing data, and one that is brilliant in the ubiquity of its application.
ISACA Now: What is it going to take to attract more people to the cybersecurity profession?
One of the biggest issues here has been how under-advertised the profession has been. I’ve talked to countless winners of competitions and games that were organized to find talent and nearly every one of them said “I didn’t know I would be any good” or “You can do this for a job?” Even now, we do an OK job of advertising certain parts of the profession, like penetration testing, but fail to show how impactful and exciting other domains are. The other obstacle to overcome is how people first transition to industry. There are a vast number of roles that require five years’ experience and a tiny number that allow someone to start at an entry level to get said experience.
ISACA Now: What are the most important traits that would position somebody for success as a cybersecurity practitioner?
Most cybersecurity disciplines require extensive learning, and so a passion and drive to learn more about the topic is critical. Many security disciplines also require problem-solving and plenty of persistence. The drive to want to understand how a specific piece of code works and how to attack it, or to produce a tool that solves a new problem, is somewhat crucial. As much as this is a job that pays the bills, I do find that many of the more successful practitioners truly care about making things more secure and improving technology resilience and safety for all of us.
ISACA Now: How do you envision AI having the biggest impact on cybersecurity practitioners in the next 3-5 years?
Many are quick to anthropomorphise AI and assume much more significant ramifications than are likely in the short term. Even still, there are interesting short-term ramifications to security due to AI. As machine learning and AI are deployed and enhanced to solve business problems, they are potentially notable targets for attackers. For example, their data sets could be poisoned to change outcomes. At the moment these systems need to be protected just like conventional systems, but as they develop, perhaps new security issues and controls will be required. There is a lot of uncertainty in this fast-moving space, and both opportunity and looming threat.
More positively, there already are interesting applications of machine learning and AI occurring within the security domain, such as analyzing substantial data sets and identifying opportunities for better threat prevention. The cybersecurity industry has leveraged expert systems for a long time to extend the capabilities of human researchers and, as this technology develops, it will likely have even greater impact.
Category: Security Published: 8/22/2018 2:53 PM