ISACA

Five Common Privacy Problems in an Era of Smart Devices

ISACA Now Blog - 27 January 2020 08:33:54
Body:

I gave an Internet of Things (IoT) security and privacy keynote half a dozen times throughout the world last year, along with as many executive presentations. These presentations described the lack of security and privacy engineering within the devices themselves, along with related contributing factors. Throughout the recent holiday season, news broadcasts and publications warned about new IoT breaches, often resulting from insufficient data security controls engineered into the devices, from hacking of the data transmitted by smart devices, and from misuse of access to the data those devices hold. Several news reports throughout the past year also warned of IoT device vulnerabilities being exploited by nation-state hackers, along with plenty of activity from cyber criminals.

As we mark Data Privacy Day today, it is worth taking a long, hard look at some common information security and privacy risks that exist within and related to IoT devices that have allowed privacy breaches and data security incidents to occur. Here are five common problem areas for IoT security and privacy:

1. Most smart devices do not have security or privacy controls built in to protect sensitive data transmissions. The comparatively few that do have controls typically do not ship with them enabled, and as a result those using them do not set security controls, mistakenly believing that because the devices were advertised as having security built in, that security was turned on by default. The users then unwittingly leave themselves wide open to unauthorized access.

Of the hundreds of IoT device and app developers I’ve spoken with and performed assessments for in the past several years, I have not found a single smart device creator that had all of the following security and privacy features built into its device and enabled by default:

  1. Strong encryption for data in storage and in transit
  2. Multi-factor authentication
  3. Activity logging
  4. Device management user interfaces

2. Device vendors and manufacturers are using and sharing your data collected through their devices and apps. Data is widely shared not only throughout the vendor business units, but also with downstream third parties, many of which the device users would be surprised to know about. A few examples include cloud sites for other smart devices, government agencies, insurance companies, law enforcement, data aggregators, data banks, social media sites and others. Once data leaves the device, the device user has basically lost all control over how that data will be used and shared.

3. Most smart devices have listening turned on by default. They have to listen to be able to “hear” the trigger words that prompt them to interact. Some devices, such as smart speakers, have been found to be not only listening all the time but also keeping recordings of everything said within earshot. This is despite vendor claims that the devices record the associated nearby conversations, and store them in the vendor’s cloud, only after the trigger word is spoken. We also know that vendors have large teams of humans whose job is to listen to the types of conversations taking place.

4. Devices are accessible through online connections. A large number of popular IoT devices, including many that are purchased to improve physical security, actually have no authentication or encryption, and can be easily found through tools such as Shodan, allowing potential attackers to establish a direct connection to these devices while bypassing any firewall restrictions. Many devices also have vulnerabilities that allow for unauthorized peeking by cyberstalkers.

5. Smart device builders and sellers have horrible privacy notices that are vague and usually tell you that you do not have rights to control your own data. I’ve reviewed dozens of privacy notices on smart device sites. Some are getting better now that GDPR and CCPA are in effect. However, in those instances, the site often indicates the protections apply only to California and EU residents. A couple of examples:

a. As the privacy notice reads, only California residents have the right to access their personal data if they use a Philips Hue smart lightbulb.
b. The Ecobee Smart Thermostat also gives such personal data access only to California residents.

If you are from some other US state, like Iowa, where I live, then based on how their privacy notices are written, it looks like you’re out of luck if you want to see the personal data they have about you, or to exercise any of the other rights they give to California residents. The same goes for those outside of the US. Well then, I won’t be buying any Philips smart lightbulbs or Ecobee smart thermostats under their current privacy notices. But how many others will follow suit? As long as the makers of smart devices, and the providers of apps used with smart devices, are not penalized for having substandard privacy notices, they will continue this privacy-poor practice.

It is time to take action to get these risks mitigated to acceptably low levels, and also to meet the many existing and emerging legal requirements for privacy and data security controls.

Speaking of privacy practices …

As I was writing this article, I received an email from Fitbit (I’ve never subscribed to their messages, and I have never owned a Fitbit). It contained images showing aggregate activity statistics for Fitbit users.

As I looked at these stats, I wondered many things, including:

  • Can all those steps be broken down and attributed to specific individuals?
  • Can all the locations for the Fitbit users’ activities be tracked for each individual?
  • Can the specific times of activities be associated with each individual?
  • Can all this information be shared, without the knowledge of the individuals, with others, such as law enforcement, insurance companies, employers, and others?

I already knew the answer to all these questions was yes. Of course.

I would love to see a research company, or maybe even a university or an association such as ISACA, track and document, within some type of directory, the smart devices that have:

  1. Independent validation that they have privacy and security design and data handling practices in place, and
  2. Privacy policies that not only are easy to understand, but also reflect the organization’s actual practices, and meet all legal compliance requirements.

Is it too much to ask smart device businesses to build security and privacy controls into their devices, and to give consumers accurate information about their privacy practices within posted privacy notices? It seems it must currently be too much to ask, because during my admittedly brief (approximately four hours) search online I could not find a single smart device privacy notice that fit these reasonable privacy ideals.

My hope for 2020: to find at least 10 smart devices, from 10 different device building businesses, that address all the previously outlined privacy protections and practices. The time is long overdue for these billions of IoT devices with privacy and security vulnerabilities to be fixed.

Category: Privacy Published: 1/28/2020 11:03 AM

CCPA’s Do Not Sell: It’s Here, But What Does It Mean?

ISACA Now Blog - 22 January 2020 08:18:08
Body:

So, the California Consumer Privacy Act (CCPA) went into effect – and the world didn’t burn. Companies have many issues to contend with, but one in particular has presented challenges to businesses that sell personal information. "Do not sell my personal information" requests (or opt-out requests), and confusion around what these really are, have many business leaders scratching their heads.

What is the CCPA Do Not Sell Requirement?
The CCPA provides several rights to California residents, including the right to opt-out of the sale of personal information. Specifically, California residents have the right to direct businesses to stop selling their personal information.

Businesses that sell personal information and do not qualify for an exemption for the opt-out right must take several different actions to comply with the CCPA.

More specific instructions are as follows:

1. A business must provide notice to consumers that it sells consumers’ personal information to third parties and that consumers have the right to opt-out of such sales.

2. The business’s website must post a “do not sell my personal information” link that takes consumers to a web page where they can exercise the right to opt-out of the sale of their personal information.

3. The business must provide this link on its homepage and any page that collects personal information, or on its application’s platform or download page.

4. Users must be able to submit opt-out requests without having to create an account.

5. The business must inform consumers of their right to opt-out and provide the “do not sell” link in its online privacy policy or any other California-specific description of rights.

6. The business must respect the consumer’s decision for at least 12 months. After this time, the business can ask the consumer to authorize the sale of personal information.

7. The business must train individuals responsible for handling customer rights inquiries and processing consumer rights requests.

Like many rules with the CCPA, this individual rule may seem easy to comprehend, but it poses a lot of challenges for businesses and consumers alike. These challenges include knowing exactly what personal information your business collects and sells, knowing what information belongs to which consumer, navigating and targeting information that lives in decentralized systems, and having a system in place to process opt-out requests.

Does My Business Need to Comply with CCPA Do Not Sell?
Not every business is impacted by the CCPA, but any business that collects and sells the personal information of California residents (including those without a physical presence in the state) needs to have a process to comply with the “do not sell my personal information right.”

If your business generates over US$25 million in annual revenue, collects the personal information of more than 50,000 California residents a year, or derives 50% or more of its annual revenue from selling the personal information of California residents, then the CCPA will impact your business.
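As a quick illustration of how these thresholds combine, here is a minimal Python sketch; the function and parameter names are mine, and this is a simplification of the statute, not legal advice:

```python
def ccpa_applies(annual_revenue_usd: float,
                 ca_consumer_records_per_year: int,
                 revenue_share_from_selling_pi: float) -> bool:
    """Rough sketch of the applicability thresholds described above.

    Meeting any one of the three conditions is enough to bring a business in scope.
    """
    return (annual_revenue_usd > 25_000_000
            or ca_consumer_records_per_year > 50_000
            or revenue_share_from_selling_pi >= 0.50)

# Example: US$10M revenue, 60,000 California consumer records, 5% of revenue from selling PI.
print(ccpa_applies(10_000_000, 60_000, 0.05))   # True -- the record count alone triggers the CCPA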

What Does “Sell” Mean?
According to the CCPA, selling is: “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by the business to another business or a third party for monetary or other valuable consideration.”

Because the CCPA does not clearly define “valuable consideration,” this leaves some gray area for businesses to interpret.

How Can Your Business Comply with the CCPA “Do Not Sell” Rule?
New and evolving digital marketing properties and practices pose unique compliance challenges to businesses with respect to the “do not sell” requirements. In particular, businesses need to do the following:

  • Determine exactly what personal information they are collecting about each of their consumers and whether they are sharing or selling that personal information, or a part thereof, to third parties.
  • Clearly notify consumers of their right to direct businesses to stop selling their personal information and inform them how to do so.
  • Provide ways for consumers to direct businesses to not sell their personal information, including posting a “Do Not Sell My Personal Information” link on their websites. For example, the proposed CCPA regulations issued by the California Attorney General (AG) require, at a minimum, an interactive webform for submitting requests. Other acceptable methods include, among others, an email address and a toll-free phone number.
  • Establish procedures for responding to and fulfilling opt-out requests, as well as training personnel who handle such requests. For instance, businesses may consider automating the opt-out request process (see the sketch after this list).
  • Maintain records of opt-out processes and details on the fulfillment or rejection of opt-out requests to demonstrate CCPA compliance and accountability.
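As a rough illustration of what automating opt-out handling and record keeping might involve, here is a minimal Python sketch; the class and field names are hypothetical, and a real implementation would have to integrate with every system that actually shares or sells personal information:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class OptOutRequest:
    consumer_id: str        # assumed internal identifier for the consumer
    received_at: datetime
    channel: str            # "webform", "email", "toll-free", ...

class DoNotSellRegistry:
    """Toy suppression list plus audit trail for 'do not sell' requests."""

    def __init__(self) -> None:
        self.requests: list[OptOutRequest] = []    # retained to demonstrate accountability
        self.suppressed: dict[str, datetime] = {}  # consumer_id -> date of opt-out

    def record(self, request: OptOutRequest) -> None:
        self.requests.append(request)
        self.suppressed[request.consumer_id] = request.received_at

    def may_sell(self, consumer_id: str) -> bool:
        # Downstream sharing/selling pipelines should check this before every transfer.
        return consumer_id not in self.suppressed

    def may_ask_to_reauthorize(self, consumer_id: str, now: datetime) -> bool:
        # The opt-out must be respected for at least 12 months before asking again.
        opted_out = self.suppressed.get(consumer_id)
        return opted_out is not None and now - opted_out >= timedelta(days=365)
```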

What If I Need to Sell Personal Information?
If you’re a publisher or run an ad-supported blog, this section of the law applies to you. If you need to sell personal information, make sure you are perfectly clear about what information you sell and why you sell it. Being more transparent about your selling practices may lead to fewer consumers exercising their opt-out rights.

Author’s note: For more CCPA resources from OneTrust, visit www.onetrust.com/ccpa-compliance.

Category: Privacy Published: 1/23/2020 11:31 AM

Complacency Presents a Glaring Career Risk

ISACA Now Blog - 16 January 2020 01:47:51
Body:

Editor’s note: Alison Levine, First American Women's Everest Expedition Team Captain and a New York Times bestselling author of “On the Edge,” will be the opening keynote speaker at ISACA’s 2020 North America CACS conference, to take place 12-14 May in Baltimore, Maryland, USA. Levine draws upon her background in extreme adventuring to convey insights on leadership and overcoming difficult challenges. She recently visited with ISACA Now to provide her perspective on navigating fear and professional challenges. The following is a transcript, edited for length and clarity:

ISACA Now: What was the toughest aspect of the American Women’s Everest Expedition?
It’s hard to narrow it down to just one thing. There are so many things that were really tough: the weather, the effects of extreme altitude, the homesickness, the doubt that kept crawling back into my mind that said, “Don't be silly – you can’t do this!” And I had to just keep saying, “SHUT UP. Yes I can!”

ISACA Now: What are the main life lessons that stick with you from your extreme adventuring background?
You don’t have to be the fastest or the strongest to get to the top of a mountain; you just have to be absolutely relentless about putting one foot in front of the other.

ISACA Now: The title of your keynote is “Fear is OK, But Complacency Will Kill You.” What is the fine line for when fear is constructive as opposed to paralyzing?
Fear is a normal, human emotion, and when you feel fear, it’s a good thing because it means you are alert, aware, and you are processing what is going on around you. Fear is only dangerous when it paralyzes you. COMPLACENCY is what puts you at risk. If you do not move, you are not going to survive.

ISACA Now: How is the fast-evolving technology landscape changing what it means to be a leader today?
Leaders today need to be able to pivot and change direction quickly since the business landscape is constantly shifting and changing. You have to be able to take action based on the situation at the time, and not based on a plan that you came up with at some point in the past. Plans are outdated as soon as they are finished. 

ISACA Now: What do you consider to be some overlooked characteristics of excellent leaders?

Often people equate quietness with weakness or apathy, when in reality, the ability to stay quiet and still is actually a display of strength.

Category: ISACA Published: 1/16/2020 12:26 PM

Another Buzzword Demystified: Zero-Trust Architecture

ISACA Now Blog - 14 January 2020 09:04:54
Body:

I recently attended a security conference with multiple speakers covering a wide variety of topics. One of the topics, “Zero-Trust Architecture” (ZTA), was being addressed by one of the vendors, and I decided to sit in and listen. A few minutes into the session, two facts became glaringly apparent: the speaker, who shall remain nameless, 1) did not actually understand what Zero-Trust Architecture is or what it means to implement Zero Trust, and 2) was delivering a sales pitch disguised as an educational seminar.

Unfortunately, presentations on this and other topics are often heavy on buzzwords that don’t actually contribute value or advance understanding. As the aforementioned session came to a close, it transitioned into the Q&A portion – which happened to be the moment I lost a little more hope for our fellow cybersecurity aficionados after hearing some of the questions asked. Below are just a few of them:

  • With ZTA being a new technology, what do you believe the adoption rate will be?
  • What products do we need to buy to fully implement ZTA?
  • Where can I buy ZTA?

After walking out of the session and regaining consciousness, I decided to take a little time out of my day to bring awareness to Zero-Trust Architecture and demystify what it means. First and foremost, ZTA is NOT a new technology. As illustrated by Palo Alto’s Cyberpedia article, achieving Zero Trust is often perceived as costly and complex. However, Zero Trust is built upon your existing architecture and does not require you to rip and replace existing technology. There are no Zero Trust products. There are products that work well in Zero Trust environments and those that don't.

Zero Trust is the term for an evolving set of network security paradigms that move defenses away from wide network perimeters and toward a narrow focus on individual resources or small groups of resources. A ZTA strategy is one in which no implicit trust is granted to systems based on their physical or network location (i.e., local area networks vs. the internet). In layman’s terms, the basic principles of zero trust are the following (a minimal policy-evaluation sketch appears after the list):

  • Assume the network is always hostile
  • External AND internal threats are always present
  • Internal network location is never sufficient grounds for trust
  • Every device, user, and network flow MUST be proven
  • You must log and inspect ALL traffic
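To make the contrast with perimeter thinking concrete, here is a minimal, hypothetical Python sketch of a zero-trust policy decision; the attribute names are illustrative, and a real policy engine would evaluate far richer signals (identity, device posture, behavior and so on):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Hypothetical request attributes a zero-trust policy engine might evaluate."""
    user_authenticated: bool      # e.g., MFA completed
    device_attested: bool         # device identity and posture verified
    flow_encrypted: bool          # traffic encrypted end to end
    source_network: str           # "internal" or "external" -- deliberately ignored below

def allow(request: AccessRequest) -> bool:
    """Never trust, always verify: the decision never depends on network location."""
    decision = (
        request.user_authenticated
        and request.device_attested
        and request.flow_encrypted
    )
    log_access_decision(request, decision)   # log and inspect all traffic
    return decision

def log_access_decision(request: AccessRequest, decision: bool) -> None:
    # Placeholder logger; a real deployment would ship this to a SIEM.
    print(f"{request!r} -> {'ALLOW' if decision else 'DENY'}")
```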

These principles stand in stark contrast to the perimeter-based security that most organizations currently implement, which rests on the following basic assumptions:

  • Internal access is trusted
  • External access is untrusted

The major shortcomings of perimeter-based security are that:

  • Inside access is not always friendly
  • Modern attacks are inside-out, rather than outside-in
  • Trusted systems bring attackers in
  • Internal access is more loosely regulated

Most organizations go a step further and implement logical segmentation, such as separating different organizational components within their own subnets, implementing a demilitarized zone (DMZ), Web Application Firewall (WAF) and more. However, this approach is starting to show its age as the foundation of perimeter-based security primarily follows “trust and verify,” which is fundamentally different from ZTA’s paradigm shift of “verify, and then trust.”

Another fundamental concept that pairs well with ZTA is Trust Over Time (TOT), which essentially boils down to the notion that the risk to systems and assets increases over time as they deviate from their baseline, so the trust placed in them needs to be refreshed. To reduce this operational risk, activities such as rotating credentials and replacing certificates limit the window for compromise and reuse.
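A minimal sketch of that idea, assuming an illustrative 90-day rotation window (the figure is mine, not a standard), might look like this:

```python
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=90)   # illustrative rotation window, not a mandated value

def needs_refresh(issued_at: datetime, now: datetime) -> bool:
    """Trust Over Time: the longer a credential or certificate has existed,
    the less it should be trusted and the sooner it should be rotated."""
    return now - issued_at > MAX_CREDENTIAL_AGE

issued = datetime(2019, 9, 1, tzinfo=timezone.utc)
print(needs_refresh(issued, datetime(2020, 1, 14, tzinfo=timezone.utc)))   # True -- rotate it
```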

ZTA is essentially asking us to authenticate and encrypt all traffic – end-to-end. Everywhere and anywhere. For ZTA to be implemented properly, encryption cannot simply be perimeter-based. Encryption is required at the device or application level, and endpoints should be configured to drop anything that is not encrypted. This is quite a tall order and has the potential to interrupt, or completely break, an operational process or technical mechanism, depending on the implementation and environment. Justin Henderson from the SANS Institute goes into further detail in his SEC 530 webcast seminar and provides examples of leveraging your current technology stack to implement ZTA.
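As one small example of what “drop anything not encrypted” can look like in practice, here is a hedged Python sketch of an endpoint that accepts only mutually authenticated TLS connections; the certificate paths and port are placeholders, not a recommendation for any particular product:

```python
import socket
import ssl

# Minimal sketch (paths and port are placeholders): an endpoint that accepts only
# mutually authenticated, encrypted connections and drops everything else.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.verify_mode = ssl.CERT_REQUIRED              # every client must present a certificate
context.load_cert_chain("server.crt", "server.key")
context.load_verify_locations("clients-ca.pem")

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            try:
                conn, addr = tls_listener.accept()   # TLS handshake happens here
            except ssl.SSLError:
                continue                             # unencrypted or unauthenticated: dropped
            print("verified peer:", conn.getpeercert().get("subject"), "from", addr)
            conn.close()
```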

In summary, achieving Zero Trust does not require adopting any new technologies. It is simply a new approach to cybersecurity – “never trust, always verify,” or eliminating any and all implicit trust – as opposed to the more common perimeter-based approach, which assumes that user identities have not been compromised and that all human actors are responsible and can be trusted. The concept of trusting anything internal to our networks is fundamentally flawed, as evidenced by all the data breaches in the news, most of which are caused by misuse of privileged credentials.

Category: Security Published: 1/14/2020 3:47 PM

Using AI as a Defensive Tool

Journal Author Blog Posts - 13 January 2020 23:42:47
Body:

In a previous Journal article, I wrote about artificial intelligence (AI) and talked about the massive amount of digital data that are being accumulated, how new digitally oriented technology is affecting us, the sources of online data (e.g., personal, private), how data are used and how a career in AI can be useful to those interested in developing the skills to use AI.

In my most recent Journal article, I look at AI from an information security and privacy perspective. The article outlines AI concerns, threats and risk factors as a way of understanding AI as a cyberthreat. Once we have an understanding of the threat, we discuss ways to protect the cyberdata (and personal privacy). Preventive measures, protective controls, and detective practices and tools are presented to help understand how to manage the threat by using AI and other countermeasures.

The intent of this article is twofold. The first aim is to enlighten the security and privacy community that we need to use AI for good; the second is to provide ideas and advice to those who have the means to use it. Without the speed of AI as a monitoring and protective tool, computing devices on the internet (both directly connected and wireless) become targets of exploitation by extremely fast, self-directed bots and botnets.

AI can be programmed as a defensive tool that responds in real time to find and prevent ransomware, find malware and stop cyberattacks, trace the originating location of cyberthreats (and their software), compile evidence of criminal intent (and implementation) for use by courts of law, and more. We need a tool that can stop malware from infecting the computing devices connected to the internet and, in turn, deprive criminals of a source of income.

I encourage you to read the article and share any insights, knowledge and ideas you may have on using AI to combat cyberthreats to security and privacy, and malicious and criminal activities.

Read Larry G. Wlosinski’s recent Journal article:
"Understanding and Managing the Artificial Intelligence Threat," ISACA Journal, volume 1, 2020.

Published: 1/13/2020 11:30 AM BlogAuthor: Larry G. Wlosinski, CISA, CISM, CRISC, CAP, CBCP, CCSP, CDP, CIPM, CISSP, ITIL v3, PMP PostMonth: 1 PostYear: 2020

Storing for the Future: How Data Centers Will Advance in 2020

ISACA Now Blog - 10 January 2020 04:50:23
Body:

The idea that data is an incredibly valuable resource in the modern business landscape isn’t new—but best practices for managing that data seem to change almost by the year. More than ever, enterprises leverage data centers to do their work, and savvy executives will be looking ahead in 2020 and beyond to learn how data can be managed more effectively.

Let’s consider three key questions here.

How will the advancement of AI improve the efficiency of data center technology?
Increasingly, artificial intelligence is being “baked in” to products from the get-go. A popular example of this concept would be IoT appliances—think a refrigerator that’s able to identify the items on its shelves, automatically facilitate restock orders and report on its own functioning and maintenance needs. Data center hardware can similarly benefit from AI:

  • Collecting Operational Data: IoT-empowered data centers keep track of their own systems on a more granular level, making it easy to compare actual performance with expected baselines. Data points might include temperature, battery functioning, data retrieval times and power usage.
  • Descriptive Analytics: Purpose-built analytics suites convert reams of data into useful insights—for customers and manufacturers alike.
  • Optimizing Efficiencies: AI can automatically regulate resource usage to save energy during low-usage periods, and take action when higher usage threatens to cause costly downtime.
  • Factoring in Context: There’s also the outside world to consider. By factoring in important contextual data, such as weather (which impacts cooling in each facility) and holidays (think Cyber Monday usage spikes), AI can tailor its functioning to adapt on the fly.
  • Detecting Malfunctions: Identify significant anomalies and take action to solve issues—before they become critical (a simple anomaly-check sketch follows this list).
  • Anticipating Equipment Failure: AIs can project when components are likely to fail or fall below an acceptable level of efficiency. Having a clear understanding of a given piece of equipment’s natural lifecycle means having plans in place to keep things humming along.
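To make “comparing actual performance with expected baselines” concrete, here is a toy Python sketch that flags anomalous telemetry readings; real data center AI uses far richer models, and the numbers below are invented:

```python
from statistics import mean, stdev

def anomalous_readings(readings: list[float], threshold: float = 2.0) -> list[int]:
    """Toy anomaly check: flag telemetry points more than `threshold` standard
    deviations from the baseline of the sample. Purely illustrative."""
    baseline, spread = mean(readings), stdev(readings)
    if spread == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - baseline) / spread > threshold]

# Example: hourly intake temperatures (degrees C); the spike at index 5 is flagged.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 35.4, 21.3, 21.1]
print(anomalous_readings(temps))   # -> [5]
```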

In 2016, Google famously reduced the energy used to cool its data centers by a whopping 40% when it allowed its DeepMind AI to optimize their operation. Now the company is even manufacturing its own custom chipsets to squeeze out greater efficiencies and reduce the overall number of data centers the company relies on for highly resource-intensive functions like speech recognition.

How will data centers be affected by 5G wireless becoming the eventual standard?
By the end of 2020, 5G will be well on its way to becoming the new standard. This has big implications for applications like driverless vehicles, which are too data-intensive to function properly with current-generation 4G connectivity. It is believed 5G wireless will be able to support speeds of up to 10 Gb/s, roughly 100 times faster than 4G—which is expected to cause a permanent spike in data usage.

You may recall talk in 2016 about the internet entering the so-called Zettabyte Era when global IP traffic first exceeded one zettabyte. According to Cisco Systems, which coined the original term, 5G will bring about the Mobile Zettabyte Era. Considering that the internet already consumes an estimated 10% of the world’s electricity each year, this has massive implications for data centers.

On the one hand, the anticipated increased demand on data center hardware is already helping to spark a construction gold rush (see the next question for more on that)—a development that will benefit companies that can afford to build at hyper-scale for interconnectivity.

On the other hand, 5G offers so many potential benefits to enterprises (such as improved power efficiency; dynamic resource allocation; and massively improved support for IoT applications), that overall business for data centers should be thriving.

Will the surge in cloud data center construction make the idea of an on-premise data center obsolete for enterprises?
Data center construction is big business, with cloud companies spending over US$150 billion on new construction in the first half of 2019 alone. Does this spell doom for the on-premise server farm?

Gartner Research VP David Cappuccio certainly thinks so. In a blog post called “The Data Center is Dead,” the veteran infrastructure researcher asserts his belief that by 2025 no less than 80% of enterprises will have shut down their on-premise data centers. The crux of his argument is that most of the advantages of traditional data centers have evaporated thanks to technological advancements—notably faster data transfer and the greater operational efficiencies at hyper-scale that mammoth server farms enable.

The real tipping point, though, is at the edge.

Edge data centers are located close to customers’ physical locations, reducing latency. This improves service for more intensive needs like gaming, streaming and cloud computing. Having local nodes allows larger distributed cloud networks to also offer consistent enterprise-quality performance, even outside of high-tier regions like New York and San Francisco.

Altogether, most of the key advantages of on-premise data centers have been obviated, and those that remain have been relegated to niche functions. Today’s IT decision-makers now look for solutions based on their general business needs, such as the specific requirements for data centers in healthcare, as an example, rather than trying to force the solution to fit into their existing data architecture.

This agility helps enterprises more easily hunt for efficiencies, which will remain the hallmark of a successful company in 2020.

Category: Cloud Computing Published: 1/10/2020 11:31 AM

In the New Year, Don’t Fall Back Into the Same Bad Cybersecurity Habits

ISACA Now Blog - 7 January 2020 02:26:08
Body:

Around this time each year, many people aim to follow through on their New Year’s resolutions in the hope of finally breaking that bad habit, which can prove trickier than we would like. Unfortunately, the same often holds true in our approach to cybersecurity: despite repeated reminders, time and time again we fall back into old habits. However, the new year seems like the perfect time to try to convince you that those bad cybersecurity habits might not be so hard to break after all. Below are several patterns to break that can make a big difference.

Using Weak Passwords
123456, iloveyou and qwerty continued to be used as passwords in 2019 and, no surprise here, they continued to show up in data breaches. Consider using a password manager to make it easier to remember those really long, complex passwords you are going to be coming up with as part of your resolution. In addition, start enabling two-factor authentication as much as possible – yes, even for that random app you decided to try “just once.” If you already do this personally, encourage your company to start implementing new policies or revamping those old policies to match updated password recommendations.
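If you want a feel for the kind of password a manager would generate and remember for you, here is a tiny Python sketch; the length and character set are reasonable defaults I chose, not an official recommendation:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a long, random password of the kind a password manager would store."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```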

Insufficient Vigilance with Phishing Emails
Fake attachments were on the rise in 2019 because many email filters only scan the body of an email for phishing links, while social media networks and Office 365 became larger targets for phishing emails because of the amount and value of the information they contain. To start off 2020, promote awareness of phishing email red flags with a fun graphic or create a regular test schedule for email phishing campaigns. For your personal benefit, take a free phishing IQ test to make sure you stay on top of your game.

Accessing Free or Public Wi-Fi
We continue to use free and public Wi-Fi because, well, it’s convenient. We use it on our phones to check social media, and employees continue to use it on their laptops to access work on the go. One of these next tips might just be the easiest New Year’s resolution you’ve ever made: turn off AirDrop and file sharing, log out of sites when you leave them, and change your device settings to not automatically connect to available Wi-Fi networks. For those who may need to access confidential information, make sure you use a VPN and install updates for apps and the operating system as soon as possible.

The best thing you can do to ring in 2020 is to continue educating your company and the people around you about cybersecurity best practices. Human error continues to be the biggest weakness in cybersecurity, but you never know when a New Year’s resolution might actually stick.

Category: Security Published: 1/7/2020 2:59 PM

Who Will Harness AI More Effectively in the New Decade: Cybercriminals or Cybersecurity Professionals?

ISACA Now Blog - 3 January 2020 00:01:26
Body:

We know artificial intelligence will loom large in the new decade, and we know cybersecurity will be critically important as well. How those two forces intersect sets up as one of the most fascinating – and consequential – dynamics that will shape society’s well-being in the 2020s.

According to ISACA’s new Next Decade of Tech: Envisioning the 2020s research, cybersecurity is the area in which AI has the potential to have the most positive societal impact in the new decade, with areas such as healthcare, scientific research, customer service and manufacturing also among the top responses offered by the 5,000-plus global survey respondents. If that proves to be the case, it will represent a giant step forward for security practitioners and the enterprises that they help to protect. The threat landscape has become too expansive and too sophisticated for most organizations to handle by relying exclusively upon traditional approaches. There is no shortage of ways in which tapping AI can enhance enterprises’ security capabilities, and the applications are particularly promising when it comes to putting the vast security insights available from big data to good use. Leveraging these insights will prove vitally important across the spectrum of security teams’ responsibilities, allowing them to better identify threats and pinpoint anomalies that might otherwise have escaped human practitioners’ notice.

The increasing integration of AI and machine learning into cybersecurity is especially important because the well-documented cybersecurity skills gap does not appear to be abating. In ISACA’s Next Decade of Tech research, only 18 percent of respondents expect the shortage of qualified cybersecurity practitioners to be mostly or entirely filled over the next decade, and the majority anticipate the gap will either widen or stay the same. Given that it routinely takes organizations several months or longer to fill open cybersecurity roles today, and given the increasingly challenging threat landscape, this ongoing skills gap brings into focus how critical it will be for organizations to incorporate AI into their security tools and techniques. None of this is to say that we should give up on working to address the human skills gap, as people analyzing AI, providing appropriate direction around AI solutions and communicating security risks to executive leadership will all be as necessary as ever in the next decade. Rather, we must explore all avenues to bring more skilled professionals into the field, including a concerted push to address the underrepresentation of women in the security workforce.

The Security Battle of the Next Decade: AI vs. AI
There is little question that AI and machine learning will increasingly be deployed by enterprise security teams in the 2020s, and it will not be long before heavy reliance on AI for security purposes becomes mainstream. What is less certain, though, is if enterprises will become more adept at using AI than the cyber adversaries that they are attempting to thwart. Unfortunately, cybercriminals are also well aware of the impact that AI can make, and they often prove to be ahead of the curve compared to the security teams who often are spread thin protecting all of their organizations’ digital assets. The potential use cases for malicious AI in a security context are often dire. The ISACA survey results list attacks on critical infrastructure as the leading cause for concern from malicious AI attacks in the next decade, with other possibilities – such as social engineering attacks, data poisoning and AI attacks targeting the healthcare sector – also creating worrisome scenarios. With AI presenting potent ways in which to sharpen existing attack types and opening the door to devising entirely new forms of attacks, an already formidable threat environment is sure to become even more perilous due to AI-driven advancements that cybercriminals will be eager to embrace.

As we transition to a new decade, there is no more meaningful question on the security landscape than who will harness AI more effectively: cybersecurity professionals or cybercriminals. Enterprises should be actively exploring how AI can present new avenues to strengthen their cybersecurity teams while also putting in place the needed governance and risk management frameworks to be sure that AI deployments are implemented responsibly. Considering how prominently AI will factor into cybersecurity in the 2020s, organizations also will need to invest substantially in training to equip practitioners with the knowledge of how AI-based security tools work and the context of how they can be best applied to the current threat landscape. In a decade of security that will boil down to AI-driven threats versus AI-bolstered security, taking these measures provides the best opportunity for security practitioners to rise to the considerable challenge. And in the medium-to-long term, look for the discussion on AI to shift to our ability to control it, as well as AI’s ability to protect or attack based on well-defined and regulated ethics.

Editor’s note: This article originally appeared in CSO.

Category: Security Published: 1/3/2020 7:49 AM

Innovating Yourself as an IS Auditor

Journal Author Blog Posts - 31 December 2019 01:39:29
Body:

As new technologies are developed, we have to stay up to date with them. More so than almost any other practitioner interfacing with information technology, auditors have to work hard at continual education. It is not just the technology, though. We are also seeing orders of magnitude more data. More data to process means we have to be more efficient at sifting through those data to ensure we can protect our organizations. So how do we keep up with what is current?

First and foremost, we need to use technology for our benefit when we can. Data is a big deal, but as it has exploded, it is a big deal for just about everyone. That means companies are investing a lot of capital in developing systems to handle the reams and reams of information we have at our fingertips. These systems are able to spot both trends and exceptions. Why should these solutions be limited to the folks doing financial forecasting? We can use them, too. That is a key attitude for us to take: when technology helps us, we have to come up to speed on it and leverage it for all it’s worth.

Second, speaking of learning new technology, we are being exposed to new ideas, new protocols and new standards all the time. We have to set aside the time to understand all of these new things. It is not practical to try to learn any of them in great detail. However, we have to understand them well enough to understand what they provide, where they have issues and what they should actually be used for. If we are relying on what we learned just 5 years ago, some of our knowledge is already out of date.

Finally, we have to understand that with the changes we have in technology, whole disciplines may be completely upended. I can remember a time when organizations were on the Internet and firewalls were a very uncommon thing. Now we are in an era where we know the firewall is not enough. These concepts are more abstract than a protocol definition. However, it is just as important that we stay up-to-date in these concepts as well.

All of this adds up to continually innovating yourself to maintain your knowledge and skills. The good news is that if you keep up, you will never be bored. Technology is changing at a breakneck pace. There is always something new to learn and pick apart!

Read K. Brian Kelley's recent Journal article:

"Innovation Governance: Innovate Yourself—Using Innovation to Overcome Auditing Challenges," ISACA Journal, volume 6, 2019.

Category: Audit-Assurance Published: 1/1/2020 8:20 AM BlogAuthor: K. Brian Kelley, CISA, CSPO, MCSE, Security+ PostMonth: 1 PostYear: 2020

Key Steps to Ensuring CISO Effectiveness

ISACA Now Blog - 31 December 2019 00:47:23
Body:

In the classic movie “The Wizard of Oz,” protagonist Dorothy Gale leaves Kansas and enters a new world, the land of Oz. While Oz is unfamiliar and unlike anything Dorothy has encountered before, she is able to navigate fairly well because she has a roadmap – the Yellow Brick Road. CISOs are not as fortunate as Dorothy. For CISOs, the expectations may be clear (from operational oversight to organizational politics to managing talent), but a roadmap to being effective in meeting those expectations is notably absent.

Given the timeliness of the topic of CISO effectiveness, the Security Leaders’ Summit at the 2019 Infosecurity ISACA North America Expo and Conference delved into recommendations that may help CISOs navigate challenges they may experience along their career paths. In his presentation, "CISO Leadership: Navigating Cybersecurity Leadership Challenges," Todd Fitzgerald with CISO Spotlight, LLC shared tactical as well as strategic approaches that may help CISOs create a roadmap to effectiveness. Tactically, Fitzgerald recommends that CISOs:

  • Focus on where data is and how to protect it
  • Help the enterprise gain competitive advantage by using technology such as AI, machine learning and cybersecurity analytics.

Strategically, Fitzgerald shared that if an enterprise has the philosophy that cybersecurity is everyone’s responsibility, all departments should map their roles to cybersecurity. In return, CISOs can ask what they can do to help departments ensure cyber health for the enterprise. As CISOs partner across their enterprises to gain competitive advantage through technology, Prasant Vadlamudi, director, technology GRC, Adobe, advised CISOs to remain cognizant of stakeholders’ expectations regarding use of emerging technology, particularly when taxpayer funds are involved.

Continuing with the strategic approaches that CISOs may use to navigate a roadmap to effectiveness, in his presentation, “CISOs in the Boardroom,” Vivek Shivananda, president, CyberSecurity Solutions, Galvanize, offered the recommendation that CISOs remain mindful of the board’s concerns: business interruption, reputational damage and breach of customer information. He continued to share that two different dashboards can be useful for CISOs: an internal dashboard that is more technically focused and a second dashboard that is more focused on business impact. In looking at metrics, Shivananda recommended that CISOs acknowledge and address the challenges of identifying what metrics to focus on, deciding how to address the data needs of many stakeholders, and reconciling when data exists from multiple sources.

In looking at the challenges CISOs face as enterprises gauge the CISO’s effectiveness, data was a recurring topic covered during the summit. Recommendations for CISOs on how to address these data-related challenges included knowing where data is located in order to best protect the data, and leveraging the data as the basis of dashboards that meet internal needs as well as board expectations. Beyond data, strategic recommendations covered at the summit included positioning cybersecurity as everyone’s responsibility and remaining mindful of the board’s concerns. These recommendations are not the visible Yellow Brick Road that Dorothy Gale had to guide her journey in the Land of Oz, but they do provide a roadmap that CISOs can use to navigate a path to effectiveness.

Category: Security Published: 1/1/2020 7:25 AM

Leveraging Emerging Technology for Better Audits

Journal Author Blog Posts - 24 December 2019 02:42:10
Body:

My first role post-graduation was working as a financial statement auditor. We used tick mark pencils on printed workpapers, and we manually footed (recalculated) balances. On my second engagement, I begged my manager to let me use annotation in PDF and Excel to expedite the process. He believed in me, and we accomplished the same level of quality in half the time it took the year prior.

We used the time savings to dive deeper into more meaningful work and, as an independent auditor, we accomplished something rare: true value-add feedback for the client. At the end of the project, I had spent the same amount of time as my predecessor, but I was able to accomplish so much more.

Fast forward, and we are now facing the exact same situation with analytics, artificial intelligence (AI) and robotic process automation (RPA). While there continues to be resistance to these solutions and fear among the general population, they will not replace us; they will empower us.

AI and the other tools often mentioned in the same breath are enablers; they will allow us to reduce time spent on remedial tasks that do not add value or do not require critical thinking to accomplish. But they are not a magic bullet—they must be implemented intelligently and with a strong understanding of return on investment.

Unlike switching from a paper-based audit to leveraging the tools on my enterprise-issued laptop, there is a significant cost associated with these new tools, and one that must be evaluated against the efficiencies that will be gained upon implementation.

As nice as it is to eliminate the repetitive and tedious task of matching change tickets to changes within enterprise resource planning, it only takes 20-40 hours a year to test this process on average, and while we have yet to reach economies of scale with some of these solutions, the automation of testing such a process can be expensive. While it is feasible, it may not be the best use of resources for an organization. Just like any advancement in our profession, we must be strategic and practical, harnessing the power of AI where we will see the best return on our investment.

Read Jake Nix's recent Journal article:

"The Intelligent Audit," ISACA Journal, volume 6, 2019.

Category: Audit-Assurance Published: 12/23/2019 2:42 PM BlogAuthor: Jake Nix, CISA, CPA PostMonth: 12 PostYear: 2019

Five Revealing Security Incidents of 2019, and What We Can Learn from Them

ISACA Now Blog - 24 December 2019 02:16:05
Body:

Every year has its share of security gaffes, breaches, and hacker “shenanigans.” As we enter into the new year, it is inevitable that we will see articles in the mainstream and trade press recapping the worst of them.

There are two reasons why these lists are so prevalent. The first is human nature: fear gets attention. Just like a product vendor using FUD (fear, uncertainty, doubt) to boost sales, so too can fear drive journalistic readership. So, it’s natural that the trade media would cover this. If we’re honest about it, there’s probably also an element of schadenfreude. High-stakes roles like assurance, governance, risk, and security are hard – and stressful. There’s an element of “thank goodness it wasn’t us” that happens to practitioners when reading about a breach that happened to some company other than our own.

All this is to be expected of course, but at some level when I see the inevitable year-end “breach recaps,” I feel like we’re missing an opportunity. Why? Because focus on the outcome alone leaves out an important part of the discussion – specifically, the lessons learned that inform how we can improve.

I’ll give you an example of what I mean. Say that I told you that the death toll from the Black Plague in the 14th century was about 50 million. Horrible, right? But does that give you any information about how to prevent disease? Measures to treat or diagnose them? Information about carriers or transmission vectors? No. Sure, a statistic like that is attention-grabbing ... but other than for a very small segment of practitioners (such as those doing pathogen statistical analysis), it doesn’t foster future disease prevention. If you spell out that the reasons for the rapid spread of the disease during this time period were (simplified of course) related to hygiene/sanitation, population density, and maritime trade, well that starts to tell you something that can inform prevention.

My point is, it’s useful to look at the worst/scariest events of the year to the extent that we draw out the lessons learned and takeaways that inform future efforts. With this in mind, and as a counterpoint to lists you might see in other venues, we’ve put together a list of five security “events” from this year that we think contain useful lessons. These aren’t the biggest breaches, or the scariest, or those with the biggest financial impact. Instead, these are the events that carry important lessons it behooves practitioners to learn from.

#1 – Facebook Account Data Compromised
The first one of these relates to the discovery of 540 million user records (about 150GB worth) from Facebook found exposed on the internet via several third-party companies. The root cause? Improperly secured S3 buckets. There are three important lessons here. The first and most obvious is the lesson about securing – and validating – permissions on cloud storage buckets (this is a big one). But there are other lessons beyond this. First, the importance of software (“application”) security. Tools like application threat modeling as we know can find and flag potential application design, configuration, or implementation issues early. Therefore, it remains an important tool in our security arsenal. Another lesson? The third-party angle – specifically, liability. In this case, the collection was facilitated by third parties and not Facebook itself; but yet, who is highlighted in the headlines? Facebook itself. Never forget that if you have the primary customer relationship, you ultimately wind up holding the bag.

#2 – Facebook Plaintext Passwords
The second one I’ve chosen to highlight is also from Facebook – in this case, the storage of Instagram passwords in the clear. Note that I’m not intentionally picking on Facebook by including them twice; in fact, this one is actually a “success story” for them (at least from a certain point of view). Specifically, in this case, a “routine security review” found passwords stored in plaintext. While analyzing and remediating that, they found more instances of passwords stored in log files as well. So, what’s the lesson? The main one I’d highlight is the value of technical assurance efforts, specifically of validating how cryptography is used. These areas are often overlooked but can have real, tangible security value. “Stuff happens” – no company can ever do anything 100% perfectly all the time; but having a mechanism to find and fix these issues when they happen can mean the difference between minor egg on the face and major catastrophe.
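On the assurance lesson, here is a toy Python sketch of the kind of check that can catch credentials leaking into logs; the patterns and the log path are purely illustrative, and real reviews use far broader rule sets:

```python
import re
from pathlib import Path

# Toy assurance check: scan log files for fields that look like plaintext credentials.
PATTERNS = [
    re.compile(r"(password|passwd|pwd)\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"(api[_-]?key|secret)\s*[=:]\s*\S+", re.IGNORECASE),
]

def scan_logs(log_dir: str) -> None:
    for path in Path(log_dir).rglob("*.log"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible plaintext credential")

scan_logs("/var/log/myapp")   # hypothetical log directory
```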

#3 – Source Control Shenanigans
Next up, attackers downloading Git repositories, scrubbing them, and holding the source code for ransom. This event itself isn’t all that interesting from a tradecraft perspective: the vector here was run-of-the-mill account compromise (e.g., leaked or stolen passwords, API keys, etc.) What makes this interesting is the fact that it targets source control specifically. In days gone by, vetting source control platforms (both the configuration as well as whether they contain secrets like cryptographic keys or passwords) took a lot of time and occupied quite a bit of practitioner attention. As platforms become standardized and Git becomes ubiquitous, attention from practitioners can waver. Don’t let that happen. Staying vigilant about source control is still important – even in the GitHub era when everything is centralized and standardized.

#4 – Malware Fully Loaded
Germany’s BSI (Bundesamt für Sicherheit in der Informationstechnik) warned about Android smartphones coming “out of the box” with malware embedded (in this case, embedded in firmware). This isn’t the first time that we’ve seen phones (or other products for that matter) ship with malware pre-installed. It’s not even the first time we’ve seen BSI warning about stuff like this. It is, however, a great example of information sharing and the value of government keeping citizens’ information secured. The BSI is specifically chartered with warning people about security issues in technology products (section 7 of the BSI Act) and investigating products (see section 7a). The lesson? There’s a role that governments can play in ensuring the security of products sold within its jurisdiction, and that role can be highly effective.

#5 – Disgruntled
Last up, a dismissed IT staff member of a transportation service company was jailed for targeting and sabotaging his former employer’s AWS systems. Long story short, after he was let go back in 2016, he used an administrative account to begin systematically sabotaging and disabling AWS assets of his former employer; as a result, the company lost a few key customer contracts. There are a few lessons here. First, once again note the earlier lesson about securing cloud assets. Beyond hammering on that again, though, this event also highlights internal threats and privileged account use. We would all do well to maintain awareness of internal threats. As technology becomes more prevalent and more business-critical, a rogue or disgruntled employee (even former employee) has the potential to do significant damage. Likewise, cloud can make privileged accounts more complicated since it can add account types we didn’t have before (in addition to root and Administrator users, we now also have cloud administrator accounts to keep track of). Continued management, monitoring and protection of these accounts is important.
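As one narrow illustration of keeping watch over privileged cloud credentials, here is a hedged Python/boto3 sketch that flags old but still-active IAM access keys; it assumes configured AWS credentials, omits pagination, and is nowhere near a complete offboarding or privileged-access program:

```python
from datetime import datetime, timedelta, timezone
import boto3

# Minimal sketch: flag active IAM access keys older than a chosen cutoff, one small
# starting point for monitoring privileged cloud accounts after staff changes.
iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)   # illustrative threshold

for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        if key["Status"] == "Active" and key["CreateDate"] < cutoff:
            print(f"rotate or disable: {user['UserName']} key {key['AccessKeyId']}")
```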

Category: Security Published: 12/26/2019 2:46 PM

Connecting COBIT 2019 to the NIST Cybersecurity Framework

ISACA Now Blog - 24 December 2019 01:02:41
Body:

Among the most exciting projects I’ve worked on has been the integration of NIST’s Cybersecurity Framework with COBIT. Now, with the update of that project to COBIT 2019, entities around the globe will have a fresh and agile methodology for improving cybersecurity! The NIST CSF provides a model based on five functions: IDENTIFY important information & technology (I&T) and what threatens it; discuss and analyze how best to PROTECT I&T; determine how best to DETECT issues; RESPOND quickly and effectively; and, achieve organizational plans to RECOVER well. One challenge is that NIST decided not to provide detailed implementation guidance but prefers to let industry factors influence how the CSF is used. At times, what NIST publishes as agile guidance gets adopted as rigid, prescriptive criteria instead, so I can understand the hesitation to provide even an example recipe. But, how to apply this useful framework in a way that’s meaningful for my enterprise? Enter COBIT 2019!

ISACA’s new guide to Implementing the NIST Cybersecurity Framework with COBIT 2019 provides a method for using COBIT 2019’s processes to gain the benefits of the NIST CSF. COBIT is stakeholder-driven in that it begins with asking, “How do/should information & technology (I&T) bring value to those (e.g., owners, partners, customers) that have a stake in the organization’s success?” The important follow-up question to that is to ask, “How do I balance achievement of that value while optimizing both risk and resource considerations?” The new implementation guide steps the reader through COBIT 2019’s seven phases, showing how the NIST CSF steps and relevant COBIT activities work together to understand objectives, current state, risk implications, desired state and an action plan to get there and stay ahead. Notably, the guide describes COBIT’s updated features like Design Factors (added to bring agile customization) and Focus Areas (areas of governance and management that merit a particular bit of attention for this particular entity). In the same way, these updates help COBIT users create a flexible but meaningful model for enterprise governance and management of enterprise I&T, using COBIT and NIST CSF together to provide a way to plan and achieve a cybersecurity action plan and keep it up to date.

It is interesting that, in showing how to use COBIT for cybersecurity, colleagues have shared that the process helps them better understand COBIT itself. Some who might benefit from COBIT don’t initially grasp its use of terms like “stakeholder objectives” and “intrinsic and contextual elements of information quality criteria.” But when they step back and take a look at the common-sense approach COBIT brings, they understand that organizations don’t just want to “go through the motions” – they benefit from identifying what will best contribute to the organization’s success and how to get there. Thus, it makes sense to figure out our objectives for success based on what’s important to our stakeholders. It makes sense to combine COBIT’s proven governance and management methods and performance measurement activities to ensure successful achievement of those objectives. And, so, it makes sense to apply the lessons learned in COBIT’s 20-plus year history to govern and manage cybersecurity as an important element of stakeholders’ success.

ISACA will be offering courses in how to achieve that success, including a credential on using COBIT 2019 with NIST CSF. I’ve been fortunate to teach previous versions of that course, and the diverse ways that students use COBIT and NIST CSF are a testament to the value of these two frameworks, and the benefits of using the two together. I look forward to hearing how it helps you gain those benefits.

Category: COBIT-Governance of Enterprise IT Published: 12/23/2019 10:44 AM
Category: ISACA

Government Officials Must Become Better Attuned to Data Privacy Regulations

ISACA Now Blog - 2019-12-19 06:54:13
Body:

Data privacy and security are more important than ever before. Despite existing policies, the number of data breaches is on the rise and unencrypted personal information is getting into the wrong hands.

In 2016, the EU adopted the General Data Protection Regulation (GDPR) to strengthen the protection of personal data. Since then, other data protection laws have gone into effect and businesses all over the world have adopted stricter standards for collecting and storing data. It seems logical to assume the US government would be equally concerned with data privacy, but a recent problem with its drone surveillance program says otherwise.

Drone Surveillance Requires Privacy Compliance
The US government has been using drones for surveillance for quite some time. Pogo.com reported on a research study that found at least 910 state and local public safety agencies have purchased drones – 599 of them law enforcement agencies.

Knowing the privacy implications of drone surveillance, you would think government agencies would be on top of data privacy and security regulations, but that’s not the case. In 2018, we learned that US Customs and Border Protection (CBP) officials were using drones to collect data (images and video) without considering the privacy implications.

An audit conducted by the Office of Inspector General revealed that CBP officials failed to perform a privacy threshold analysis for the Intelligence, Surveillance, and Reconnaissance Systems used to collect data because they were “unaware of the requirement.” A privacy assessment would have determined whether the systems contained data requiring safeguards under privacy laws, regulations and Department of Homeland Security policy.

The drone surveillance program also failed to manage IT security controls, putting the drones themselves at risk.

Lack of Awareness is Problematic
The stories coming from officials are in conflict. One official claims nobody told him a privacy assessment was required. Another official told the team a privacy analysis was unnecessary since the drone surveillance system didn’t store personally identifiable information.

While it might be true that officials were unaware of the privacy requirements for collecting data, the inadequate oversight is inexcusable.

Somebody should have initiated a communication from the top down, informing the entire team of the privacy safeguard requirements. Unfortunately, the entire project lacked responsibility and accountability. There was no management in place. Nobody was deemed responsible for funding and maintenance.

The main problem, pointed out by CSO Online, is that the drone surveillance systems were never added to CBP’s IT inventory, which is why the privacy lapse went undetected. Program officials admitted:

“These information security deficiencies occurred because CBP did not establish an effective program structure, including the leadership, expertise, staff, training, and guidance needed to manage ISR Systems effectively. As a result, ISR Systems and mission operations were at increased risk of compromise by trusted insiders and external sources.”

If the government can’t be counted on to protect the privacy of data collected without our consent, that’s not going to sit well with the public.

Dropping the ball on data privacy is out of character for CBP, which is normally on top of its game and sets up extremely detailed processes for everything it manages. For example, CBP takes extreme precautions when letting travelers in and out of the US.

Official-esta.com describes the complex ESTA approval process, noting that: “when you apply for an ESTA online, the system instantaneously crosschecks the biographic information supplied by applicants against multiple databases, including the TSDB (Terrorist Screening Database), records of lost and stolen passports, the SLTD (INTERPOL’S Stolen and Lost Travel Documents database), any previous Visa Waiver Program refusals, visa revocations, expedited removals, as well as records from Public Health departments, including the CDCP (Centers for Disease Control and Preventions) to check for individuals suffering from a communicable disease which constitutes a threat to public health.”

It seems strange that the same attention to detail was not applied to the drone surveillance program.

Government Officials Need Education
It’s possible that the CBP officials involved in the drone surveillance program were just misinformed or not informed at all. This situation highlights the importance of strict oversight wherever data privacy is concerned. Hopefully, the lesson has been learned and new protocols are in place to ensure the oversight shortcomings don’t happen again.

Category: Privacy Published: 12/19/2019 10:56 AM
Category: ISACA

Addressing the Challenges of New Privacy Laws

Journal Author Blog Posts - 2019-12-17 00:26:04
Body:

US State of California Senate Bill 327 Information Privacy: Connected Devices (SB 327) goes into effect in January 2020. What does that mean for you? Even if your organization does not develop Internet of Things (IoT) devices, SB 327 is worth following. It occupies a unique position because of its scope and breadth, not only for privacy and security, but also for how privacy-based laws are enforced and regulated.

Think of it as representing new territory in privacy. We are now seeing the social responsibility lawmakers are taking on by legislating privacy and security requirements, and while no one can say that is a bad thing, how are lawmakers deciding what goes into these laws?

Moreover, when it comes to regulation, it is not clear how SB 327 will be enforced, and it is not always evident what lawmakers intended with some of its stipulations. Its central guidance is that organizations use “reasonable security features” to protect IoT devices. Only time will tell to what degree SB 327 will be enforced, because there is no precedent for applying the law yet.

Where do we begin with addressing the vague requirements for providing Internet-connected devices with security and privacy controls? Without more information about how the law is intended to be enforced, we can only start with best practices such as:

  • The Open Web Application Security Project (OWASP) IoT Top 10
  • The UK Government's Code of Practice for Consumer IoT
  • The European Union Agency for Cybersecurity (ENISA) recommendations

Frameworks like these are a great place to start, but even with these privacy and security practices, the onus is on organizations to build their own secure development programs. Existing frameworks are helpful for building a taxonomy of security vulnerabilities, but this is not an easy task to undertake, and you will likely need your own team of security professionals if you want to do it yourself.
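For a flavor of what one widely cited “reasonable security feature” (a credential unique to each device, or a requirement that the user set a new credential before first access) could look like in code, here is a minimal, hypothetical sketch. The function names and device record are invented for illustration and are not drawn from SB 327 or the frameworks above.

```python
import secrets

def provision_device(serial_number: str) -> dict:
    """Provision a device record with a credential unique to that device,
    instead of a shared factory-default password (illustrative only)."""
    return {
        "serial": serial_number,
        "password": secrets.token_urlsafe(12),  # unique per device
        "password_changed_by_user": False,
    }

def allow_remote_access(device: dict) -> bool:
    """Deny remote access until the user has set their own credential,
    illustrating 'require a new means of authentication before first use'."""
    return device["password_changed_by_user"]

if __name__ == "__main__":
    device = provision_device("SN-0001")
    print("Remote access allowed:", allow_remote_access(device))  # False until the user sets a password
```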

The importance of SB 327 has been unclear, and its social and industry impact remains to be seen. In an age where technology is moving faster than lawmakers can legislate it, a law with the capacity to extend beyond its jurisdiction could become the first of many that help shape the way we use the devices we rely on every day.

Read Nathanael Mohammed and Farbod Foomany's recent Journal article:

"Building Security Into IoT Devices," ISACA Journal, volume 6, 2019.

Category: Privacy Published: 12/16/2019 2:59 PM BlogAuthor: Farbod H. Foomany, Ph.D., CISSP and Nathanael Mohammed PostMonth: 12 PostYear: 2019
Category: ISACA

Artificial Intelligence: A Damocles Sword?

ISACA Now Blog - 2019-12-13 04:07:00
Body:

“Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.” - McKinsey Quarterly April 2019

In Greek mythology, the courtier Damocles was forced to sit beneath a sword suspended by a single hair to emphasize the instability of kings’ fortunes. Hence the expression “the sword of Damocles,” meaning an ever-present danger.

To apply this idiom, the users of artificial intelligence are like kings, enjoying the amazing functionality brought by this cutting-edge technology, but with a sword hanging over their heads because of the perils that come with such a highly scalable technology.

Artificial Intelligence: Meaning and Significance
To quote a formal definition, AI is “the art of creating machines that perform functions that require intelligence when performed by people.” - Kurzweil 1990.

However, intelligence is a more elusive concept. Though we know that humans require intelligence to solve their day-to-day problems, it is not clear that the techniques used by computers to solve those very problems endow them with human-like intelligence. In fact, computers use approaches that are very different from those used by humans. To illustrate, chess-playing computers use their immense speed to evaluate millions of positions per second – a strategy no human champion can employ. Computers have also used specialized techniques to predict consumers’ product choices by sifting through huge volumes of data, and to recognize biometric, speech and facial patterns.

Having said that, humans use their emotions to arrive at better decisions, which a computer (at least at present) is incapable of doing. Still, by developing sophisticated techniques, AI researchers are able to solve many important problems, and the solutions are used in many applications. In health and medical disciplines, AI is able to provide advanced solutions by yielding groundbreaking insights. AI techniques have already become ubiquitous, and new applications are found every day. Per the April 2019 McKinsey Quarterly report, AI could deliver additional global economic output of $13 trillion per year by 2030.

AI Risk and Potential Remediating Measures
Along with all the aforementioned positive outcomes, AI brings innumerable risks of different types, ranging from minor embarrassments to catastrophic events that could endanger humankind. Let us enumerate and detail some of the known risks brought on by AI:

1. Lack of Complete Knowledge of the Intricacies of AI
AI is a recent phenomenon in the business world, and many leaders are not knowledgeable about its potential risk factors, even though they are forced to embrace it due to market and competitive pressures. The consequences could range from a minor mistake in decision-making to a loss of customer data leading to privacy violations. Remediating measures include making everybody in the enterprise involved and accountable, ensuring board-level visibility, and completing a thorough risk assessment before embarking on AI initiatives.

2. Data Protection
The huge volumes of data involved are predominantly unstructured and are drawn from sources such as the web, social media, mobile devices, sensors and the Internet of Things, making them difficult to protect from loss or leakage and exposing the organization to regulatory violations. A strong end-to-end process needs to be built, with robust access control mechanisms and a clear definition of need-to-know privileges.
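As a purely illustrative sketch of what a simple need-to-know check could look like in code (the roles and data categories are hypothetical, not a recommended data model):

```python
# Hypothetical need-to-know matrix: which roles may read which data categories.
NEED_TO_KNOW = {
    "data_scientist": {"sensor_telemetry", "web_clickstream"},
    "support_agent": {"customer_contact"},
}

def can_read(role: str, data_category: str) -> bool:
    """Return True only if the role has a documented need to know the category."""
    return data_category in NEED_TO_KNOW.get(role, set())

if __name__ == "__main__":
    print(can_read("data_scientist", "customer_contact"))  # False: no documented need to know
    print(can_read("support_agent", "customer_contact"))   # True
```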

3. Technological Interfaces
AI operates mainly across interfaces that ingest data feeds from many sources. Care should be taken to ensure that the data flows, business logic and associated algorithms are all accurate, to avoid costly mishaps and embarrassment.

4. Security
This is a big issue, as evidenced by ISACA’s Digital Transformation Barometer, which shows that 60 percent of industry practitioners lack confidence in their organization’s ability to accurately assess the security of systems based on AI and machine learning. AI works at a huge scale of operations, so every precaution should be taken to ensure the perimeter is secured. All aspects of logical, physical and application security need to be examined with more rigor than would otherwise be warranted.

5. Human Errors and Malicious Actions
Protect AI from humans and humans from AI. Insider threats, such as disgruntled employees injecting malware or faulty code, could have disastrous outcomes or even lead to catastrophic events like the destruction of critical infrastructure. Proper monitoring of activities, segregation of duties, and effective communication and counseling from top management are good suggested measures.

The deployment of AI may lead to discrimination and displacement within the workforce, and could even result in loss of life for those who work alongside AI machines. This can be partly remediated by upskilling workers and placing humans at vantage points in the supply chain where they play an important role in sustaining customer relationships. To prevent AI-related workplace perils, rigorous checking of scripts and the installation of fail-safe mechanisms, such as the ability to override the systems, will be helpful.
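As an illustration only, the following hypothetical sketch shows the kind of fail-safe override described above, where a pending human override halts the automated action; all names are invented:

```python
class EmergencyStop(Exception):
    """Raised when a human operator overrides the automated system."""

def run_automated_step(action, human_override_requested) -> str:
    """Execute an automated action only if no human override is pending
    (a simple, illustrative fail-safe / kill-switch pattern)."""
    if human_override_requested():
        raise EmergencyStop("Operator halted the automated action")
    return action()

if __name__ == "__main__":
    try:
        run_automated_step(lambda: "weld completed",
                           human_override_requested=lambda: True)
    except EmergencyStop as stop:
        print(f"Fail-safe engaged: {stop}")
```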

6. Proper Transfer of Knowledge and Atrophy Risk
The intelligence humans use to solve a problem is transferred to machines through programs, so that machines can resolve the same problem at a much larger scale with great speed and accuracy. Therefore, care should be taken that no representative data or logic is left out or misrepresented, lest it result in poor outcomes and decisions that cause losses to the business.

Because a skilled human will cede tasks to machines, those skills can erode over time, resulting in atrophy. This can be partly remediated by keeping up-to-date documentation of such critical skills, including disaster recovery mechanisms.

Disclaimer: The views expressed in this article are the author’s own and do not represent those of the organization or of the professional bodies with which he is associated.

Category: Risk Management Published: 12/13/2019 2:58 PM
Category: ISACA

Who Am I? CRISC Equips Professionals and Organizations with a Valuable Identity

ISACA Now Blog - 2019-12-10 07:09:47
Body:

As a risk practitioner, have you ever tried to describe what you do for a living to a family member or a friend? If so, you’ve likely experienced their acquiescent and politely confused reaction as you articulate concepts like risk assessments, controls, tests, tolerance, appetite, key risk indicators, governance and a host of other tactics that are commonly executed as part of a practitioner’s day-to-day responsibilities. At the conclusion of your pride-filled intellectual description, you feel like you did a great job explaining what you do, when your conversational partner replies with, “Wow, that sounds awesome! So, what do you actually do?” Uncertain about how to respond, you begin to retrace your words only to realize that internally, you are asking yourself that very same question, combined now with an unclear perspective about your professional identity. You ponder, “What DO I do, and, who am I as a professional?”

Over the past 20 years, I’ve observed a plight all too common among risk practitioners wherein there is an enthusiastic rigor to schedule tasks, complete action plans, provide reporting/updates and declare that risks have been mitigated, when the most certain of questions is to follow: “So, what risk did we eliminate/reduce and how does that add value to our organization?” The enduring effort to complete tasks and assignments by the risk practitioner propagates and reinforces an illusion of risk management, because work, in the form of tasks and actions, was completed.

Reality strikes! In the absence of an industry framework with principles, a common taxonomy and structured objectives to clearly articulate how issues, losses and events are being prevented or reduced, the risk practitioner’s reputation, brand, self-esteem and identity progressively deteriorate. I’ve equipped hundreds of professionals with the training and tools provided by the CRISC certification and the outcome is nearly always the same: CRISC training/certification serves as a catalytic fuel, energizing the risk practitioner’s identity while accelerating organizational maturity in the direction of a value-driven, risk-intelligent culture. Here is how:

Individuals Identify Themselves as Competent and Confident Practitioners

  • A Strong Foundation: They learn the basics, they speak a common language and they use a proven methodological approach
  • A Community of the Like-Minded: They are part of a formally recognized community of professionals
  • A Distinction: They have made it through the studies and requirements necessary to obtain the CRISC distinction
  • Unlocking Strategic, Big-Picture Thinking: Their competencies become habits, freeing up their mind to think more broadly with intriguing inquisition
  • Clearly Articulating Value: Labeling/linking value and purpose effectively with executives, second/third line and examiners

Organizations Evolve to a Risk Intelligent, Value-Driven Ecosystem, Fueled by Trained Practitioners

  • Organic Neural Networking Within the Company: Team members formed their own think/brain tanks resulting in multiple innovations/enhancements within the first few months after CRISC training
  • Advancing and Benchmarking Industry Expertise: Team members developed external relationships within and across ISACA chapters to anticipate opportunities, prevent issues/events, and design better controls
  • Organic Employee Development Ripple Effect: Coaching took on a natural form, where CRISC candidates willingly encouraged, coached and mentored others

When you were asked about what you do for a living, it would have been so much easier to reply with something like: “I prevent bad things from happening to our customers/company. When I do my job well, my customers are safe and secure, and my company’s brand becomes stronger.”

With CRISC as an enabler, your employees will grow, develop and identify as professionals, and your organization will become enmeshed in a risk culture that is strong, resilient and organically intelligent.

Editor’s note: To find out more about the custom training program opportunities offered through ISACA, visit ISACA’s enterprise training page.

Category: Risk Management Published: 12/10/2019 2:48 PM
Category: ISACA

AI Practitioners: Our Future Is in Your Hands

Journal Author Blog Posts - 2019-12-10 00:09:24
Body:

Imagine it is sometime in the 22nd century and that the future you is preparing for a complex surgical procedure at the local robot-run hospital, where it has become commonplace for robots to perform sophisticated, repeatable tasks, such as heart surgery, on human patients. This is the first time a robot is tackling a septal myotomy on a human, on you no less. It is still one of the most complicated medical procedures in the world almost 160 years after it was first performed, and it still takes up to 6 grueling hours for a human doctor to do, all the while nothing but a machine keeps you alive.

In the days leading up to the procedure, the chief robot doctor of the facility, Dr. Ava—named after a character in a cult classic film made more than a century before—and all but indistinguishable from a human except for the odd irregular whirring sound occurring whenever she looked up toward the sky, sat you down to share the nature of some of the quite considerable risk factors involved in the procedure. At one point, your eyes wandered to see a few framed diplomas hanging on the wall, including one from the renowned C-3P0 institute, from where Dr. Ava must have learned her diplomacy and her disarmingly reassuring doctor’s bedside manner.

Your eyes are then drawn to one from the Isaac Asimov Institute, named after one of the most famous 20th century scientists and author of the evergreen 1950 classic I, Robot. Recalling his works, you become distracted by thoughts of the Three Laws of Robotics, how robots learn and whether they are sufficiently equipped to handle the variability that all too often occurs in complex medical procedures.

It is then that you begin to think about the quality of data required for a robot to learn, especially one performing something as delicate as heart surgery. Quite simply, even a small amount of bad data could mean death on the operating table under a robot; 1 micrometer too far to the right could be all it takes. You then become lost in flashbacks of a century before, from those holographic history “books” or holobooks you so enjoy interacting with, a time when AI practitioners were barely aware—some actively choosing to remain ignorant even—of the fact that data could be a kind of evil beyond their wildest dreams, a state of affairs that caused the nightmare on Earth otherwise known as the Blackening of the late 2030s.

The Blackening was a downstream outcome of the big data hype of a time near the start of the 21st century. It was a time of almost unconstrained data fusion, analytics, machine learning (ML) and robotics by many self-proclaimed “experts” using the primitive technology of the time to increase efficiencies and to supposedly better serve humankind. Little did they want to know that dirty data do to an algorithm what poison does to a man. It kills, sometimes slowly. 

Furthermore, those holobooks taught you about a time around the mid 2010s when many humans had raised concerns about the future of human work and how robots would take over the world. Oh, how that crowd would chant “I told you so” if they were alive today. Warnings were sounded over the need to assess the quality of data for artificial intelligence (AI), including by that budding author Pearce, but the dirty data poison from decades of negligence, ignorance and technological debt leached into our robot helpers, ultimately leading them to run amok against us in scenes akin to that classic fiction Westworld. But alas, the siren’s call of power and profit was too strong. As a species, we did not actually think we would survive much beyond the middle of the 21st century. We were doomed, but there was a kind of h… 

A faint whirring sound from Dr. Ava gently brings you back, and you ponder the Global Artificial Intelligence Act (GAIA) of 2078 and how it gave a new impetus to human life on our pale blue dot. In particular, it required that all production AI instances be able to demonstrate the quality of the data used. Not only that, it required strict evidence of from where the data came, how they were transported and how they were transformed. It required that the data used in AI be described in unambiguous human terms to ensure that data would only be used as intended. In essence, it required data to be tested and to ensure that controls were put in place to prevent poor data from contaminating the combined consciousness of humans and machines. After this demanding mental journey, you found yourself easing into a greater sense of peace and relaxation, a state vital for the success of the medical procedure to come. By the way, the cost of noncompliance with GAIA? Exile to that cold, barren martian moon Deimos.
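GAIA is, of course, fiction, but the data-quality and provenance checks it imagines are the kind of assertions a data audit can make today. Here is a minimal, hypothetical sketch of such checks; the field names, ranges and sample record are invented for illustration:

```python
def audit_record(record: dict) -> list:
    """Return a list of data-quality findings for a single training record.
    Checks are illustrative: completeness, plausible range, and provenance metadata."""
    findings = []
    if record.get("heart_rate") is None:
        findings.append("missing value: heart_rate")
    elif not 20 <= record["heart_rate"] <= 250:
        findings.append(f"implausible heart_rate: {record['heart_rate']}")
    if not record.get("source"):
        findings.append("no provenance: source system not recorded")
    if not record.get("transformations"):
        findings.append("no lineage: transformations not documented")
    return findings

if __name__ == "__main__":
    sample = {"heart_rate": 400, "source": "", "transformations": ["unit_conversion"]}
    for finding in audit_record(sample):
        print(finding)
```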

So to all of you AI practitioners living back there in 2019, please make sure you read my recent Journal article to understand why data intended for AI should be the subject of critical assessment and data audits. Preventing the Blackening of the late 2030s is all in your hands.

Read Guy Pearce's recent Journal article:

"Data Auditing: Building Trust in Artificial Intelligence," ISACA Journal, volume 6, 2019.

Category: Security Published: 12/9/2019 1:02 PM BlogAuthor: Guy Pearce, CGEIT PostMonth: 12 PostYear: 2019
Category: ISACA

When Everything Old is New Again: How to Audit Artificial Intelligence for Racial Bias

ISACA Now Blog - 2019-12-06 02:40:07
Body:

You may not know it, but artificial intelligence (AI) has already touched you in some meaningful way. Whether approving a loan, moving your resume along in the hiring process, or suggesting items for your online shopping cart, AI touches all of us – and in some cases, with much more serious consequences than just putting another item in your cart.

As this technology becomes more widespread, we are discovering that maybe it’s more human than we would like. AI algorithms have been found to exhibit racial bias when used to make decisions about the allocation of health care, criminal sentencing and policing. In its speed and efficiency, AI has amplified and put a spotlight on the human biases that have been woven into and become part of the Black Box. (For a deeper dive into AI and racial bias, read the books Automating Inequality, Weapons of Math Destruction, and Algorithms of Oppression: How Search Engines Reinforce Racism.)

As auditors, what is the best approach toward AI? Where and how can we bring the most value to our organizations as they design and implement the use of AI? Auditors need to be part of the design process to help establish clear governance principles and clearly documented processes for the use of AI by their organizations and their business partners. And because AI is not static – it is forever learning – auditors need to take an agile approach to continuous auditing of the implementation and impact of AI to provide assurance and safeguards against racial bias.

Design and Governance: “In Approaching the New, Don’t Throw the Past Away”
In the United States, we like to think that the impact of slavery ended with the Civil War. It didn’t. We also want to believe that the landmark US Supreme Court case of Brown vs. Board of Education gave everyone access to the same education. It didn’t. Title VII of the Civil Rights Act of 1964 was passed to stop employment discrimination. It didn’t. Nonetheless, these “old” concepts of fairness and equality are still valid and need to be incorporated into the new AI – first at the design and governance level, and then at the operational level. As the auditor, you should be asking: what are the organization’s governance principles regarding the use of AI? A starting place may be to suggest that your organization adopt the OECD Principles on AI.

Do these principles apply only to the organization or also to its third parties and other business partners? How do these principles align with the organization’s values and code of conduct? What risks are associated with uses of AI that are not aligned with these principles? Conducting impact assessments to help create bias impact statements can help build out these principles. (See Model Behavior: Mitigating Bias in Public Sector Machine Learning Applications for eight specific questions that auditors can ask in the design phase to reduce bias in AI.) Other resources to consider are After a Year of Tech Scandals, Our 10 Recommendations for AI; Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms; and Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability.

Implementation and Impact: “Put it on Backwards When Forward Fails”
The greatest challenge with auditing AI is the very nature of AI itself – we don’t fully understand how the Black Box works. Further, the decisions it made yesterday may not be the same today. When looking at implementation and impact, a few frameworks have emerged (See ISACA's Auditing Artificial Intelligence and the IIA’s Artificial Intelligence Auditing Framework: Practical Applications, Part A & Part B.) To see how others have approached this challenge, looking at the numerous research projects in the public sector can be helpful. Regardless of the methodology used, because AI is always learning, an agile approach that provides for continuous auditing will be required to provide assurance against racial bias.
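As one concrete example of a test that could be run continuously, the sketch below compares selection rates across groups and flags large disparities (a simple demographic-parity style check using the informal “four-fifths” threshold). It is illustrative only; the group labels, data and threshold are assumptions, not a methodology prescribed by the frameworks above.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool). Return selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's rate to the highest rate; values below ~0.8
    (the informal 'four-fifths rule') may warrant investigation."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    print(adverse_impact_ratios(selection_rates(decisions)))  # {'group_a': 1.0, 'group_b': 0.5}
```

Run on a recurring schedule against the model’s recent decisions, a check like this becomes one small building block of the continuous-audit approach described above.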

Editor’s note: For a forward-looking view of AI in the next decade, see ISACA’s Next Decade of Tech: Envisioning the 2020s research.

Category: Audit-Assurance Published: 12/6/2019 2:24 PM
Category: ISACA

How Big Data Aids Cybersecurity

ISACA Now Blog - 2019-11-27 06:42:27
Body:

The increasing reliance on big data and the interconnection of devices through the Internet of Things (IoT) has created a broader scope for hackers to exploit. Both small and large businesses now have a wider attack surface to protect. Yet all it takes is one new trick for an attacker to penetrate even the most sophisticated firewalls in a matter of seconds. The good news is that while increased reliance on big data puts businesses at risk of cyberattacks, the same data, used well, can enhance cybersecurity.

How Big Data Is Helping Cybersecurity
We are so used to the idea of protecting data that using it to bolster cybersecurity might not be top of mind. However, it's not only sensible, but also incredibly effective. According to the results of a study conducted by Bowie University, 84% of businesses using big data successfully managed to block cyber-attacks. What was their secret? Three words: big data analytics.

Big data analytics refers to the process of analyzing or assessing large, varied volumes of data that are often left unexploited by conventional analytics programs. The data can be unstructured or semi-structured, and in some cases a mix of both. Initially, the aim of analyzing such data was to make data-driven decisions and determine customer preferences to improve operational efficiency and enhance client satisfaction. But now, data analytics is also being used to retrieve important information from big data with the sole aim of strengthening cybersecurity. This is done by analyzing historical data to develop better security threat controls.

By combining big data analytics and machine learning, businesses are now able to perform a thorough analysis of past and existing data and identify what's “normal.” Based on the results, they then use machine learning to strengthen their cybersecurity parameters so they can receive alerts whenever there's a deviation in the normal sequence of things, and consequently, thwart cybersecurity threats.

For instance, if big data analytics on past and existing data show that all employees log in to an entity’s system at 8 in the morning and log off at 5 in the evening, the business will mark this as the standard and expected sequence of events. It will then put in place a way to flag, and alert on, any attempted login before 8 a.m. or after 5 p.m. This, in turn, can prevent potential hacks from happening. In a nutshell, carrying out a thorough analysis of historical data helps an organization identify its network’s regular patterns, so it can come up with solutions to detect and prevent deviations in real time.
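A minimal sketch of that idea might look like the following: learn a working-hours baseline from historical logins and flag anything outside it. The timestamps and the simple min/max baseline are illustrative assumptions; a real deployment would use far richer features and models.

```python
from datetime import datetime

def learn_baseline(historical_logins):
    """Learn the earliest and latest login hours seen in historical data."""
    hours = [ts.hour for ts in historical_logins]
    return min(hours), max(hours)

def is_anomalous(login_time: datetime, baseline) -> bool:
    """Flag logins that fall outside the learned working-hours window."""
    earliest, latest = baseline
    return not earliest <= login_time.hour <= latest

if __name__ == "__main__":
    history = [datetime(2019, 11, d, h) for d in range(1, 6) for h in (8, 12, 17)]
    baseline = learn_baseline(history)                          # (8, 17)
    print(is_anomalous(datetime(2019, 11, 27, 3), baseline))    # True: 3 a.m. login
    print(is_anomalous(datetime(2019, 11, 27, 10), baseline))   # False
```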

The Analysis of Current and Historical Data for Threat Visualization
By analyzing big data, businesses can foresee future attacks and come up with effective measures to prevent them. For instance, if a company has already been a victim, carrying out a thorough analysis of the events leading up to the attack helps it identify the patterns the hackers followed before they gained entry into the network. It can then use machine learning to formulate a solution that will ensure such a thing doesn’t happen again.

Alternatively, if a business has never been attacked, it can use current and historical industry data to identify strategies used by hackers to attack other entities. Based on what it comes up with, it can then visualize what steps similar attackers would take to penetrate its system, and consequently, come up with a solution before they do.

While it’s true cyber-criminals do target big data while formulating their attacks, organizations can use the same data against them through data analytics and machine learning.

Category: Security Published: 11/27/2019 2:47 PM
Category: ISACA
