Insider risk indicators thwart potential threats | TechTarget (2024)

Feature

By paying attention to risk indicators, enterprises can distinguish insider risk from insider threat and avoid falling victim to one of their own.

By Sharon Shea, Executive Editor

Published: 30 Dec 2020

Insider threats aren't just the subject of Joe Payne's book or the target of products created by the company where he is CEO; he has firsthand experience with one.

Five days after she left security software company Code42, one of Payne's employees downloaded the entire contents of her laptop, including payroll data and employee Social Security numbers, onto an external hard drive -- a prime insider threat indicator. Fortunately, the act was caught by the company's own product. When confronted, the well-liked HR employee said she was only trying to copy her contact list.

"Not only was I thinking how embarrassing it would have been to be breached, but it also reinforced to me that every company has really important data," Payne said.

That data comes in many forms, he added: Every salesperson has access to Salesforce data, for example, just as every marketing person has access to the company's entire marketing database. Likewise, every HR employee has access to sensitive employee data, and engineers have access to the company's source code.

This experience -- combined with eye-opening statistics that two-thirds of all breaches are caused by insiders and that only 10% of security budgets are allocated to address insider threats -- led Payne and his co-authors to write Inside Jobs to help others avoid falling victim to insider risks.

Here, Payne shares key insights from the book, including how to identify insider risk indicators, how his company's file activity product counters such threats and more. For more information on the types of insider risks, read an excerpt of Chapter 3 of Inside Jobs.

What is exacerbating the security threats presented by insiders?

Joe Payne

Joe Payne: The whole concept for the book came out of the idea that the world was changing technologically and culturally in ways the security community hadn't adjusted to.

Technologically, new tools to help people share and collaborate are being rolled out: Slack, OneDrive, Box, Google Drive. They're fantastic tools, but traditional security software that addresses the issues of insider risk and data loss was written to block sharing and collaboration. Now, you have the CIO and CEO saying, 'Share, work together, collaborate!' and the CISO saying, 'Sharing and collaboration is bad because that creates risk!' -- a major disconnect on the security side from what the business is trying to do.

Culturally, we're seeing changes around where people work at their jobs. They're working from Starbucks, working from home, on the road or in hotels. Now, with the pandemic, this point is being proven even more.

Beyond where people work, we're also seeing cultural changes around how long people stay at their jobs. Young people stay an average of three years; older people [stay] an average of four years. And, when people switch jobs, they stay in their same industry for the most part, essentially going to work at a competitor.

The combination of new collaboration software, employees working from everywhere and people changing jobs a lot has created the perfect storm for insider risk.

You call them internal risks versus internal threats. What's the difference?

Payne: This is a topic we address in the book that we've also been discussing as an industry. Talking about insiders is different than talking about external threats. Most external threats are actual threats -- if somebody is in your network that doesn't belong, for example. Malware, phishing, spam, ransomware -- all of them are literally threats.

Internal activity by your own employees is typically not a threat, but a risk. We look for what we call 'insider risk indicators.' We're careful not to call employees 'threats' because they might not actually be threats at all; they just might be an indicator that something needs following up.

What are some examples of risk indicators?

Payne: An employee working at the wrong hours has been an insider risk red flag for years. In the old days, they would come into the office at midnight and make a bunch of copies on the copy machine. That off-hours-and-weekends mentality persists, believe it or not. Today, people are working from home, and instead of copying a bunch of files to the cloud or uploading to their Dropbox account at noon, they wait until the day's over. Eight times out of 10, that action is probably just somebody doing their job. But it is an indicator of risky behavior.

Likewise, somebody deleting a bunch of files is an indicator of risk. People planning to exfiltrate files cover their tracks by deleting the files they exfiltrated. But deleting a bunch of files might also just be somebody cleaning up their desktop, which isn't a big deal.

The biggest indicator of risk, by far, is when somebody quits. It's such a big indicator of risk that we devote an entire part of our product to people who are leaving. The fact that an employee quits doesn't mean they've done anything wrong or taken any data. But it's something to consider as risky behavior.

In the book, you wrote: 'Insider risk is a game of odds.' How do you measure those odds? When does a risk turn into a threat?

Payne: Code42 doesn't run the software for customers; rather, we build the tools that our customers operate. Our product pulls together a bunch of different risk factors. If an employee triggers one factor, their boss probably doesn't care -- and doesn't want to create a lot of noise for the security team. But, if an employee hits a bunch of risk factors, the product correlates that and creates an alert for an investigator to look into the situation.

The product does not stop employees from doing their jobs, and it doesn't treat them like criminals for exhibiting risky behavior. Rather, we characterize the product as a big 'DVR' that always has rules and alerts running.

If an employee quits and worked late at night and deleted a bunch of files, for example, the product will send out an alert for further investigation. If, in looking at that data, the investigator finds the employee exfiltrated customer or employee lists or uploaded source code to Dropbox, the HR and legal teams should be brought in to have a conversation with the employee.
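The correlation logic Payne describes -- a single factor stays quiet, several factors together raise an alert -- can be sketched in a few lines. This is not Code42's actual model; the factor names, weights and threshold below are illustrative assumptions:

```python
# Hypothetical sketch of multi-factor insider risk correlation.
# Factor names, weights and the alert threshold are illustrative
# assumptions, not Code42's actual product logic.

RISK_WEIGHTS = {
    "off_hours_activity": 1,
    "mass_file_deletion": 2,
    "removable_media_copy": 3,
    "resignation_filed": 4,
}
ALERT_THRESHOLD = 5  # any single factor alone stays below this

def risk_score(observed_factors):
    """Sum the weights of the factors observed for one employee."""
    return sum(RISK_WEIGHTS.get(f, 0) for f in observed_factors)

def should_alert(observed_factors):
    """Alert only when several factors co-occur, to limit noise."""
    return risk_score(observed_factors) >= ALERT_THRESHOLD

# One factor alone: no alert, no noise for the security team.
print(should_alert({"off_hours_activity"}))                      # False
# A departing employee deleting files off hours: alert.
print(should_alert({"resignation_filed", "mass_file_deletion",
                    "off_hours_activity"}))                      # True
```

The design choice mirrors the interview: individual indicators are expected and benign most of the time, so only the correlated combination is surfaced to an investigator.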

You also wrote that security teams 'can't have the foresight to create a policy for every possible insider risk.' How can new and previously unknown risk indicators be accommodated?

Payne: We capture data about data -- a lot of metadata, basically. That data will say, for example, this file moved to this location, was uploaded to this Gmail account, went out via public share on this cloud service, etc. Thus, we're always capturing new areas of exfiltration. For example, we added AirDrop recently per a customer's request. We also added printing -- if an employee prints something, it will capture what they printed and where.

We capture file activity and help customers look at different user behavior around it. Here's a different type of an example: Some of our customers say employees uploading resumes to job sites is a risk indicator. It's not that the employee did anything wrong -- they're not going to be stopped or reported. But it puts them in a higher category [of] risk because it suggests they're thinking about leaving.
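The "data about data" Payne describes -- which file moved, where it went, by what channel -- can be modeled as a small metadata record. The fields and destination values below are assumptions for illustration, not Code42's schema:

```python
# Hypothetical file-activity metadata record. Field names and
# destination values are illustrative, not Code42's actual schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FileEvent:
    user: str
    filename: str       # metadata only; file contents are not stored
    destination: str    # e.g. "dropbox", "gmail", "airdrop", "print"
    timestamp: datetime

event = FileEvent(
    user="jdoe",
    filename="customer_list.csv",
    destination="airdrop",
    timestamp=datetime.now(timezone.utc),
)
print(event.destination)  # airdrop
```

Because only metadata is captured, new exfiltration channels (AirDrop, printing) can be added by extending the set of destination values rather than redesigning the pipeline.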

Is there any sort of time frame to assess insider risk indicators?

Payne: Our 'DVR' goes back 90 days. We have clients asking us to extend it, and over time, we'll probably make that an option. But we found that, typically, 90 days is good. Even in the departing employee example, most will start taking data about two to three weeks before they leave.
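The 90-day "DVR" lookback amounts to a time-window filter over captured events. A minimal sketch, with an assumed event structure:

```python
# Minimal sketch of a 90-day "DVR" lookback over event timestamps.
# The event dictionaries are an illustrative assumption.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)

def within_lookback(events, now=None):
    """Keep only events that fall inside the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [e for e in events if e["timestamp"] >= cutoff]

now = datetime(2020, 12, 30, tzinfo=timezone.utc)
events = [
    {"file": "payroll.xlsx", "timestamp": now - timedelta(days=14)},
    {"file": "old_backup.zip", "timestamp": now - timedelta(days=120)},
]
print([e["file"] for e in within_lookback(events, now)])  # ['payroll.xlsx']
```

A 14-day-old event survives the filter while a 120-day-old one does not, which matches Payne's observation that departing employees typically start taking data two to three weeks before leaving -- well inside the window.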


FAQs

What are the potential indicators of insider threats?

These indicators may include sudden changes in an employee's behavior, such as increased secrecy, unusual working hours, or unexplained wealth. Other indicators may include unauthorized access to sensitive data, frequent attempts to access restricted areas, or unusual patterns of data transfer or file access.

Which of the following is an indicator of an insider threat?

Significant indicators of an insider threat include unusual login behavior, unauthorized access to applications, abnormal employee behavior, and instances of privilege escalation.

What is a threat indicator?

Threat indicators demonstrate an attack through specific observable patterns, along with additional context intended to represent objects and behaviors of interest in a cybersecurity context.

What are the major motivations for insider threats?

There are many motivators for insider threats: sabotage, fraud, espionage, reputation damage or professional gain. Insider threats are not limited to exfiltrating or stealing information; any action taken by an "insider" that could negatively impact an organization falls into the insider threat category.

Which of the following is not considered a potential insider threat indicator?

Unusual work hours or access patterns, unauthorized access to sensitive information, and expressing dissatisfaction with the organization are all potential indicators of insider threats. However, frequent software updates are not typically considered an insider threat indicator.

What is the most common form of insider threat?

The most common insider threat is typically attributed to employees misusing their access privileges within an organization. This can include unauthorized access attempts, data theft, or using sensitive information for personal gain.

What are the three types of insider threats?

To do this, it is necessary to first understand how insider threats manifest, and a good place to start is examining the three types of insider threats that organisations face: negligent insiders, complacent insiders, and malicious insiders.

Which of the following best describes an insider threat?

An insider threat is anyone with authorized access who uses that access to wittingly or unwittingly cause harm to an organization and its resources including information, personnel, and facilities.

Which of the following is not an insider threat?

These users do not need sophisticated malware or tools to access data because they are trusted employees, vendors, contractors, and executives. Any attack that originates from an untrusted, external, and unknown source is not considered an insider threat.

What are the four types of threats?

Threats can be classified in four categories: direct, indirect, veiled, or conditional.

What are potential risk indicators?

Individuals at risk of becoming insider threats, and those who ultimately cause significant harm, often exhibit warning signs, or indicators. Potential risk indicators (PRIs) include a wide range of individual predispositions, stressors, choices, actions and behaviors.

Which of the following is an example of an insider threat?

Examples include an employee who sells confidential data to a competitor or a disgruntled former contractor who introduces debilitating malware on the organization's network.

What are the indicators of a malicious insider threat?

Malicious insiders aim to leak sensitive data, harass company directors, sabotage corporate equipment and systems, or steal data to try to advance their careers; activity consistent with those aims is an indicator.

Who has the potential to be an insider threat?

Any user with internal access to your data could be an insider threat. Vendors, contractors, and employees are all potential insider threats.

Which scenario might indicate an insider threat?

One scenario that might indicate a reportable insider threat is an employee accessing personal email on a corporate device: malware or phishing attempts might originate from personal email and spread to the corporate network.
