Centralized Artificial Intelligence: How to get your GenAI deployment right

If you are a security leader, you need to be able to answer three questions: where is your sensitive data? Who can access it? And is it safe to use? In the age of generative artificial intelligence, answering all three is becoming increasingly difficult.

An October whitepaper from Concentric AI outlines the rationale. GenAI has moved from “curiosity to a central force in enterprise technology” almost overnight. The company’s autonomous data security platform provides data discovery, classification, risk monitoring and remediation, and aims to use artificial intelligence to fight the risks that artificial intelligence creates.

This time last year in the UK, Deloitte warned that beyond IT, organizations were focusing GenAI deployments on parts of the business that are “uniquely important to success in their industries” – and things have only accelerated since then. In addition, Concentric AI notes how GenAI is changing core data security processes within organizations.

“The insider threat exposure has increased substantially, and exfiltrating this sensitive data is no longer necessarily a proactive decision,” says Dave Matthews, Principal Solutions Engineer for EMEA at Concentric AI. “So what we found is that users are quick to adopt AI-powered apps, but they don’t fully understand the risk of exposure, especially through certain platforms and their decisions about which platform to use.”

Does this sound familiar? If you’re reliving the early days of enterprise mobility and bring your own device (BYOD), you’re not alone. Yet, as the whitepaper states, it’s an even bigger threat this time around. “The BYOD story shows that when convenience trumps management, businesses must adapt quickly,” the paper explains. “The difference is that GenAI doesn’t just expand the perimeter, it dissolves it.”

Concentric AI’s semantic intelligence platform aims to cure security leaders’ headaches. It uses contextual AI to discover and categorize sensitive data, both in the cloud and on-premises, and can enforce category-based data loss prevention (DLP) to stop leaks to GenAI tools.
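To make the idea of category-aware DLP concrete, here is a minimal sketch of the enforcement logic. This is purely illustrative: Concentric AI’s platform uses contextual AI models for classification, whereas the keyword patterns, category names, and function names below are hypothetical stand-ins.

```python
# Illustrative category-aware DLP check (assumptions: keyword-based
# classification, hypothetical category names and policy).

# Hypothetical patterns that mark a document as belonging to a category.
CATEGORY_PATTERNS = {
    "financial": ["iban", "account number", "invoice"],
    "pii": ["passport", "date of birth", "national insurance"],
    "source_code": ["def ", "class ", "import "],
}

# Policy: categories that must not leave the perimeter via a GenAI tool.
BLOCKED_CATEGORIES = {"financial", "pii"}

def classify(text: str) -> set[str]:
    """Return the set of categories whose patterns appear in the text."""
    lowered = text.lower()
    return {
        category
        for category, patterns in CATEGORY_PATTERNS.items()
        if any(pattern in lowered for pattern in patterns)
    }

def allow_genai_upload(text: str) -> tuple[bool, set[str]]:
    """Category-aware check: allow only if no blocked category is detected."""
    violations = classify(text) & BLOCKED_CATEGORIES
    return (not violations, violations)

if __name__ == "__main__":
    ok, hits = allow_genai_upload("Customer passport and account number attached")
    print(ok, sorted(hits))  # False ['financial', 'pii']
```

The point of the sketch is the decision structure, not the classifier: a real deployment would swap the keyword matcher for contextual classification and enforce the verdict at the application layer, before data reaches the GenAI service.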

“To safely deploy GenAI, really what we need to do is make that use visible, we need to make sure we’re endorsing the right tools… and that means enforcing category-aware DLP at the application layer and also adopting an AI policy,” explains Matthews. “Have a profile that’s ideally aligned with the NIST cyber AI guidelines, so that you have policies, you have logging, you have governance that covers … not just the user usage or the data coming in, but the models that are being used.

“How are these models being used? How are these models being built and informed by the data that’s also getting there?”

Concentric AI is attending the Cyber Security & Cloud Expo in London on February 4-5, and Matthews will be talking about how legacy DLP and governance tools “failed to live up to their promise”.

“It’s not for lack of effort,” he notes. “I don’t think anyone would slack off on data security, but we struggled to deliver because we lacked context.

“I’ll share how you can use real-world context to fully secure your data and unlock this secure and scalable adoption of GenAI,” adds Matthews. “I want people to know that with the right strategy, data security is achievable and really, with these new tools available to us, it can be transformative.”

Watch the full interview with Dave Matthews below:

Photo by Philipp Katzenberger on Unsplash
