On June 12th, Salesforce announced ‘AI Cloud,’ which aims to embed generative AI capabilities throughout its market-leading CRM platform in an effort to enhance productivity for all Salesforce CRM users. The announcement features eight different modules: Sales GPT, Marketing GPT, Slack GPT, Flow GPT, Service GPT, Commerce GPT, Tableau GPT, and Apex GPT. These modules target distinct use cases to spur productivity and workplace efficiency across different Salesforce properties and users, and they continue the flood of generative AI making its way into popular SaaS platforms. The end goal is the same everywhere: productivity, productivity, productivity.
However, the Salesforce announcement comes with a welcome twist. A key component of the announcement from the San Francisco firm is the ‘Einstein GPT Trust Layer,’ which sets out to resolve the data privacy concerns that so many companies are (rightfully) grappling with as generative AI takes the world by storm. These concerns have, in some cases, led to public instances of leading enterprises forbidding the use of ChatGPT and generative AI in the workplace. Other organizations have found that employees and contracted vendors were inputting sensitive data into these large language models (LLMs), which constituted security violations and put the firm at risk of failing audits. In short, Salesforce intends the Einstein GPT Trust Layer to act as a middleware of sorts that prevents LLMs from retaining sensitive customer data, keeping that data separate from the LLM so customers can maintain data governance. The word ‘trust’ appears in some form 31 times throughout the Salesforce press release, and it is clear that they are taking data privacy seriously.
Salesforce is now positioning itself to bring the power of generative AI to deliver trusted, AI-created content across every sales, service, marketing, commerce, and IT interaction, boosting productivity and efficiency. This is similar to NVIDIA’s NeMo Guardrails, in that both modules try to prevent text-generative models from retaining sensitive data. Documentation reveals that the Einstein Trust Layer sits between a Salesforce application and a text-generating model: it can detect when a prompt might contain sensitive information and automatically remove it on the backend before the prompt reaches the model. We’ll have more to say about this in the coming weeks as we evaluate the real-world implications of the Einstein Trust Layer, but it’s worth recapping a few things that security teams should take into account as more and more business users become credibly empowered to create their own applications, automations, workflows, and connections, all of which can transfer data to and from a huge number of corporate resources and systems.
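To make the middleware idea concrete, here is a minimal sketch of what a prompt-masking layer of this kind might look like. Salesforce has not published the internals of the Einstein Trust Layer, so the detection patterns, placeholder tokens, and function names below are our own illustrative assumptions, not Salesforce's implementation:

```python
import re

# ASSUMPTION: a toy set of detection rules. A production trust layer would
# use far richer detection (NER models, field-level metadata, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Replace detected sensitive values with placeholder tokens
    before the prompt is forwarded to an external LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}_MASKED>", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    masked = mask_prompt(prompt)
    # The call to the text-generating model would go here; only the
    # masked prompt ever leaves the trusted boundary.
    return masked
```

The key design point this sketch captures is that masking happens in the middleware, so the raw customer data never reaches (and therefore can never be retained by) the model.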
1. Be mindful of who is creating what. In Salesforce’s documentation of this exciting new product line, they provide several examples of what people can do with AI Cloud: sales reps quickly auto-generating personalized emails, commerce teams delivering customized purchasing experiences, and developers auto-generating code, predicting bugs, and suggesting fixes in real time. With all of this new power being placed in the hands of business users of all technical backgrounds, security teams are faced with the burden of identifying everything being built and put into production, oftentimes without the luxury of the ‘stop and check’ mindset of the traditional software development lifecycle. And this type of power is not granted through Salesforce alone, but also by other tech giants like Microsoft and ServiceNow, app creation platforms like Zapier, robotic process automation tools such as UiPath, and integration-platform-as-a-service providers a la Workato. As such, security teams need to maintain continuous visibility over all apps, automations, and workflows that citizen developers create.
2. Raise awareness of sensitive data flows. While there is still more to be uncovered (Salesforce being the first to admit this), it is worth remembering that data and applications do not exist in isolation. Even when creating applications and automations and leveraging AI within Salesforce, data will very likely flow to and from other systems and resources inside and outside the organization. Security practitioners must remain vigilant to stay in compliance and reduce security risk. Establishing checks across the entirety of the low-code/no-code and generative AI estates, to flag any time an application or business resource interacts with sensitive data, is the first step; but being able to assess the risks each resource introduces to the organization is critical for security teams to act accordingly and with the proper prioritization.
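The flag-then-prioritize check described above can be sketched in a few lines. The inventory format, sensitive field names, and scoring weights below are assumptions made for illustration; any real deployment would derive these from its own data classification policy:

```python
# ASSUMPTION: field names and risk weights are illustrative, not a vendor API.
SENSITIVE_FIELDS = {"ssn", "credit_card", "health_record", "salary"}

RISK_WEIGHTS = {
    "external_destination": 3,  # data leaves the organization
    "llm_prompt": 2,            # data is embedded in a generative AI prompt
    "internal_only": 1,         # data stays inside the estate
}

def flag_resources(inventory):
    """Return citizen-developed resources that touch sensitive data,
    highest risk first, so security teams can prioritize review."""
    flagged = []
    for resource in inventory:
        touched = SENSITIVE_FIELDS & set(resource["fields"])
        if touched:
            flagged.append({
                "name": resource["name"],
                "sensitive_fields": sorted(touched),
                "risk": RISK_WEIGHTS.get(resource["data_flow"], 1),
            })
    return sorted(flagged, key=lambda r: r["risk"], reverse=True)
```

For example, an export automation sending credit card fields to an external destination would surface ahead of an email generator that only embeds data in an LLM prompt, which is exactly the prioritization a security team needs when it cannot review everything at once.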
3. Ongoing governance. The true test for organizations leveraging Salesforce AI Cloud will be ensuring that, as people become more productive more quickly, they are using these tools appropriately. The Einstein GPT Trust Layer provides a good starting point for security teams to feel comfortable with the tools their people are using, but questions remain about how data can move in and out of Salesforce when low-code/no-code capabilities and/or generative AI are used, as new apps, automations, and workflows are created by business users with all sorts of technical backgrounds. Salesforce researchers offer four requirements for safer use of generative AI: human oversight, enhanced security measures, trusted customer data, and ethical use guidelines. While this is a good foundational structure, much more is needed to govern the usage of platforms that ingest and process massive amounts of data throughout the organization.
The main takeaway here is that yet another major technology vendor is introducing cutting-edge generative AI capabilities into its solution set to enable business users of all technical backgrounds to get more work done. If your organization is leveraging Salesforce and is curious about how to ensure data security and integrity across all low-code/no-code development platforms, join us next week for our webinar on the collision course of low-code/no-code and generative AI!