By Michael Brice, Founder and President, BW Cyber
We are still in the very early stages of the Artificial Intelligence (AI) zeitgeist and have little idea of the potential universe of risks, especially as they relate to data protection and access to your technology environment. Specifically, what happens to the data you input into an AI system as you use it to support your work? Nobody knows. However, it is not a stretch to assume that the data is being collected and might be shared with others in the course of ‘normal’ AI activities.
In response, we strongly recommend that all asset managers have an AI policy in their employee handbook that specifically states what employees can and cannot do with AI technology. Our default position, which errs on the side of caution, is to prohibit any use of AI from a company computer or mobile device without the explicit prior approval of the employee’s line manager.
That may sound drastic, but in our experience many employees simply do not recognize the potential seriousness of the risks. For example, would allowing a team to use AI to take notes of their meetings and calls be an issue? Do you need a policy against automated note taking? You bet you do! After all, where does the information go once the notes have been consolidated for your team? It goes to the AI provider’s systems, where it may be retained indefinitely. And, in a worst-case scenario, that data could be used to help your competitors or to assist criminals in attacking you.
Regardless, we recognize that a blanket “thou shalt not use AI” edict will be either too strict or simply unworkable for some employees. If that is the case, we recommend implementing a policy that very clearly describes what staff can and cannot do with large language models and AI, most notably preventing the release (or, as in the note-taking example above, the discussion) of personally identifiable information (PII), client data, or proprietary company information.
Most people would say this is common sense. But in our experience, it is not; we hear again and again that employees are using AI for meeting notes, client communications, and even code development. Those with a positive disposition would say this is someone taking the initiative, using an available tool to do their job better. But these activities routinely expose proprietary information that the AI system can retain, and that others may therefore be able to access, forever. It is critical that the senior leadership of regulated entities (e.g., asset managers) implement concise, clear rules on the use of AI.
There is no ‘off the shelf’ version of an employee artificial intelligence use policy (AIUP), because each firm takes a nuanced approach to technology systems and tools and to risk management. An AI policy is something you will need to engage your legal counsel and cybersecurity consultant on. But two bullet points should always be there (a simple technical screen illustrating how they might be enforced in practice is sketched after the list):
- never allow any identifiable client information to be entered into an AI system
- never allow any identifiable, company-sensitive information to be entered into an AI system
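As an illustration only, the sketch below shows one way a firm might back those two rules with a lightweight technical control: a pre-submission screen that flags obvious PII or restricted internal terms before text is pasted into an AI tool. The patterns, restricted terms, and function names here are assumptions made for the example, not part of any specific firm’s policy; a real control would be designed with your legal counsel and cybersecurity consultant and would typically rely on a dedicated data loss prevention tool.

```python
import re

# Hypothetical, minimal pre-submission screen. The patterns and keywords are
# illustrative only and will not catch everything; they are not a substitute
# for a properly designed data loss prevention control.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible account number": re.compile(r"\b\d{8,16}\b"),
}

# Example internal terms a firm might choose to flag (assumed names, not real data).
RESTRICTED_TERMS = ["Project Falcon", "client ledger", "fund NAV file"]

def screen_before_submission(text: str) -> list[str]:
    """Return a list of reasons the text should NOT be sent to an AI tool."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(f"Contains a possible {label}")
    for term in RESTRICTED_TERMS:
        if term.lower() in text.lower():
            findings.append(f"Mentions restricted term: {term}")
    return findings

if __name__ == "__main__":
    draft = "Summarise this call with jane.doe@example.com about Project Falcon."
    issues = screen_before_submission(draft)
    if issues:
        print("Blocked - review with compliance before using an AI tool:")
        for issue in issues:
            print(" -", issue)
    else:
        print("No obvious PII or restricted terms found.")
```

Even a simple screen like this will miss things; it supplements, rather than replaces, the written policy and staff training described below.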
If you must use these systems (and I stress, you should not allow blanket usage under a ‘be careful’ banner), then at least get your employee handbook updated with an AI use policy, and make sure you train your staff on exactly what that policy means and why it is in place.