5 Tips for Organizations Deciding How (or Whether) to Implement AI
Source: James Edgar, Chief Information Security Officer, FLEETCOR
Though the term artificial intelligence was coined back in 1956, AI has taken off only in the last 10 to 15 years, exerting a massive impact on our day-to-day lives. From Siri to autonomous vehicles to ChatGPT, AI is opening doors to new possibilities while creating significant risks.
AI has tremendous potential to benefit companies. It can protect against cybersecurity threats by mimicking hacker activity, fixing bugs, and creating self-healing networks. AI can also help agencies enforce regulations, improve risk management, and reduce fraud. It can fill gaps in an employee's skill set and stand in for hard-to-fill positions. These are objectively good things, but naturally, AI can create problems, too.
Just as it can help fill skill gaps among a company's employees, AI can also help unskilled bad actors launch careers in ransomware, DDoS, and phishing without knowing how to code. And even well-intentioned developers who use AI to write code won't know where that code came from, whether it is well written, or whether it might open security holes.
FLEETCOR’s Approach to AI
Here are five considerations shaping our organization's approach to AI in cybersecurity moving forward:
Risk appetite. Are we even ready for AI? Some companies have already said no when it comes to employees using AI. Apple, JPMorgan, Verizon, and others have instructed their employees not to use AI at all. Other companies have said they will use AI for very specific, limited purposes, leveraging some advantages while limiting the risk exposure. We’re always assessing our risk appetite with the best interest of our customers, employees, and shareholders in mind.
Getting teams on the same page. We consider the input of our corporate leaders, legal team, regulatory team, security team, and others driving the use of AI. In addition to getting consensus around our risk appetite for AI, we want to make sure everyone buys into whatever approach is determined to be the best for the business.
Defining our corporate policies. We’re very clear about what is and isn’t acceptable, and everyone must be prepared to manage those policies as the technology evolves. Also, as we develop these policies, we monitor any AI-related laws that have been passed in states or countries where we operate.
Education and training. We are deliberate about ingraining our AI policies in our workplace culture. For example, we might circulate policies on AI to our employees to help them understand the risks of copyright infringement, data leakage, releasing trade secrets, or sharing other confidential information on an AI platform that can put FLEETCOR at risk.
A governance model. If you're going to use AI, you must be deliberate about how it fits into your business. Releasing ChatGPT and telling employees to "go forth and conquer" can create tremendous risk and uncertain outcomes. Instead, we consider business use cases individually: what data we will use to train the AI model and how; what specific goals and targeted outcomes will keep the effort from scope creep; and how we will validate that no bias or hallucinations are baked into the solution. Most importantly, we determine what internal body, such as a board or committee, will review our AI implementation plan and progress to provide the appropriate sign-off prior to moving forward with any large-scale or production usage.
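The governance criteria above can be sketched as a simple intake checklist that flags gaps before a proposal reaches the review body. This is a minimal illustrative sketch, not FLEETCOR's actual process; the class, field names, and gap messages are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical intake record mirroring the governance criteria:
# training data, targeted outcomes, bias/hallucination validation,
# and an internal body that signs off. Names are illustrative only.
@dataclass
class AIUseCaseProposal:
    name: str
    training_data_sources: list = field(default_factory=list)  # what data trains the model
    goals: list = field(default_factory=list)                  # targeted outcomes to limit scope creep
    bias_validation_plan: str = ""                             # how bias/hallucinations will be checked
    reviewing_body: str = ""                                   # board or committee providing sign-off

def ready_for_review(p: AIUseCaseProposal) -> list:
    """Return the gaps that must be closed before the review body sees the proposal."""
    gaps = []
    if not p.training_data_sources:
        gaps.append("no training data sources documented")
    if not p.goals:
        gaps.append("no targeted outcomes defined (scope-creep risk)")
    if not p.bias_validation_plan:
        gaps.append("no bias/hallucination validation plan")
    if not p.reviewing_body:
        gaps.append("no internal review body assigned")
    return gaps

# Example: a partially completed proposal still has open gaps.
proposal = AIUseCaseProposal(
    name="fraud-alert triage assistant",
    training_data_sources=["internal transaction logs (anonymized)"],
    goals=["reduce analyst triage time"],
)
print(ready_for_review(proposal))
```

The point of the sketch is the gate itself: a proposal with any open gap never reaches large-scale or production usage, regardless of how promising the use case looks.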
The reality is there is no stopping the AI train. Businesses including FLEETCOR are focusing more and more on integrating AI into their services and overall business models to improve their offerings and enhance the customer experience. As they do, creators of AI will continue refining their solutions to address the real concerns. AI will neither bring about the collapse of humanity nor find an immediate cure for cancer. The best we can do is educate ourselves on AI's pitfalls and make a deliberate effort to leverage its benefits.