We're off to see the wizard
Is your organization ready for the incoming tsunami of AI-created insider imposters?
Everyone is focused on how artificial intelligence is coming for our jobs, or enabling chaos-creating disinformation campaigns. But when you saw the “deep fake” video of Obama in 2017, did you think: it’s only a matter of time before AI enables criminals to come for the public figures inside my own organization?
AI is growing in quality and accessibility, making it easier than ever before to leverage public information and readily available vocal and visual style cues to create undetectable fake videos, voice conversations, and written communications.
Earlier this year, a fake video of Joe Rogan hawking a “libido booster” was seen by more than 5 million people before the platform removed it.
A song that Drake and the Weeknd had supposedly recorded together went viral before streaming services removed it as a fake.
Even everyday people are more susceptible to scams than ever before, including the grandmother who got a call from her “grandson” and only discovered the conversational voice-replication scam when she arrived at the SECOND bank to withdraw money and learned from the bank manager that “another patron had gotten a similar call and learned the eerily accurate voice had been faked.”
And, what is most terrifying is that “scammers can replicate a voice from just a short audio sample, then use AI tools to hold a conversation in that voice, which ‘speaks’ whatever the imposter types.”
Remember the magical days of the early 2020s when many companies’ security training programs included noting obvious typos in emails, typosquatting, or emails that just “look fake” as the best ways to identify a phishing probe or scam? Once, I even fell for a well-placed security team “test” promising baby photos of my colleagues, which of course I HAD to see.
Those innocent days are long behind us. Now, every blog article you have ever written, every speech you ever gave introducing someone at a nonprofit, and every panel you sat on that was recorded for attendees who couldn’t be at the event is free and ready to be used to create a message from “you” via deep fake video.
Not to mention that the very training ground for ChatGPT was 300 billion words of text from the internet, including your comments on stories and reviews. And, as we all continue to ask ChatGPT questions, kind of like a modern-day “Magic Eight Ball,” ChatGPT is getting smarter about our individual behavior, preferences, and everyday practices that we thought only our “fur kids” were privy to. Add “speed to market,” where a video or letter can take milliseconds for AI to create, and scammers “[can] massively scale their scams [and create] plausible BS-sounding letters” that they no longer have to write themselves.
The dark underbelly of readily available AI tools like ChatGPT is now that public information is being used to create more effective frauds and schemes, especially related to eerily realistic impersonations of people that you know.
Is your company ready for the risks associated with AI and the ease of creating deep fakes related to your company insiders?
AI and surrounding technologies are both a sword and a shield
The robot shield:
Some basic technology terminology is the best place to start
Before diving into how technology can be a shield, especially when deployed as part of a robust audit, risk management, or security program, let’s level set on the terminology.
Artificial intelligence (AI) is “unattended automation” that learns and develops its own analytical approach as it gathers and synthesizes more and more data.
Conversely, Robotic Process Automation (RPA) is a “rules based” computer program (usually created by a person, and which does not “learn” or change over time) that automates repetitive tasks in a standardized way to reduce time and labor costs and decrease the likelihood of errors. For RPA, think of things like generating standard response letters, converting documents and uploading them into a system, and other monotonous, common tasks that you would rather not pay a person to do.
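To make the distinction concrete, here is a minimal Python sketch of the kind of “rules based” robot RPA describes. Everything in it (the template, the field names, the SLA rule) is hypothetical; the point is that a person wrote the rules, and the program never changes them on its own:

```python
from string import Template

# A hypothetical rules-based "robot": it follows fixed rules written by a
# person and never learns or changes its behavior -- the defining trait of RPA.
LETTER_TEMPLATE = Template(
    "Dear $name,\n\n"
    "We received your request on $date and have assigned it case number $case_id.\n"
    "You can expect a response within $sla_days business days.\n"
)

def generate_response_letter(record: dict) -> str:
    """Apply the same fixed rule to every record: fill the standard template."""
    # A fixed, human-authored rule -- not something the program learned.
    sla_days = 5 if record["priority"] == "standard" else 2
    return LETTER_TEMPLATE.substitute(
        name=record["name"],
        date=record["date"],
        case_id=record["case_id"],
        sla_days=sla_days,
    )

letter = generate_response_letter(
    {"name": "A. Customer", "date": "2023-05-04", "case_id": "12345", "priority": "standard"}
)
print(letter)
```

Every record goes through the identical template and the identical priority rule, which is exactly why RPA is cheap, consistent, and limited: it cannot handle a case its author didn’t anticipate.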
Where it gets really interesting is when the learning and expansion enabled by AI is used to create and expand RPA processes. Said another way, “AI helps robots perform cognitive tasks, navigate uncertainty, and resolve inconsistencies. And the more that robots can think and understand on their own, the more they can do. The faster they can do it. And the bigger the impact they can make.”
In recent years, much of the conversation has been about using AI and RPA to detect anomalies in behaviors and transactions in order to sniff out fraud and other high risk issues within an organization. In everything from extracting data from a variety of systems to analyze disparate information, comparing data against known fraud patterns, generating alerts for suspicious or high risk behavior, and even providing reporting mechanisms for decision making, the use of these tools is so effective that “when audit firms invest in AI, their audit quality goes up. There are fewer restatements, including material restatements, and fewer SEC investigations related to audits performed by AI-investing firms.”
To further halt fraud in its tracks, through “behind the scenes” data capture and analysis, companies are exploring the use of AI and analytical intelligence to detect fraud in areas like insurance claims and predicting the likelihood of loan default based on the intent of the applicant.
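As a rough illustration of the anomaly-detection idea described above, here is a minimal Python sketch that flags transactions deviating sharply from an account’s history. Real fraud programs use far richer features, known fraud patterns, and learned models; the simple z-score and the cutoff here are purely illustrative:

```python
import statistics

def flag_anomalies(history: list, new_txns: list, z_cutoff: float = 3.0) -> list:
    """Flag transactions far outside the account's historical pattern.

    Returns (amount, z_score) pairs as candidates for human review.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    alerts = []
    for amount in new_txns:
        z = (amount - mean) / stdev  # how many standard deviations from normal
        if abs(z) > z_cutoff:
            alerts.append((amount, round(z, 1)))
    return alerts

# Illustrative account history: routine transactions around $100-$130.
history = [120.0, 95.0, 110.0, 130.0, 105.0, 115.0, 98.0, 125.0]
print(flag_anomalies(history, [118.0, 2500.0]))
```

The $118 transaction passes quietly; the $2,500 transaction generates an alert. In production systems this kind of screening is one input among many, feeding the alerting and reporting mechanisms described above rather than blocking transactions outright.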
AI is the sharpest sword in the scabbard for bad actors.
While companies are using AI to become better at detecting fraud, bad actors are using it in new and chilling ways. For example, the ability of large language models (LLMs) like ChatGPT to recognize and mimic speech patterns not only facilitates phishing and online fraud, but can also be used more generally to mimic the speech style of specific individuals or groups. This capability can be widely abused to trick potential victims into placing their trust in the hands of criminal actors. And, AI is only getting more robust. According to Hany Farid, a professor of digital forensics at the University of California at Berkeley (in an interview with the Washington Post): “Two years ago, even a year ago, you needed a lot of audio to clone a person’s voice, now … if you have a Facebook page … or if you’ve recorded a TikTok and your voice is in there for 30 seconds, people can clone your voice.” And what if they want to fake a call from the executive? “Today’s deep fake artists only need about 3 minutes of audio to recreate a convincing fake voice call.” And with $11 and 8 minutes, anyone can create a realistic deep fake video of another person.
Have you thought about how bad actors using AI could affect your organization and how much of your insiders’ appearance, voice, and mannerisms are already available online?
How much authority and ability to give direction resides with each individual member of your leadership team for things like authorizing transactions, directing payments, signing contracts, changing processes, and engaging new vendors?
Now consider how much more susceptible that leadership team is to providing content to deep fake creators, especially when speaking at industry events, presenting on earnings calls, and being interviewed is part of their “natural habitat.”
Add in the challenges of secondary validation in a remote or hybrid workplace that often operates in a highly asynchronous fashion, with a lack of face-to-face contact and familiarity, and suddenly the AI “insider imposter” does not need to be that accurate or believable to successfully trick an employee into taking steps that the employee believes are at the request of an executive in the organization. Insider threats have always been a source of huge risk in organizations, but now anyone can pretend to be an insider with simple, cheap, and fast technology.
Is addressing AI fraud a losing battle?
The Federal Trade Commission (FTC) has an eye on the supercharged fraud opportunities available today through AI and other developing technologies, and has established the Office of Technology to try to combat the changing landscape.
A technology ethics group with members like…wait for it…Elon Musk, has called on the FTC to slow down the development of generative AI models and implement stricter government oversight. But even with FTC attention and commitment to targeting AI that violates civil rights or is deceptive, there is currently little legal recourse for organizations against the platforms that enable such sophisticated cyber crime that now does not even require expert coding skills.
What can you do now? Advice for boards, leaders and practitioners
As if Joe Rogan and the Weeknd aren’t enough to convince you that your company is vulnerable, consider the risk associated with a call, voice memo, or even a video chat from an “executive” directing someone in your organization to grant access, send money, change payment instructions, click to accept a contract, or make some process or system change that the employee is convinced is in the company’s “best interest” to complete quickly. It’s easier than ever before to create a believable imposter. Are your policies and practices ready for it?
Review existing audit, security and risk plans
First off, dust off your audit, security, and risk plans and take a look at your specific fraud strategies. Do they address or identify the potential of AI-based “imposter fraud” and other more sophisticated types of fraud? In your assessment, consider the phases of fraud, including prevention, detection, and response as well as the “Fraud Triangle” for insiders, including the opportunity to commit a fraudulent act (knowingly or unknowingly), rationalization (the ability of an employee to create an acceptable reason to take the step), and pressure (including the need to meet SLAs or productivity requirements and the demands of managers and leaders).
Don’t try to become an expert on all of this yourself. Instead, ask results-driven questions of your internal audit, fraud, risk, and security teams.
AI is a difficult area to discuss, even for technology professionals. So much of AI is a “black box” that is hard to explain or validate, and that lack of detailed understanding creates a special sort of risk. Thus, it is important to focus on specific use cases (possible scenarios that could happen) and their results to ensure that what you believe will happen actually happens.
Verify your controls and specific authority for certain roles
What is the most extensive “authority” in place with any one person or small number of people that could be susceptible to a deep fake?
What additional controls are in place to ensure that a single contact (phone call, video chat, and/or email, alone or in combination) does not enable an employee to act (pay an invoice, grant access, accept a contract, etc.)? Are second “trusted channel” confirmations required?
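As one way to picture such a control, here is a hypothetical Python sketch in which a sensitive request is held until it is confirmed through a second, independent channel using contact details already on file. The action names and fields are illustrative, not any particular product’s API:

```python
# Actions that must never be executed on the strength of a single contact.
REQUIRES_CALLBACK = {"change_payment_instructions", "grant_access", "accept_contract"}

def can_execute(request: dict) -> bool:
    """Release a sensitive request only when an out-of-band confirmation exists."""
    if request["action"] not in REQUIRES_CALLBACK:
        return True
    confirmation = request.get("confirmation")
    if confirmation is None:
        return False  # hold: no second-channel confirmation yet
    # The confirming channel must differ from the channel the request arrived
    # on, and must use contact details already on file -- never contact details
    # supplied in the request itself (an imposter controls those).
    return (
        confirmation["channel"] != request["channel"]
        and confirmation["used_contact_on_file"]
    )

pending = {"action": "change_payment_instructions", "channel": "email"}
print(can_execute(pending))  # held: no callback yet

confirmed = dict(pending, confirmation={"channel": "phone", "used_contact_on_file": True})
print(can_execute(confirmed))  # released after an independent phone callback
```

The key design choice is that the second channel is chosen by the company, not by the requester: a deep-faked email followed by a deep-faked call to a number supplied in that same email would still fail the check.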
What tools are currently in play in the organization?
How does the company manage and keep track of the Robotic Process Automation (RPA) tools in the organization? Are they regularly reviewed and examined?
What is the organization doing to further automate fraud detection and how is it ensuring that there aren’t silos being created between the various tools?
What other AI is planned or deployed in the organization? How is it being used? Is it being used to proactively identify potential fraud, such as by reviewing recorded reports or presentations from customers?
Is there an up-to-date map of all of the system connections for the organization, including affiliates, consultants, third party providers, vendors, hosting services, etc? How often is it reviewed?
Is system access efficient for both user adds and removals? For example, if employees cannot get adequate access quickly, they are more likely to do “favors” for each other to get the job done. Conversely, if system access is not removed immediately upon employee departures or changes in role, the continued and unnecessary access creates risk (and don’t assume this could never happen in your organization; you would be surprised how common these scenarios are, even in large and sophisticated companies).
How often does the organization test various “new” scenarios of intrusion and fraud?
Have the risk, audit, fraud, and security teams tested imposter scenarios involving employees in positions of authority?
And what about employees?
What tools are in place to monitor suspicious employee behavior?
Are employees regularly surveyed about fraud potential and what they are most worried about? Where do the results go? Does the board see them in detail, in summary form, or not at all?
How do you ensure employees are trained to be skeptics in an ever-changing landscape of potential threats?
How is your company “rewarding” skeptics? Said another way, when someone refuses to take action without face-to-face or other reliable confirmation, do the executives react enthusiastically and praise the skepticism, or respond in frustration that they are being bothered with a manual approval for something that has already been requested?
What insurance coverage might be available in the event the company falls victim to an AI enabled attack?
What insurance covers fraud committed through AI tools? It depends on many factors, including your specific policy language; whether the event is related to social engineering, computer fraud, commercial crime, funds transfer, or other insurable acts; and even how your policies are bundled together.
Do the employees responsible for the various insurance coverages regularly interact with the brokers and carriers to ensure that they understand what is covered and what isn’t? Do they evaluate the program for changing technology risks?
Does your communications strategy anticipate the potential for insider deep fakes?
Don’t forget your crisis and response plans. Remember the recent PrepOverCoffee story about the “Fast Moving Rumor”? What if that fast-moving rumor was a deep fake of your CEO doing or saying something highly brand damaging?
What plans are in place in the event a “deep fake” is created of an executive that is harmful to the brand or otherwise depicts false information?
When the organization “tests” the crisis plan, is insider “deep fake” one of the scenarios?
Are there guidelines about what recordings are available on the company website or other social media?
Do your various governing organizations really play well in the sandbox?
Since the dawn of time, silos have created the most significant obstacle to success in most organizations. Sophisticated opportunities for fraud are no exception. A combined effort must be made by the board of directors, the audit committee, internal and external auditors, risk management personnel, investigators, operations personnel and others to manage the risk of fraud.
How closely are the audit, risk, fraud and business organizations working and how are their priorities set?
Are the audit, risk, fraud and business organizations adequately staffed and familiar with the changing technology landscape?
What behaviors are rewarded and what behaviors are discouraged among the teams? Do the behaviors that are encouraged align with strong skepticism and a commitment to reducing risk?
Want to share your thoughts with the community?
How is your organization dealing with the fast pace of technological complexity and the opportunity for fraud using AI?
Share a comment here.
ESPRESSO SHOTS:
Remember the short-seller portion of Prep’s Fast Moving Rumor story and the incredible impact of firms like Hindenburg on market capitalization? Check out the latest target, Carl Icahn, who has lost $10B in value (at the time of this publication) since the Hindenburg report surfaced this week.
For more on fraud detection programs, check out:
https://fedpaymentsimprovement.org/strategic-initiatives/payments-security/fraudclassifier-model/
https://www.datavisor.com/wp-content/uploads/2021/11/AI-Fraud-Detection-Readiness-Checklist_FI.pdf
https://www.sec.gov/news/statement/munter-statement-fraud-detection-101122
If you are sick of AI and technology altogether, go listen to some nice jazz music on vinyl or admire the exceptional WICN jazz CD archive - thanks to The Set List.
Happy Star Wars Day and May the 4th be with you!