Cybersecurity
Protect your practice by recognizing AI cybersecurity threats
As artificial intelligence (AI) rapidly advances, knowledge and awareness are key.
With the rapid advancement of technology and AI, cybercriminals have developed more sophisticated methods to gain access to personal data and financial resources. Under the guise of legitimate communications from trusted sources, cybercriminals aim to trick targets into revealing sensitive information so that they can ultimately gain unauthorized access to data or funds. Helping to protect your practice and your clients requires thoughtful action, vigilance, awareness, and attention to detail. Knowledge is the key to prevention.
A growing but avoidable threat
The threat of cybercrime has become increasingly difficult to ignore in today’s interconnected world. Complaints to the FBI’s Internet Crime Complaint Center (IC3) nearly doubled between January 2019 and December 2023. Perhaps even more concerning, the reported financial losses from suspected scams more than tripled over this five-year period, from USD 3.5 billion to USD 12.5 billion. And because many scams go unreported, experts believe these numbers are likely underestimated.
Complaints and losses reported to the FBI’s Internet Crime Complaint Center (IC3) over the last five years
Over the years 2019 to 2023, the IC3 received a total of 3.79 million complaints reporting losses of USD 37.4 billion.
| Year  | Total Complaints | Total Losses     |
|-------|------------------|------------------|
| 2019  | 467,361          | USD 3.5 billion  |
| 2020  | 791,790          | USD 4.2 billion  |
| 2021  | 847,376          | USD 6.9 billion  |
| 2022  | 800,944          | USD 10.3 billion |
| 2023  | 880,418          | USD 12.5 billion |
| Total | 3.79 million     | USD 37.4 billion |
Source: Federal Bureau of Investigation, Internet Crime Report, 2023
As appropriate, complaints are reviewed by IC3 analysts, who apply a crime type and adjust the total loss. Crime types and losses can be variable and can evolve based upon investigative or analytical proceedings. Complainant/Entity is identified as the individual filing a complaint. Some complainants may have filed more than once, creating a possible duplicate complaint. Complaint counts represent the number of individual complaints received from each state and do not represent the number of individuals filing a complaint.
Common types of cybersecurity threats
Cybercriminals use open-source resources—such as social media and professional networking platforms—to conduct reconnaissance, gather information to appear more realistic, and perpetrate scams via a range of communicative technologies, including:
- Phishing—Cybercriminals craft emails to trick individuals into clicking malicious links, opening dangerous attachments, and submitting sensitive data into fraudulent online forms. Be especially wary if the email is unsolicited or uses urgent or threatening language to get you to act quickly. Always verify any unsolicited email through another means of communication such as a phone call or alternate email address.
- Smishing—Smishing is like phishing but uses SMS text messages. The texts often appear to come from someone you know and will usually urge you to perform an action. As with email phishing, never reply directly to a suspicious text message.
- Vishing—Cybercriminals can trick users using fraudulent phone numbers, voice-altering software, and social engineering. Seemingly innocent questions can provide useful information and are often a stepping stone to more severe attacks. Always verify the caller’s identity prior to disclosing any information.
- Quishing—Cybercriminals can use Quick Response (QR) codes to direct unsuspecting targets to nefarious third-party websites that harvest credentials and download malware. Cybercriminals take advantage of QR codes’ opacity: it’s impossible to tell whether a QR code is trustworthy by looking at it or hovering over it with your cursor. QR codes should be treated like links; avoid scanning them when unverified or unsolicited.
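The red flags above—urgent language and links whose true destination doesn’t match the claimed sender—can be checked mechanically. As a minimal illustrative sketch (the phrase list, function name, and simplified email model are assumptions for this example, not a production filter):

```python
import re

# Hypothetical list of urgency phrases commonly seen in phishing lures.
URGENCY_PHRASES = [
    "act now", "immediately", "account suspended",
    "verify your account", "urgent", "final notice",
]

def phishing_red_flags(sender_domain, body, links):
    """Return red flags found in a simplified email model.

    sender_domain: the domain the email claims to come from.
    body: the message text.
    links: URLs embedded in the message.
    """
    flags = []
    lowered = body.lower()
    if any(phrase in lowered for phrase in URGENCY_PHRASES):
        flags.append("urgent or threatening language")
    for url in links:
        # Extract the host portion and compare it with the claimed sender.
        host = re.sub(r"^https?://", "", url).split("/")[0]
        if not host.endswith(sender_domain):
            flags.append("link points to unrelated domain: " + host)
    return flags
```

A message that both pressures the reader and links to a lookalike domain would trip both checks, e.g. `phishing_red_flags("bank.com", "URGENT: verify your account", ["http://bank-secure.example.net/login"])` returns two flags. No heuristic replaces out-of-band verification; this only automates the first glance.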
Generative AI creates new risks
Generative AI is a technology that can create new content and ideas—including conversations, stories, images, computer code, audio, and videos. The increasing popularity of generative AI presents significant challenges, such as privacy risk, disinformation, and malicious use by cybercriminals to create convincing emails, social media posts, and video content.
Cybercriminals can use AI to help them emulate trusted employees or family members—making phishing emails, texts, and social media posts even more challenging to detect. As the quality of AI-generated content rapidly improves, so does the risk of falling victim to AI deepfakes. A deepfake is a highly realistic video, photo, or audio recording created or manipulated using AI. The underlying technology can replace faces, manipulate facial expressions, and synthesize faces and speech.
Real-world implications
Global authorities are increasingly concerned about the sophistication of deepfake technology and the escalation of criminal applications. In February 2024, a finance employee in Hong Kong received an email, allegedly from the organization’s CFO, with instructions to execute a wire transfer. Because the message called for a secret transfer, the employee initially thought the email was a phishing attempt. However, after apparently verifying the request on a group video call with the CFO and others, the employee ultimately made 15 transfers to five different bank accounts totaling HKD 200 million (USD 25.6 million). The cybercriminals had used deepfake technology and publicly available video and audio to impersonate their targets and simulate the video call.
Tips to identify and avoid deepfake scams
Since identifying fake images, voices, and videos becomes increasingly difficult as AI and deepfake technology evolve, always err on the side of caution—particularly when the requested action is unusual or bypasses normal business processes. Here are some things to watch for to help identify manipulated videos:
- Unnatural eye movement: Do the eyes follow the person they’re talking to? Are they blinking as expected?
- Lip synching: Are the lip movements natural? Do they match the words being spoken?
- Awkward facial expressions or positioning: Is the expected emotion exhibited? Do the facial features align correctly on the face?
- Inconsistent body positioning or movement: Do the head and body align? Are movements jerky or disjointed?
- Unusual coloring or shadows: Do background colors and skin tones look natural? Are shadows in the correct location for the light source? Is the light source consistent throughout the video?
Your clients may be targeted
Unfortunately, adults over 60 are the most common targets of cyberattacks, likely due to the size of their assets. Common scams can include:
- Social Security Administration imposter scams, involving false claims that Social Security numbers have been compromised and accounts will be seized if individuals fail to act immediately.
- Tech support scams, where cybercriminals pose as computer technicians and request remote access or money transfers to “fix” a virus, malware, or hacking attempt on a target’s computer.
- Grandparent voice scams, in which scammers use AI voice-cloning technology to impersonate family (usually a grandchild) or friends in distress. They may claim to be injured or imprisoned and request urgent money transfers.
Knowledge is the key to protecting your practice
While cybercrimes have grown in number and sophistication, the red flags and similarities across scams have largely remained the same. Cybercriminals usually try to create a sense of urgency through fear or the promise of something that sounds too good to be true and often pressure their targets into acting fast. Protecting yourself and your clients requires vigilance, awareness, and attention to detail. Take your time, ask questions, and trust that if something seems off, it probably is.
202410-3917525