Written by: Wolf & Company IT Group
In a time when artificial intelligence (AI) is becoming mainstream, it is critical that we pause to evaluate the potential dangers it may introduce. This is especially true for organizations that are responsible for protecting confidential personal data.
It is not unusual for a customer or patient to contact their financial institution or healthcare provider by phone to request service or ask a question. Traditionally, the phone-based authentication process has been relatively straightforward: the institution requires the caller to provide a combination of basic (public) and sensitive (non-public) information. For example, a customer may be asked to provide their date of birth, address, Social Security number, and sometimes an account number or patient ID.
Because of successful social engineering attacks over the years, many institutions now require additional pieces of information and have implemented technology controls. Although cybercriminals have found workarounds in the past, AI is making their work simpler and more efficient. As a result, it has become far more difficult for institutions to protect the private information they’ve been entrusted with.
Let’s look at the phone and voice controls that many institutions currently use, and how adversarial artificial intelligence affects them.
Security Question Circumvention
Circumventing security questions is getting easier by the day. Security questions are intended to validate a consumer’s identity by asking questions derived from public records (out-of-wallet) or a self-selected question about “something they know” (cognitive). The theory is that only the legitimate user knows the answers. That is no longer the case. AI scraping tools can now locate, mine, compile, and infer personal information from a variety of sources, including websites, social media platforms, public records, the dark web, and other data repositories. Armed with this information, cybercriminals can answer almost any question.
Multi-Factor Authentication (MFA) Circumvention
MFA combines two or more authentication factors: something you know, something you have, and something you are. Adversarial AI can potentially exploit weaknesses in each of these factors.
AI can be used to automate password-guessing attacks, such as brute-force and dictionary attacks. AI algorithms can also analyze patterns, common phrases, and personal information to guess passwords more effectively.
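To make this concrete, here is a minimal Python sketch of how a targeted wordlist can be seeded with scraped personal details. The profile fields, values, and mutation rules are illustrative assumptions, not a real attack tool.

```python
# Minimal sketch: seeding a targeted password wordlist from scraped
# personal details. All profile values and mutation rules are illustrative.
from itertools import product

def candidate_passwords(details: dict) -> set:
    """Combine personal tokens with common mutations attackers try first."""
    tokens = [v for v in details.values() if v]
    suffixes = ["", "1", "123", "!", "2024"]          # common appendages
    candidates = set()
    for token, suffix in product(tokens, suffixes):
        candidates.add(token + suffix)                # e.g., "fido123"
        candidates.add(token.capitalize() + suffix)   # e.g., "Fido!"
    return candidates

# Details an AI scraper might compile from social media and public records
profile = {"pet": "fido", "team": "redsox", "birth_year": "1984"}
print(sorted(candidate_passwords(profile))[:10])
```

Even this toy generator shows why answers drawn from a person’s public footprint make weak passwords.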
AI can potentially intercept or mimic the communication channels used for verification, such as SMS or voice calls. AI-powered attacks can also intercept one-time passwords (OTPs) or generate convincing phishing and smishing messages that trick real customers into providing their verification codes.
Voice Authentication Circumvention
Voice authentication, also known as voice biometrics, is a popular technology that identifies users based on their unique vocal characteristics. It analyzes voice features such as pitch, pronunciation, cadence, and accent to verify an individual’s identity. Voice authentication offers numerous benefits over traditional authentication methods, including convenience: users do not need to remember complex passwords or carry physical tokens. They can simply speak a passphrase or answer a voice prompt to authenticate.
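At its core, a voice biometric system reduces a recording to numeric features and compares them against a sample captured at enrollment. The following is a minimal sketch of that idea using the open-source librosa library; the file names, feature choice, and similarity threshold are illustrative assumptions, and production systems rely on far richer speaker models.

```python
# Minimal sketch of voice biometrics: reduce a recording to a numeric
# "voiceprint" and compare it to an enrolled sample. Requires librosa
# (pip install librosa); the .wav file names are placeholders.
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """Summarize a recording as the mean of its MFCC spectral features."""
    audio, rate = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=rate, n_mfcc=20)
    return mfcc.mean(axis=1)  # one vector per speaker sample

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprints (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = voiceprint("enrollment.wav")  # captured at account setup
caller = voiceprint("caller.wav")        # captured during the call
if similarity(enrolled, caller) > 0.9:   # threshold is illustrative
    print("Voice match - proceed with authentication")
else:
    print("No match - fall back to additional verification")
```

A fixed decision boundary like the one above is exactly what a high-quality synthetic voice can slip past, which is the weakness described next.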
Unfortunately, adversarial AI voice cloning and deepfake technology can be used to mimic these vocal attributes. AI algorithms can analyze a sample of a person’s voice and create a synthetic voice that closely resembles the original speaker, including accents and speech patterns. This voice cloning can potentially deceive voice authentication systems.
As AI evolves, vendors are concurrently integrating countermeasures to combat voice authentication circumvention, including continuous authentication and behavioral analysis.
- Continuous Authentication: Using AI for real-time voice analysis can help identify anomalies during the authentication process. Continuous monitoring allows systems to adapt and react promptly to prevent unauthorized access.
- Behavioral Analysis: Incorporating behavioral biometrics, such as voice modulation patterns, pauses, or speech habits, can add an extra layer of security. Analyzing these unique speech patterns helps distinguish genuine voices from synthesized ones, as the sketch after this list illustrates.
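As a rough illustration of behavioral analysis, the sketch below compares simple speech-habit metrics from a live call against an enrolled profile and flags large deviations. The metrics, profile values, and threshold are all illustrative assumptions rather than a vendor implementation.

```python
# Minimal sketch of behavioral analysis: compare speech-habit metrics
# from a live call against an enrolled profile and flag anomalies.
# All features, profile values, and thresholds are illustrative.
import numpy as np

# Enrolled profile: (mean, std) of each metric across past genuine calls
ENROLLED = {
    "words_per_minute": (145.0, 12.0),
    "avg_pause_seconds": (0.45, 0.10),
    "pitch_variation_hz": (28.0, 6.0),
}

def anomaly_score(live_metrics: dict) -> float:
    """Average z-score of the live call against the enrolled profile."""
    scores = []
    for name, value in live_metrics.items():
        mean, std = ENROLLED[name]
        scores.append(abs(value - mean) / std)
    return float(np.mean(scores))

# Metrics measured continuously during the call (values illustrative);
# an unnaturally flat pitch can be one tell of a synthesized voice
live = {"words_per_minute": 180.0, "avg_pause_seconds": 0.15,
        "pitch_variation_hz": 9.0}

if anomaly_score(live) > 2.0:  # threshold is an assumption
    print("Behavioral anomaly detected - step up authentication")
else:
    print("Behavior consistent with enrolled profile")
```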
Engage Your Workforce
It is important to remember that the most successful technique cybercriminals use to circumvent technology and security controls today is human manipulation, generally known as social engineering. We must continue to arm our workforce with the knowledge, skills, and best practices needed to protect customer and patient information. Social engineering simulations paired with whole-organization training are a cost-effective way to enhance organizational security.
Social engineering simulations that mimic the latest tactics and techniques cybercriminals are using can identify internal control weaknesses: you test your controls and educate your user community at the same time. Instructor-led, interactive training that incorporates the results of those simulations and focuses on the latest adversary trends and tactics can have a significant impact.
With the advancement of adversarial AI, it is crucial to address these circumvention challenges. As we race for a fix, we must remind ourselves that there is no silver bullet. Instead, we must continue to invest in and strengthen our environments through awareness and a layered security approach.