Insurance Faces a New Threat That Looks Like a Customer

By PYMNTS.com

The never-ending battle against payments fraud is entering a more complicated phase as artificial intelligence (AI) allows criminals to replicate human activity with unsettling accuracy.

Voices can be synthesized, identities can be fabricated from fragments of real data and automated programs increasingly behave like legitimate customers moving through digital channels.

Those developments are creating new vulnerabilities across financial services and especially insurance payments, according to Kevin Ostrander, chief revenue officer at digital insurance platform One Inc.

Insurance payment systems face particular exposure because they manage both premium payments and claims disbursements. Fraud attempts may involve automated scripts testing card numbers, bots probing payment systems or criminals attempting to access policyholder accounts.
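Card testing of that kind tends to leave a recognizable footprint: a single source submitting many distinct card numbers in a short burst. The Python sketch below shows a minimal rolling-window velocity check of the sort a payments team might run as a first-line signal; the class name, thresholds and the use of an IP address as the source key are illustrative assumptions, not One Inc's actual controls.

```python
import time
from collections import defaultdict, deque

# Illustrative velocity check for card testing: flag a source that tries
# many *distinct* card numbers in a short window. Thresholds are assumptions.
WINDOW_SECONDS = 60
MAX_DISTINCT_CARDS = 5

class VelocityCheck:
    def __init__(self):
        # Maps a source identifier (e.g., an IP address) to its recent
        # (timestamp, card fingerprint) attempts.
        self.attempts = defaultdict(deque)

    def record_attempt(self, source_ip: str, card_fingerprint: str) -> bool:
        """Record a payment attempt; return True if the source looks like card testing."""
        now = time.time()
        window = self.attempts[source_ip]
        window.append((now, card_fingerprint))

        # Drop attempts that have aged out of the rolling window.
        while window and now - window[0][0] > WINDOW_SECONDS:
            window.popleft()

        # Many distinct cards from one source in a minute is a classic
        # card-testing signature, unlike a customer retrying a single card.
        distinct_cards = {card for _, card in window}
        return len(distinct_cards) > MAX_DISTINCT_CARDS

checker = VelocityCheck()
for i in range(8):
    flagged = checker.record_attempt("203.0.113.7", f"card_{i}")
print(flagged)  # True: eight distinct cards within one minute
```

A deque-based window keeps the check cheap enough to run inline on every attempt; in production the counters would live in a shared store rather than process memory.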

“AI is causing a great challenge around traditional identity checks by mimicking human behavior at unprecedented scale,” Ostrander said during a “What’s Next in Payments” interview with PYMNTS. “We’re seeing fraudsters that are using AI to create synthetic identities that pass basic verification processes.”

Synthetic identities often combine stolen financial information with fabricated personal details. When those identities are convincing enough to pass traditional verification checks, fraudsters can open accounts, access sensitive information or initiate payments that appear legitimate.
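To see why format-level verification fails here, consider this toy Python example: each stolen element is individually valid, so a field-by-field check approves a combination that never belonged to one real person. All values and function names are fabricated for illustration.

```python
import re

def luhn_valid(card_number: str) -> bool:
    """Standard Luhn checksum for card numbers (format validity only)."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def ssn_format_valid(ssn: str) -> bool:
    """Format check only; says nothing about who the SSN belongs to."""
    return re.fullmatch(r"\d{3}-\d{2}-\d{4}", ssn) is not None

synthetic = {
    "card_number": "4539578763621486",  # stolen but Luhn-valid (test number)
    "ssn": "123-45-6789",               # stolen but well-formed
    "name": "Jordan Quill",             # fabricated
}

# Each element passes in isolation, so the synthetic identity clears the check.
print(luhn_valid(synthetic["card_number"]) and ssn_format_valid(synthetic["ssn"]))  # True
```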

“There’s a hyper focus in the industry right now on measuring and detecting anomalies in behavior and data patterns that really ensure even the most sophisticated synthetic identities are flagged and checked before any basic verification process takes place,” Ostrander told PYMNTS.
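One common way to operationalize that kind of anomaly detection is to score new activity against a user's own historical baseline. The Python sketch below uses a simple z-score test on payment amounts; the threshold and the choice of signal are assumptions for illustration, and production systems typically combine many such features.

```python
import statistics

def is_anomalous(history: list[float], new_value: float, z_threshold: float = 3.0) -> bool:
    """Flag new_value if it sits more than z_threshold standard deviations
    from the mean of the user's historical values (e.g., payment amounts)."""
    if len(history) < 2:
        return False  # Not enough baseline data to judge.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold

# Example: a policyholder who usually pays ~$120 premiums suddenly
# initiates a $5,000 disbursement request.
print(is_anomalous([118.0, 122.5, 119.0, 121.0], 5000.0))  # True
```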

Fraud That Looks Like Normal Behavior

While fabricated faces and cloned voices draw attention, Ostrander said the most troubling development is the rise of automated behavior designed to imitate legitimate customers.

“Fake faces and fake voices are alarming, but the fake normal behavior is the most concerning and rapidly growing threat,” he said.

AI systems can now reproduce browsing habits, transaction patterns and conversational responses that resemble ordinary customer activity. Because those interactions so closely mirror authentic behavior, the telltale signals of fraud are far less obvious.

To detect those activities, payment providers increasingly analyze how users interact with digital interfaces.

Spotting bots often requires monitoring factors such as transaction speed, mouse movements, navigation patterns and device characteristics. Combined with systems that detect automation or device spoofing, those signals can help distinguish a legitimate policyholder from an automated program attempting to impersonate one.
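Those signals are often combined into a single risk score that decides whether to allow a session, challenge it or block it. The Python sketch below is a deliberately simplified weighted-rules version; the field names, weights and cutoff are assumptions for illustration, not any vendor's real model.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    seconds_to_complete: float   # Humans rarely finish a payment form in seconds.
    mouse_path_points: int       # Bots often jump between fields with no cursor path.
    pages_visited: int           # Scripted sessions tend to hit only the payment endpoint.
    headless_browser: bool       # Result of device/automation fingerprinting.

def bot_risk_score(s: SessionSignals) -> float:
    """Return a 0..1 risk score; higher means more bot-like. Weights are illustrative."""
    score = 0.0
    if s.seconds_to_complete < 3.0:
        score += 0.35  # Superhuman form-completion speed.
    if s.mouse_path_points < 10:
        score += 0.25  # Little or no natural cursor movement.
    if s.pages_visited <= 1:
        score += 0.15  # No normal browsing before paying.
    if s.headless_browser:
        score += 0.25  # Automation framework detected.
    return score

session = SessionSignals(seconds_to_complete=1.2, mouse_path_points=0,
                         pages_visited=1, headless_browser=True)
print(bot_risk_score(session) > 0.5)  # True: flag for step-up verification
```

A score above the cutoff would typically trigger step-up verification rather than an outright block, so a legitimate policyholder on an unusual device is inconvenienced rather than locked out.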
