By Fintech Global
Click here to read the entire article.
Fingerprint, a leader in device intelligence for fraud prevention, has announced the addition of AI-powered recommendations to its Suspect Score solution, marking a significant step forward in adaptive fraud detection.
Static scoring models have long struggled to keep pace with increasingly dynamic, traffic-specific fraud patterns. Fraud teams frequently lack the time and resources needed to continuously analyze signal interactions and manually tune model weights to suit their unique operational needs. Fingerprint’s latest enhancement directly addresses this gap, enabling fraud teams to eliminate manual tuning, preserve valuable time and resources, and deploy fraud detection that adapts to evolving threats.
Fingerprint provides device intelligence solutions designed to help organizations identify and prevent fraud. Its platform is built around Smart Signals — actionable, real-time device intelligence insights — that deliver powerful fraud indicators to enterprise fraud and security teams. The company’s Suspect Score solution sits at the center of this offering, giving customers a consolidated fraud risk signal drawn from a broad range of device and behavioral data.
The enhanced Suspect Score introduces a production-ready machine learning (ML) system that customers can train on their own labeled fraud data. Enterprise teams can upload this data through the Fingerprint dashboard, enabling the system to intelligently analyze it alongside Smart Signals to generate optimized signal weights tailored to their specific fraud patterns. The updated solution also adjusts signal weights based on patterns observed in a customer’s fraud data to reduce false positives while maintaining accuracy. Before any changes are applied, customers receive a full preview of all recommendations, allowing them to review and approve updates with a single click — preserving complete visibility and control over their scoring configuration.
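Fingerprint has not published the internals of this system, but the general shape of the idea — fitting per-signal weights to a customer's labeled outcomes, then scoring new visits with those weights — can be sketched in a few lines. The signal names, the logistic-regression choice, and every number below are illustrative assumptions, not the company's actual model.

```python
# Minimal sketch of a weighted "suspect score" whose per-signal weights
# are fit from a customer's labeled fraud data. Signal names, the
# logistic-regression approach, and all numbers are illustrative
# assumptions, not Fingerprint's actual implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

SIGNALS = ["vpn_detected", "bot_detected", "browser_tampered", "ip_blocklisted"]

# Hypothetical labeled history: one row of binary signals per visit,
# label 1 = confirmed fraud, 0 = legitimate.
X = np.array([
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
])
y = np.array([0, 1, 0, 1, 1, 0])

# "Training" step: learn how much each signal should count, given this
# customer's own fraud history.
model = LogisticRegression().fit(X, y)

def suspect_score(signals: list[int]) -> float:
    """Return a 0-1 fraud-risk score for one visit's signal vector."""
    return float(model.predict_proba(np.array([signals]))[0, 1])

# The recommended weights can be printed for review before being
# applied, mirroring the preview-and-approve step described above.
for name, weight in zip(SIGNALS, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
print("VPN-only visit:", round(suspect_score([1, 0, 0, 0]), 2))
```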
As threats continue to evolve, organizations can retrain their scoring models with up-to-date data, ensuring detection remains aligned with real-world fraud behavior. Sophisticated AI agents and bots are increasingly capable of bypassing static detection models, and the growing adoption of privacy tools such as VPNs among legitimate users has further complicated traditional signal weighting. Fingerprint’s AI-powered approach is designed to meet these challenges head-on, shifting fraud detection from a static model to a continuously adaptive one.
Click here to continue reading.
By Michelle Faverio and Emma Kikuchi; Pew Research Center
Click here to read the entire article.
Artificial intelligence (AI) has become part of everyday life for many Americans – at work, at school, in health care and beyond. As AI spreads, the public remains cautious, but somewhat open to its potential benefits.
Drawing on five years of Pew Research Center surveys, here are 13 findings about how Americans use and view AI, and where they see promise and risk.
1. Americans continue to be wary of AI’s impact on daily life.
Half of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited, according to a June 2025 survey. Just 10% say they are more excited than concerned. Another 38% say they are equally concerned and excited.
More Americans are concerned today than they were when we first asked this question in 2021. Back then, 37% said they were more concerned than excited.
In contrast, concern is lower in many of the 24 other countries we’ve polled about AI.
2. U.S. adults are generally concerned about AI’s effect on creativity and relationships but are more open to using it for data analysis.
About half of Americans said in the June survey that AI will worsen people’s ability to think creatively and form meaningful relationships with others. Far fewer said AI will make these things better.
However, Americans tend to be more open to AI playing a role in data analysis tasks such as forecasting the weather.
Click here to continue reading.
By Jennie Boden and Christopher J. Jones; CreditUnions.com
Click here to read the entire article.
Credit unions face a new regulatory obligation in 2026 — one that formalizes succession planning as a baseline expectation, not a best practice.
The National Credit Union Administration’s final succession planning rule (12 CFR Parts 701 and 741, RIN 3133-AF42) went into effect on Jan. 1, 2026. The rule requires both federal credit unions and federally insured, state-chartered credit unions to establish written succession plans.
This article describes the key things credit union leaders need to know to comply with the letter of the new rule. For our thoughts about the opportunity available to credit unions that choose to be more strategic about their compliance efforts, read, “The Opportunity For Credit Unions In NCUA’s New Succession Planning Rule.”
What The New Succession Planning Rule Says
NCUA’s newly effective succession planning rule requires federal and federally insured, state-chartered credit unions to establish a board-approved, written succession plan consistent with their size, complexity, and risk of operations. Credit unions can leverage this NCUA video series for further clarification on what is required.
The agency has also provided a succession planning template for smaller credit unions that we find too limited to be of much strategic value. We offer suggestions in the next section for how to deliver a right-sized plan that stays strategic.
Credit unions with less than $100 million in assets and minority depository institutions of all sizes may also be eligible for assistance in a variety of areas, including succession planning, through NCUA’s Small Credit Union and Minority Depository Institution Support Program.
At a minimum, the rule requires that the following credit union positions, or their equivalents, be included in the written succession plan:
- Members of the board of directors.
- “Management officials” and “assistant management officials,” as those terms are defined in Appendix A of the rule, if provided for in the federal credit union’s bylaws, and, to the extent not already covered, the senior executive officers identified in § 701.14(b)(2).
- Any other personnel the board of directors deems critical given the federal credit union’s size, complexity, or risk of operations. This includes new positions that may be required due to planned changes in operations, supervisory landscape, or corporate structure.
Click here to continue reading.
By John Bruggeman, CSO Magazine
Click here to read the entire article.
Your security is only as strong as your sketchiest vendor; since 35% of breaches start with partners, it’s time to worry about their firewalls, not just yours.
Over the last four years, I’ve watched organizations get blindsided by threats that originated in a third-party network. More than 35% of data breaches are caused by a compromised vendor or partner, not by any failure in the organization’s own controls. Many organizations already know that the biggest threats to their security come from forces entirely outside their control, and that risk is accelerating this year.
Some of those forces come from beyond their network and even far beyond their region. International conflict is influencing attacker behavior in ways that are showing up far from conflict zones. AI-driven automation is reducing the effort required to exploit systems and people. Third-party risk continues to be the most common reason well-defended organizations still suffer serious incidents.
These three factors are creating an environment that is heightening cybersecurity risk. I work with organizations that invest in security, quantify risk and take resilience seriously. Yet when something truly disruptive happens, it is rarely because a basic control was missing. Security is only as strong as the weakest link in a chain that extends far beyond an organization’s firewall — and those weak links are multiplying.
Geopolitics amplify cyber risk, particularly for OT networks
For a long time, geopolitical conflicts felt like a separate category of risk. If you did not operate in or near a conflict zone, it was easy to assume it posed little risk to your organization or your security posture. In my experience, that assumption no longer holds.
In my previous position, we had an office in Israel, so I stayed alert to tensions and conflicts in that region. What I see consistently is that techniques used in active geopolitical conflicts do not stay contained to their original geographic area or digital environment. The techniques and tactics are tested, refined and then used by criminal groups and other threat actors. Eventually, they surface in environments that have nothing to do with the original conflict.
Click here to continue reading.
By Sergiu Gatlan, Bleeping Computer
Click here to read the entire article.
Google announced that the AI-powered Google Drive ransomware detection feature has reached general availability and is now enabled by default for all paying users.
Announced in September 2025, a beta version of this feature began rolling out to Google Workspace customers worldwide in early October.
Google Drive will immediately pause file syncing when it detects a ransomware attack, notifying users and IT admins and drastically reducing the impact of such incidents.
While this will not prevent the files on the compromised computer from being encrypted, documents stored in Google Drive will be protected and can be quickly restored once the malware infection is resolved.
After an attack is blocked, users are also provided with detailed instructions for restoring corrupted files using the Drive restoration tool to undo ransomware changes.
“When ransomware detection is on, files are scanned for ransomware when they are synced from a desktop computer to Drive,” Google explains. “If ransomware-encrypted files are found, desktop sync is paused. The affected user gets an email alert and is notified in Drive, and an alert is created in the Google Admin console.”
“Compared to when the feature was in beta, we are now able to detect even more types of ransomware encryption and are able to do it faster. Our latest AI model is detecting 14x more infections, leading to even more comprehensive protection,” it added.
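Google has not disclosed how its AI model recognizes ransomware encryption, but the sync-time workflow it describes can be sketched with a much cruder stand-in: a byte-entropy check, since encrypted data is close to statistically random. The heuristic, thresholds, and function names below are illustrative assumptions, not Google’s implementation.

```python
# Simplified illustration of the sync-time flow described above: scan
# files as they sync and pause syncing if they look ransomware-encrypted.
# The byte-entropy heuristic is a crude stand-in for Google's AI model.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits per byte (0-8); encrypted data sits near the 8.0 maximum."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def looks_encrypted(path: Path, threshold: float = 7.9) -> bool:
    # Sample the first 64 KB; enough to estimate randomness cheaply.
    return shannon_entropy(path.read_bytes()[:65536]) > threshold

def sync_folder(folder: Path) -> None:
    for path in folder.glob("**/*"):
        if path.is_file() and looks_encrypted(path):
            # Mirrors the described behavior: stop syncing and alert, so
            # the clean copies already in Drive stay restorable.
            print(f"Possible ransomware encryption in {path}; sync paused, user and admin alerted.")
            return
    print("No suspicious files; sync continues.")
```

A real detector must do far better than this, since compressed formats such as ZIP and JPEG also score near-maximal entropy; that is precisely the kind of discrimination Google attributes to its AI model.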
Google says the feature is now on by default for all users in organizations with business, enterprise, education, and frontline licenses, while the file restoration feature is available to all Google Workspace customers, Workspace individual subscribers, and users with personal Google accounts.
Click here to continue reading.
By Ryan Ermey, CNBC
Click here to read the entire article.
For most of my adult life, I’ve enjoyed a relatively straightforward tax situation. In most years, I merely made sure the income from my W-2 was correct and clicked through my preferred tax software’s questions to the end. No dependents, no side hustle income, no property in my name.
This past year was a little different. After years of buying stock through my company’s employee stock purchase plan, I sold the majority of my shares to begin raising funds for my upcoming wedding.
There are some relatively tricky rules around selling these shares, but the gist is that these plans allow employees to buy stock at a discount to the actual share price. So determining how much money you made (in which case you owe capital gains tax) or lost on the sale of your shares requires some calculations.
So I did what about 1 in 5 taxpayers are doing these days, per a recent survey from IPX 1031: I asked AI for help.
I did so skeptically. I’d seen enough stories about AI “hallucinations” — the industry term for when chatbots get things wrong — that I was half-expecting ChatGPT to make a mess of my taxes. Plus, it had only been three years since I’d put AI to the test on tax strategies and watched it flounder. It’s also worth noting that OpenAI’s usage policies caution against using its product to automate “high-stakes decisions in sensitive areas without human review.”
And yet, when I started chatting with the latest version of OpenAI’s large language model, I could feel my hesitation melting away. It not only answered my first question about how ESPP sales are taxed, but also broke things down into digestible bullet points and asked me if I was comfortable sharing more information.
Since I was using a corporate version of the software that does not use data to train OpenAI’s models, I uploaded the consolidated 1099 form from my brokerage firm.
“This is great — [your brokerage] actually gave us everything we need,” the bot told me. “Here’s what’s going on.”
What ChatGPT told me essentially boils down to: Your brokerage is using one number, which is being uploaded into your tax software. But you actually have to use a different number. I just had to check my last few W-2s to see that they included a certain line item.
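To make that mismatch concrete, here is a hypothetical worked example of the kind of adjustment involved. The numbers are invented, not the author’s, and the exact portion of the discount treated as ordinary income depends on the plan and the holding period.

```python
# Hypothetical ESPP numbers, not the author's. In a typical plan the
# discount you received is reported as ordinary income on your W-2,
# but the broker's 1099-B often lists only what you paid as the cost
# basis, so tax software computes too large a capital gain.
market_price = 100.00                            # share value on purchase date
purchase_price = market_price * (1 - 0.15)       # 85.00 paid with a 15% discount
sale_price = 120.00

broker_basis = purchase_price                    # the number on the 1099-B
w2_income = market_price - purchase_price        # 15.00 already taxed as wages
adjusted_basis = broker_basis + w2_income        # 100.00, the number to actually use

print(f"gain using broker basis:   ${sale_price - broker_basis:.2f}")    # $35.00, overstated
print(f"gain using adjusted basis: ${sale_price - adjusted_basis:.2f}")  # $20.00
```

Skipping the adjustment effectively taxes the discount twice: once as W-2 wages and again as capital gain.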
Click here to continue reading.
By Mary K. Pratt, CSO Magazine
Click here to read the entire article.
Insiders have always posed a risk, but modern technologies, tactics, and motivations have increased the threat, likelihood, and consequences of insider-related incidents.
Insider threats are coming back in a consequential way.
According to the State of Human Risk Report from Mimecast, 42% of organizations have experienced an increase in malicious insider incidents over the past year, with 42% also reporting a rise in negligent incidents for the first time.
The report further found that organizations experienced an average of six insider-driven incidents per month at an estimated cost of $13.1 million per incident. Additionally, 66% of the 2,500 surveyed IT security and IT decision-makers expect insider-related data loss to increase over the next 12 months.
“Insider risk has become one of the most consequential and underestimated threats facing organizations today, not just because of the data loss it causes, but because attackers are increasingly exploiting insiders as a deliberate entry point to bypass perimeter defenses entirely,” Mimecast CISO Leslie Nielsen said in announcing his company’s research results.
“The data shows both careless mistakes and deliberate actions driving incidents in equal measure,” he added. “Rather than trying to manage human behavior, organizations need adaptive controls that identify high-risk actions and adjust protections in real-time, creating friction when someone accesses data they shouldn’t, regardless of whether they have valid credentials. As AI makes it easier for insiders to exfiltrate data at scale, security must meet users at the point of risk.”
The state of insider threats today as technologies, tactics, and motivations evolve
Insider threats continue to fall into two broad camps. On one side is the malicious insider who knowingly acts with the intent to harm. On the other side is a member of the organization whose damaging actions are accidental or negligent, or who in some cases is manipulated by a malicious outsider.
Click here to continue reading.
By Orrick, Herrington & Sutcliffe LLP/JD Supra
On March 13, the U.S. Court of Appeals for the 10th Circuit denied a petition for rehearing en banc in a case challenging the court’s earlier decision upholding the Fed’s authority to deny master account applications to eligible state-chartered banks (previously covered by InfoBytes here). Three judges voted to grant rehearing, while the remaining five non-recused judges on the court voted to deny.
A dissent joined by one other judge argued that the DIDMCA requires the Fed to provide access to services to all eligible nonmember depository institutions, and, because access to services requires a master account, every nonmember is eligible for a master account. The dissent noted that the panel’s decision endorsing “unreviewable discretion” effectively hands the Fed a veto over states’ chartering power.
The dissent further warned that the majority’s decision does not align with the notion that, where the statute is ambiguous, it must be interpreted to avoid creating a constitutional problem. By allowing unappointed regional Fed officials with “unreviewable discretion” to deny master accounts, the dissent contended such officials wield significant authority pursuant to the laws of the United States, making them “officers” in the constitutional sense, raising serious constitutional concerns because the appointments process for such officials does not comport with procedures under the Appointments Clause.
The dissent also highlighted the case’s importance, citing its consequences for the financial services industry.
By Muhammad Zulhusni, AI News
Click here to read the entire article.
AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions.
Bank of America is now deploying an internal AI-powered advisory platform, rolled out to around 1,000 financial advisers, according to Banking Dive. The move is one of the clearer early examples of how AI is being used in core banking roles, where systems support decision-making in real time.
The platform is based on Salesforce’s Agentforce, which enables the creation of AI agents to handle tasks. It is designed to help advisers handle client queries and prepare recommendations. It can also help manage daily workflows. According to Banking Dive, the system is part of a wider push among major banks to test how AI agents can work alongside human staff.
Bank of America has been expanding its use of AI across its business. The bank says its virtual assistant Erica handles work equivalent to that of about 11,000 employees, while 18,000 software developers use AI coding tools that have improved productivity by around 20%.
AI agents move to financial decision-making
The approach differs from earlier deployments of AI in banking, which focused mainly on chatbots or internal productivity tools. In those cases, AI was used to answer simple questions or automate routine tasks. The newer systems are built to handle more complex work, including analysing client data.
Firms like JPMorgan, Wells Fargo, and Goldman Sachs are also testing AI tools aimed at improving productivity and helping staff in client-facing roles, though these efforts vary and are not always focused on adviser-specific AI agent systems. While each bank is taking a different approach, the common goal is to increase output without expanding headcount.
Banks report gains in how quickly advisers can access information or prepare for meetings, based on industry reporting and early deployment feedback. Yet there are ongoing concerns about accuracy and oversight, especially when AI systems are used to suggest financial decisions.
Click here to continue reading.
By PYMNTS.com
Click here to read the entire article.
The never-ending battle against payments fraud is entering a more complicated phase as artificial intelligence (AI) allows criminals to replicate human activity with unsettling accuracy.
Voices can be synthesized, identities can be fabricated from fragments of real data and automated programs increasingly behave like legitimate customers moving through digital channels.
Those developments are creating new vulnerabilities across financial services and especially insurance payments, according to Kevin Ostrander, chief revenue officer at digital insurance platform One Inc.
Insurance payment systems face particular exposure because they manage both premium payments and claims disbursements. Fraud attempts may involve automated scripts testing card numbers, bots probing payment systems or criminals attempting to access policyholder accounts.
“AI is causing a great challenge around traditional identity checks by mimicking human behavior at unprecedented scale,” Ostrander said during a “What’s Next in Payments” interview with PYMNTS. “We’re seeing fraudsters that are using AI to create synthetic identities that pass basic verification processes.”
Synthetic identities often combine stolen financial information with fabricated personal details. When those identities are convincing enough to pass traditional verification checks, fraudsters can open accounts, access sensitive information or initiate payments that appear legitimate.
“There’s a hyper focus in the industry right now on measuring and detecting anomalies in behavior and data patterns that really ensure even the most sophisticated synthetic identities are flagged and checked before any basic verification process takes place,” Ostrander told PYMNTS.
Fraud That Looks Like Normal Behavior
While fabricated faces and cloned voices draw attention, Ostrander said the most troubling development is the rise of automated behavior designed to imitate legitimate customers.
“Fake faces and fake voices are alarming, but the fake normal behavior is the most concerning and rapidly growing threat,” he said.
AI systems can now reproduce browsing habits, transaction patterns and conversational responses that resemble ordinary customer activity. Because those interactions resemble authentic behavior, the signals of fraud are far less obvious.
To detect those activities, payment providers increasingly analyze how users interact with digital interfaces.
Spotting bots often requires monitoring factors such as transaction speed, mouse movements, navigation patterns and device characteristics. Combined with systems that detect automation or device spoofing, those signals can help distinguish a legitimate policyholder from an automated program attempting to impersonate one.
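A toy version of that kind of signal combination might look like the following; the signals, weights, and thresholds are illustrative assumptions rather than any vendor’s production model, which would typically be a trained classifier over far richer telemetry.

```python
# Toy behavioral bot score built from the signals mentioned above.
# Signals, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    seconds_to_submit: float   # time from page load to payment submission
    mouse_path_points: int     # sampled cursor positions before submitting
    pages_visited: int         # navigation depth before reaching checkout
    headless_browser: bool     # automation/device-spoofing fingerprint hit

def bot_score(s: SessionSignals) -> float:
    """Return 0-1; higher means more bot-like."""
    score = 0.0
    if s.seconds_to_submit < 2.0:  # humans rarely finish a form this fast
        score += 0.35
    if s.mouse_path_points < 5:    # little or no cursor movement
        score += 0.25
    if s.pages_visited <= 1:       # bots often jump straight to the target page
        score += 0.15
    if s.headless_browser:         # automation framework detected
        score += 0.25
    return score

print(bot_score(SessionSignals(1.2, 0, 1, True)))      # 1.0: very bot-like
print(bot_score(SessionSignals(45.0, 300, 6, False)))  # 0.0: looks human
```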
Click here to continue reading.
By Emily Claus, CUSO Magazine
Click here to read the entire article.
Odds are, in the last few years you have received a spam text or email supposedly from reputable companies such as Apple and Amazon. These texts or emails were probably claiming you owed them a certain amount or that there was an issue with your account that could all be solved if you clicked on the very suspicious link they included.
These tricks have become fairly well known and easy to spot. The messages often contained misspellings or odd grammar, the URLs and links they provided only vaguely resembled those of the actual company they claimed to be from, and should you have dared to click on the link, the website would most likely have looked a little off. For many, these texts were probably easy to spot as fake, delete, and move on from. No big deal.
Text scams are on the rise
Yet, despite the shortcomings in these earlier iterations of the scam, many still fell for them. I have received more panicked calls than I might like to admit from my parents or grandparents, worried that they somehow owed Apple $537.19, unsure how it could have happened, and asking whether I could please help them look at their Apple transactions. Even my younger, more tech-savvy friends have fallen, or nearly fallen, prey to such scams.
In fact, according to the Federal Trade Commission, from July 2020 to June 2021, reports of scammers impersonating Amazon alone increased more than fivefold, and those scammers managed to steal over $27 million from Americans.
These scams are not limited to Amazon either. Often, these bad actors will pose as the victim’s bank or credit union and lead them to a copy of the institution’s online banking website to trick them into sharing their credentials. While this trick is not exactly “new,” the method and execution behind it have reached a near-perfect level.
Click here to continue reading.
By Anjali Gopinadhan Nair, CSO Magazine
Click here to read the entire article.
Scary news: Hackers aren’t “breaking” your MFA anymore — they’re just riding shotgun during your login to steal the session token right out from under you.
Multi-factor authentication was supposed to be the solution. For years, security teams have told employees that MFA would keep them safe. Password stolen? No problem — attackers still need that second factor.
But adversary-in-the-middle (AiTM) phishing has changed everything. These attacks do not try to steal passwords and MFA codes separately. They capture the entire authentication flow in real time, including the session token that proves a user is logged in. The employee does everything right — checks for HTTPS, verifies the MFA prompt, avoids suspicious attachments — and still gets compromised.
This should concern every security leader. If our training, our MFA and our security awareness programs cannot protect someone who is genuinely trying to be careful, then what exactly are we promising when we tell users MFA will keep them safe?
Why this is not the phishing you trained for
Traditional phishing meant sloppy fake login pages with typos and dodgy URLs. Those pages could not handle MFA because they had no connection to the real authentication service.
Here is what changed, and I wish more security leaders understood this: modern phishing pages are not fake. They are proxies.
Tools like Evilginx sit between the user and the legitimate service — Microsoft, Google, Okta, whatever — and relay everything in real time. The employee types their password. It goes to Microsoft. Microsoft sends the MFA challenge. It flows back through the proxy to the employee’s phone. The employee approves it. The session cookie — that golden token proving authentication — passes right back through the proxy into the attacker’s hands.
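The relayed cookie works because most services treat it as bearer proof: whoever presents it is logged in. The sketch below shows that bearer behavior and one partial mitigation, binding the session to the client context observed at login. All names and fields are illustrative, and the check is deliberately simplified; an AiTM attacker who keeps operating through the same proxy can evade a plain IP comparison, which is why cryptographically device-bound session credentials are the stronger answer.

```python
# Why a relayed session cookie works, and one partial mitigation.
# All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Session:
    bound_ip: str          # client IP observed when the session was minted
    bound_user_agent: str  # client user agent observed at login

sessions: dict[str, Session] = {}

def on_login(token: str, ip: str, user_agent: str) -> None:
    # Record the context in which this session was created.
    sessions[token] = Session(ip, user_agent)

def validate(token: str, ip: str, user_agent: str) -> bool:
    s = sessions.get(token)
    if s is None:
        return False
    # A pure bearer token would return True here with no further checks,
    # which is exactly what an AiTM proxy exploits. Context binding means
    # a cookie replayed from different infrastructure no longer matches.
    return s.bound_ip == ip and s.bound_user_agent == user_agent

on_login("golden-token", ip="203.0.113.7", user_agent="Edge/120")
print(validate("golden-token", "203.0.113.7", "Edge/120"))   # True: original client
print(validate("golden-token", "198.51.100.9", "curl/8.5"))  # False: replayed elsewhere
```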