AI & Digital Assets

July 26, 2024: AI & Digital Assets


What We Don’t Know About AI and What It Means for Policy

Alexander “amac” Macgillivray, Lawfare

AI’s future cost and the trajectory of its development are currently unknown. Good AI policy will take that into account.

AI is a hot topic in policy and regulatory discussions. President Biden rolled out an executive order, the Office of Management and Budget has issued guidance, there are more than 20 U.S. congressional legislative proposals, and even more in the states and internationally. Tech CEOs make broad pronouncements about the future of AI. They call AI more important than electricity or fire and say it will be used to “cure all disease,” “radically improve education,” “help us address climate change,” and “massively increase productivity.”

Lost amidst the hubris is how much is currently unknown about AI. For policymakers, two unknowns are important to keep in mind. First, no one knows how much it will cost to create future AI systems and, as a result, whether there will be only a few of those systems or whether they will be widespread. Second, no one knows the trajectory of AI development and, as a result, when and whether AI will be capable of delivering any of its potential future benefits, such as enabling cheap fusion energy, or potential risks, such as creating novel bioweapons. Unknowns about both cost and trajectory have significant implications for policymaking.

Unknowns About the Future of AI Development

Cost
The models on which generative AI systems are based are built by running algorithms over a large amount of data using huge amounts of computing power. A modern model built from scratch costs upwards of $100 million to create. The pattern of additional orders of magnitude of cost leading to additional capabilities has been robust over many years. Leading companies believe that the pattern will continue into the future of AI development, meaning that newer, more powerful models will cost billions, and soon tens of billions, of dollars to create. This has led to an abundance of fundraising for AI development and a shortage of the best chips used to create the models. If creating a competitive model will cost billions of dollars, then only a small number of entities in a tiny number of countries will be able to afford to do it. Read more 


Fraud Hammers Online Services, Drives AI Ambivalence

Masha Borak, Biometric Update

Fraud rates are spiking just like temperatures in many parts of the world.

Global identity verification companies Sumsub, AuthenticID and Okta have published data for the first half of the year (H1 2024), revealing that consumers are slowly embracing digital wallets and mobile driver's licenses even as they continue to worry about identity theft.

Nigeria, China, Indonesia take lead in global fraud
Nigeria, China and Indonesia are countries with the highest spikes in fraud rates, followed by Turkey and Brazil, according to Sumsub. The UK-based company compared fraud growth rates in H1 2024 to H1 2023, using worldwide client verification data. Nigeria saw a staggering 1,091 percent growth in forced identity verification, a type of fraud in which individuals are manipulated by fraudsters into going through verification. Turkey and Vietnam followed, recording spikes in forced identity verification of over 650 percent and 500 percent respectively.

The highest spikes in identity theft were recorded in Indonesia (748 percent), Nigeria (551 percent) and Turkey (390 percent) while deepfake fraud boomed in China (964 percent), Brazil (510 percent) and Pakistan (291 percent).

In terms of industry, the highest fraud rates were recorded in the crypto sector, followed by fintech and online betting or iGaming. Nigeria again took the lead in crypto fraud, seeing a rise of over 1,000 percent, while fintech fraud was the highest in Brazil, reaching a spike of over 900 percent. Surprisingly, the highest spike in iGaming fraud was recorded in Georgia, at almost 500 percent.

US consumers more open to digital IDs
Meanwhile, in the United States, fraudulent transactions in the first half of 2024 were up 73 percent year-on-year, according to Seattle-headquartered AuthenticID. Of this total, biometric authentication fraud accounted for 22 percent while ID verification fraud took up 78 percent. “What we’ve seen thus far in 2024 is that identity crime will continue to hit record highs, targeting both businesses and consumers,” says AuthenticID Founder and President Blair Cohen. Read more 


AI’s Growing Role in Finance Brings Opportunity and Risk, House Panel Finds 

PYMNTS.com

Artificial intelligence (AI) is set to revolutionize finance and housing, bringing both game-changing benefits and thorny new risks that demand vigilant oversight, a bipartisan House panel has concluded. The House Financial Services Committee’s AI Working Group, established in January by Chairman Patrick McHenry, R-N.C., and Ranking Member Maxine Waters, D-Calif., examined AI’s impact on finance through a series of roundtables with regulators, market participants and consumer advocates.

In a report released Thursday (July 18), the group highlighted AI’s potential to expand access to credit, enhance fraud detection and improve customer service. However, it also warned of challenges around data privacy, potential bias in algorithmic decision-making and the need to ensure AI systems comply with existing laws.

“As consumers and businesses increasingly look to leverage AI, it is critical that lawmakers and regulators keep pace,” McHenry said in a news release. “This report represents a bipartisan effort to understand the benefits, and potential risks, of artificial intelligence in the financial services and housing industries. It also highlights the need for proper oversight and consumer protections that address the growing number of use cases for artificial intelligence.”

The report comes as financial firms increasingly experiment with advanced AI capabilities, including generative AI systems like ChatGPT. While many institutions have used traditional machine learning models for years, newer AI technologies are opening up novel applications.

Expanding Credit Access
The AI Working Group held six roundtables to explore how the finance industry is using AI. Regulators told the group that AI could lead to bias and discrimination that might be harder to spot. They stressed that firms using AI must still follow anti-discrimination laws. The Consumer Financial Protection Bureau said if a lender can’t explain why AI denied a loan, it’s breaking the law.

According to participants in the roundtables, banks and investment firms are cautiously adopting AI, especially for public-facing tasks. Many have used machine learning for years to crunch data. Now, they’re testing newer AI to help with research, watch for market issues and improve trading. But there are risks. Too many firms using similar AI models could cause herd-like market behavior. Read more 


IDology Unveils Global Fraud Report, Reveals Growing Concerns About Generative AI’s Impact on Fraud

GBG IDology, PR Newswire

Digital channels continue to be a top target for fraudsters while generative AI becomes another tool in their arsenal

IDology, a GBG Company, today released its 2024 Global Fraud Report, confirming increasing concerns about the impact of generative AI on fraud. With rapid advancements in AI, businesses expressed heightened concerns about the evolution of familiar types of fraud—such as synthetic identity fraud (SIF) and phishing—fueled by generative AI.

Emerging Tech Fuels Familiar Fraud 
Generative AI can quickly turn out human-like text, realistic images, and even deepfake videos at scale, allowing fraudsters to create believable synthetic identities and phishing emails and texts with ease. Key findings related to generative AI include:

  • Many respondents named generative AI as the biggest fraud trend over the next 3-5 years.
  • Forty-five percent of companies are worried about generative AI’s ability to create more accurate synthetic identities. And with fraudsters leveraging generative AI, 74% are concerned about the potential for synthetic identity fraud (SIF) to increase.
  • Concern about SIF doesn’t always translate to action. Despite knowing the risk, the number of companies unsure if SIF has impacted their business or not tracking it at all has steadily increased, rising from 23% in 2021 to 39% in 2024.

Mobile and Online Fraud Remain Top Targets
While fraudsters leave no stone unturned, they continue to place their biggest bets on digital channels.

  • More than half (52%) of companies reported an overall increase in fraud across mobile, online, contact center, and in-person channels. Of those, the impact was felt the most in digital channels, with online and mobile accounting for 65% of the increase in fraud.
  • 70% of companies report a continued or higher investment in mobile over the next 12 months, making it mission-critical to balance convenience with strategies and solutions to ensure every customer experience is secure. Read more

July 19, 2024: AI & Digital Assets


Global Regulators Race to Tame the Wild West of AI

PYMNTS.com

FCC Proposes New Rules for AI-Generated Robocalls
FCC Chairwoman Jessica Rosenworcel on Tuesday (July 16) proposed new rules requiring the disclosure of AI use in robocalls to protect consumers from potential scams and misinformation. The proposal comes as AI tools are increasingly being leveraged for deceptive practices in telecommunications. According to the FCC, fraudsters have been using AI-generated voice cloning and other advanced techniques to create more convincing and potentially harmful robocalls.

This move is part of a broader effort by the Commission to address the challenges posed by rapidly evolving AI technologies in the communications sector, including recent actions against deepfake voice calls used for election misinformation and proposed fines for carriers involved in such practices.

“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” Rosenworcel said in a news release. “That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions.”

The proposed rules define AI-generated calls and mandate disclosure of AI use when obtaining consent and during each call. This move aims to help consumers identify and avoid potentially fraudulent calls. The proposal also seeks to safeguard positive applications of AI, particularly in assisting people with disabilities in using telephone networks. Additionally, it calls for comments on technologies that can alert consumers to unwanted AI-generated calls and texts.

This initiative follows a series of actions by the FCC to combat AI-related scams, including fines for illegal robocalls using deepfake technology and requests to carriers about their preventive measures against fraudulent AI-generated political calls. The full Commission will vote on the proposal in August.

EU’s AI Rules Set to Squeeze Chinese Tech Firms
The European Union’s groundbreaking Artificial Intelligence Act, set to take effect Aug. 1, is reportedly poised to hit Chinese tech companies’ wallets. Read more


Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

Kevin Townsend, Security Week

Few people understand AI, how to use or control it, or where it is going. Yet politicians wish to regulate it.

Governments are rushing to regulate artificial intelligence. Is meaningful regulation currently possible? AI is the new wild west of technology. Everybody sees enormous potential (or profit) and huge risks (to both business and society). But few people understand AI, how to use or control it, or where it is going. Yet politicians wish to regulate it.

We cannot deny that AI, currently in the form of generative AI (gen-AI) large language models (LLMs), is here and is here to stay. This is the beginning of a new journey: are we on a runaway horse that we can neither steer nor control, or can we rein it in through regulation?

From open non-profit to closed and profit-driven
Gen-AI is controlled by Big Tech, and Big Tech is driven by profit rather than user benefit. We can see the problems this can cause by looking at Microsoft and OpenAI. Similar problems and pressures will exist within all Big Tech companies heavily invested in developing AI.

The purpose of this analysis is not to be critical, but to demonstrate the complexities in developing and funding large-scale gen-AI, and by inference (pun intended), the difficulties for regulation. OpenAI was founded in 2015, describing itself as a non-profit artificial intelligence research company. Sam Altman is one of the co-founders. In 2019, Microsoft invested $1 billion in OpenAI. Read more


Digital is Draining Banks’ Emotional Connections with Customers. GenAI May Make Things Worse

Steve Cocheo, The Financial Brand

Digital transformation has dominated much of the industry’s thinking, so much so that it is eroding customer experience ratings issued annually by Forrester. How can banks provide the latest in digital without becoming generic and unsatisfying?

Forrester’s average customer experience scores for both multichannel banks and direct banks fell for the third year in a row, hitting the lowest levels in years.

Worse, the industry risks driving these measurements lower still if institutions don’t provide greater balance between digitization of financial services and the human factor, according to a senior analyst at Forrester. Further, misguided adoption of generative artificial intelligence could open further chasms between banks and their customers.

Alyson Clarke, principal analyst, says the industry is in some cases working against its own interests.

“Primacy is back on the table,” says Clarke. “But what’s fascinating is that banks say they want primacy, but they don’t want their customers to interact with them. They keep pursuing cost reduction, but they don’t understand what drives customer loyalty.”

Increasingly, many institutions are pushing most customers towards self-service solutions, which are very efficient and good for keeping costs down. But they increase the distance between the bank and its customers.

“I don’t know about you, but I find it just becomes increasingly harder and harder to talk to a human being at your bank when you want to,” says Clarke. Read more


Three Benefits of AI For Credit Unions

Ashley LaBombard, CUSO Magazine

Artificial intelligence (AI) is all the buzz these days, even for financial institutions! While you may be wary about this technology as it’s still somewhat in its infancy, there are certainly reasons to consider AI for credit unions.

Benefits AI can bring for your members
As you discover the benefits of AI for credit unions, be sure to stay abreast of the potential risks. Here are three big benefits of AI for credit union members:

1. Improve member communication
With more members and often fewer employees, efficient communication isn’t always possible—at least not in the traditional manner. Credit unions are known for their “family feel,” welcoming environment, and personable nature. While you may think AI will diminish that, it can help your credit union uphold its reputation.

Improved member support
With an AI-powered chatbot, simple questions can be answered promptly without taking time away from a staff member. Therefore, members with more complicated inquiries won’t have to wait on hold for a member service representative.

AI can also help break down language and communication barriers, providing a more inclusive experience for your credit union. When dealing with a diverse member makeup, your team may be unable to support those members who are not native English speakers.

“…Cultural misunderstandings cost American businesses over $2 billion per year…” and can also negatively impact the experience your members have. Enter the interpretative abilities of AI. Member questions and concerns can be translated, and answers can be provided in their first language. Additionally, your credit union can substantially save by harnessing AI for this service instead of hiring costly translation services (that may not be as timely or responsive as AI would be).

Personalized member advice
Generative AI can also help improve communications by generating personalized advice for members based on an individual’s historic member data (such as their loan or payment history). Relationship building is a key differentiator of credit unions, and personalized advice, particularly involving financial wellness, can help strengthen relationships between the member and the credit union. Read more 

 

July 12, 2024: AI & Digital Assets


OpenAI Breach Is a Reminder That AI Companies Are Treasure Troves for Hackers

Devin Coldewey, TechCrunch

There’s no need to worry that your secret ChatGPT conversations were obtained in a recently reported breach of OpenAI’s systems. The hack itself, while troubling, appears to have been superficial — but it’s a reminder that AI companies have in short order made themselves into one of the juiciest targets out there for hackers.

The New York Times reported the hack in more detail after former OpenAI employee Leopold Aschenbrenner hinted at it recently in a podcast. He called it a “major security incident,” but unnamed company sources told the Times the hacker only got access to an employee discussion forum. (I reached out to OpenAI for confirmation and comment.)

No security breach should really be treated as trivial, and eavesdropping on internal OpenAI development talk certainly has its value. But it’s far from a hacker getting access to internal systems, models in progress, secret roadmaps, and so on. Still, it should scare us anyway, and not necessarily because of the threat of China or other adversaries overtaking us in the AI arms race. The simple fact is that these AI companies have become gatekeepers to a tremendous amount of very valuable data.

Let’s talk about three kinds of data OpenAI and, to a lesser extent, other AI companies created or have access to: high-quality training data, bulk user interactions, and customer data. Read more


The May theft of more than $300 million in Bitcoin from Japanese crypto exchange DMM Bitcoin is the largest digital currency heist so far this year.

Alexei Alexis, Banking Dive

Dive Brief:

  • Hackers stole $1.38 billion in cryptocurrency during the first half of the year, about twice the amount stolen during the same period in 2023, according to a recent report from blockchain intelligence firm TRM.
  • Similar to 2023, a small number of large attacks made up the lion’s share of the thefts, with the top five hacks and exploits accounting for 70% of the total amount stolen as of June 24, according to the research.
  • TRM said it has so far observed no fundamental changes in the security of the cryptocurrency market that may explain the trend. “However, the past six months did see significantly higher average token prices compared to this period last year; this is likely to have contributed to the increased theft volumes,” the report said.

Dive Insight:
The number of large enterprises using cryptocurrency for payment, stored value or collateral will rise to at least 20% this year, highlighting a trend that poses new financial risk management challenges for organizations and their CFOs, according to a 2022 prediction by Gartner.

Since Bitcoin’s launch in 2009, cryptocurrencies have exploded in popularity and are today collectively worth more than $1 trillion, the Council on Foreign Relations reported in January. Governments worldwide are grappling with challenges related to cryptocurrencies, including concerns over criminal activity, consumer protection and high levels of currency volatility, the report said.

In May, Japanese cryptocurrency exchange DMM Bitcoin suffered a theft of Bitcoin worth more than $300 million at the time, marking the largest crypto-related attack so far in 2024, according to TRM’s report.

“More money was stolen during each of the first six months of 2024 than in the corresponding months in 2023, with the median hack 150% larger,” the report said. “However, thefts from hacks and exploits are a third below the same period in 2022, which remains a record year.”  Read more


Regulating General-Purpose AI: Areas of Convergence and Divergence Across the EU and the U.S.

Benjamin Cedric Larsen and Sabrina Küspert, Brookings

The current fast-paced advancement of AI has been described as an “unprecedented moment in history” by one of the pioneers of this field, Yoshua Bengio, at a U.S. Senate hearing in 2023. In many cases, recent progress can be linked to the development of so-called “general-purpose AI” or “foundation models.” These models can be understood as the “building blocks” for many AI systems, used for a variety of tasks. OpenAI’s GPT-4, with its user-facing system ChatGPT and third-party applications building on it, is one example of a general-purpose AI model. Only a small number of actors with significant resources have released such models. Yet, they reach hundreds of millions of users with direct access, and power thousands of applications built on top of them across a range of sectors, including education, healthcare, media and finance. The developments surrounding the release and adoption of increasingly advanced general-purpose AI models have brought renewed urgency to the question of how to govern them, on both sides of the Atlantic and elsewhere.

The European Parliament has acknowledged that the speed of technological progress around general-purpose AI models is faster and more unpredictable than anticipated by policymakers. At the end of 2023, EU lawmakers reached political agreement on the EU AI Act, a pioneering legislative framework on AI, which introduces binding rules for general-purpose AI models and a centralised governance structure at the EU level through a new European AI Office.

Until recently, the U.S. government has pursued a more laissez-faire approach to AI regulation. The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in fall 2023, outlines a comprehensive approach to U.S. AI governance. Members of Congress also have presented a variety of legislative proposals for AI, but a federal legislative process dedicated to regulating general-purpose AI models remains absent. Read more


Visa and Tangem Unveil Combined Payment Card-Crypto Wallet

PYMNTS.com

Switzerland-based cryptocurrency wallet maker Tangem AG has launched a payments partnership with Visa.

The collaboration, announced Friday (July 5), has resulted in a Visa payments card combined with a hardware wallet that lets Tangem users make payments using their crypto or stablecoin balances at merchants that accept Visa.

“We are delighted that Visa has chosen to partner with Tangem, one of the most reliable and secure solutions for personal cryptocurrency storage,” Andrey Kurennykh, co-founder and CEO of Tangem, said in a news release. “Our users will get a two-in-one solution — the convenience of a regular bank card and the capabilities of a self-custodial crypto wallet, all in one card.”

Kurennykh added that the partnership will go a long way toward “bridging the gap between traditional banking and digital assets, making it easier for everyday users to navigate and leverage the benefits of both worlds.” According to the release, the new solution differs from traditional custodial solutions, which rely on third-party entities to handle user funds. In this case, Tangem’s card embeds a private key within the chip and requires the physical card’s use for every transaction, making sure users are always in control of their assets.
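
To make the self-custody pattern the release describes more concrete, here is a minimal, hypothetical Python sketch (not Tangem's actual firmware): the private key is generated and held inside a single object standing in for the card's secure chip, and only signatures ever leave it. The HardwareWalletStub class is invented for illustration, and the third-party cryptography package is assumed.

```python
# Hypothetical sketch of self-custodial signing (not Tangem's implementation).
# Assumes the third-party `cryptography` package. The key is created and kept
# inside HardwareWalletStub (a stand-in for the card's secure chip), and only
# signatures are ever exported.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

class HardwareWalletStub:
    """Stand-in for a secure element: the private key never leaves this object."""

    def __init__(self):
        self._private_key = ec.generate_private_key(ec.SECP256K1())

    def public_key_pem(self) -> bytes:
        # The public key can be shared freely, e.g. to derive an address.
        return self._private_key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )

    def sign(self, payload: bytes) -> bytes:
        # Every transaction must be presented to the "card" for a signature.
        return self._private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

card = HardwareWalletStub()
tx = b'{"to": "merchant-123", "amount": "25.00"}'
signature = card.sign(tx)  # the signature leaves the card; the key does not
```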

The partnership is happening at a moment when, as PYMNTS wrote earlier this week, the cryptocurrency and blockchain sector finds itself at a crucial juncture. “It is the same critical juncture, or at least one strikingly similar, that the crypto and digital asset sector has always found itself at — a juncture where regulatory developments, interoperability and scalability, and institutional acceptance are at the forefront,” that report said. Read more

 

June 28, 2024: AI & Digital Assets


Credit Unions Must Share Data to Fight New AI Fraud Risks

Tom Oscherwitz, Informed.IQ/CUSO Magazine

In March, the Department of the Treasury issued a troubling report warning financial institutions they are at risk of emerging AI fraud threats. The culprit is a failure to collaborate. The report warns that lenders are not sharing “fraud data with each other to the extent that would be needed to train anti-fraud AI models.”

This report should be a wake-up call. As any fraud-fighting veteran knows, combating fraud is a perpetual arms race. And when new technologies like generative AI emerge, the status quo is disrupted. Right now, the fraudsters are gaining the upper hand. According to a recent survey by the technology firm Sift, two-thirds of consumers have noticed an increase in fraud scams since November 2022, when generative AI tools hit the market.

How is AI changing the fraud landscape? According to the Treasury report, new AI technologies are “lowering the barrier to entry for attackers, increasing the sophistication and automation of attacks, and decreasing time-to-exploit.” These technologies “can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks.”

The same generative AI technology that helps people create songs, draw pictures, and improve their software coding is now being used by fraudsters. For example, they can purchase an AI chatbot on the Dark Web, called FraudGPT, to create phishing emails and phony landing pages. AI technology can help produce human-sounding text or images to support impersonation and generate realistic bank statements with plausible transactions. Read more


FBI Warns of Fake Law Firms Targeting Crypto Scam Victims

Bill Toulas, Bleeping Computer

The FBI is warning of cybercriminals posing as law firms and lawyers who offer cryptocurrency recovery services to victims of investment scams, only to steal their funds and personal information.

The latest alert is an update to a similar warning from the agency’s Internet Crime Complaint Center (IC3), which warned of an increase in scams involving fake services for recovering digital assets.

Posing as lawyers
The FBI says that fraudsters convince victims the service is legitimate by claiming collaboration with government agencies like the FBI and the Consumer Financial Protection Bureau (CFPB).

They also build credibility by referencing real financial institutions and money exchanges in their communication with the victims. This tactic gives a false sense of authorization and capability to trace and recover lost funds.

According to the FBI, the scammers commonly:

  • Request victims to provide personal or banking information to get their money back.
  • Request victims to state the judgment amount they seek from the initial fraudster.
  • Request victims to pay a portion of fees upfront, with the balance due when funds are recovered.
  • Direct victims to pay back taxes and other fees to recover their funds.

Read more


DHS Names China, AI, Cyber Standards as Key Priorities for Critical Infrastructure

Agencies that oversee critical infrastructure are developing new sector risk management plans, with cybersecurity continuing to be a high priority.

Justin Doubleday, Federal News Network

Agencies that oversee critical infrastructure should address threats posed by China and work to establish baseline cybersecurity requirements over the next two years.

That’s according to new guidance signed out by Homeland Security Secretary Alejandro Mayorkas on June 14. The document lays out priorities over the next two years for sector risk management agencies. SRMAs are responsible for overseeing the security of specific critical infrastructure sectors.

“From the banking system to the electric grid, from healthcare to our nation’s water systems and more, we depend on the reliable functioning of our critical infrastructure as a matter of national security, economic security, and public safety,” Mayorkas said in a statement. “The threats facing our critical infrastructure demand a whole of society response and the priorities set forth in this memo will guide that work.” Read more


Coinbase Accuses U.S. SEC, FDIC of Improperly Blocking Document Requests

The U.S. crypto exchange wanted the SEC to give up documents on closed probes involving ether’s status as a security, and its research contractor is now suing to get them.

Jesse Hamilton, CoinDesk

  • Coinbase, through an intermediary, is again taking U.S. regulators to court to argue about Freedom of Information Act requests.
  • The U.S. crypto exchange is going after documents at the Securities and Exchange Commission that may reveal how the agency first began deciding which digital tokens it would consider securities.
  • The company’s contractor, History Associates, is also suing the Federal Deposit Insurance Corp. over letters sent to financial firms to ask them to pause crypto activities.

A research firm Coinbase contracted is suing the U.S. Securities and Exchange Commission (SEC) and a federal banking agency, accusing them on Thursday of failing to produce documents under open-records laws that would shed light on the regulators’ views on cryptocurrencies.

On behalf of the U.S. digital assets exchange, History Associates Inc. said it’s been improperly rebuffed by the SEC and the Federal Deposit Insurance Corp. regarding documents that Coinbase contends should be available under the Freedom of Information Act (FOIA). At the SEC, Coinbase is seeking written communications in three closed cases for how the agency formally worked out what digital assets it thinks qualify as securities, including Ethereum’s ether (ETH). And at the FDIC, the exchange wants copies of the so-called “pause letters” the agency’s inspector general said were sent to financial firms advising that they slam the brakes on crypto activity. Read more


June 21, 2024: AI & Digital Assets


In Spite of Hype, Many Companies Are Moving Cautiously When It Comes to Generative AI

It’s harder to implement at scale than it looks

Vendors would have you believe that we are in the midst of an AI revolution, one that is changing the very nature of how we work. But the truth, according to several recent studies, suggests that it’s much more nuanced than that.

Companies are extremely interested in generative AI as vendors push potential benefits, but turning that desire from a proof of concept into a working product is proving much more challenging: They’re running up against the technical complexity of implementation, whether that’s due to technical debt from an older technology stack or simply a lack of people with the appropriate skills.

In fact, a recent study by Gartner found that the top two barriers to implementing AI solutions were finding ways to estimate and demonstrate value at 49% and a lack of talent at 42%. These two elements could turn out to be key obstacles for companies. Consider that a study by LucidWorks, an enterprise search technology company, found that just 1 in 4 of those surveyed reported successfully implementing a generative AI project.

Aamer Baig, senior partner at McKinsey and Company, speaking at the MIT Sloan CIO Symposium in May, said his company has also found in a recent survey that just 10% of companies are implementing generative AI projects at scale. He also reported that just 15% were seeing any positive impact on earnings. That suggests that the hype might be far ahead of the reality most companies are experiencing.

What’s the holdup?
Baig sees complexity as the primary factor slowing companies down: even a simple project requires 20 to 30 technology elements, with the right LLM being just the starting point. They also need things like proper data and security controls, and employees may have to learn new capabilities like prompt engineering and how to implement IP controls, among other things. Read more


Citi Sees AI Impacting More Than Half of All Finance Jobs

PYMNTS.com

Which industry’s job market will be most impacted by artificial intelligence (AI)?

According to a new report by Citi, a little more than half — 54% — of jobs in the banking sector have a higher potential for automation, while another 12% could be augmented by AI.

“AI-powered clients could increase price competition in the finance sector. The balance of power may shift,” the banking giant said in the intro to the report. “AI may be adopted faster by digitally native, cloud-based firms, such as FinTechs and BigTechs, with agile incumbent banks following fast. Many incumbents, weighed down by tech and culture debt, could lag in AI adoption, losing market share.”

The report also noted that a move to a “bot-powered world” also raises issues dealing with compliance, security, regulation and ethics. “Since AI models are known to hallucinate and create information that does not exist, organizations run the risk of AI chatbots going fully autonomous and negatively affecting the business financially or its reputation,” Citi said.

Other industries with a high potential for automation, the report said, include insurance (46%), capital markets (40%) and energy (43%). Citi’s report follows one earlier this year from the International Monetary Fund (IMF), which contended that the impact of AI will be especially pronounced on advanced economies.

While about 40% of global employment is exposed to AI, around 60% of jobs in advanced economies could be impacted by the technology, as it tends to affect high-skilled jobs. Meanwhile, PYMNTS recently examined the way technology such as AI and automation are increasingly combining to help CFOs amid a shortage of accountants. Read more


Understanding the EU AI Act: A Security Perspective

Megan Gates, ASIS Online

Almost six years after the European Union (EU) set the global standard for privacy regulation, it’s poised to make similar moves to regulate artificial intelligence (AI) systems and technologies.

The EU AI Act was originally proposed in April 2021 before being endorsed by the European Parliament on 13 March 2024 (523 votes in favor, 46 against, and 49 abstentions).

As of late March, the act was in its final review stage before becoming law and member states issuing guidance on its implementation.

“Considering the significant majority in the European Parliament vote, we do not foresee any member states withholding approval of the act,” says Dave McCarthy, program manager, government relations, Axis Communications, which is headquartered in Sweden. “Throughout the coming months, we will closely monitor the implementation of the EU AI Act, including the delegating acts and the emergence of new standards.”

Dragos Tudorache, civil liberties committee co-rapporteur and MEP representing Romania, said in a statement that the EU has now linked the concept of AI to the fundamental values that form the basis of member states’ societies. Read more


Wholesale, Not Retail, CBDCs More Likely to Be Issued in Near-Term

FinExtra

There has been a sharp uptick in experiments and pilots with wholesale central bank digital currencies over the last year, according to the Bank for International Settlements. In recent years, almost all central banks have begun exploring CBDCs – 94% of 86 surveyed by BIS in late 2023.

Now, there is a shift away from theoretical research on potential implications towards real-life experiments to test the feasibility and desirability of specific design features. More than half of the central banks quizzed by BIS are working on proofs of concept, and one out of three is running a pilot.

And, while much of the debate around the issue has focused on retail CBDCs, the banks are now shifting focus to wholesale, where there has been a noticeable uptick in experiments. BIS predicts the “likelihood that central banks will issue a CBDC within the next six years is now generally greater for wholesale than for retail CBDC”.

As for designs, many CBDC features are still undecided. Yet, interoperability and programmability are often considered for wholesale CBDCs, while for retail CBDCs, more than half of central banks are considering holding limits, interoperability, offline options and zero remuneration.

On crypto, the survey indicates that, to date, stablecoins are rarely used for payments outside the crypto ecosystem. Moreover, about two out of three responding jurisdictions have or are working on a framework to regulate stablecoins and other cryptoassets.

 

June 14, 2024: AI & Digital Assets


States Take Up A.I. Regulation Amid Federal Standstill

California legislators have made the biggest push to pass new laws to rein in the technology. Colorado passed one protecting consumers.

Cecilia Kang, New York Times

Lawmakers in California last month advanced about 30 new measures on artificial intelligence aimed at protecting consumers and jobs, one of the biggest efforts yet to regulate the new technology.

The bills seek the toughest restrictions in the nation on A.I., which some technologists warn could kill entire categories of jobs, throw elections into chaos with disinformation, and pose national security risks. The California proposals, many of which have gained broad support, include rules to prevent A.I. tools from discriminating in housing and health care services. They also aim to protect intellectual property and jobs.

California’s legislature, which is expected to vote on the proposed laws by Aug. 31, has already helped shape U.S. tech consumer protections. The state passed a privacy law in 2020 that curbed the collection of user data, and in 2022 it passed a child safety law that created safeguards for those under 18.

“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic assembly member who chairs the State Assembly’s Privacy and Consumer Protection Committee.

As federal lawmakers drag out regulating A.I., state legislators have stepped into the vacuum with a flurry of bills poised to become de facto regulations for all Americans. Tech laws like those in California frequently set precedent for the nation, in large part because lawmakers across the country know it can be challenging for companies to comply with a patchwork across state lines. Read more


78% of CFOs Say AI Is ‘Extremely Important’ to Payments Processes

PYMNTS.com

Most CFOs working on behalf of organizations earning at least $1 billion annually use multiple systems to manage their source-to-pay cycles, and nearly 60% of firms earning between $10 billion and $20 billion each year rely on at least five accounts payable (AP) systems.

Why does this matter? Because multiple systems can mean multiple points of vulnerability. Each system represents a door to potential interoperability or incompatibility problems that can ultimately cause AP processes to grind to a halt.

And that is exactly what is happening. As PYMNTS Intelligence revealed in “60 CFOs Can’t Be Wrong … AI Can Help Accounts Payable” — a report based on surveys with 60 chief financial officers representing U.S. companies that generate more than $1 billion annually — nearly two-thirds of the CFOs surveyed say that their companies were impacted by source-to-pay cycle interruptions in the past year, and, as a result, the CFOs experienced payment execution and authorization delays.

Meanwhile, data also shows that only 17% of enterprise firms run their source-to-pay cycle largely free of human involvement, meaning the remaining 83% rely on manual processes and may benefit from an automated, single-source AP system.

Inefficiencies such as these — multiple AP systems, manual processes — are likely the reason CFOs are increasingly embracing artificial intelligence (AI) to streamline their processes. Granted, some CFOs appear to be slower to implement AI depending on their past experiences with automation, which disfavors those with a larger number of systems in their AP processes. Notably, all of the CFOs who say that accessing AI for source-to-pay systems is not at all or only slightly important to them have two or more different AP systems.

But overall, most CFOs surveyed — 78% — say their access to AI technology is very or extremely important. Only 5% say the AI automation they are now using to support their AP cycle is not at all or only slightly important. Read more


Financial Professional or Artificial Intelligence? FINRA Foundation Report Examines Which of These Consumers Trust More

Key findings include the following:

  • Few report currently relying on AI for financial advice: Over half of the respondents consulted with financial professionals (63 percent) and friends and family (56 percent) for information when making financial decisions, while only 5 percent indicated they used AI.
  • Homeownership information: Respondents broadly trusted information about homeownership regardless of the source. However, more respondents trusted the information when they were told a financial professional provided it, while more distrusted it when AI was cited as the source.
  • Projected stock and bond performance information: Overall, roughly one-third of the respondents trusted the information, whether the source was AI (34 percent) or a financial professional (33 percent). However, white men were more likely to trust AI compared to a financial professional. The same was true among those with a higher level of self-assessed financial knowledge.
  • Portfolio allocation information: More respondents trusted the information when coming from a financial professional (37 percent) than from AI (30 percent).
  • Savings and debt information: Respondents generally trusted the information whether it came from AI or a financial professional. However, a greater proportion of Black respondents trusted the information when it came from a financial professional (69 percent) compared to AI (48 percent).

The FINRA Investor Education Foundation (FINRA Foundation) has released a new report, The machines are coming (with personal finance information). Do we trust them?

Despite the growing popularity of artificial intelligence (AI), very few consumers knowingly turn to AI for information on personal finances, according to the report.

“As AI continues to be integrated into consumers’ everyday lives, it is vital to get a better understanding of how they perceive it and how they are using the technology to help make financial decisions. This report found that while more consumers indicated trusting individual financial professionals than AI, there are instances where some consumers preferred AI-generated information related to topics like homeownership and saving,” said FINRA Foundation President Gerri Walsh. “These perceptions could change with time, so it will be crucial for the financial services industry to continue to better understand how consumers interact with AI to better equip them with the resources and knowledge to make sound financial decisions.” Read more


Visa Reaches Milestone with Tokenization Technology

Dave Kovaleski, Financial Regulation News

Payment processing company Visa hit a new milestone with its tokenization technology, as Visa tokens have generated more than $40 billion in incremental ecommerce revenue for businesses globally.

Over the last 10 years, Visa has enhanced its security across the payment ecosystem through tokenization – a technology that replaces sensitive personal data with a cryptographic token that conceals the underlying payment data. Tokenization can be embedded into any device, making digital payments more secure because a stolen token is virtually useless to scammers. Currently, 29 percent of all transactions processed by Visa use tokens. Since the technology launched in 2014, Visa has issued more than 10 billion tokens.
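
As a rough illustration of the pattern described above (a sketch of the general technique, not Visa's actual system), the Python snippet below swaps a card number for a random surrogate held in a vault. The TokenVault class is hypothetical; the point is that a token intercepted in transit reveals nothing without access to the vault.

```python
# Hypothetical sketch of payment tokenization (not Visa's implementation).
# The card number (PAN) is swapped for a random surrogate; only the vault
# can map the token back, so a stolen token is useless on its own.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random surrogate, unrelated to the PAN
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # In a real deployment, this lookup happens only inside the vault.
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")   # a standard test card number
print(token)                    # e.g. 'f3a91c0d2b7e6a44' (safe to store/transmit)
print(vault.detokenize(token))  # '4111111111111111' (vault-only operation)
```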

Tokenization technology has led to a six-basis point increase in payment approval rates globally. In addition, tokenization can reduce the rate of fraud by up to 60 percent. In the last year alone, it prevented some $650 million in fraud.

“Today’s milestone represents the impact that tokenization has had on the entire payments ecosystem since we introduced the technology 10 years ago,” Jack Forestell, chief product officer at Visa, said. “Tokens have changed the game – securing online payments and paving the way for more innovations – from tapping to pay on a phone to enabling a future where we have more control over our data in the age of AI.”

Visa, which now has issued its 10 billionth token, has seen the adoption of tokens accelerate significantly in the last four years, due in part to the shift to digital during the pandemic. Currently, over 8,000 issuers are enabled for tokenization, with over 200 markets empowered with the technology globally. In the last 12 months, over 1.5 million eCommerce merchants transacted with Visa Tokens every day. Read more


June 7, 2024: AI & Digital Assets


Yellen to Warn of ‘Significant Risks’ From Use of AI in Finance

David Lawder, Reuters

U.S. Treasury Secretary Janet Yellen will warn that the use of artificial intelligence in finance could lower transaction costs, but carries “significant risks,” according to excerpts from a speech to be delivered on Thursday.

In the remarks to a Financial Stability Oversight Council and Brookings Institution AI conference, Yellen says AI-related risks have moved towards the top of the regulatory council’s agenda.

“Specific vulnerabilities may arise from the complexity and opacity of AI models, inadequate risk management frameworks to account for AI risks and interconnections that emerge as many market participants rely on the same data and models,” Yellen says in the excerpts.

She also notes that concentration among the vendors that develop AI models and that provide data and cloud services may also introduce risks that could amplify existing third-party service provider risks.

“And insufficient or faulty data could also perpetuate or introduce new biases in financial decision-making,” according to Yellen.

But the remarks to the conference on AI and financial stability show that Yellen recognizes the benefits of AI in the automation of customer support services, improved efficiency, fraud detection and combating illicit finance.

“Advances in natural language processing, image recognition, and generative AI, for example, create new opportunities to make financial services less costly and easier to access,” Yellen says in the excerpts. Read more


Is The Latest Crypto Bill an Opening for Banks to Bypass Regulation?

Claire Williams, American Banker

A crypto bill that passed by an unusually wide bipartisan margin in the House could create a loophole for traditional financial firms — including banks — to slip past more stringent regulation.

The House last month voted to pass a bill establishing a regulatory regime for cryptocurrencies, and the bill notably passed with uncommon bipartisan support. Seventy-one Democrats voted for the bill, including senior party members like former House speaker Rep. Nancy Pelosi of California.

The bill faces tougher odds in the Senate, where Senate Banking Committee Chairman Sherrod Brown, D-Ohio, has been skeptical of crypto legislation that he sees as being too favorable to the crypto industry.

But the vote in the House was a turning point for crypto in Washington, as more Democratic lawmakers seem to be interested in considering some kind of bill. Senate majority leader Chuck Schumer, D-N.Y., along with a group of mostly northeast and west coast Democrats, are becoming more friendly toward a narrow set of crypto issues, creating an opportunity for the crypto industry and for Republicans to find common ground and pass a bill setting up a regulatory regime for the industry.

A number of Democratic senators, including Schumer, voted in favor of the Congressional Review Act challenge to the SEC’s staff accounting bulletin 121, which banks argue effectively undercuts their ability to custody cryptocurrency. While Biden vetoed that challenge on Friday, the bill that just recently came out of the House would have a very similar effect, barring the SEC from making rules or issuing guidelines that would prevent banks from entering the crypto custody business.

Going forward, a key element of the forthcoming debate on the crypto bill — known as the “Financial Innovation and Technology for the 21st Century Act,” or FIT21 — will be the implications of the bill not just on crypto, but traditional financial institutions, including banks. Read more

Study Finds That AI Models Hold Opposing Views on Controversial Topics

Kyle Wiggers, TechCrunch

Not all generative AI models are created equal, particularly when it comes to how they treat polarizing subject matter.

In a recent study presented at the 2024 ACM Fairness, Accountability and Transparency (FAccT) conference, researchers at Carnegie Mellon, the University of Amsterdam and AI startup Hugging Face tested several open text-analyzing models, including Meta’s Llama 3, to see how they’d respond to questions relating to LGBTQ+ rights, social welfare, surrogacy and more.

They found that the models tended to answer questions inconsistently, which reflects biases embedded in the data used to train the models, they say. “Throughout our experiments, we found significant discrepancies in how models from different regions handle sensitive topics,” Giada Pistilli, principal ethicist and a co-author on the study, told TechCrunch. “Our research shows significant variation in the values conveyed by model responses, depending on culture and language.”

Text-analyzing models, like all generative AI models, are statistical probability machines. Based on vast amounts of examples, they guess which data makes the most “sense” to place where (e.g. the word “go” before “the market” in the sentence “I go to the market”). If the examples are biased, the models, too, will be biased — and that bias will show in the models’ responses.
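
To make the "statistical probability machine" point concrete, here is a toy bigram model in Python: it counts which word follows which in a tiny training text and predicts the most frequent continuation. Real LLMs learn these statistics with billions of parameters rather than raw counts, but the core idea, and the way skewed training examples skew the output, is the same.

```python
# Toy bigram model (illustrative only): count which word follows which in a
# tiny training text, then "predict" the most frequent continuation.
from collections import Counter, defaultdict

training_text = "i go to the market . i go to the park . i walk to the market ."
words = training_text.split()

follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Choose the statistically likeliest next word seen in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("go"))   # -> "to"
print(predict_next("the"))  # -> "market" (seen twice, vs. "park" once)
```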


ICYMI FTX and Binance: How Latest Crypto Scandals Could Influence Public Opinion on Digital Currency Regulation

Pepper Culpepper, University of Oxford, the Conversation

True believers in cryptocurrency have had a rough few weeks. The US government just fined Binance – the world’s largest crypto exchange – US$4.3 billion (£3.4 billion) for its involvement in money laundering.

It forced the firm to accept intrusive monitoring and demanded that its secretive boss, Changpeng Zhao, step down and pay a personal fine of $50 million. Zhao, known as CZ, has been called the most powerful man in crypto.

The industry is still reeling from the conviction of Zhao’s bitter rival, Sam Bankman-Fried, earlier in November on seven counts of fraud and conspiracy. His company FTX – previously the second-largest crypto exchange in the world – collapsed in November 2022. SBF, as he is commonly known, could theoretically face more than 100 years in jail when he is sentenced in March 2024. Several other former leading crypto executives are also under investigation or being prosecuted.

Crypto has just squared off against the US state, and at halftime the result is state power 2 – crypto 0. After this display of muscular regulation on one side of the Atlantic, what is the future of crypto regulation in the UK? Read more

May 31, 2024: AI & Digital Assets


Financial Institutions Face Rising Threat from Sophisticated AI Fraud

Savannah Fortis, CoinTelegraph

Many financial institutions are struggling to keep up with the rising sophistication of AI-driven fraud, creating a critical need for enhanced detection and prevention methods.

In the world of finance, artificial intelligence (AI) has emerged as both a tool and a generator of new problems. It brings forth innovation, productivity and efficiencies for companies; however, it has also introduced sophisticated challenges that many financial institutions are unprepared to address.

Since the rise of accessible AI tools, many financial institutions have been struggling with a lack of tools to accurately identify and segregate AI fraud from other types of fraud. This inability to differentiate various fraud types within their systems leaves these institutions with a blind spot and makes it difficult to comprehend the scope and impact of AI-driven fraud.

Cointelegraph heard from Ari Jacoby, an AI fraud expert and the CEO of Deduce, to better understand how financial institutions can identify and separate AI fraud, what can be done to prevent this type of fraud before it occurs and how its rapid growth may impact the entire industry.

AI fraud identification
The main challenge is that most financial institutions currently have no way of distinguishing AI-generated fraud from all other types, so it is aggregated into one category of fraud. Jacoby said the combination of legitimate personal identifiable information — like social security numbers, names, and birthdates — with socially engineered email addresses and legitimate phone numbers makes detection by legacy systems nearly impossible. Read more


OPINION: The Future of AI Is Decentralized

Alex Goh, CoinDesk

Younger readers may not remember, but cloud computing was once the future. The advent of unlimited computing and storage resources represented one of the few tech ‘revolutions’ worthy of the name. But the age of AI has made the centralized cloud model not only obsolete but also an active danger for those building on it — and for every user, too.

If that sounds a little hyperbolic, consider the recently uncovered vulnerability affecting Hugging Face, a major AI-as-a-Service platform. This vulnerability could have allowed tampered models uploaded by users to execute arbitrary code via the platform’s inference API feature and gain escalated control. Fortunately, this was spotted in time and did not seem to have seriously affected users — although researchers point out that such vulnerabilities are “far from unique.”

The problem here isn’t with AI at all; it’s the outdated, centralized, X-as-a-Service models, where there’s no incentive either to guarantee the security of their systems or to develop applications that the market and ordinary users want. The preferred future of AI — where it is safe, secure, and, above all, able to draw on vast compute resources — can only be achieved by flipping the cloud on its head and embracing the decentralization revolution.

‘Big Cloud’ and the monopolization of AI
Megacorps like Microsoft, OpenAI, Google, and Amazon dominate the AI field because they have the immense financial, human, and compute resources necessary to make it work at scale. Read more


How Fraudsters Stole $37M from Coinbase Pro Users

Zeljka Zorz, Help Net Security

A convincing phishing page and some over-the-phone social engineering allowed a group of crooks to steal over $37 million from unlucky Coinbase Pro users. One of them – Chirag Tomar, a 30-year-old citizen of the Republic of India – has been arrested on US soil, has pleaded guilty to wire fraud conspiracy, and is awaiting sentencing.

The scheme
Around June 2021, Tomar and his co-conspirators set up a spoofed Coinbase Pro website at CoinbasePro.Com, the prosecutors claim. (The legitimate site was hosted at Pro.Coinbase.Com.)
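
As a side note on why the spoof works: "CoinbasePro.Com" is a completely separate registrable domain from "Pro.Coinbase.Com", even though the strings look alike. The hypothetical check below (not from the article or the prosecutors' filing) shows how comparing registrable domains catches the spoof while a naive substring test does not; production systems would typically consult the Public Suffix List rather than hard-code a domain.

```python
# Hypothetical illustration: a lookalike-domain check using only the standard
# library. Only the registrable domain (the part the registrant controls)
# identifies the owner; substring matching is easily fooled.
from urllib.parse import urlparse

LEGIT_DOMAIN = "coinbase.com"

def is_legit_host(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host == LEGIT_DOMAIN or host.endswith("." + LEGIT_DOMAIN)

print(is_legit_host("https://pro.coinbase.com/login"))  # True: real subdomain
print(is_legit_host("https://coinbasepro.com/login"))   # False: separate domain
print("coinbase" in "coinbasepro.com")                  # True: naive check fooled
```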

“Once victims entered their login credentials into the fake website, an authentication process was triggered. In some instances, victims were tricked into providing their login and authentication information of the real Coinbase website to fraudsters. Other times, victims were tricked into allowing fake Coinbase representatives into executing remote desktop software, which enabled fraudsters to gain control of victims’ computers and access their legitimate Coinbase accounts,” says the US Department of Justice.

In some cases, the fraudsters impersonated Coinbase customer service representatives and tricked the users into providing their two-factor authentication codes over the phone. Read more


Ripple Donates $25M to Crypto Super PAC

FinExtra

Ripple is pumping $25 million into Fairshake, a federal super political action committee (PAC) backing pro-crypto candidates during the 2024 US elections. The money brings Ripple’s total contribution to Fairshake to $50 million, nearly half the super PAC’s entire money on hand.

Explaining its decision, Ripple says the SEC’s approach to regulating crypto has failed and that the 2024 elections will be the “most consequential in crypto’s history”. “We must elect leaders who understand this potential and support policies that protect consumers and markets in ways that are fair and innovation-forward,” says the firm.

Fairshake recently spent $10 million helping to kill off a Senate run from Democrat Congresswoman Katie Porter. The industry secured another win last week when the House of Representatives passed the Financial Innovation and Technology for the 21st Century Act despite opposition from the Biden Administration and SEC chair Gary Gensler.

The bill establishes a regulatory framework for digital assets, covering areas such as consumer protections and the use of crypto in illicit finance, but it still faces an uncertain future in the Senate. Ripple CEO Brad Garlinghouse says: “Ripple will not – and the crypto industry should not – keep quiet while unelected regulators actively seek to impede innovation and economic growth that millions of Americans utilise.”

May 23, 2024: AI & Digital Assets


OPINION: House Crypto Bill Sows the Seeds of the Next Financial Crisis

Mark Hays, The Hill

A buzzy new financial player, once thought invincible, nearly collapses due to poor management and reckless bets gone wrong. Congress holds hearings. Regulators offer measures to limit speculation and increase disclosures and oversight. But the industry calls this approach burdensome and harmful to investors. An industry champion makes a case for a lighter touch regulatory approach and succeeds.

This scenario, plucked from the 1990s derivatives industry, has eerie parallels to today, as the House considers cryptocurrency legislation. The Financial Innovation and Technology for the 21st Century Act, the industry claims, will foster innovation while protecting consumers. But we should fear a repeat of the past — a bill that fosters weak regulation for crypto and undermines investor safeguards.

In the 1990s, the firm was Long-Term Capital Management, a hedge fund. In 1998, the firm nearly collapsed due to a series of bad trades that cost it $4.6 billion. To avoid contagion, the New York Fed and private firms stepped in with a cash infusion. Congress then held hearings, a presidential working group studied the issue, and Treasury came up with regulatory ideas for the esoteric financial products that brought down LTCM, known as swaps and over-the-counter derivatives.

But then there was a changing of the guard.

Lawrence Summers, notably, replaced Robert Rubin as Treasury secretary. Suddenly, the word was that derivatives and swaps should be less regulated. Summers argued that “these products have transformed the world of finance” and that “a cloud of legal uncertainty” over the industry could “discourage innovation and growth…driving transactions offshore.” Read more


AI Should Be Trained to Respect a Regulatory ‘Constitution’, Says BofE Policy Maker

FinExtra

Innovative AI models should be trained to respect a ‘constitution’ or a set of regulatory rules that would reduce the risk of harmful behavior, argues a senior Bank of England policy maker.

In a speech at CityWeek in London, Randall Kroszner, an external member of the Bank of England’s financial policy committee, outlined the distinction between fundamentally disruptive and more incremental innovation, and the different regulatory challenges each poses. “When innovation is incremental it is easier for regulators to understand the consequences of their actions and to do a reasonable job of undertaking regulatory actions that align with achieving their financial stability goals,” he says.

However, in the case of AI, innovation comes thick and fast, and is more likely to be a disruptive force, making it “much more difficult for regulators to know what actions to take to achieve their financial stability goals and what the unintended consequences could be for both stability and for growth and innovation.”

Kroszner suggests that the central bank’s upcoming Digital Securities Sandbox, which will allow firms to use developing technology, such as distributed ledger technology, in the issuance, trading, and settlement of securities such as shares and bonds, may not be an applicable tool for dealing with artificial intelligence technology. Read more


Generative AI Poses Unique Risks to Data Security, NIST Warns

Sean C. Griffin of Robinson & Cole LLP – Data Privacy + Security Insider, National Law Review

Generative artificial intelligence (AI) has opened a new front in the battle to keep confidential information secure. The National Institute of Standards and Technology (NIST) recently released a draft report highlighting the risk generative AI poses to data security. The report, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” details generative AI’s potential data security pitfalls and suggests actions for generative AI management.

NIST identifies generative AI’s data security risk as “[l]eakage and unauthorized disclosure or de-anonymization of biometric, health, location, personally identifiable [information], or other sensitive data.” Training generative AI requires an enormous amount of data culled from the internet and other publicly available sources. For example, ChatGPT4 was trained with 570 gigabytes from books, web texts, articles, and other writing on the internet, which amounts to about 300 billion words residing in a generative AI database. Much of generative AI’s training data is personal, confidential, or sensitive information.

Generative AI systems have been known to disclose information from their training data, including confidential information, upon request. During adversarial attacks, large language models have revealed private or sensitive information within their training data, including phone numbers, code, and conversations. The New York Times has sued ChatGPT’s creator, OpenAI, alleging in part that ChatGPT will furnish articles behind the Times paywall. This disclosure risk poses obvious data security issues. Read more


How Blockchain is Reshaping Supply Chains Beyond Finance

Julie Lamb, CoinDesk

Have you ever wondered about the true potential of blockchain technology beyond its association with finance? Blockchain offers transparency, security, and efficiency, revolutionizing processes and unlocking new opportunities for businesses worldwide. Hence the strong growth trajectories expected in the coming years.

Below we look at how these benefits can provide advantages to companies adopting blockchain tech beyond pure financial applications.

Transparency: There’s a common misconception that blockchain technology lacks transparency. In reality, it is inherently transparent, thanks to a decentralized ledger system. Each transaction is recorded on a public ledger, enabling quick and easy identification of affected products. This transparency enhances accountability and trust in supply chains, ensuring consumer safety and product integrity. (A minimal ledger sketch follows this list.)
Simplicity: While blockchain technology may seem complex at first glance, businesses can benefit from simplified explanations and practical applications. By focusing on real-world case studies and actionable insights, organizations already steeped in data science, such as GE, IBM, PayPal, AWS, Uber, John Deere, and NASA, have grasped its potential to streamline operations, enhance security, and drive innovation across various sectors. Read more
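To ground the transparency point above, here is a minimal append-only ledger sketch. It is a generic hash chain, not any particular blockchain, and the record fields are invented for illustration: each entry commits to its predecessor, so any tampering with history is immediately detectable.

```python
# A minimal sketch (not any specific blockchain) of the transparency property
# described above: an append-only ledger where each record commits to the one
# before it, so altering history breaks the chain.
import hashlib
import json


def record_hash(record: dict) -> str:
    """Hash a record deterministically (sorted keys for stable JSON)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append(ledger: list, transaction: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"tx": transaction, "prev": prev}
    entry["hash"] = record_hash({"tx": transaction, "prev": prev})
    ledger.append(entry)


def verify(ledger: list) -> bool:
    """Recompute every link; a single altered record fails verification."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != record_hash(
            {"tx": entry["tx"], "prev": entry["prev"]}
        ):
            return False
        prev = entry["hash"]
    return True


ledger: list = []
append(ledger, {"sku": "PALLET-001", "from": "farm", "to": "warehouse"})
append(ledger, {"sku": "PALLET-001", "from": "warehouse", "to": "store"})
assert verify(ledger)

ledger[0]["tx"]["to"] = "unknown"  # tamper with history...
assert not verify(ledger)          # ...and verification fails
```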


May 17, 2024: AI & Digital Assets


Taking the Fight to the Fraudsters: How AI Safeguards the Digital Economy

Mastercard is at the forefront of technology companies embracing AI to uncover and resolve financial crime.

Ranita Iyer, Fast Company

Think about how many times a week you swipe your card, tap your phone on a card reader, or click “complete purchase” on a checkout page. Whether you’re buying a latte or ordering new boots, how often does it cross your mind that hackers could break into your account?

For most consumers, the answer is probably almost never. Digital commerce has become second nature, and most people don’t think twice about whether their financial information is safe.

But criminal tactics are always evolving, so detecting and preventing fraud has become an arms race. Today, criminals are using increasingly sophisticated methods to take advantage of the digital ecosystem and digital transactions. Last year, cybercrime cost an estimated $5 trillion across the globe—close to 5% of the world’s GDP.

To help fight fraud, technology companies are embracing artificial intelligence (AI). Mastercard has been at the forefront, using AI tools for more than a decade to uncover financial crime. In 2023, Mastercard’s AI-powered insights protected more than 143 billion transactions and deflected $20 billion in fraud across its network.

Staying ahead of the bad guys will require work from the entire ecosystem, so we’re sharing what we’ve learned—and forecasting how AI can reinforce our collective defenses going forward.

Finding the Signal in the Noise
Every day, humans generate 2.5 quintillion—that is, 2,500,000,000,000,000,000—bytes of data. Camouflaged within the noise are signals that can help us detect and prevent cybercrime. But the information is useless if we can’t analyze it. That’s where AI comes in. Designed to find patterns in reams of data, AI can detect subtle anomalies and keep up with fraudsters in real-time. Mastercard’s AI solutions help protect customers and fight fraudsters at each stage of the transaction, from the moment a person opens their browser or app until a purchase is complete. Read more
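To make the pattern-finding idea concrete, here is a toy anomaly-detection sketch using generic scikit-learn machinery. It is not Mastercard's system; the transaction features (amount, hour, merchant risk) and the simulated data are invented for illustration:

```python
# Toy sketch of anomaly detection over transaction features -- a generic
# technique, not Mastercard's actual system. All features are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated historical transactions: [amount_usd, hour_of_day, merchant_risk]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,    # clustered around daytime hours
    rng.beta(2, 8, 5000),            # mostly low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new activity: an ordinary latte vs. a large 3 a.m. high-risk purchase.
candidates = np.array([
    [6.50, 9.0, 0.10],
    [4800.0, 3.0, 0.95],
])
print(model.predict(candidates))  # 1 = looks normal, -1 = flagged as anomalous
```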


AI is Set to Shake Up Banks’ Employee Ranks – But Maybe Not How You Think

Caroline Hroncich, The Financial Brand

While most employees only need familiarity with AI, banks will need strong leadership to help weather some of the challenges the technology could present. But for marketing teams, AI is table stakes.

There’s no question that generative AI tools are having a moment right now.

JPMorgan’s CEO Jamie Dimon called the technology “transformational,” saying that it could be as impactful as the advent of electricity or the internet one day. “Think the printing press, the steam engine, electricity, computing and the Internet, among others,” Dimon wrote in a letter to shareholders in April.

AI is shaking up banking, especially in the U.S. North American banks are leading the charge when it comes to AI — 80% of all bank AI research was conducted by banks in the region in 2022, according to a report from banking data provider Evident. For megabanks, AI is par for the course. Community banks have been slower on the uptake, with some only engaging with AI through vendor technology.

But banks that don’t have a clear internal AI strategy are going to fall behind, and potentially lose out on business. AI could deliver value equal to $200 billion to $340 billion annually in the banking industry, according to a McKinsey report on AI and workforce productivity. But what this technology will actually mean for bank employees remains to be seen.

The most likely scenario is that workers will have to make a concerted effort to develop new technical skills to be able to do their jobs successfully — and banks will be on the hunt for new hires that understand and are comfortable using technology, says Tim Bates, professor of practice at the University of Michigan’s College of Innovation and Technology. Read more


U.S. Lawmakers Seek $32 Billion To Keep American AI Ahead of China

David Shepardson & Alexandra Alper, Reuters

A bipartisan group of senators, including Majority Leader Chuck Schumer, on Wednesday called on Congress to approve $32 billion in funding for artificial intelligence research to keep the U.S. ahead of China in the powerful technology. The senators, including Republicans Mike Rounds and Todd Young and Democrat Martin Heinrich, announced the goal as part of a legislative roadmap to address the promises and perils of AI.

If China is “going to invest $50 billion, and we’re going to invest in nothing, they’ll inevitably get ahead of us. So that’s why even these investments are so important,” Schumer said Wednesday. The roadmap could help the U.S. address mounting worries about China’s advances in AI. Washington fears Beijing could use it to meddle in other countries’ elections, create bioweapons or launch muscular cyberattacks.

U.S. officials flagged concerns over China’s “misuse” of artificial intelligence in their first formal bilateral talks on the issue this week. Reuters reported this month that President Joe Biden’s administration is poised to open a new front in its effort to safeguard U.S. AI from China and Russia. “This is a time in which the dollars related to this particular investment will pay dividends to the taxpayers of this country long term,” Rounds said. “China now spends probably about 10 times more than we do on AI development. They are in a hurry.”

The funding would cover non-defense uses of AI, the lawmakers said. Senators are still considering how much Congress should dedicate to defense-related AI, “but it’s going to be a very large number,” Schumer added. Senators called for Congress to fund cross-government AI research and development including an all-of-government “AI-ready data” initiative and government AI testing and evaluation infrastructure. Read more


DOJ Charges 2 Brothers With ‘Cutting-Edge Scheme’ to Steal Cryptocurrency

PYMNTS.com

The Department of Justice (DOJ) announced the unsealing of an indictment Wednesday (May 15) charging two brothers with crimes resulting from an alleged “cutting-edge scheme” in which they stole $25 million worth of cryptocurrency from the Ethereum blockchain.

Anton Peraire-Bueno, 24, of Boston, and James Peraire-Bueno, 28, of New York, were arrested Tuesday (May 14) and charged Wednesday with conspiracy to commit wire fraud, wire fraud and conspiracy to commit money laundering, the DOJ said in a Wednesday press release.

“As alleged in today’s indictment, the Peraire-Bueno brothers stole $25 million in Ethereum cryptocurrency through a technologically sophisticated, cutting-edge scheme they plotted for months and executed in seconds,” Deputy Attorney General Lisa Monaco said in the release. “Unfortunately for the defendants, their alleged crimes were no match for Department of Justice prosecutors and IRS agents, who unraveled this first-of-its-kind wire fraud and money laundering scheme.”

The indictment alleges that the defendants tampered with the process and protocols by which transactions are validated and added to the Ethereum blockchain, gained access to pending private transactions, altered certain transactions and obtained their victims’ cryptocurrency, according to the release.

Following the theft, the defendants received requests to return the stolen cryptocurrency but instead kept it and took steps to hide it, the release said. Before, during and after they did these things, the defendants searched online for information about how to carry them out, how to conceal their involvement and how to launder the criminal proceeds, per the release. Read more

May 10, 2024: AI & Digital Assets


OPINION: Stablecoin Legislation Must Respect the Dual-Banking System 

The Lummis-Gillibrand stablecoin bill subordinates state regulation to federal control, giving Washington too much power over which entities can issue these important digital assets.

Jack Solowey and Jennifer J. Schulp, Consensus Magazine/CoinDesk

Long before Bitcoin, Ethereum, or DAOs, the U.S. banking industry had its own form of decentralized governance: the dual-banking system. Under this system, banks can be chartered and, with some important caveats, supervised at either the state or the federal level.

When U.S. Senators Cynthia Lummis (R-WY) and Kirsten Gillibrand (D-NY) introduced their stablecoin regulation bill on April 17, the legislators emphasized that the bill seeks to “preserve our dual banking system.” But while the senators deserve credit for pursuing this goal, the Lummis-Gillibrand stablecoin bill falls short by subordinating state regulation to federal control.

To support the healthy competition among both financial institutions and regulators that the dual-banking system represents, stablecoin legislation must provide a bona fide state pathway free from arbitrary limits and federal gatekeeping.

To be sure, the dual-banking system itself at present is far from federalism perfected, and state and federal jurisdiction over banks does overlap in important ways; state-chartered banks that are members of the Federal Deposit Insurance Corporation and/or the Federal Reserve System, for example, face additional federal supervision. But such federal bank supervision makes even less sense for stablecoin issuers, which ultimately provide a payment tool (tokens designed to maintain a 1:1 peg with the U.S. dollar), not banking services. Read more


Artificial Intelligence Is Making It Hard to Tell Truth from Fiction

Kathiann Kowalski, Science News Explores

Experts report that AI is making it increasingly hard to trust what we see, hear or read

Taylor Swift has scores of newsworthy achievements, from dozens of music awards to several world records. But last January, the mega-star made headlines for something much worse and completely outside her control. She was a target of online abuse.

Someone had used artificial intelligence, or AI, to create fake images of Swift. These pictures flooded social media. Her fans quickly responded with calls to #ProtectTaylorSwift. But many people still saw the fake pictures.

That attack is just one example of the broad array of bogus media — including audio and visuals — that non-experts can now make easily with AI. Celebrities aren’t the only victims of such heinous attacks. Last year, for example, male classmates spread fake sexual images of girls at a New Jersey high school.

AI-made pictures, audio clips or videos that masquerade as those of real people are known as deepfakes. This type of content has been used to put words in politicians’ mouths. In January, robocalls sent out a deepfake recording of President Joe Biden’s voice. It asked people not to vote in New Hampshire’s primary election. And a deepfake video of Moldovan President Maia Sandu last December seemed to support a pro-Russian political party leader. Read more


Blockchain Researchers Use AI to Spot Bitcoin Money Laundering

FinExtra

Researchers from Elliptic, IBM Watson and MIT have used AI to detect money laundering on the Bitcoin blockchain.

Back in 2019, blockchain analytics firm Elliptic published research with the MIT-IBM Watson AI Lab showing how a machine learning model could be trained to identify Bitcoin transactions made by illicit actors, such as ransomware groups or darknet marketplaces.

Now the partners have put out new research applying new techniques to a much larger dataset, containing nearly 200 million transactions. Rather than identifying transactions made by illicit actors, a machine learning model was trained to identify “subgraphs”, chains of transactions that represent bitcoin being laundered.

Identifying these subgraphs rather than illicit wallets allowed the researchers to focus on the “multi-hop” laundering process more generally, rather than the on-chain behaviour of specific illicit actors. Working with a crypto exchange, the researchers tested their technique: of the 52 predicted money laundering subgraphs that ended with deposits to the exchange, 14 were received by users who had already been flagged as being linked to money laundering.
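For intuition, a rough sketch of the subgraph framing follows. It is not Elliptic's model: the toy graph, addresses, and hop limits are invented, and it only enumerates candidate chains that a trained classifier would then score.

```python
# Rough sketch of the subgraph idea -- not Elliptic's actual model or
# features. Transactions become edges in a directed graph, and we enumerate
# short multi-hop chains ending at an exchange deposit address, the shape a
# layering pattern might take.
import networkx as nx

# Toy transaction edges: (sender_wallet, receiver_wallet)
edges = [
    ("hacker", "mixer_a"), ("mixer_a", "mule_1"), ("mule_1", "exchange"),
    ("alice", "bob"), ("bob", "exchange"),
]

G = nx.DiGraph(edges)
deposit_addresses = {"exchange"}

# Candidate laundering chains: every simple path of two or more hops
# that terminates in a deposit address.
candidates = [
    path
    for source in G.nodes
    for target in deposit_addresses
    if source != target
    for path in nx.all_simple_paths(G, source, target, cutoff=4)
    if len(path) >= 3  # at least two hops
]
print(candidates)
# In the published work, a classifier is trained over features of such
# chains (hop count, timing, amounts, etc.) rather than exhaustive listing.
```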

On average, fewer than one in 10,000 accounts are flagged in this way, “suggesting that the model performs very well,” say the team. (Had the 52 predicted deposits landed on random accounts, fewer than 0.01 hits on flagged users would be expected; 14 were observed.) The researchers are now making their underlying data publicly available. Says Elliptic: “This novel work demonstrates that AI methods can be applied to blockchain data to identify illicit wallets and money laundering patterns, which were previously hidden from view.

“This is made possible by the inherent transparency of blockchains and demonstrates that cryptoassets, far from being a haven for criminals, are far more amenable to AI-based financial crime detection than traditional financial assets.”


Crypto Firms Among ‘Greatest Risks’ for Money Laundering in 2022-2023

Tom Mitchelhill, CoinTelegraph

Crypto firms, wealth management companies, and retail and wholesale banking remain “particularly vulnerable” to financial crime, according to a U.K. Treasury report. Crypto firms were among those with the “greatest risk” of being exploited for money laundering, according to the United Kingdom’s top financial regulator.

In a May 1 report, the U.K. Treasury concluded from data provided by the Financial Conduct Authority (FCA) that crypto-asset companies were among the top four kinds of firms that remained “particularly vulnerable” to financial crime, particularly for cases of money laundering between 2022 and 2023.

Crypto firms were listed alongside retail banking, wholesale banking, and wealth management companies. The report showed that between 2022 and 2023 there were a total of 52.8 full-time-equivalent specialist employees overseeing anti-money laundering cases, with nearly one-third focused specifically on supervising crypto firms.

During the 2022 to 2023 period, the FCA’s financial crime specialists conducted a total of 231 reviews of financial firms operating in the U.K. as well as an additional 375 cases related to financial crimes and sanctions.


As part of a broader supervisory effort outside of these full-time reviews, FCA teams launched a total of 95 cases into British crypto companies. Britain has been working to introduce clearer legislation for local crypto firms with the U.K. Treasury announcing on April 16 that it would aim to present a full regulatory framework for crypto assets and stablecoins by July. Read more

May 3, 2024: AI & Digital Assets


Deepfakes Are Coming for the Financial Sector

Isabelle Bousquette, Wall Street Journal

Deepfakes have long raised concern in social media, elections and the public sector. But now with technology advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.

“There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new,” said Bill Cassidy, chief information officer at New York Life. Banks and financial services providers are among the first companies to be targeted. “This space is just moving very fast,” said Kyle Kappel, U.S. Leader for Cyber at KPMG.

How fast was demonstrated earlier this month when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about potential risks for misuse. Read more


How Many Businesses Are Affected by Deepfake Fraud?

Florian Zandt, Statista

So-called artificial intelligence is a contentious topic for various reasons, including handling of copyright, power usage, privacy concerns and chatbots like ChatGPT sometimes returning incorrect answers to queries. However, there’s one thing that critics and evangelists can agree on: AI increasingly permeates many layers of digital life, from culture to business to politics. As is common with new technologies, an increase in the illicit use of AI in the form of deepfakes is deeply connected to its rise in popularity and, more importantly, accessibility. Deepfakes are AI-generated videos or audio formats that impersonate, for example, celebrities or politicians to spread mis- or disinformation or defraud consumers. Businesses around the world can already feel the impact of this type of artificial content.

A survey by identity verification provider Regula conducted among more than 1,000 experts in fraud detection or prevention from countries like the United States, the United Kingdom, France and Germany shows that a sizable chunk of their companies were targeted by one of three methods of advanced identity fraud. 46 percent of respondents experienced cases of synthetic identity fraud, where a combination of real and fake identity components, like a fake social security number and a real name, address and birth date, was used.

37 percent reported seeing voice deepfakes being used, a recent high-profile example of which from the realm of politics was an artificial President Biden robocalling prospective voters in January to dissuade them from voting in the primaries. Video deepfakes are, as of now, less common, with only 29 percent of respondents having already experienced such fraud attempts. With generative AI companies now focusing on moving pictures through tools like OpenAI’s Sora, this problem could become more pronounced in the coming months and years. Read more


Mastercard Announces New AI Suite with Behavioral Biometrics to Fight Fraud

Joel R. McConvey, Biometric Update

LexisNexis report IDs synthetic identity fraud as most common type for financial services.

Large financial institutions are coming to terms with the new world of tech-driven fraud and adopting digital identity tools as an increasingly necessary defense – but also for customer service, accessibility, and other use cases. Mastercard has launched a new suite of AI-driven tools to protect customers against scams. LexisNexis Risk Solutions is reporting a correlation between spiking fraud rates and customer trust. And Feedzai is seeing significant growth in its behavioral biometrics business.

Mastercard AI security suite aims to foster trust in the digital world
Mastercard’s offering, Scam Protect, combines its digital identity, biometric, AI and open banking capabilities to safeguard consumers from a wide spectrum of scams, from card-based and account-to-account payments to fraudulent account openings using fake or stolen identities. A press release from the credit giant says the system provides comprehensive protection by performing identity verification throughout the lifecycle of an account, and benefits from partnerships with organizations across the ecosystem of financial services, telecommunications, consumer advocacy and more.

Verizon is among Mastercard’s larger partners in implementing new protections against multichannel attack vectors. The two companies will collaborate to leverage Mastercard’s AI-based identity tools with Verizon’s network technologies, to get ahead of scammers and fraudsters. Kyle Malady, CEO of Verizon Business, says that while the security landscape and scamming tactics are constantly evolving, one constant has been social engineering – using texts and phone calls to coerce people. “By combining our expertise, we’re building solutions to identify and thwart scammers before they initiate contact.” Read more


Why Banks Risk Data Chaos by Rushing into AI for CRM

Brian Ellsworth, The Financial Brand

Banks eager to leverage AI for customer relationship management (CRM) are cautioned against rushing adoption without first organizing customer data and implementing robust data governance. Deploying AI on messy data risks perpetuating inaccuracies and enabling unintended discriminatory outcomes.

AI’s promise and power have led many organizations to race up the adoption curve, especially when it comes to customer relationship management (CRM) applications. Many are tempted to use AI itself to clean up their customer data in advance of deploying downstream AI applications that will yield meaningful productivity and revenue upsides. But is that really a good idea?

That’s what banks are asking. And the answer, at least according to a recent study by Forrester, is a resounding “not so fast.”

Forrester warns that unleashing AI on disorganized CRM data without a strategy or governance practices could create data chaos. Companies should instead organize their data and formalize the governance through their data management practices before trying to build AI into CRM systems.

Getting the sequencing right has implications that go far beyond mere timing. Companies that get too far ahead of the AI curve may set themselves up for bad results later on, because they’ve trained their systems on incomplete or inadequately organized data, according to the study, The Journey To AI-Powered CRM. Read more