AI & Digital Assets

Apr. 19, 2024: AI & Digital Assets


Banks Told to Anticipate Risks from Using AI, Machine Learning

Huw Jones, Reuters

Banks must anticipate risks from using artificial intelligence (AI) and machine learning (ML) in their operations as part of their day-to-day governance, a top global banking regulator said on Wednesday. There are unanswered questions on whether the use of AI or ML in banking is a net positive or negative to global financial stability, said Bank of Spain Governor Pablo Hernandez de Cos, who chairs the international Basel Committee on Banking Supervision.

“My main message is that the use of AI in banking raises important prudential and financial stability challenges,” de Cos said in a speech in Washington. “Left unchecked, such models could potentially amplify future banking crises.” Digital innovation will further fuel cross-border and cross-sectoral financial interconnections, requiring collaboration among central banks and regulators to achieve an appropriate regulatory baseline to oversee the use of AI and ML, de Cos said.

“When it comes to banking, it is critical that banks anticipate and oversee the risks and challenges posed by AI/ML – both at the micro and the macro level – and incorporate them in their day-to-day risk management and governance arrangements,” de Cos said. The Basel Committee will soon publish a more comprehensive report on the digitalization of finance and its implications for regulation and supervision, he said.


Sens. Tillis, Hagerty Introduce Draft of Bill Targeted At Cryptocurrency Institutions

Dave Kovaleski, Financial Regulation News

U.S. Sens. Thom Tillis (R-NC) and Bill Hagerty (R-TN) released a discussion draft of a bill that would ensure that cryptocurrency financial institutions follow Bank Secrecy Act anti-money laundering (BSA/AML) standards.

In addition, the Ensuring Necessary Financial Oversight and Reporting of Cryptocurrency Ecosystems (ENFORCE) Act, would provide regulators and law enforcement with additional important tools to combat digital asset illicit finance.

“We must take action to stop bad actors who launder with cryptocurrency, however this does not provide a license for heavy-handed, regulatory-obsessed lawmakers to regulate an entire industry into oblivion,” Tillis said. “Congress needs to focus on right-sizing its regulatory approach to cryptocurrency, which requires building consensus among lawmakers, law enforcement, and stakeholders to protect consumers and fight illicit actors.”

Specifically, the ENFORCE Act would:

  • Ensure Bank Secrecy Act anti-money laundering (BSA/AML) requirements apply to all centralized and customer-facing digital asset financial institutions;
  • Clarify Treasury’s authority to use a powerful illicit finance policy tool against transactions and financial institutions associated with digital asset money laundering;
  • Ensure digital asset bad actors and money launderers cannot be tipped off to investigations into their activities;
  • Establish a public-private task force to coordinate digital asset illicit finance information sharing and best practices;
  • Establish formal examination standards for BSA/AML compliance for digital asset financial institutions; and
  • Explicitly state a rule of construction to ensure that the bill does not limit or restrict any current BSA/AML requirements.

“This discussion draft represents a positive step towards right-sizing our approach to regulating cryptocurrency while preserving digital innovation, and I look forward to receiving feedback and working with my colleagues on the path forward,” Tillis added. Tillis and Hagerty are members of the Senate Banking, Housing, and Urban Affairs Committee.


Stablecoins Are Growing Rapidly. What Does This Mean for The Stability of The Financial System?

Amanda Blanco, Federal Reserve Bank of Boston

Boston and New York Fed conference examines the potential impacts of stablecoins
The stablecoin market has quickly expanded in recent years, becoming more intertwined with the traditional financial system. What does that mean for financial stability?

This was a key question researchers, regulators, and industry participants explored at the virtual Conference on the Financial Stability Implications of Stablecoins, which the Federal Reserve Banks of Boston and New York hosted Friday.

In different sessions, presenters discussed past investor “runs” on stablecoins and similarities between stablecoins and traditional financial products. They also shared high-level recommendations for the global regulation of stablecoins.

In welcoming remarks, Boston Fed President and CEO Susan M. Collins said stablecoins could be thought of as a form of “private money,” and central banks should be interested in them. “Stablecoin issuers are not prudentially regulated or supervised,” she said. “This highlights the importance of policymakers and market participants understanding and considering (their) key features.”

How do stablecoin investors react to shocks in the crypto market?
In one of the first sessions, Pablo Azar – a financial research economist at the New York Fed – presented a working paper he co-authored with colleagues at the Boston Fed and in academia called “Runs and Flights to Safety: Are Stablecoins the New Money Market Funds?”

Stablecoins are digital assets designed to maintain a stable price – usually pegged to the U.S. dollar. Azar first described the different types of stablecoins, based on the collateral that backs them. Both money market funds and stablecoins aim to maintain price stability. But, unlike stablecoins, money market funds are regulated by the U.S. Securities and Exchange Commission. Read more


Google Confirms Major Gmail AI Security Update For 3 Billion Users

Davey Winder, Forbes

Google’s Cloud Next 2024 has drawn to a close but the news stories keep on coming. One that hasn’t surfaced, however, could well turn out to be the most important, at least from the user security perspective: the use of AI large language models to protect Gmail users from harm.

The main problem being addressed is that generative AI has become so good so quickly that it has “dramatically lowered the barrier to attacks,” according to Google, which it admitted has led to “a spike in higher quality phishing at scale.” As you might imagine, getting access to Gmail and Drive accounts is high on the attacker’s agenda, given the goldmine of readily actionable data they contain.

Google Announces AI-Powered Gmail Security Evolution
The solution, Google said, was conceptually simple albeit technically challenging: “We built custom LLMs to help fight back.” First deployed in late 2023, these LLMs are now “yielding big results,” Google said.

These custom LLMs are trained “on a diet of the latest, most terrible spam and phishing” content because what LLMs are uniquely good at is identifying semantically similar content. Given the large Google Workspace user base of 3 billion, “the results are very impactful—and the LLMs will only get better at this as we go,” a Google spokesperson said.
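Google has not published details of these models, so the mechanics below are illustrative only: a minimal Python sketch of flagging a message that is semantically close to known phishing text, assuming a generic open-source embedding model (all-MiniLM-L6-v2) and an invented similarity threshold.

    # Hypothetical sketch: flag mail that is semantically close to known
    # phishing examples. Model choice and threshold are assumptions for
    # illustration, not Google's actual Gmail system.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    known_phishing = [
        "Your account has been suspended. Verify your password here immediately.",
        "You have won a prize! Click this link to claim your reward now.",
    ]
    incoming = "We detected unusual sign-in activity. Confirm your credentials at this link."

    # Encode to unit-length vectors so a dot product equals cosine similarity.
    vecs = model.encode(known_phishing + [incoming], normalize_embeddings=True)
    phish_vecs, msg_vec = vecs[:-1], vecs[-1]

    similarity = float(np.max(phish_vecs @ msg_vec))
    print(f"max similarity to known phishing: {similarity:.2f}")
    if similarity > 0.5:  # threshold is an assumption; tune on labeled data
        print("flag for review")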

The Positive Side Of The AI Security Fence
Although there has been plenty of talk in recent months about the Google Gemini LLM, not all of it has been filled with praise, quite the opposite in fact. My colleague Zak Doffman, a highly respected privacy contributor at Forbes, recently warned of concerns regarding Google’s AI-powered message helpers. While Doffman’s concerns come from the right place—real-world knowledge of AI privacy implications—many commentators have just jumped upon the “AI is evil” bandwagon. It’s comforting, therefore, to be able to report on something focused on generative AI LLMs but from the positive side of the security fence. Read more

Apr. 12, 2024: AI & Digital Assets


U.S. Treasury Demands More Control Over Foreign Crypto Exchanges

Courtesy of Anna Kharton, Crypto.News

The U.S. Treasury is pushing to expand its powers to increase oversight of cryptocurrency service providers, including foreign ones. Deputy Secretary of the Treasury Adewale O. Adeyemo, ahead of the Senate hearing, said malicious actors are constantly looking for new ways to hide identities and move assets.

Adeyemo referred to the use of cryptocurrencies to finance several terrorist groups, as well as by countries under U.S. sanctions, including Russia and North Korea. Adeyemo hopes lawmakers approve a secondary sanctions tool targeting foreign digital asset providers that facilitate illegal transactions.

The U.S. Treasury also requested the authority to prosecute foreign-based crypto platforms that harm U.S. national security by exploiting the country’s financial system.

Interest in cryptocurrencies from criminals continues to grow, and authorities in various countries are taking steps to combat such transactions. In October 2023, the Wall Street Journal wrote that Palestinian militants received at least $134 million in digital assets. The publication caused a sharp reaction from American legislators, who demanded that the Justice Department take action against the industry, pointing separately to Binance and Tether.

According to Chainalysis experts, when counting terrorist-linked cryptocurrencies, many incorrectly include third-party funds that have passed through various payment services interacting with criminals. Elliptic also pointed out that the scale of terrorist group financing in cryptocurrencies is overstated. The analytics company also said that the data previously presented about Hamas’ cryptocurrency fundraising was inaccurate and greatly exaggerated in the Wall Street Journal article.


Real-world Lessons for GenAI in Banking, According to Google

Courtesy of Steve Cocheo, the Financial Brand

More banks are moving their data and processing to the cloud to clear the way for adoption of GenAI. Then banks will need to pick and choose among various GenAI types to find the right tools for the right jobs.

The hype — and controversy — surrounding GenAI has tended to overshadow the forward progress being made. But now 2024 is shaping up to be the year the banking industry will move from pilots and proofs of concept to real applications of the technology.

So says Zac Maufe, head of regulated industries at Google Cloud.

“People have been getting their arms around it,” says Maufe, who worked in various banking posts, chiefly at Wells Fargo, prior to coming to Google Cloud in late 2020. “This year, you’re going to see real use cases in production — and the underlying economics of GenAI start to come to life.”

The Short-term Focus is on Cost Savings
Research conducted for Google Cloud and released late last year reported that over a third of large bank executives believe that GenAI will begin driving major cost savings over the next five years.

The top five areas the survey respondents said they were testing were tools for employees to prepare emails, documents and presentations (57%); preparation of new marketing content such as ads, offers and social media posts (55%); assistance in software and application coding (50%); production of financial reports and prospectuses (49%); and summarizing capital markets research for investment decision making and client briefings (49%). Read more


Mastercard Reorg Shows Urgent Industry Focus on AI

Courtesy of PYMNTS

The jury is still out as to how artificial intelligence (AI) will affect jobs in the financial services industry. But if the past six weeks have shown anything, it is this: AI is definitely coming for the org chart. The technology’s impact has been felt at major and minor levels for almost every company in the payments industry. 

At Visa, it sparked what was arguably one of the company’s most high-profile new product launches with Visa Protect, which uses AI to reduce fraud across account-to-account and card-not-present payments, as well as transactions both on and off Visa’s network. 

At Amex, CEO Stephen Squeri used his March 15 shareholder letter to introduce a new Generative AI Council that includes technology, data science, risk management, legal and strategy teams to assess and approve the deployment of generative AI (GenAI) use cases across the company. 

And at JPMorgan Chase, most news outlets missed CEO Jamie Dimon’s announcement of a new chief data and analytics officer role, which has yet to be filled. 

“Elevating this new role to the Operating Committee level — reporting directly to Daniel Pinto and me — reflects how critical this function will be going forward and how seriously we expect AI to influence our business,” Dimon said in a letter to shareholders. 

“This will embed data and analytics into our decision making at every level of the company. The primary focus is not just on the technical aspects of AI but also on how all management can — and should — use it,” he added. “Each of our lines of business has corresponding data and analytics roles so we can share best practices, develop reusable solutions that solve multiple business problems, and continuously learn and improve as the future of AI unfolds.”

At Mastercard the importance of data — which these days is synonymous with AI — played a big role in an executive reshuffling announced April 9. Read more


Hong Kong Set to Approve First Crypto ETF

Courtesy of FinExtra

Hong Kong’s financial regulator is poised to give the green light to the first batch of bitcoin spot ETFs. The move would make Hong Kong the first domicile in the Asia-Pacific region to approve a crypto-backed, exchange-based investment fund. It would also put the island in prime position in the race to become the digital assets hub for the region.

According to a Reuters report, the Securities and Futures Commission (SFC) is set to grant approval later this month. So far there have been four applications from entities based in mainland China, including Harvest Fund Management, China Asset Management and Bosera Asset Management.

The SFC has already granted approval to Harvest and CAM to provide virtual-asset related fund management services. By greenlighting the crypto ETF applications, Hong Kong would be following in the footsteps of the US, which granted approval for the first spot bitcoin ETFs in January, albeit only as a result of legal cases brought by a number of crypto trading firms.

Those US ETFs have amassed more than $58 billion in assets so far.

However, while Hong Kong’s funds market is unlikely to experience the same level of investment, it will be hoping that the approvals will provide a boost in the face of challenges posed by the economic slowdown in China and the hangover from the pandemic, including the strict travel restrictions which made it hard for Hong Kong asset managers to attract and retain quality staff.

Apr. 5, 2024: AI & Digital Assets


Opinion: Fraudsters Have Artificial Intelligence Too

Courtesy of Lawrence Zelvin, Chicago Tribune

Soon, personal artificial intelligence agents will streamline and automate processes that range from buying your groceries to selling your home. You’ll tell it what you want, and it will do the research and legwork, log into your personal accounts and execute transactions in milliseconds.

It is a technology with extraordinary potential, but also significant new dangers, including financial fraud. As Gail Ennis, the Social Security Administration’s inspector general, recently wrote: “Criminals will use AI to make fraudulent schemes easier and faster to execute, the deceptions more credible and realistic, and the fraud more profitable.”

The story of cyberfraud has always been a technological arms race between criminals and those they’re trying to rob, each side racing to innovate faster than the other. In banking, AI’s advent both supercharges that competition and raises its stakes. When scammers used an AI-powered audio deepfake to convince the CEO of a British utility to transfer $243,000 to a Hungarian bank account in 2019, it was called “an unusual case” because it involved AI. That is not the case anymore.

Earlier this year, criminals made headlines when they used deepfake technology to pose as a multinational company’s chief financial officer and tricked one of the company’s employees in Hong Kong into paying the scammers $25 million.

Globally, 37% of businesses have experienced deepfake-audio fraud attempts, according to a 2022 survey by identity verification solutions firm Regula, while 29% have encountered video deepfakes. And that doesn’t include individuals who receive realistic-sounding calls purportedly from hospitalized or otherwise endangered family members seeking money.

As these AI-enabled fraud threats proliferate, financial institutions such as BMO, where I lead the financial crimes unit, are working to continually innovate and adapt to outpace and outsmart the criminals. Read more


U.S., Britain, and Other Countries Ink Agreement to Make AI ‘Secure By Design’

Courtesy of Raphael Satter and Diane Bartz, Reuters

The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.”

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse. The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”

The agreement is the latest in a series of initiatives – few of which carry teeth – by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large. In addition to the United States and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore. Read more 


U.S. Department of the Treasury Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector

The U.S. Department of the Treasury released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. The report was written at the direction of Presidential Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP) led the development of the report. OCCIP executes the Treasury Department’s Sector Risk Management Agency responsibilities for the financial services sector.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang. “Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”

In the report, Treasury identifies significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector. The report outlines a series of next steps to address immediate AI-related operational risk, cybersecurity, and fraud challenges:

  1. Addressing the growing capability gap. There is a widening gap between large and small financial institutions when it comes to in-house AI systems. Large institutions are developing their own AI systems, while smaller institutions may be unable to do so because they lack the internal data resources required to train large models. Additionally, financial institutions that have already migrated to the cloud may have an advantage when it comes to leveraging AI systems in a safe and secure manner.
  2. Narrowing the fraud data divide. As more firms deploy AI, a gap exists in the data available to financial institutions for training models. This gap is significant in the area of fraud prevention, where there is insufficient data sharing among firms. As financial institutions work with their internal data to develop these models, large institutions hold a significant advantage because they have far more historical data. Smaller institutions generally lack sufficient internal data and expertise to build their own anti-fraud AI models. Read more

The Imperatives of AI Governance

Courtesy of Hayden J. Silver III, Dr. Christian E. Mammen of Womble Bond Dickinson (US) LLP, National Law Review

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.
AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology. 

  • AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines. And the capabilities of AI tools are not always foreseeable. Read more


Mar. 29, 2024: AI & Digital Assets


Hackers Breached Hundreds of Companies’ AI Servers, Researchers Say

Cyberattacks target AI compute power to mine cryptocurrency using a vulnerability in popular open source software called Ray, according to researchers at Oligo Security.

Courtesy of Thomas Brewster, Forbes

Hackers may have breached hundreds of companies by targeting open source software called Ray that is used to scale AI models, cybersecurity researchers have warned.

It is believed to be the first example of cyberattacks exploiting AI computing vulnerabilities found in the wild, and researchers say there is evidence it has been used to attack at least three “very well-known, large organisations” and dozens of smaller ones.

In many cases, the hackers used the exploit to install cryptocurrency miners on exposed servers, diverting the processing power used to train AI to churn out digital coins instead, according to Israeli cyber startup Oligo Security which discovered the attacks. In other cases, vulnerable servers leaked so-called access “tokens” that could have allowed an attacker to breach various AI and business applications, including OpenAI and Slack. Because some companies were incorporating the ability to process financial transactions into their AI apps, tokens for the Stripe payments service may have been accessed. It’s unclear if hackers used those tokens to steal any money. OpenAI and Stripe did not respond to requests for comment.

Dolleen Cross, spokesperson for Slack, said it was “an unfortunate incident and we feel for any customers that were impacted.” She noted the vulnerability isn’t “inherent to the Slack platform.”

The researchers declined to name the hacked entities, but told Forbes that the three largest are household names and may have had “thousands of compromised machines.” One was doing pharmaceutical research and another was an American college, Oligo cofounder and chief technology officer Gal Elbaz told Forbes. The researchers reported the exploit to all of them.

“This is an active campaign right now,” said Elbaz. “They’re attacking that infrastructure of AI, they’re leveraging it to make a lot of money.” Read more


What Every CEO Needs to Know About the New AI Act

Courtesy of Bernard Marr, Forbes

Having recently passed the Artificial Intelligence Act, the European Union is about to bring into force some of the world’s toughest AI regulations.

Potentially dangerous AI applications have been designated “unacceptable” and will be illegal except for government, law enforcement and scientific study under specific conditions.

As was true with the EU’s General Data Protection Regulation, this new legislation will add obligations for anyone who does business within the 27 member states, not just the companies based there.

Those responsible for writing it have said that the aim is to protect citizens’ rights and freedoms while also fostering innovation and entrepreneurship. But the 460-odd published pages of the act contain a lot more than that.

If you run a business that operates in Europe or sells to European consumers, there are some important things you need to know. Here’s what stands out to me as the key takeaways for anyone who wants to be prepared for potentially significant changes.

When Does It Come Into Force?
The Artificial Intelligence Act was adopted by the EU Parliament on March 13 and is expected to soon become law when it is passed by the European Council. It will take up to 24 months for all of it to be enforced, but enforcement of certain aspects, such as the newly banned practices, could start to happen in as little as six months. Read more


Key U.S. Lawmaker McHenry Says House Has ‘Workable’ Stablecoin Bill

Courtesy of Jesse Hamilton, CoinDesk

The House Financial Services Committee chairman, in his final year in Congress, is still optimistic about passing a U.S. stablecoin bill, and Sen. Cynthia Lummis said the Senate’s majority leader is open to it.

  • Both Rep. Patrick McHenry and Sen. Cynthia Lummis said there’s a potential path for a stablecoin bill in Congress this year, though they couldn’t say when it could happen.
  • The U.S. Senate has been trailing on the stablecoin issue, without any committee work yet on a bill.

U.S. Rep. Patrick McHenry (R-N.C.) said some of the recent chaos in Congress derailed crypto legislation for a while, but a stablecoin bill is largely worked out in the House of Representatives and just needs a scheduled floor vote.

“We have a workable frame,” McHenry, the chairman of the House Financial Services Committee, said at Coinbase’s Update the System Summit in Washington on Wednesday. “I think we’re at the phase where we can see the airport; we can see where we’re going to land the plane; we can see how we’re going to land the plane; we just don’t know when we’re going to land the plane.”

McHenry has been working with his panel’s top Democrat, Rep. Maxine Waters (D-Calif.) on a stablecoin bill for nearly two years, and “month over month, we’re in a better position.” But with the ongoing debates over federal spending plans that have locked up Congress, he said the lawmakers need to get past the budget issues to be able to get into the legislation he hopes to accomplish before his announced retirement at the end of this session. The House is aiming to vote on its latest effort to fund the federal government this week. Read more


Prominent Global Cryptocurrency Exchange KuCoin and Two of Its Founders Criminally Charged with Bank Secrecy Act and Unlicensed Money Transmission Offenses

Courtesy of U.S. Attorney’s Office, Southern District of New York

KuCoin and Two of Its Founders, Chun Gan and Ke Tang, Flouted U.S. Anti-Money Laundering Laws to Grow KuCoin Into One of World’s Largest Cryptocurrency Exchanges

Damian Williams, the United States Attorney for the Southern District of New York, and Darren McCormack, the Acting Special Agent in Charge of the New York Field Office of Homeland Security Investigations (“HSI”), announced today the unsealing of an Indictment against global cryptocurrency exchange KuCoin and two of its founders, CHUN GAN, a/k/a “Michael,” and KE TANG, a/k/a “Eric,” for conspiring to operate an unlicensed money transmitting business and conspiring to violate the Bank Secrecy Act by willfully failing to maintain an adequate anti-money laundering (“AML”) program designed to prevent KuCoin from being used for money laundering and terrorist financing, failing to maintain reasonable procedures for verifying the identity of customers, and failing to file any suspicious activity reports. KuCoin was also charged with operating an unlicensed money transmitting business and a substantive violation of the Bank Secrecy Act. GAN and TANG remain at large.

HSI Acting Special Agent in Charge Darren McCormack said: “Today, we exposed one of the largest global cryptocurrency exchanges for what our investigation has found it to truly be: an alleged multibillion-dollar criminal conspiracy. KuCoin grew to service over 30 million customers, despite its alleged failure to follow laws necessary to ensuring the security and stability of our world’s digital banking infrastructure. The defendants’ alleged pattern of skirting these vitally important laws has finally come to an end. I commend HSI New York’s El Dorado Task Force and our law enforcement partners for their commitment to the mission.” Read more

Mar. 22, 2024: AI & Digital Assets


AI Regulation on the Move: EU Leads with AI Act as U.S. States Forge Their Own Paths

Courtesy of Kevin Pomfret, Williams Mullen, JDSupra

Quite understandably, the passing of the AI Act this week by the European Parliament received a great deal of attention, as it represents a significant step forward in developing a legal framework for artificial intelligence (AI).

However, while a few of the provisions will become effective in the coming months, it will take several years before many of its provisions are in full force. In the U.S., not unexpectedly given the current environment in Washington, there has not been movement in Congress on the myriad of AI bills that have been introduced, including several intended to provide oversight of AI systems.

However, that does not mean that U.S. businesses do not have to consider their legal obligations when developing, deploying or using AI systems, products or services. For example, in February the Utah Legislature passed the Artificial Intelligence Policy Act (the “AI Policy Act”), which, when signed by the state’s governor, would go into effect in May of this year. The AI Policy Act will establish liability for the use of AI in ways that violate Utah’s consumer protection laws if proper disclosure is not made. In addition, it permits companies that are considering developing an AI product, while the AI regulations are still being written, to enter into a risk mitigation agreement with the Office of Artificial Intelligence Policy (a newly created agency under the AI Policy Act with AI rulemaking authority). The law also makes clear that synthetic data – defined as data that has been generated by computer algorithms or statistical models and does not contain personal data – is considered “de-identified data.”

Another area where states have taken action is in the procurement by state agencies of products and services that use AI. For example, in January Governor Youngkin of Virginia issued Executive Order Number 30 (2024) that included provisions directing the Virginia IT Agency (VITA) to issue “guiding principles for the ethical use of AI, general parameters to determine the business case for AI, a mandatory approval process for all Al capabilities, a set of mandatory disclaimers to accompany any products or outcomes generated by Al, methods to mitigate third-party risks, and measures to ensure that the data of private citizens are protected”. Read more


Blockchain Analysis and Related Expert Testimony Admissible in Criminal Trial

Courtesy of Kelly A. Lenahan-Pfahlert, Money Laundering News/BallardSpahr

It is challenging for law enforcement to track down and trace illicit activities conducted through digital currencies. The process can be very time- and resource-intensive.  Further, securing charges and arrests, and subsequent convictions, often requires the strong support of traditional sources of evidence, such as fact witness testimony and electronic communications.  Nonetheless, blockchain analytics is a key component of the government’s ability to pursue such cases.

On March 12, a jury in the United States District Court for the District of Columbia found Roman Sterlingov guilty on charges of money laundering conspiracy, so-called “sting” money laundering, operating an unlicensed money transmitting business, and violations of the D.C. Money Transmitters Act.  We blogged about the initial criminal complaint issued against Sterlingov here.  Sterlingov allegedly laundered $400 million through Bitcoin Fog, a bitcoin mixing service which can be used to obscure the origins of cryptocurrency transactions.

Shortly before the trial and guilty verdicts, the Court issued an order addressing the admissibility of expert testimony related to blockchain analysis software under the factors established by the Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals, Inc. to assess the reliability of expert testimony under Federal Rule of Evidence 702.  This blog post focuses on that order.

Specifically, the Court addressed proprietary software used by the private digital asset forensic firm Chainalysis, Chainalysis Reactor (“Reactor”), and whether expert testimony by witnesses propounded by the government – Luke Scholl (“Scholl”) from the FBI, and Elizabeth Bisbee (“Bisbee”) from Chainalysis – could rely upon Reactor under Daubert.  Reactor is software used to dissect bitcoin transactions, utilizing techniques like co-spend analysis to connect multiple addresses to a single entity. The defense raised significant concerns about the reliability of Reactor. Read more
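Reactor itself is proprietary, but the co-spend heuristic mentioned in the order is a textbook clustering idea: addresses that jointly fund the inputs of one transaction are presumed to be controlled by the same entity. Below is a minimal Python sketch of that heuristic using union-find; it is an illustration of the technique, not Chainalysis’s implementation.

    # Co-spend heuristic: inputs spent together in one transaction are
    # assumed to belong to one entity, so their addresses are merged into
    # a single cluster via union-find.
    def cluster_cospends(transactions):
        """transactions: list of input-address lists, one per transaction."""
        parent = {}

        def find(a):
            parent.setdefault(a, a)
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path compression
                a = parent[a]
            return a

        def union(a, b):
            parent[find(a)] = find(b)

        for inputs in transactions:
            find(inputs[0])  # register single-input transactions too
            for addr in inputs[1:]:
                union(inputs[0], addr)  # co-spent inputs join one cluster

        clusters = {}
        for addr in parent:
            clusters.setdefault(find(addr), set()).add(addr)
        return list(clusters.values())

    txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr9"]]
    print(cluster_cospends(txs))  # addr1-addr3 form one entity; addr9 stands alone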


North Korean Crypto Hackers Have Stolen $3B Since 2017, Says UN Security Council

Courtesy of Jamie Crawley, CoinDesk

North Korea-linked cryptocurrency hacks totaled $3 billion between 2017 and 2023, South Korean news agency Yonhap reported on Thursday, citing a United Nations (UN) Security Council study.

A UN Security Council panel is investigating 17 crypto heists in 2023, for which North Korea may have been responsible, which were valued at more than $750 million, the report added.

There were a total of 58 suspected cyberattacks on crypto-linked firms between 2017 and 2023, according to the report. The report said that North Korea derives around 50% of its foreign currency income from cyberattacks, which it uses to fund its weapons programs.

North Korea has been targeting the crypto industry as a means of evading sanctions, the report said, branding the country “the world’s most prolific cyber-thief.” In December, cybersecurity firm Recorded Future also calculated that $3 billion in cryptocurrency had been stolen in the last six years by North Korea-linked hacker organization Lazarus Group.


EU Parliament Approves New Sanctions Laws That Also Apply to Crypto

Courtesy of Sandali Handagama, CoinDesk

The laws are to ensure sanctions rules are applied uniformly across the EU’s 27 member states.

  • The European Parliament on Tuesday voted to approve a new set of sanctions rules to harmonize enforcement across its 27 member states.
  • EU sanctions law applies to crypto service providers and can involve freezing assets, including crypto.

The European Parliament on Tuesday approved a new batch of rules to crack down on sanctions violations, including through crypto. Parliamentarians representing the 27 member states of the European Union cast 543 votes in favor of the new rules, with 45 voting against and 27 abstentions. The new rules were prompted by Russia’s invasion of Ukraine and rising concerns that EU financial sanctions on Russia were being violated.

“We need this legislation because diverging national approaches have created weaknesses and loopholes, and it will allow for frozen assets to be confiscated,” Dutch lawmaker Sophie in ’t Veld, who’s in charge of shepherding the laws through Parliament, said in a press statement.

Although sanctions are adopted at the EU level, individual states are tasked with enforcing those rules – and everything from “definitions of sanction violation” to “associated penalties” can change from country to country, a press release on the plenary vote said.

The EU’s restrictive measures apply to a wide range of financial services, including providing “crypto-assets and wallets,” the adopted text said. Sanctions can involve freezing assets, including crypto. Read more

Mar. 15, 2024: AI & Digital Assets


Let’s Not Make the Same Mistakes with AI That We Made with Social Media

Courtesy of Nathan E. Sanders & Bruce Schneier, MIT Technology Review

Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies.

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising
The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.  Read more


EU Approves World’s First AI Regulations—Here’s What to Know

Courtesy of Ty Roush, Forbes

The European Union approved regulations for artificial intelligence on Wednesday, the world’s first framework governing AI amid concerns the quickly developing technology could pose risks to humanity.

KEY FACTS

  • The EU’s AI Act—which received final approval from the European Parliament—places regulations on various AI technologies based on “its potential risks and level of impact.”
  • High-risk AI systems like those used in critical infrastructure or medical devices will face more regulations, requiring those systems to “assess and reduce risks,” be transparent about data usage and ensure human oversight.
  • Some AI applications will be banned outright because they “threaten citizens’ rights,” including emotion recognition systems in schools and workplaces, among others.
  • Biometric identification systems—applications used to identify people in public spaces—can only be used by law enforcement to find victims of trafficking and sexual exploitation, to prevent terrorist threats and to identify people suspected of committing a crime.
  • The regulations also require labels for AI-generated images, video or audio content.

WHAT TO WATCH FOR
The AI Act is expected to become law in May, following final approval from some EU member states. A complete set of regulations—including rules governing chatbots—will be in effect by mid-2026, according to the European Parliament, which noted each EU country will establish its own AI watchdog agency.

BIG NUMBER
$38 million. That’s the maximum fine for violations of the AI Act, or up to 7% of a company’s global revenue. Read more


FDIC Vice Chair: Tokenization ‘Beginning’ to Deliver Benefits

Courtesy of PYMNTS

Blockchain is full of promise, cryptocurrency perhaps less so, and tokenization is already shifting what it means to “own” an asset. In a speech delivered this week by Travis Hill, vice chairman of the Federal Deposit Insurance Corp. (FDIC), at the Mercatus Center, the call was for regulatory “clarity” on technology, its uses, and especially “what we consider safe and sound.”

He took issue with his own agency’s approach, stating, “But there are significant downsides to the FDIC’s current approach, which has contributed to a general public perception that the FDIC is closed for business if institutions are interested in anything related to blockchain or distributed ledger technology. … The confidential nature of the existing process means there is little public information on what types of activities the FDIC might be open to, if any.”

In reference to regulations governing crypto, he said that requiring banks to keep custodied crypto-assets on their balance sheets as liabilities effectively dissuades banks from expanding their operations to provide more crypto-related services and products, as their capital requirements are affected.

The on-balance-sheet designation, Hill said, “makes it prohibitively challenging for banks to engage in this activity at any scale. It is worth asking whether it is in the public interest for one crypto exchange to provide custody services for most of the market in approved bitcoin exchange-traded products, while highly regulated banks are effectively excluded from the market.”

PYMNTS Intelligence found that a minority of traditional FIs have embraced crypto fully: 5% of credit unions offer crypto investing services, and only another 5% plan to offer investing services tied to these digital holdings in the current year. Meanwhile, just 1% of banks offer cryptocurrency, and another 1% plan to do so.

Tokenization’s Promise
Hill’s remarks delved into tokenization, which, he said, “transforms the way ownership of assets is recorded and enables far-reaching new functions.” And through the use of blockchain, he said, commercial bank deposits, government and corporate bonds, money market fund shares, gold and other commodities can “improve the way we transfer value” by operating 24/7/365.

Tokenization is already beginning to deliver benefits, Hill said, smoothing and speeding settlement times for multicurrency bond issuance.


Opinion: The Biggest Bank Heist in History Is Coming

Courtesy of Linda Jeng, Consensus Magazine/CoinDesk

Regulators are permitting banks to tokenize financial assets such as bank deposits, U.S. Treasuries and corporate debt. But they want institutions to use permissioned networks rather than the decentralized blockchains that keep assets safe from hackers.

In February, the Office of the Comptroller of the Currency’s acting head Michael Hsu announced plans for new rules on operational resilience for large banks with critical operations, including third-party service providers. Critically, he did not discuss how these rules will treat the use of permissioned networks by the big banks to tokenize real world assets and liabilities, an omission that neglects critical new vulnerabilities for the global financial system.

As Hsu pointed out, bank call report data show that the top four custodian banks alone now safeguard over $108 trillion in assets. The big banks are now tokenizing these assets, creating digital representations of real world assets and liabilities on blockchain. These banks have been piloting the tokenization of bank deposits and will soon turn to tokenizing U.S. Treasuries and corporate debt.

Regulators acknowledge this tokenization trend. The Fed’s vice chair for supervision, Michael Barr, announced last September the launch of the Fed’s Novel Activities Supervision Program while allowing state-member banks to also explore tokenization if they demonstrate sufficient risk management. In November, Hong Kong’s Securities and Futures Commission issued regulatory guidance on the tokenization of securities, and the OCC held a symposium on tokenization in February.

This mainstreaming of crypto by traditional financial institutions and regulators is exciting. But these banks are mostly tokenizing on permissioned networks, which regulators are encouraging. In December, while announcing plans to revise its bank capital standard for crypto-assets, the Basel Committee on Banking Supervision stated that since permissionless blockchains “create risks that cannot be sufficiently mitigated at present”, the highest bank capital requirements would be retained for crypto-assets held on permissionless blockchains. The Committee probably concluded this because permissionless blockchains are maintained by thousands of validators that are not subject to regulatory authorities, while permissioned networks would be controlled by banks. Read more

Mar. 8, 2024: AI & Digital Assets


Fairness in Machine Learning: Regulation or Standards?

Courtesy of Mike H. M. Teodorescu and Christos Makridis, Brookings

Machine Learning (ML) tools have become ubiquitous with the advent of Application Programming Interfaces (APIs) that make running formerly complex implementations easy with the click of a button in Business Intelligence (BI) software or one-line implementations in popular programming languages such as Python or R. However, machine learning can cause substantial socioeconomic harm through failures in fairness.

We pose the question of whether fairness in machine learning should be regulated by the government, as in the case of the European Union’s (EU) initiative to legislate liability for harmful Artificial Intelligence (AI) and the New York City AI Bias law, or if an industry standard should arise, similar to the International Standards Organization (ISO) quality-management manufacturing standard ISO 9001 or the joint ISO and International Electrotechnical Commission (IEC) standard ISO/IEC 27032 for cybersecurity in organizations, or both.

We suggest that regulators can help with establishing a baseline of mandatory security requirements, and standards-setting bodies in industry can help with promoting best practices and the latest developments in regulation and within the field.

Introduction
The ease of incorporating new machine learning (ML) tools into products has resulted in their use in a wide variety of applications, including medical diagnostics, benefit fraud detection, and hiring. Common metrics in optimizing algorithm performance, such as algorithm Accuracy (the ratio of correct predictions to the total number of predictions), do not paint the complete picture regarding False Positives (the algorithm incorrectly predicts positive) and False Negatives (the algorithm incorrectly predicts negative), nor do they quantify the individual impact of being mislabeled.
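A small worked example shows why: two hypothetical models with identical Accuracy can distribute their errors very differently between False Positives and False Negatives. The counts below are invented for illustration.

    # Accuracy hides error structure: both models below score 0.825, yet
    # their false-positive and false-negative rates differ sharply.
    def rates(tp, fp, tn, fn):
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        false_positive_rate = fp / (fp + tn)  # negatives wrongly flagged
        false_negative_rate = fn / (fn + tp)  # positives wrongly missed
        return accuracy, false_positive_rate, false_negative_rate

    # Model A: errors split between false positives and false negatives.
    print(rates(tp=80, fp=15, tn=85, fn=20))  # (0.825, 0.15, 0.20)
    # Model B: identical accuracy, but errors pile up as false positives.
    print(rates(tp=95, fp=30, tn=70, fn=5))   # (0.825, 0.30, 0.05)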

The literature has, in recent years, created the subfield of Machine Learning Fairness, attempting to define statistical criteria for group fairness such as Demographic Parity or Equalized Opportunity, which are explained in section II, and over twenty others, described in comprehensive review articles like the one by Mehrabi et al. (2021). As the field of ML fairness continues to evolve, there is currently no one standard agreed upon in the literature for how to determine whether an algorithm is fair, especially when multiple protected attributes are considered. The literature on which we draw includes computer science literature, standards and governance, and business ethics.

Fairness criteria are statistical in nature and simple to run for single protected attributes—individual characteristics that cannot be the basis of algorithm decisions (e.g., race, national origin, and age, among other individual characteristics). Protected attributes in the United States are defined in U.S. federal law and began with Title VI of the Civil Rights Act of 1964. However, in cases of multiple protected attributes it is possible that no criterion is satisfied.
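To make one such criterion concrete, the sketch below checks demographic parity (near-equal positive-decision rates across groups) on synthetic data; it also shows how a decision rule can satisfy parity on one protected attribute while failing it on another, which is the multiple-attribute difficulty just noted.

    # Demographic parity check on invented data: the same set of decisions
    # satisfies parity across age bands but fails it across race groups.
    from collections import defaultdict

    def positive_rates(decisions, attribute):
        """decisions: 0/1 outcomes; attribute: group label per person."""
        totals, positives = defaultdict(int), defaultdict(int)
        for outcome, group in zip(decisions, attribute):
            totals[group] += 1
            positives[group] += outcome
        return {g: positives[g] / totals[g] for g in totals}

    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    race      = ["A", "A", "A", "A", "B", "B", "B", "B"]
    age_band  = ["young", "old", "young", "old", "young", "old", "young", "old"]

    print(positive_rates(decisions, race))      # {'A': 0.75, 'B': 0.25} -> parity fails
    print(positive_rates(decisions, age_band))  # {'young': 0.5, 'old': 0.5} -> parity holds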

Furthermore, oftentimes a human decision maker needs to audit the system for compliance with the fairness criteria with which it originally complied at design, given that a machine learning-based system often adapts through a growing training set as it interacts with more users. Moreover, no current federal law nor industry standard mandates regular auditing of such systems. Read more


California Seeks to Regulate Employer Use of AI

Courtesy of Hunton Andrews Kurth, Privacy and Information Security Law Blog

As reported on the Hunton Employment & Labor Perspectives blog, on February 15, 2024, California lawmakers introduced the bill AB 2930. AB 2930 seeks to regulate use of artificial intelligence (“AI”) in various industries to combat “algorithmic discrimination.” The proposed bill defines “algorithmic discrimination” as a “condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people” based on various protected characteristics including actual or perceived race, color, ethnicity, sex, national origin, disability and veteran status.

Specifically, AB 2930 seeks to regulate “automated decision tools” that make “consequential decisions.” An “automated decision tool” is any system that uses AI which has been developed to make, or be a controlling factor in making, “consequential decisions.” And a “consequential decision” is defined as a decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following: 1) employment, including any decisions regarding pay or promotion, hiring or termination and automated task allocation; 2) education; 3) housing or lodging; 4) essential utilities; 5) family planning; 6) adoption services, reproductive services or assessments related to child protective services; 7) health care or health insurance; 8) financial services; 9) the criminal justice system; 10) legal services; 11) private arbitration; 12) mediation; and 13) voting.

AB 2930 aims to prevent algorithmic discrimination through impact assessments, notice requirements, governance programs, policy disclosure requirements and providing for civil liability.

Impact Assessments
Any employers or developers using or developing automated decision tools, by January 1, 2026, will be required to perform annual impact assessments. The annual impact assessment requirements are largely the same for both employers and developers and include, among other things, a statement of purpose for the automated decision tool; descriptions of the automated decision tool’s outputs and how they are used in making a consequential decision; and analysis of potential adverse impacts. Employers, but not developers, are required to: 1) describe the safeguards in place to address reasonably foreseeable risks of algorithmic discrimination, and 2) provide a statement of the extent to which the employer’s use of the automated decision tool is consistent with or varies from the developer’s statement of the intended use of the automated decision tool (which developers are required to provide under Section 22756.3 of the proposed bill). Employers with fewer than 25 employees will not be required to perform this assessment, unless the automated system impacted more than 999 people in the calendar year. Read more


Firms Turn to AI for Smarter, Quicker Cybersecurity Solutions

Courtesy of PYMNTS

Google CEO Sundar Pichai recently noted that artificial intelligence (AI) could boost online security, a sentiment echoed by many industry experts.

AI is transforming how security teams handle cyber threats, making their work faster and more efficient. By analyzing vast amounts of data and identifying complex patterns, AI automates the initial stages of incident investigation. The new methods allow security professionals to begin their work with a clear understanding of the situation, speeding up response times.

AI’s Defensive Advantage
“Tools like machine learning-based anomaly detection systems can flag unusual behavior, while AI-driven security platforms offer comprehensive threat intelligence and predictive analytics,” Timothy E. Bates, chief technology officer at Lenovo, told PYMNTS in an interview. “Then there’s deep learning, which can analyze malware to understand its structure and potentially reverse-engineer attacks. These AI operatives work in the shadows, continuously learning from each attack to not just defend but also to disarm future threats.”
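As a concrete illustration of the anomaly-detection approach Bates describes, the sketch below fits scikit-learn’s IsolationForest to invented login telemetry and flags an out-of-pattern event. Real deployments use far richer signals; the two features here are assumptions for illustration.

    # Toy anomaly detection: train on routine logins, flag outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # One row per login event: [hour of day, MB downloaded]
    normal = np.column_stack([rng.normal(14, 2, 500), rng.normal(20, 5, 500)])
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    events = np.array([[13.5, 22.0],   # routine afternoon login
                       [3.0, 900.0]])  # 3 a.m. login with a huge download
    print(model.predict(events))       # 1 = normal, -1 = flagged anomalous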

Cybercrime is a growing problem as more of the world embraces the connected economy. Losses from cyberattacks totaled at least $10.3 billion in the U.S. in 2022, per an FBI report.

Increasing Threats
The tools used by attackers and defenders are constantly changing and increasingly complex, Marcus Fowler, CEO of cybersecurity firm Darktrace Federal, said in an interview with PYMNTS. “AI represents the greatest advancement in truly augmenting the current cyber workforce, expanding situational awareness, and accelerating mean time to action to allow them to be more efficient, reduce fatigue, and prioritize cyber investigation workloads,” he said.

As cyberattacks continue to rise, improving defense tools is becoming increasingly important. Britain’s GCHQ intelligence agency recently warned that new AI tools could lead to more cyberattacks, making it easier for beginner hackers to cause harm. The agency also said that the latest technology could increase ransomware attacks, where criminals lock files and ask for money, according to a report by GCHQ’s National Cyber Security Centre.

Google’s Pichai pointed out that AI is helping to speed up how quickly security teams can spot and stop attacks. This innovation helps defenders who have to catch every attack to keep systems safe, while attackers only need to succeed once to cause trouble. While AI may enhance the capabilities of cyberattackers, it equally empowers defenders against security breaches. Read more


U.S. Senators Seek to Ban CBDCs Before They Ever Happen

Courtesy of Tom Nawrocki, PaymentsJournal

Five Republican senators have introduced legislation aimed at preventing the Biden administration from issuing a central bank digital currency (CBDC). The effort, spearheaded by Texas Senator Ted Cruz, claims that CBDCs would intrude on “the privacy of citizens to surveil their personal spending habits.”

“Congress must clarify that the Federal Reserve has no authority to implement a CBDC,” Cruz said in a statement.

“A CBDC would open the door for the federal government to surveil and control the spending habits of all Americans,” said Senator Ted Budd of North Carolina, a co-sponsor of the bill. “Any push to establish a CBDC must be confronted and stopped, and that’s why I’m proud to join Senator Cruz’s effort to do just that.” Other sponsors include Bill Hagerty of Tennessee, Rick Scott of Florida, and Mike Braun of Indiana.

Focus on Privacy Concerns
“Senator Cruz and others are responding to one of the highest concerns about a CBDC: privacy,” said Joel Hugentobler, Cryptocurrency Analyst at Javelin Strategy & Research. “It’s a fine line to walk on, and once you cross it there’s no going back. They have to get it right if they want to get it passed.”

The Treasury Department followed up the White House’s March 2022 executive order on digital assets with recommendations to the president in March 2023. At that time, Under Secretary for Domestic Finance Nellie Liang said: “The Fed has also emphasized that it would only issue a CBDC with the support of the executive branch and Congress, and more broadly the public.” She also said that Treasury’s CBDC Working Group would continue examining the issue.

The Biden administration has not yet introduced plans for a CBDC. In March 2022, the White House called for the Federal Reserve to “continue to research and report on the extent to which CBDCs could improve the efficiency and reduce the costs of existing and future payments systems, to continue to assess the optimal form of a United States CBDC, and to develop a strategic plan for Federal Reserve and broader United States Government action, as appropriate, that evaluates the necessary steps and requirements for the potential implementation and launch of a United States CBDC.”

A CBDC Can Take Many Forms
Experts have pointed out that the GOP’s objections are to a specific form of CBDC that is not likely to be the instrument’s final form. “The anti-CBDC bill is aimed at a retail version of a digital dollar, meaning a central bank digital currency issued to consumers by the Federal Reserve,” said James Wester, Director of Cryptocurrency and Co-Head of Payments at Javelin Strategy & Research. “What most opponents are reacting to is an implementation that would include the worst possible design choices in terms of privacy and control. There are a number of ways a digital currency backed by the Fed could be designed, but the bill would prevent even those from being implemented without congressional authority.” Read more

Mar. 1, 2024: AI & Digital Assets


AI is Uncle Sam’s New Secret Weapon to Fight Fraud

Courtesy of Matt Egan, CNN

Uncle Sam has quietly deployed a new secret weapon designed to catch bad guys trying to steal from taxpayers: artificial intelligence. Starting around late 2022, the Treasury Department began using enhanced fraud-detection methods powered by AI to spot fraud, CNN has learned.

The strategy mirrors what is already being done in the private sector. Banks and payment companies are increasingly turning to AI to root out suspicious transactions — which the technology can often do with lightning speed.

Uncle Sam’s AI-fueled crackdown on fraud appears to be paying off.

Treasury’s AI-powered fraud detection recovered $375 million in fiscal 2023 alone, Treasury officials tell CNN, marking the first time Treasury is publicly acknowledging it is using AI to detect fraud. Using these new crime-fighting strategies, the federal government can halt check fraud almost in real time, in part by looking for unusual transaction patterns, Treasury officials tell CNN. And this focus on AI has led to multiple active cases and arrests by law enforcement, Treasury said.

Treasury is not relying on generative AI, the technology powering ChatGPT and other popular tools that can create song lyrics, conjure up images and even create movie-quality videos from text prompts. Instead, Treasury officials say the type of AI they are using falls more into the bucket of machine learning and Big Data. The goal is to move with such speed that anomalies are flagged and banks are alerted before fraudulent checks are ever cashed, Treasury officials said.
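Treasury has not published its detection methods, so the following is only a toy sketch of the general idea of flagging checks that break a payee’s historical pattern, using a robust median-based rule in Python with pandas; every number and threshold is an assumption:

```python
# Toy "unusual transaction pattern" flag: compare each check with the payee's
# historical median using median absolute deviation (MAD). Data is invented.
import pandas as pd

checks = pd.DataFrame({
    "payee":  ["A", "A", "A", "A", "B", "B", "B"],
    "amount": [120.0, 130.0, 125.0, 9800.0, 50.0, 55.0, 52.0],
})

median = checks.groupby("payee")["amount"].transform("median")
deviation = (checks["amount"] - median).abs()
mad = deviation.groupby(checks["payee"]).transform("median")

# Flag anything more than 5 MADs from the payee's median (floor avoids mad=0).
checks["suspicious"] = deviation > 5 * mad.clip(lower=1.0)
print(checks[checks["suspicious"]])  # flags the $9,800 check for payee A
```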

Fraud spiked after Covid
Washington needs all the help it can get on the fraud front. Fraud spiked during the Covid-19 pandemic. US officials were under enormous pressure in 2020 and 2021 to quickly distribute aid to families and small businesses devastated by the health crisis. Fraudsters took advantage of the unprecedented flow of money from Washington. Read more


The Women in AI Making a Difference

Courtesy of Kyle Wiggers and Dominic-Madori Davis, TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

As a reader, if you see a name we’ve missed and feel should be on the list, please email us and we’ll seek to add them. Here are some key people you should know:

The gender gap in AI
In a New York Times piece late last year, the Gray Lady broke down how the current boom in AI came to be — highlighting many of the usual suspects like Sam Altman, Elon Musk and Larry Page. The piece went viral — not for what was reported, but for what it failed to mention: women.

The Times’ list featured 12 men — most of them leaders of AI or tech companies. Many had no training or education, formal or otherwise, in AI.

Contrary to the Times’ suggestion, the AI craze didn’t start with Musk sitting adjacent to Page at a mansion in the Bay. It began long before that, with academics, regulators, ethicists and hobbyists working tirelessly in relative obscurity to build the foundations for the AI and GenAI systems we have today.

Elaine Rich, a retired computer scientist formerly at the University of Texas at Austin, published one of the first textbooks on AI in 1983, and later went on to become the director of a corporate AI lab in 1988. Harvard professor Cynthia Dwork made waves decades ago in the fields of AI fairness, differential privacy and distributed computing. And Cynthia Breazeal, a roboticist and professor at MIT and the co-founder of Jibo, the robotics startup, worked to develop one of the earliest “social robots,” Kismet, in the late ’90s and early 2000s. Read more


U.S. Patent Office: AI Is All Well and Good, But Only Humans Can Patent Things

Courtesy of Devin Coldewey, TechCrunch

The question of where AI sits in the legal personhood stack isn’t as simple as it may seem (i.e. “nowhere”) — but the U.S. Patent and Trademark Office today declared that, as with other intellectual property, only a person can receive its official protections.

The news arrived via “guidance,” which is to say official policy but not ironclad rule, set to be entered into the federal register soon. The guidance document (PDF) specifies that for clear legal reasons, as well as the notion that, fundamentally, “patents function to incentivize and reward human ingenuity,” only “natural humans” can be awarded patents.

That is not necessarily obvious when you consider how, for example, corporations are treated as people for some legal purposes but not others. Not being citizens, they cannot vote, but being legal persons, their speech is protected by the First Amendment.

There was a legal question as to whether, when a patent is evaluated for awarding to an “individual,” that individual must be a human, or whether an AI model can be an individual. Precedent made it clear (the guidance summarizes) that individual means human unless specifically stated otherwise. But it was still an open question whether or how to cite or award an AI-assisted invention application.

For instance, if a person designed an AI model, and that AI model independently designed the shape and mechanism of a patentable device, is that AI a “joint inventor” or “coinventor”? Or, perhaps, does the lack of a human inventor in this case preclude that device from being patented at all?

The USPTO guidance makes it clear that while AI-assisted inventions are not “categorically unpatentable,” AI systems themselves are not individuals and therefore cannot be inventors, legally speaking. Therefore, it follows that at least one human must be named as the inventor of any given claim. (There are actually some interesting parallels to the infamous “monkey selfie” case — where the monkey obviously taking the photo can’t be awarded copyright, because copyrights must be owned by legal persons, and monkeys, though they are many things, are not that.) Read more


Consumers Distrustful of AI at Financial Institutions; Education Going to be Needed, Says J.D. Power 

Courtesy of CUToday

As new technologies are being integrated into financial services, most bank customers in the U.S. are expressing distrust over artificial intelligence, according to the newest Banking and Payments Intelligence Report from J.D. Power.


“Although the annual inflation rate is close to dipping below 3% for the first time since 2020, bank customers in the United States have yet to see major improvement in their financial situations,” the company said in releasing its analysis.

According to J.D. Power, the percentage of U.S. bank customers that are financially healthy remains near the all-time low, while “new concerns are cropping up with the emergence of artificial intelligence (AI) in the financial services sector. And despite that many of these AI-driven tools could help customers, many are hesitant to trust the technology to help manage their money.”

Financial Woes Continue
J.D. Power reported its new consumer survey found customers’ financial health remains at a standstill, with 30% of respondents saying they are financially healthy, and 46% saying they fall into the vulnerable category. Those numbers are in line with the previous four months, J.D. Power said.


“Customer sentiment regarding financial health status, stress levels and empowerment to improve one’s financial situation also remain virtually unchanged month-over-month,” the company reported. “A small silver lining: The percentage of customers that are extremely worried that the prices for common goods will continue to rise dropped to 37% from 40% in January.” Read more

Feb. 23, 2024: AI & Digital Assets


White House Wades into Debate On ‘Open’ Versus ‘Closed’ Artificial Intelligence Systems

Courtesy of Associated Press/U.S. News & World Report

The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be “open-source” or closed. The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system’s key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.

Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM. Biden’s order described open models with the technical name of “dual-use foundation models with widely available weights” and said they needed further study. Weights are numerical values that influence how an AI model performs.

When those weights are publicly posted on the internet, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. He gave Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

Now the Commerce Department’s National Telecommunications and Information Administration says it is also opening a 30-day comment period to field ideas that will be included in a report to the president.

“One piece of encouraging news is that it’s clear to the experts that this is not a binary issue. There are gradients of openness,” said Alan Davidson, an assistant Commerce secretary and the NTIA’s administrator. Davidson told reporters Tuesday that it’s possible to find solutions that promote both innovation and safety. Read more


China’s Rush to Dominate A.I. Comes with a Twist: It Depends on U.S. Technology

China’s tech firms were caught off guard by breakthroughs in generative artificial intelligence. Beijing’s regulations and a sagging economy aren’t helping.

Courtesy of Paul Mozur, John Liu, & Cade Metz, New York Times

In November, a year after ChatGPT’s release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.

The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta’s generative A.I. model, called LLaMA. There was just one twist: Some of the technology in 01.AI’s system came from LLaMA. Mr. Lee’s start-up then built on Meta’s technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.

“Chinese companies are under tremendous pressure to keep abreast of U.S. innovations,” said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. The release of ChatGPT was “yet another Sputnik moment that China felt it had to respond to.”

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch “aren’t very good,” leading to many Chinese firms often using “fine-tuned versions of Western models.” She estimated China was two to three years behind the United States in generative A.I. developments.

The jockeying for A.I. primacy has huge implications. Breakthroughs in generative A.I. could tip the global technological balance of power, increasing people’s productivity, aiding industries and leading to future innovations, even as nations struggle with the technology’s risks. Read more


Filene Webinar: The Evolution and Impact of AI in Credit Unions 

Review webinar materials from The Evolution and Impact of AI in Credit Unions hosted February 15th, 2024.

In this session, which drew record attendance, we explored the growing adoption of AI in a business context and shared specific use cases within the credit union industry.


Lawmakers Seek Answers from Treasury Secretary on Spot Markets for Digital Assets

Courtesy of Dave Kovaleski, Financial Regulation News

A group of Republican lawmakers is seeking answers from Treasury Secretary Janet Yellen on the Financial Stability Oversight Council’s (FSOC) efforts related to the spot market for digital assets. The lawmakers said regulators have failed to facilitate an environment that ensures consumer protection and fosters digital asset innovation in the United States.

They added that the bipartisan Financial Innovation and Technology Act for the 21st Century (FIT) would provide federal regulators with clear authority over the digital asset spot markets. It would also ensure the customer protections seen in the current financial regulatory structure apply to intermediaries and digital asset-related activities. Overall, they say the act would provide the clarity and certainty that digital asset spot markets desperately need.

“In 2021, Securities and Exchange Commission (SEC) Chair Gensler identified some of these same gaps as it relates to trading platforms, specifically stating, ‘I think it’s only Congress that could really address it, it’d be good to consider – if it was – if you asked my thoughts, to consider whether to bring greater investor protection to the crypto exchanges.’ The same week, Chair Gensler emphasized, ‘while the [SEC’s] sister agency, the Commodity Futures Trading Commission (CFTC), has some limited anti-fraud and anti-manipulation authority, there is no federal authority to actually bring a regime to the crypto exchanges… [the SEC] will be working with Congress, if they see fit to try to bring some protection for people who want to invest in this asset class,’” they wrote in a letter to Yellen, in her capacity as the chair of FSOC.

The letter was penned by Rep. Glenn Thompson (R-PA), chairman of the House Committee on Agriculture; Rep. Patrick McHenry (R-NC), chairman of the House Financial Services Committee; Rep. French Hill (R-AR), chairman of the Digital Assets, Financial Technology and Inclusion Subcommittee; and Rep. Dusty Johnson (R-SD), chairman of the Commodity Markets, Digital Assets, and Rural Development Subcommittee. Read more

 

Feb. 8, 2024: AI & Digital Assets


Cryptocurrency Enforcement Increased More Than 50% In 2023

Courtesy of Liz Carey, Financial Regulation News

According to the Securities and Exchange Commission, cryptocurrency-related enforcement actions rose by more than half in 2023.

Of the 46 enforcement actions brought against digital-asset market participants last year, nearly half (20) came in the first quarter, the highest number in a single quarter, officials said. That total was 53 percent higher than the previous year’s. A report by Cornerstone Research found that the SEC brought 26 litigations in federal courts and initiated 20 administrative actions, more than triple the number of administrative actions in 2022. The enforcement actions brought in more than $281 million in monetary penalties, the report found.

“Chair Gensler has noted that ‘enforcement is a tool, not the destination,’ and the number of SEC enforcement actions brought in the crypto space has ramped up over the last two years,” Simona Mola, the report’s author and a principal at Cornerstone Research, said. “We will be watching to see what 2024 brings, particularly in light of the SEC’s recent approval of the first Bitcoin ETFs.”

The report found that more than a third of the enforcement actions (37 percent) were related to initial coin offerings (ICOs), down from 2022’s 47 percent. Nearly all of the 17 ICO-related actions (82 percent) involved allegations of fraud. Additionally, the SEC brought two administrative actions related to non-fungible tokens (NFTs), with allegations of conducting unregistered offerings of crypto asset securities.

“The SEC has continued in 2023 to focus on its implementation of the Howey test,” said Abe Chernin, a Cornerstone Research vice president and co-head of the firm’s FinTech practice. “The SEC has increasingly concentrated on trading platforms for their crypto lending and staking programs or for allegedly failing to register as an exchange, a broker-dealer, and a clearing agency.”

Of the total enforcement actions, nearly half involved fraud, while 61 percent alleged an unregistered securities offering violation and 37 percent alleged both. Fraud and unregistered securities continue to be the most frequent allegations.


U.S. Lawmakers Push Back on Proposed CFPB Rule, Citing Potential Impact on Crypto

Courtesy of CoinTelegraph

Representatives Patrick McHenry, Mike Flood and French Hill called for an additional 60 days for a CFPB proposal to consider the impact on digital assets.

Leaders of the United States House Financial Services Committee and Subcommittee on Digital Assets, Financial Technology and Inclusion called for a longer comment period on a proposed rule from the Consumer Financial Protection Bureau (CFPB), claiming its impact on the digital asset space would be “unclear” if implemented.

In a Jan. 30 letter to CFPB Director Rohit Chopra, Representatives Patrick McHenry, Mike Flood and French Hill questioned how a November 2023 proposal “would apply to specific entities within the digital asset ecosystem.” The CFPB rule suggested extending its supervisory authority over depository institutions, including digital assets in its definition of “funds,” and allowing it to target wallets.

The three lawmakers said a lack of clarity for affected crypto exchanges could dissuade firms from allowing peer-to-peer transactions through wallets hosted on the platforms. They requested the CFPB open the proposal to public comments for an additional 60 days, accepting and considering feedback on crypto.

“Peer-to-peer transactions through ‘self-hosted wallets’ is a core component for the digital asset ecosystem, as it eliminates third-party risk,” said the letter. “Capturing certain digital asset wallet providers, who themselves do not maintain an ongoing relationship with consumers, would essentially introduce regulatory risk. […] We urge the CFPB to refrain from pursuing such a broad definition.”

The Crypto Council for Innovation said on Jan. 8 that it had “deep concerns” about the proposed rule’s impact on the crypto space, claiming it could “increase regulatory fragmentation.” The advocacy group proposed the CFPB not extend its authority over the digital asset space, hinting at waiting for Congress to provide an appropriate regulatory framework. Read more


The Future: Credit Unions and Decentralized Finance

By Becky Reed, Featured in CUToday

In case you haven’t noticed, the credit union model is dying – lost in a quagmire of dizzying consumer options. Our message is getting lost as we fall further behind with digital adoption and modernizing our infrastructure.  We continue to communicate our differentiators to the marketplace based on products or services rather than the fundamental uniqueness of being owned by the very people we serve.  Our collaborative superpower is being diluted by mega-mergers in which, to everyone else, we look just like another bank.

Yet – there seems to be a never-ending supply of people, companies, communities, industries, and even whole countries that are unable to access fair and equitable financial services.  Isn’t serving the needs of the “little guy” why we exist?  Are we succeeding?

Credit Unions were created because people, specifically farmers, were being left out of the traditional financial system.  Through early grassroots efforts in Europe and Canada, people banded together to create their own democratically controlled (decentralized) financial cooperatives.

The New Grassroots
The first credit union in the United States was officially chartered in 1909 with the creation of what we now know as St. Mary’s Bank. Edward Filene took up the cause, which eventually led to the passage of the Federal Credit Union Act in 1934.

This global movement is rooted in people’s inability to gain fair and equal access to financial services, particularly credit.  Today, credit unions play a crucial role in providing financial services to individuals and communities that are underserved by traditional banks.  Our industry prides itself on this concept.

But, were you aware that there is a new grassroots movement afoot?  It’s one that appeals to the digitally native generations that are fed up with traditional financial systems.  It’s called the DeFi Movement and it will disrupt everything we do. Read more


Global Coalition and Tech Giants Unite Against Commercial Spyware Abuse

Courtesy of the Hacker News

A coalition of dozens of countries, including France, the U.K., and the U.S., along with tech companies such as Google, MDSec, Meta, and Microsoft, have signed a joint agreement to curb the abuse of commercial spyware to commit human rights abuses.

The initiative, dubbed the Pall Mall Process, aims to tackle the proliferation and irresponsible use of commercial cyber intrusion tools by establishing guiding principles and policy options for States, industry, and civil society in relation to the development, facilitation, purchase, and use of such tools.

The declaration stated that “uncontrolled dissemination” of spyware offerings contributes to “unintentional escalation in cyberspace,” noting it poses risks to cyber stability, human rights, national security, and digital security.

According to the National Cyber Security Centre (NCSC), thousands of individuals are estimated to have been globally targeted by spyware campaigns every year. Notably missing from the list of countries that participated in the event is Israel, which is home to a number of private sector offensive actors (PSOAs) or commercial surveillance vendors (CSVs) such as Candiru, Intellexa (Cytrox), NSO Group, and QuaDream.

Recorded Future News reported that Hungary, Mexico, Spain, and Thailand – which have been linked to spyware abuses in the past – did not sign the pledge. The multi-stakeholder action coincides with an announcement by the U.S. Department of State to deny visas for individuals that it deems to be involved with the misuse of dangerous spyware technology.

On one hand, spyware such as Chrysaor and Pegasus is licensed to government customers for use in law enforcement and counterterrorism. On the other hand, it has also been routinely abused by oppressive regimes to target journalists, activists, lawyers, human rights defenders, dissidents, political opponents, and other civil society members.

Such intrusions typically leverage zero-click (or one-click) exploits to surreptitiously deliver the surveillanceware onto the targets’ Google Android and Apple iOS devices with the goal of harvesting sensitive information. Read more

Feb. 2, 2024: AI & Digital Assets


FOLLOW-UP: Deepfake Bill Would Open Door for Victims to Sue Creators

Courtesy of Kat Tenbarge, NBCNews

A bipartisan group of three senators is looking to give victims of sexually explicit deepfake images a way to hold their creators and distributors responsible.

Sens. Dick Durbin, D-Ill.; Lindsey Graham, R-S.C.; and Josh Hawley, R-Mo., plan to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits Act on Tuesday, a day ahead of a Senate Judiciary Committee hearing on internet safety with CEOs from Meta, X, Snap and other companies. Durbin chairs the panel, while Graham is the committee’s top Republican.

Victims would be able to sue people involved in the creation and distribution of such images if the person knew or recklessly disregarded that the victim did not consent to the material. The bill would classify such material as a “digital forgery” and create a 10-year statute of limitations.

“The volume of deepfake content available online is increasing exponentially as the technology used to create it has become more accessible to the public,” Durbin’s office said in a news release. “The laws have not kept up with the spread of this abusive content.”

In the release, the senators noted that Taylor Swift had recently become a victim of such deepfakes, which spread across Elon Musk’s X and later Instagram and Facebook.

“Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real,” Durbin said. “Victims have lost their jobs, and may suffer ongoing depression or anxiety.” Read more


Banks Tap AI to Navigate Regulatory Maze of Managing Vendor Contracts

Courtesy of PYMNTS

Michael Berman, CEO of Ncontracts, told PYMNTS that financial institutions (FIs) are grappling with the increasing burdens of vendor lifecycle management. There can be what Berman termed “dire consequences” for FIs as they navigate contract management with their vendors. With the explosion of technology and digital channels, FIs are interacting with consumers and other clients in ways that had never been anticipated.

And as FIs work with vendors to get the services and technologies and compliance in place, to meet customer expectations, “the number of agreements have exploded — and the challenges have exploded.”

Even a small FI, he said, must hammer out as many as 300 agreements with vendors (FinTechs among them), a tally that stretches out into the thousands for larger banks. Federal laws governing compliance are constantly changing too. For FIs, there’s the challenge of knowing whether their agreements are up to date, whether they’re protected, and whether the vendors are notifying their FI partners in timely fashion about cyber-risk related events.

To get a sense of the scope and complexity of vendor management, consider the fact that, as Berman noted, joint guidance from the FDIC, OCC and Federal Reserve lists 17 items about contractual controls that govern third-party vendor relationships.

Complexity reigns, then, and can create friction, Berman said. FIs must invest significant resources in forging relationships with FinTechs before an agreement is even struck, then invest resources in the contractual process, and even invest resources when offloading a vendor in the event a contract is terminated. Armies of attorneys, databases, and untold employee hours are spent grappling with legal minutiae.

“You’re either taking a lot of risks,” he told PYMNTS, “because you don’t have the legal resources to spend. Or you’re spending money on legal resources that you could be spending elsewhere if you had the appropriate technology in place.” Read more


Feds Kick Off National AI Research Resource with Pilot Program

Courtesy of Devin Coldewey, TechCrunch

A year to the day after it was proposed, the National AI Research Resource (NAIRR) is coming online — at least in pilot form — as a coalition of U.S. agencies and private partners start to apply billions in federal funding toward public-access tools for aspiring AI scientists and engineers.

NAIRR is the Biden administration’s answer to the sudden rise of AI in the global tech scene, and the concentration of its resources and expertise among a relatively small group of tech giants and privately funded startups. In an attempt to democratize the tech a bit and keep the U.S. competitive with its rivals abroad, the feds decided to dedicate some money to making a variety of resources available to any qualified researcher.

The National Science Foundation, Department of Energy, NASA, NOAA, DARPA and others are all partners in the effort, both providing resources (like datasets and consultation) and working with applicants in their areas of expertise. And more than two dozen major tech companies are also contributing in some way. The whole thing has an $800 million per-year budget for the next three years, subject to congressional approval, of course.

In a panoply of statements, executives from OpenAI, Anthropic, Nvidia, Meta, Amazon, and Microsoft committed a variety of resources, expertise, free access, and so on to the NAIRR effort.

The resources that will be made available haven’t been listed anywhere. Instead, the overall organization will accept applications and proposals, which will be evaluated and assigned resources. Think of it more like a grant-making process than a free supercomputer. Read more


Digital Asset Association Launches Today to Accelerate Tokenization in TradFi

Courtesy of Rick Steves, FinanceFeeds

Today, the Digital Assets Association (DAA) announced its launch. This transnational organization aims to promote responsible development and adoption of institutional digital assets. Membership benefits include networking opportunities, partnerships at events, branding with the DAA logo, listing on the DAA website, exclusive discounts and offers, and participation in regulatory discussions.

It unites financial institutions, fintechs, technology providers, and legal and regulatory experts. The DAA’s goal is to integrate traditional finance and tokenized real-world assets (RWA). DAA is headquartered in Singapore and is led by industry participants operating mostly within the APAC region. This initiative is in line with a Citi report predicting significant growth in asset tokenization, potentially reaching a value of nearly US$4 trillion by 2030.

TradFi meets RWA
The DAA committee consists of leaders from various sectors, including Henry Zhang (DigiFT), Chia Hock Lai (Onfet), Danny Chong (Tranchess), Daniel Lee (Banking Circle), Steven Hu (Standard Chartered), and Chang Tze Ching (Bright Point International Digital Assets). These members are committed to the future of digital assets.

The formation of DAA was led by DigiFT, Onfet, and Tranchess. DigiFT is a regulated on-chain exchange for RWA, Onfet focuses on blockchain-based technology, and Tranchess specializes in tokenized asset management.

The Digital Assets Association aims to:

  • Share knowledge and best practices via working groups, conferences, and online resources.
  • Develop industry standards on tokenization protocols, risk management, and data governance.
  • Advocate for responsible adoption of legal and regulatory frameworks.
  • Empower future leaders through training, mentorship, and placements. Read more

Jan. 26, 2024: AI & Digital Assets


Deepfake Phenomenon, Impact, and Challenges

Courtesy of Amitkumar Shrivastava

Deepfakes represent a formidable intersection of technology and ethics, challenging our notions of truth and trust. Their potential to reshape narratives, whether in politics, media, or personal identity, demands vigilant and informed scrutiny. As we embrace their creative possibilities, we must also fortify our technological advancements and legal and ethical frameworks to safeguard against their darker uses. The balance between innovation and integrity becomes increasingly critical in the digital age.

Introduction
In a shocking twist, the triumphant Apollo 11 moon mission takes an unexpected turn in a convincing deepfake video. Former U.S. President Richard Nixon somberly declares, “Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace!” However, this unsettling scenario is not reality; it is a powerful deepfake crafted by the MIT Center for Advanced Virtuality to shed light on the potential threats posed by this emerging artificial intelligence (AI)-based technology.

Deepfakes, a product of advanced AI technologies such as machine learning and deep neural networks, are digitally manipulated synthetic media content. This includes videos, images, and sound clips where individuals are depicted saying or doing things that never occurred. The authenticity of these creations is so striking that distinguishing them from genuine media becomes a daunting challenge for the human eye.

Deepfakes have roots in the academic sphere, where the exploration of using artificial intelligence (AI) for image processing began as early as the 1990s. While the technology simmered in the lab, significant advancements in machine learning and computational power in the mid-2010s created the perfect storm for its wider emergence.

In 2014, a crucial turning point arrived in the evolution of deepfakes with the introduction of Generative Adversarial Networks (GANs). This breakthrough allowed for the creation of increasingly intricate and realistic manipulations in various forms of media. Read more
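For readers who want the intuition behind GANs, here is a minimal, self-contained sketch of the adversarial loop in PyTorch — toy one-dimensional data rather than faces, purely illustrative of the 2014 idea and not of any deepfake tool:

```python
# Minimal GAN sketch: a generator learns to mimic a data distribution while
# a discriminator learns to tell real samples from generated ones. Toy 1-D
# data (not images); illustrates the adversarial idea only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))           # generator's attempt to imitate it

    # Discriminator: score real samples toward 1 and fakes toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: adjust weights so the discriminator scores fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples should cluster near the real mean of 3.0.
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```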


AI Heralds The Next Generation of Financial Scams

Courtesy of Siddharth Venkataramakrishnan, Financial Times

Voice cloning is just one of the new tools in the tricksters’ armory.

It was last spring when Paddric Fitzgerald received a telephone call at work. He had been playing music via his phone, so when he picked up, the voice of his daughter screaming that she had been kidnapped erupted over the speakers.

“Everyone has those points in their lives like ‘Oh, that moment I almost drowned as a kid’,” he says. “It was one of the most emotionally scarring days of my life.” Declining an offer of a firearm from a colleague, Fitzgerald, a shop manager based in the western US, raced to get cash out from a bank, while staying on the phone.

“[My daughter] was screaming in the background, saying they’d cut her while I was waiting in line,” he says. “I was going to give everything that I have financially.” It was only a chance text from his daughter that revealed that the voice on the phone didn’t belong to her. It was a remarkably cruel and elaborate scam generated with artificial intelligence.

Fitzgerald’s story is a terrifying example of how AI has become a powerful new weapon for scammers, forcing banks and fintechs to invest in the technology to keep pace in a high-tech arms race. “I had no protection over my child in that moment — a year later, I’d love to find that person and just make them realise how evil what they did was, and they did it with keystrokes,” says Fitzgerald. “Are we really that advanced as a society if we can do that?”

The continued evolution and uptake of the technology means scammers do not just pose a threat to the unaware or vulnerable. Even cautious consumers are at risk of huge financial losses from AI-powered fraud. FT Money explores the latest developments.

Increasing sophistication
Identifying the scale of AI use by scammers is a difficult task, says Alex West, banking and payments fraud specialist at consultant PwC. He was one of the authors of a report into the impact of AI on fraud and scams last December in collaboration with cross-industry coalition Stop Scams UK. This identified the kind of “voice cloning” that targeted Fitzgerald as one of the biggest ways in which criminals are expected to use AI. Read more


Treasury Cries “Uncle” to Crypto Industry: Crypto Reporting Delayed

Courtesy of Cadwalader, Wickersham & Taft LLP

On January 16th, the IRS published Announcement 2024-4 (the “Announcement”), postponing certain reporting requirements for large crypto transactions which were set to go into effect for the 2024 tax year.

Persons engaged in a trade or business who receive more than $10,000 in cash in a single transaction or multiple related transactions generally must report identifying information about the payor and the transaction on IRS Form 8300 and may be subject to civil and criminal penalties for noncompliance.

For the 2024 tax year and future years, the Infrastructure Investment & Jobs Act modified the aforementioned reporting requirements by treating digital assets as cash solely for these reporting purposes and thereby subjecting certain receipts of digital assets to reporting (here).  Pursuant to the Announcement, taxpayers will not be subject to these reporting requirements solely as a result of their receipt of digital assets until Treasury and the IRS publish regulations thereon.

The Announcement does not indicate when additional guidance will be published or the substance of such guidance.  As an example, recently proposed broker reporting regulations included an expansive view of the term digital assets (e.g., crypto, non-fungible tokens, stablecoins), as discussed here.

Whether a similarly expansive view of digital assets would apply for purposes of reporting large digital asset transactions remains to be seen.  Once regulations are published, businesses will need to update their reporting systems to adapt to these rules in order to accept digital assets as payment.


AI’s Next Fight Is Over Whose Values It Should Hold

Courtesy of Ina Fried, Axios AI+

There’s no such thing as an AI system without values — and that means this newest technology platform must navigate partisan rifts, culture-war chasms and international tensions from the very beginning.

Why it matters: Every step in training, tuning and deploying AI models forces its creators to make choices about whose values the system will respect, whose point of view it will present and what limits it will observe.

The big picture: The creators of previous dominant tech platforms, such as the PC and the smartphone, had to wade into controversies over map borders or app store rules. But those issues lay at the edges rather than at the center of the systems themselves.

  • “This is the first time a technology platform comes embedded with values and biases,” one AI pioneer, who asked not to be identified, told Axios at last week’s World Economic Forum in Davos. “That’s something countries are beginning to notice.”

How it works: AI systems’ points of view begin in the data with which they are trained — and in the efforts their developers may take to mitigate the biases in that data.

  • From there, most systems undergo an “alignment” effort, in which developers try to make the AI “safer” by rating its answers as more or less desirable (a toy sketch of this preference-rating step follows below).
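The mechanics behind that rating step are rarely spelled out in public, but one widely used recipe trains a reward model on pairs of answers that human raters have ranked. A minimal, hypothetical Python sketch — the random embeddings stand in for real model features, and the pairwise loss shown is one common choice, not any particular lab’s method:

```python
# Toy sketch of the "rating answers" step: train a reward model so that
# answers humans preferred score higher than rejected ones, via a pairwise
# (Bradley-Terry-style) loss. Embeddings are random stand-ins, not real data.
import torch
import torch.nn as nn

reward = nn.Linear(32, 1)            # scores an answer embedding
opt = torch.optim.Adam(reward.parameters(), lr=1e-2)

chosen = torch.randn(256, 32) + 0.5    # embeddings of preferred answers
rejected = torch.randn(256, 32) - 0.5  # embeddings of rejected answers

for _ in range(200):
    margin = reward(chosen) - reward(rejected)
    # Maximize P(chosen ranked above rejected) = sigmoid(margin).
    loss = -torch.nn.functional.logsigmoid(margin).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("preferred answers score higher:",
      (reward(chosen).mean() > reward(rejected).mean()).item())
```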

Yes, but: AI’s makers routinely talk about “alignment with human values” without acknowledging how deeply contested all human values are.

  • In the U.S., for instance, you can say your chatbot AI is trained to “respect human life,” but then you have to decide how it handles conversations about abortion.
  • You can say that it’s on the side of human rights and democracy, but somehow it’s going to have to figure out what to say about Donald Trump’s claim that the 2020 election was stolen.
  • As many AI makers struggle to prevent their systems from showing racist, antisemitic or anti-LGBTQ tendencies, they face complaints that they’re “too woke.” The Grok system from Elon Musk’s X.ai is explicitly designed as an “anti-woke” alternative.

Globally, things get even trickier. Some of the biggest differences are between the U.S. and China — but for geopolitical reasons, U.S.-developed systems are likely to be inaccessible in China and vice versa.

  • The real global battle will be in the rest of the world, where both Chinese and U.S. systems will likely be competing head-on. Read more

Jan. 19, 2024: AI & Digital Assets


The State of AI Regulation Around the World

Courtesy of Livia Giannotti, TechMonitor.AI

Only a few months ago, it still seemed the tension between regulating AI and pushing for its innovation originated from governments on one side and tech companies on the other. But as the AI sector is expanding, so is the corresponding legal framework. While AI companies are increasingly using AI safety as a selling point, governments are now becoming reluctant to impose guardrails that would stifle innovation and global competition.

And the stakes in getting it right are rising with every passing month. While research into AI has led to concurrent advances in fields including healthcare, finance and education, critics are warning that whole areas of the global economy could be automated, eliminating jobs and human agency over a wide range of vital processes. That isn’t even mentioning the impact that poorly coded or malign AI programs might have on labor relations, misinformation, and surveillance.

The potential impact of AI and tech companies on the world is growing, but policies and strategies to keep it under control differ from one area of the world to another. While the main government efforts to regulate AI are still a work in progress, here is a breakdown of the most fully-fledged regulations around the world.

European Union: risk-based legislation
On 9 December, the European Union agreed on the first-ever legal framework for AI regulation, the EU AI Act. As the first set of legally binding rules, the AI Act is a historic resolution: the measures, currently the most restrictive worldwide, will begin to take legal effect for all member states in June 2024.

The EU has adopted a risk-based approach. This means that regulations will be enforced on AI systems depending on the level of risk they present for humanity. AI applications that are considered to present a minimal risk – for example, AI-powered recommendation systems – will not be subjected to mandatory rules. Read more


Americans Receptive to Open Banking

Courtesy of Tony Zerucha, FinTechNexus

The results of a recent Axway survey on open banking in America bode well for its adoption stateside. More than half of respondents, 55%, have heard of open banking, with 32% believing they have a decent understanding of it. Awareness is up from 48% in 2021. Also on the rise are Americans’ attitudes toward the movement: today, 60% believe open banking is a positive development, up from 51% in 2021.

Axway is a global API management and integration software provider with 11,000 customers across 100 countries. The survey polled 1,000 American adults about their expectations and top concerns about how companies use, track and handle their personal data.

Education key to adoption
One clear result is that all industries must better educate consumers about how they protect their data. More than half of Americans, 56%, don’t know where their data is stored; 87% wish they knew what data companies collected on them.

Banking and financial services firms rank highest in how much consumers trust them to protect their information. The bar is low, however, at 57%. Healthcare and life sciences are at 42%, insurance at 33%, and transportation and logistics at 16%. A solid majority of Americans, 57%, said they would stop doing business with companies suffering a data breach or cyberattack until the issue had been resolved.

“It’s encouraging that consumer trust seems to be growing when it comes to open banking and the API technologies that underpin it because trust is critical to banking,” Axway SVP of financial services and open banking North America Laurent Van Huffel said.  Read more


Big Tech Firms Begin Fight Back Over Regulatory Oversight of Digital Wallets

Courtesy of FinExtra

A US lobby group representing the interests of Big Tech firms has hit out at proposals by the Consumer Financial Protection Bureau to regulate tech giants such as Apple and Google that offer digital payment apps and wallets.

In November, the Bureau published a proposed rule that would see non-bank financial companies that handle more than five million transactions per year face the same rules as large banks and credit unions.

The rule would cover around 17 companies, most notably Google, Apple, PayPal, and CashApp operator Block. These firms would have to adhere to applicable funds transfer, privacy, and other consumer protection laws.

In its written response, the Computer & Communications Industry Association (CCIA) argues that the current regulatory proposal “fails to clearly identify a specific risk it seeks to address and merely identifies the possibility of ‘new risks’ from ‘new product offerings’ without explicitly stating what those risks might be.”

CCIA vice president of global competition and regulatory policy Krisztian Katona, comments: “It’s worth keeping in mind as the CFPB considers further regulations on digital services that consumer feedback seems to point towards a general satisfaction with payment services, which suggests the absence of a market failure in the sector.

“We would urge regulators to tailor new regulations to specific problems they want to fix as broad, overly burdensome or heavy-handed digital regulation could significantly hinder new startups in this industry and harm U.S. innovation and economic growth.”


Crypto for Advisors: The Regulators Are Here

Courtesy of Sarah Morton & Katherine Bos, CoinDesk

As the crypto industry matures, regulators have indicated they will continue to focus on crypto after the spot bitcoin ETF approvals. Last week was a big week for the “crypto” industry. The SEC approved 11 spot bitcoin ETFs, allowing them to trade legally in the U.S. on Jan. 10; however, it was not without controversy as the day before the official announcement, a fake announcement was posted to SEC’s X account, which was later attributed to a hack – a dramatic start indeed.

For better or worse, the regulators are here now, and Wall Street-wrapped crypto ETFs saw record-breaking Day 1 trades of over $4.6B. So what happens next? On one hand, JPMorgan’s recently released forecast expects that $36B of other crypto investments will move to the ETFs, while on the other, many firms are refusing to give their clients access to these products. Katherine Kirkpatrick Bos, chief legal officer at CBOE Digital, takes us through what’s next for 2024 and crypto now that the U.S. regulators are here.

The Regulators are Here
In 2022, “crypto winter” arrived with a blizzard of fraud, overreliance on bad debt, and bankruptcies. These painful events led to two things that are now being keenly felt across crypto – the maturation of the industry and regulatory backlash. First, projects are more circumspect. The lateral market for crypto legal and compliance remains active. Gray hair is often no longer seen as an entirely bad thing, particularly with respect to institutional engagement.

Second, there was already a natural increase in regulatory scrutiny aligned with the growth of the industry. Much was made of the SEC’s announcement of the allocation of 20 additional positions in the newly renamed Crypto Assets and Cyber Unit (formerly the Cyber Unit) in May 2022, shortly before the collapse of Terra/Luna, but that was the commission’s way of addressing the explosion of crypto markets. Read more

Jan. 12, 2024: AI & Digital Assets


Every Lender Has a Fraud Problem, But AI-Powered Detection Is Here to Help

Courtesy of David Snitkof, FintechNexus

If you’re a lender, you have a fraud problem! Fraud is an unfortunate reality of every single lending business, because if your product is money, someone will try to steal it. As a potentially major component of a lender’s P&L, loan losses from fraud can be a costly issue. In fact, every $1 lost to fraud now costs U.S. financial services firms $4.23, according to LexisNexis.

Just like fraud, documents are often a constant across many lending application processes from mortgages to small business lines of credit and beyond. And while fraud has the potential to negatively impact the profitability and efficiency of a lending operation, it can be mitigated through the intelligent application of automation, fraud detection technologies, and advanced analytics.

Document fraud in lending

Let’s begin by reviewing how lenders collect and assess documents. Legacy methods, especially the manual review of documents, can increase the risk of fraud going undetected, as many alterations are invisible to the naked eye. Various technologies, ranging from straightforward pattern recognition to advanced machine learning and AI, can go deeper into the digital layers of a document and identify modifications, anomalies, and the fingerprints of malfeasance.

One may assume that fraud only occurs in complex materials, but evidence of tampering can be found in even the most common documents used by lenders. Having reviewed literally hundreds of millions of documents over the past few years, Ocrolus has used this massive dataset to train its models to identify some of the most common ways documents are altered, including:

  • Altered date fields – This type of document fraud is often found in bank statements that are legitimately those of the prospective borrower. For example, a lender might ask applicants for three months of statements to assess financial health and cash flow. Say a potential borrower doesn’t have the best numbers from that time frame. The applicant might take their own statements from an earlier period, when finances were better, and change the dates to the requested ones — so the statements still look genuine and tied to the right account holder and financial institution, while the information inside is no longer accurate. (A minimal sketch of this kind of date-consistency check appears after this list.)
  • Modified transactions – Another prime example of fraud we see in financial statements is altered transaction data. Applicants may edit the size or source of a deposit to make income or revenue appear larger or more legitimate than it actually is. This more complex alteration requires additional edits, sometimes hundreds, throughout the document to make sure numbers reconcile and the formatting appears legitimate. Read more
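Ocrolus’s production models are proprietary, so the following is only a toy illustration of the general idea behind catching altered date fields: tampering often breaks a document’s internal consistency, such as transaction dates falling outside the stated statement period. A hypothetical Python sketch over already-extracted statement text:

```python
# Toy date-consistency check on extracted bank-statement text. The sample
# text and rule are invented for illustration; real document-fraud models
# inspect far more signals (digital layers, fonts, metadata, reconciliation).
import re
from datetime import date

statement_text = """
Statement period: 01/01/2024 - 01/31/2024
01/05/2024  Payroll deposit   +2,500.00
03/17/2024  Wire transfer     +9,000.00
"""

def parse(us_date: str) -> date:
    m, d, y = map(int, us_date.split("/"))
    return date(y, m, d)

period = re.search(r"Statement period: (\S+) - (\S+)", statement_text)
start, end = parse(period.group(1)), parse(period.group(2))

for tx_date in re.findall(r"^(\d{2}/\d{2}/\d{4})\s", statement_text, re.M):
    if not (start <= parse(tx_date) <= end):
        print(f"suspicious: transaction dated {tx_date} is outside {start}..{end}")
```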

Addressing the Complexities and Risks of AI in Finance

Courtesy of PYMNTS.com

AI-enhanced synthetic identity fraud is a growing problem in the financial sector.

A recent EY survey of 1,200 global CEOs reveals that while executives are investing in AI strategies, they face significant challenges in both formulating and operationalizing these plans. More than two-thirds of CEOs recognize the urgent need to act on generative AI, but many feel hindered by uncertainties in this area — partly due to the rise in firms claiming AI expertise — complicating their ability to make bold moves toward deploying the technology.

This dynamic has stymied strategic decisions concerning capital allocation, investment and transformation in an economic landscape marked by higher interest rates, increased inflation and complex geopolitical challenges. As a result, the survey reflects the lowest acquisition appetite since 2014. Only 35% of CEOs plan mergers and acquisitions in the next 12 months, a trend also influenced by geopolitical tensions.

Perceptions of AI in financial planning vary among investors by generation.

A survey of 500 American fraud and risk professionals finds that half of companies believe their synthetic fraud prevention measures are only somewhat effective. The financial impact of this deficiency is considerable, with nearly nine in 10 companies extending credit to synthetic identities and one in five valuing average losses per incident between $50,000 and $100,000.

With the aid of AI, fraudsters are ginning up increasingly sophisticated strategies, nurturing accounts over extended periods of time for greater financial gain, for example. Consequently, FIs are struggling with legacy technologies and techniques, which are proving inadequate against these sophisticated synthetic identities. This inadequacy not only leads to financial losses but also risks reputational harm and competitive disadvantage, underscoring the need for a multilayered approach to combat AI-generated synthetic identity fraud. Read more


OpenAI Moves to Shrink Regulatory Risk in EU Around Data Privacy

Courtesy of Natasha Lomas, TechCrunch

While most of Europe was still knuckle deep in the holiday chocolate selection box late last month, ChatGPT maker OpenAI was busy firing out an email with details of an incoming update to its terms that looks intended to shrink its regulatory risk in the European Union.

The AI giant’s technology has come under early scrutiny in the region over ChatGPT’s impact on people’s privacy — with a number of open investigations into data protection concerns linked to how the chatbot processes people’s information and the data it can generate about individuals, including from watchdogs in Italy and Poland. (Italy’s intervention even triggered a temporary suspension of ChatGPT in the country until OpenAI revised the information and controls it provides users.)

“We have changed the OpenAI entity that provides services such as ChatGPT to EEA and Swiss residents to our Irish entity, OpenAI Ireland Limited,” OpenAI wrote in an email to users sent on December 28. A parallel update to OpenAI’s Privacy Policy for Europe further stipulates:

If you live in the European Economic Area (EEA) or Switzerland, OpenAI Ireland Limited, with its registered office at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland, is the controller and is responsible for the processing of your Personal Data as described in this Privacy Policy.

The new terms of use listing its recently established Dublin-based subsidiary as the data controller for users in the European Economic Area (EEA) and Switzerland, where the bloc’s General Data Protection Regulation (GDPR) is in force, will start to apply on February 15, 2024. Users are told if they disagree with OpenAI’s new terms they may delete their account. Read more


SEC Approves Spot Bitcoin ETFs—First Crypto Funds of Kind

Courtesy of Derek Saul, Forbes

TOPLINE
The Securities and Exchange Commission announced Wednesday it greenlit the first spot bitcoin exchange-traded funds (ETF) in the U.S., a historic move for investors looking for exposure to the world’s largest digital asset.

KEY FACTS

  • Regulators approved all 11 outstanding applications for spot bitcoin ETFs, from firms including BlackRock, Grayscale and Fidelity, to begin trading as soon as Thursday.
  • The SEC’s approval was widely expected to come Wednesday, which was the deadline for the ETF application from Cathie Wood-led Ark Invest, the first of the applications set for a decision.
  • Bitcoin prices were largely flat after the announcement, trading near a two-year high of roughly $46,000 as the institutional funds open the door for easier access to bitcoin investments for many investors.
  • The decision “marks a significant step towards the institutionalization of cryptocurrency, expanding bitcoin’s accessibility to a wider audience in a more regulated and simpler manner,” Yiannis Giokas, senior director at Moody’s Analytics, explained in emailed comments.
  • The announcement came after several head fakes, as the SEC’s X social media account erroneously announced a blanket approval Tuesday afternoon following a breach, the Chicago Board Options Exchange jumped the gun early Wednesday afternoon by saying trading would begin this week for the funds, and the SEC’s website crashed shortly before 4 p.m., after some users accessed the announcement.

CRUCIAL QUOTE
“While we approved the listing and trading of certain spot bitcoin ETP shares today, we did not approve or endorse bitcoin. Investors should remain cautious about the myriad risks associated with bitcoin and products whose value is tied to crypto,” SEC Chairman Gary Gensler wrote in a statement Wednesday. Read more

Jan. 5, 2024: AI & Digital Assets


How to Recognize AI-Generated Phishing Mails

Courtesy of Pieter Arntz, MalwareBytes

Phishing is the art of sending an email with the aim of getting users to open a malicious file or click on a link to then steal credentials. But most phishers aren’t very good, and the success rate is relatively low: In 2021, the average click rate for a phishing campaign was 17.8%.

However, now cybercriminals have AI to write their emails, which might well improve their phishing success rates. Here’s why. The old clues for telling if something was a phishing mail were:

  1. It asks you to update/fill in personal information.
  2. The URL shown in the email and the URL that displays when you hover over the link are different from one another (see the sketch after this list).
  3. The “From” address is an imitation of a legitimate address, especially from a known brand.
  4. The formatting and design are different from what you usually receive from a brand.
  5. The content is badly written and may well include typos.
  6. There is a sense of urgency in the message, encouraging you to quickly perform an action.
  7. The email contains an attachment you weren’t expecting.
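
Clue 2 is one of the few items on this list that can be checked mechanically. As a rough illustration (not from the article), here is a minimal Python sketch, using only the standard library, that flags links whose visible text looks like a URL but whose hostname differs from the one the link actually points to:

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Collect <a> tags whose visible URL-like text mismatches their href."""

    def __init__(self):
        super().__init__()
        self.href = None          # href of the <a> tag we are currently inside
        self.suspicious = []      # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible text itself looks like a URL or domain.
        if self.href and text.startswith(("http://", "https://", "www.")):
            shown = urlparse(text if "//" in text else "//" + text).hostname
            actual = urlparse(self.href).hostname
            if shown and actual and shown.lower() != actual.lower():
                self.suspicious.append((text, self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example.net/login">https://www.mybank.com</a>')
print(checker.suspicious)  # [('https://www.mybank.com', 'http://evil.example.net/login')]

A real mail filter would go further, unwinding redirectors, catching punycode look-alike domains and scoring sender reputation, but the hostname comparison above is the core of clue 2.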

While most of these are still valid, there are a few checks you can strike off your list due to the introduction of AI. When a phisher is using a Large Language Model (LLM) like ChatGPT, a few simple instructions are all it takes to make the email look as if it came from the intended sender. And LLMs do not make grammatical errors or put extra spaces between words (unless you ask them to).

They’re not limited to one language either. AI can write the same mail in every desired language and make it look as if you are dealing with a native speaker. It’s also easier to create phishing emails tailored to the intended target.

All in all, the amount of work needed to create an effective phishing email has been reduced dramatically, and the number of phishing emails has gone up accordingly. In the last year, there’s been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular.

Because of AI, it’s become much harder to recognize phishing emails, which makes things almost impossible for filtering software. According to email security provider Egress, 71% of email attacks created with AI go undetected.

So how do you recognize AI phishing emails?
Here are some ideas:

Number 4 above—The formatting and design are different from what you usually receive from a brand—is helpful. Compare the email with any previous communications you have from the supposed sender. If there are inconsistencies in the tone, style, or vocabulary, this could indicate that the message is a phishing attempt.

Number 5—The content is badly written and may well include typos—is less reliable now, but AI phishing emails may still use generic greetings, such as “Dear user” or “Dear customer,” instead of addressing the recipient by name. Also, look for generic or mismatched signatures that do not align with the sender’s typical signature. Read more


How ChatGPT and Billions in Investment Helped AI Go Mainstream in 2023

As investors poured billions into generative AI tools, society started facing difficult questions about a technology that’s quickly becoming transformational.

Courtesy of Rashi Shrivastava, Forbes

2023 was the year of AI. After ChatGPT launched in November 2022, it became one of the fastest growing apps ever, gaining 100 million monthly users within two months. With AI becoming the hottest topic of the year (just as Bill Gates predicted in January), a string of startups exploded into the market with AI tools that could generate everything from synthetic voice to videos. Evidently, AI has come a long way since the start of the year, when people questioned if ChatGPT would replace Google search.

“I’m much more interested in thinking about what comes way beyond search…What do we do that is totally different and way cooler?” OpenAI CEO Sam Altman told Forbes in January.

Rapid advancements in the technology caught the attention of venture capitalists as billions of dollars flowed into the sector. Leading the way was Microsoft’s $10 billion investment into AI MVP OpenAI, which is now reportedly raising at an $80 billion valuation. In June, high-profile AI startup Inflection released its AI chatbot Pi and raised $1.3 billion at a $4 billion valuation. A month later, Hugging Face, which hosts thousands of open source AI models, reached a $4 billion valuation. In September, Amazon announced that it plans to invest $4 billion into OpenAI challenger Anthropic, which rolled out its own conversational chatbot Claude 2.0 in July and is today valued at $18.4 billion, according to a source with direct knowledge.

But not all AI founders have had a straightforward path to fundraising. Stability AI raised funding at a $1 billion valuation in September 2022 for its popular text-to-image AI model Stable Diffusion but has struggled to raise since. Its CEO Emad Mostaque spun misleading claims about his own credentials and the company’s strategic partnerships to investors, a Forbes investigation found in June. In December, a Stanford study found that the dataset used to train Stable Diffusion contains illegal child sexual abuse material.

The AI gold rush minted several other unicorns like Adept, which is building AI assistants that can browse the internet and run software programs for you, and Character AI, which is used by 20 million people to create and chat with AI chatbot characters like Taylor Swift and Elon Musk. Enterprise-focused generative AI startups such as Typeface, Writer and Jasper, which are helping companies automate tasks like email writing and summarizing long documents, have also seen an influx of funding. But amid the race to build and launch AI tools, Google found itself flat-footed and playing catch-up. The tech giant launched its conversational AI chatbot Bard and its own AI model Gemini at the end of the year.

In the past year, AI has penetrated virtually every facet of life. Teachers worried that students would use ChatGPT to cheat on assignments, and the tool was banned from some of the largest school districts in the U.S. Doctors and hospitals began using generative AI tools not only for notetaking and grunt work but also to diagnose patients. While some political candidates started deploying AI in their campaigns to interact with potential voters, others used generative AI tools to create deepfakes of political opponents. Read more


IRS Rules Require Reporting Data From $10k Crypto Transactions in 2024

Courtesy of Turner Wright, CoinTelegraph

According to Coin Center executive director Jerry Brito, it’s “unclear how one can comply” with the crypto tax reporting guidelines in 2024.

Aspects of the infrastructure bill signed into law by United States President Joe Biden are now in effect — including provisions requiring many digital asset transactions worth more than $10,000 to be reported to the Internal Revenue Service (IRS).

The bipartisan infrastructure bill, passed by Congress and signed into law by President Biden in 2021, expanded reporting requirements for brokers, requiring many crypto exchanges and custodians to report crypto transactions greater than $10,000 to the IRS. Following the bill’s passage, many lawmakers suggested additional legislation to “fix” the reporting requirement, claiming that the information required from brokers would be difficult or impossible to collect.

The bill mandates crypto brokers to report personal information on transactions to the IRS, including the sender’s name, address and Social Security number, within 15 days. The requirements, aimed at reducing the size of the tax gap in the United States, were initially scheduled to take effect in January 2023, with companies sending reports to the IRS in 2024.

According to Coin Center executive director Jerry Brito, many users “will find it difficult to comply” with the reporting requirements without guidance from the IRS. He speculated that filers would attempt to comply with the law but risked being found guilty of a felony.


“[I]f a miner or validator receives block rewards in excess of $10,000, whose name, address, and Social Security number do they report?” said Brito. “If you engage in an on-chain decentralized exchange of crypto for crypto and you therefore receive $10,000 in cryptocurrency, who do you report? And by what standard should you measure whether an amount of a particular cryptocurrency is equivalent to more than $10,000?” Read more


Coinbase Is Playing a ‘Dangerous Game’ Against the SEC with Its Stablecoin USDC

Courtesy of Leo Schwartz, Fortune/Yahoo Finance

In a year shaped by court cases, 2023 had one last surprise up its sleeve. On Dec. 28, with dreams of a Bitcoin ETF lulling the crypto industry into 2024, Judge Jed Rakoff of the Southern District of New York issued a summary judgment against Do Kwon and his failed Terraform Labs.

Pleased not to be spending New Year’s Eve in a Montenegrin or Brooklyn holding cell, the rest of the crypto sector applauded the resolution to the Terra debacle, though questions of fraud and the involvement of Jump Trading will be left to a jury trial in January. Still, unlike July’s surprising Ripple decision, Rakoff’s reasoning could spell trouble for the future of the industry.

As always, the ruling hinged on the question of whether the crypto tokens that Terraform offered investors qualified as unregistered securities. The edge case was UST, Terra’s signature stablecoin, which was ostensibly pegged to $1—until it disastrously was not.

The Howey test, after all, defines an investment contract as an investment of money in a common enterprise with the expectation of profits derived from the efforts of others. If stablecoins are set at $1, how could they represent an investment contract?

Even putting aside the separate Reves test, which complicates the designation of a security, Rakoff said that one factor clearly puts UST in the investment contract category. Terraform offered the stablecoin in conjunction with a lending and borrowing protocol called Anchor that promised yields of up to 20%. As Rakoff wrote, UST on its own was not a security, but instead constituted an investment contract when offered in combination with Anchor.

Stablecoins remain a corner of crypto where regulators will give the most leeway, with their novel status presenting a jump ball situation between the SEC, CFTC, OCC, Federal Reserve, and Treasury Department (which isn’t even to mention state regulators). After legislation targeting an update to anti-money-laundering provisions, stablecoin supervision represents the lowest-hanging fruit for Congress.

And yet, while it may have been evident before, there is now a clear judicial decision that explains when the SEC could target stablecoins. Two of the biggest issuers—Paxos and Tether—have opted not to offer yields to investors for their products, which could otherwise push them into bank or securities territory, despite the historic returns on cash-like instruments. USDC, still the second-largest stablecoin by market cap despite its ruinous 2023, is a separate matter. Read more

Dec. 22, 2023: AI & Digital Assets


How Generative AI is Upending Tech at Big Payment Companies

Courtesy of John Adams, American Banker
This is the first in a four-part series on disruption in the payments industry.

Terms like “large language models” and “generative AI” were barely part of the vernacular as recently as a year ago, but have become a substantial part of strategies at big payment companies.

“The biggest risk of gen AI is not using gen AI,” said Rohit Chauhan, head of AI at Mastercard.

Mastercard and Visa have made investments in gen AI that will drive large-scale global changes in everything from what consumers hear at the call center to what they see on a website to how their transactions are kept safe. The card networks face competition from fintechs that are also bullish on gen AI, making the technology one of the biggest sources of disruption to hit the payments industry in years.

“Gen AI is the transformative technology of our time, and it will have an enormous impact,” said Rajat Taneja, president of technology at Visa.

Definitions vary, but gen AI is generally considered to use advanced machine learning and large language models to produce original content or programming, whereas more traditional AI uses machine learning to improve the performance of an existing program over time. Read more


6 Predictions for Crypto in 2024: Pantera’s Paul Veradittakit

Courtesy of Paul Veradittakit, CoinDesk

Tokenized social experiences, TradFi bridges, DePIN, DeFi Summer #2, and more.

This past year has been a testament to the blockchain space’s ability to recover from even the harshest external conditions. From the depths of the “crypto winter” at the beginning of the year, the overall market cap of the crypto space has grown by 90% to $1.69 trillion, with bitcoin more than doubling from its yearly low of $16k in Jan 2023 to over $40k in December.

In 2023, we’ve continued to feel some of the aftershocks of the wave of major collapses in 2022, most notably the FTX trial and verdict and the Binance plea deal in November, as well as the momentary depegging of the USDC stablecoin amid the banking crisis in March. At the same time, we’ve continued to see breakthroughs in the space, including Ethereum’s Shapella upgrade to a full Proof-of-Stake network in April, the ruling that XRP was (mostly) not a security in July, the launch of PayPal’s PYUSD stablecoin and Grayscale’s win over the SEC for the Bitcoin spot ETF in August, and the pioneering of novel tokenized social experiences such as the rise of Friend.tech.

We thus enter 2024 with great optimism about the road ahead. Here are my top predictions for the crypto industry in 2024:

The resurgence of Bitcoin and “DeFi Summer 2.0”
In 2023, Bitcoin staged a comeback, with bitcoin dominance (Bitcoin’s proportion of crypto market cap) rising from 38% in January to around 50% in December, making it one of the top ecosystems to look out for in 2024. There are at least three major catalysts driving its renaissance in the next year: (1) the fourth Bitcoin halving due in April 2024, (2) the expected approval of several Bitcoin spot ETFs from institutional investors, and (3) a rise in programmability features, both on the base protocol (such as Ordinals), as well as Layer 2s and other scalability layers such as Stacks and Rootstock. Read more


UK Economic Secretary Will Discuss Crypto Firms’ Access to Banks

Courtesy of Turner Wright, CoinTelegraph

Bim Afolami, who became U.K. Economic Secretary in November, will be available to speak to the Crypto and Digital Assets All-Party Parliamentary Group.

Jeremy Hunt, who serves as chancellor of the exchequer for the United Kingdom, said Economic Secretary Bim Afolami would be “more than happy to meet” and discuss issues related to digital assets in the country.

In a Dec. 19 session of the U.K. House of Commons, Hunt fielded a question from pro-crypto Member of Parliament Lisa Cameron, who asked whether the finance minister was willing to meet to discuss licensed crypto firms’ access to banking. Hunt said Afolami would be available to speak with the Crypto and Digital Assets All-Party Parliamentary Group.

“The U.K., and London in particular, has become the global crypto hub, but to make sure that the market really can take off in the way that was intended — in a responsible way — we need to regulate it, which is why we’ve introduced regulations for stablecoins, for promotion of crypto services,” said Hunt. Read more

Dec. 15, 2023: AI & Digital Assets


BankSocial’s Plan to Create a Crypto-Friendly Credit Union

Courtesy of Frank Gargano, American Banker

“Our end goal is not to create the largest credit union ever invented with [the proposed] Defy Federal Credit Union, but rather to create a template” where credit unions can easily participate in a Web3 ecosystem, says John Wingate, chief executive and founder of BankSocial.

Cryptocurrency remains a controversial topic since the collapse of the crypto exchange FTX near the end of 2022, as both banks and credit unions continue to rethink best approaches to the space without drawing the ire of financial regulators. But a team of people at a startup called BankSocial hope to launch a new credit union that would let people buy and sell digital currencies and maintain fiat currency deposits using distributed ledger technology.

Closures of crypto lenders like BlockFi and Celsius have exacerbated concerns about the industry’s path forward, pushing consumers toward traditional financial institutions for answers on where they can safely store their digital assets. As regulators continue to crack down on instances of misuse, many wonder if now is the right time for depositories to enter the market.

For John Wingate, chief executive and founder of the Dallas-based distributed ledger technology firm BankSocial, all signs point to yes.

Wingate was inspired by the similarities between the ownership structure of credit unions and “the ethos of decentralized finance” to begin the campaign for a federal credit union charter under the National Credit Union Administration at the beginning of last year. He brings with him more than 15 years of software development experience working for companies like the Las Vegas-based customer experience and fulfillment firm Speed Commerce. He has also launched ventures of his own, such as a content management software firm, Mastered Minds, and two real estate funding companies.

“The more I started to look into it, the more I started to understand that these credit unions were really a great mechanism. … I actually came to the conclusion that they were really just an analog version of decentralized finance,” Wingate said.

The proposed Defy Federal Credit Union, which was announced publicly last month, would offer members access to BankSocial’s platform, which includes its self-custody crypto exchange for buying and selling currencies like bitcoin and ether, in addition to a deposit account. Eligibility would be extended to the roughly 4,000 members of both Block Advocates, a nonprofit organization founded by Wingate to help promote the adoption of distributed ledger technology, and the Texas Blockchain Council, which includes firms like Riot Platforms and Genesis Digital Assets. Read more


Visa Bids to Combat Token Fraud

Courtesy of FinExtra, Source: Visa

Visa announced the commercial launch of Visa Provisioning Intelligence (VPI), an AI-based product designed to combat token fraud at its source.

Available as a value-added service for clients, VPI uses machine learning to rate the likelihood of fraud for token provisioning requests, helping financial institutions prevent fraud in a targeted way and enable more seamless and secure transactions for Visa cardholders.

Tokenization is a powerful fraud-fighting technology pioneered by Visa that helps protect consumers’ account information from bad actors by replacing account numbers with a unique code. However, tokens may be illegitimately provisioned to bad actors, and Visa found that token provisioning fraud losses reached an estimated $450M globally in 2022 alone.

“While tokenization is one of the most secure ways to transact, we’re seeing fraudsters use social engineering and other scams to illegitimately provision tokens,” said James Mirfin, SVP and Global Head of Risk and Identity Solutions at Visa. “With VPI, we’re leveraging Visa’s vast network and data insights to help clients detect and prevent provisioning fraud before it happens.”

VPI is a real-time fraud propensity score between 1 (indicating the lowest probability of fraud) and 99 (indicating the highest probability of fraud) provided to issuers for each token provision request. VPI uses a segment-level supervised machine learning model to identify patterns in past token requests across device, e-commerce and card-on-file tokens to help predict the probability of token provisioning fraud. The VPI score is intended to provide financial institutions with the following benefits:

  • Improved fraud prediction by allowing issuers to detect provisioning fraud and decline a token provisioning request before the fraud occurs.
  • Accurate separation of fraudulent activity from non-fraudulent activity, reducing the number of declines.
  • An increased number of legitimate provisioning requests, increased payment volumes, and continued trust in the card payment network.

In an age where most of our financial lives exist digitally, Visa remains focused on enhancing the security of its network and providing clients with advanced technologies to ensure customer data is protected wherever transactions take place. VPI is now available to clients globally as a part of Visa’s suite of value-added services.
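
Visa does not publish how issuers wire the score into their systems, but the benefits above imply a simple decision layer on the issuer side. The Python sketch below is a hypothetical illustration of that pattern; the thresholds, field names, and step-up path are assumptions, not Visa’s integration.

from dataclasses import dataclass

@dataclass
class TokenProvisionRequest:
    request_id: str
    fraud_score: int  # 1 = lowest probability of fraud, 99 = highest

def decide(req: TokenProvisionRequest,
           decline_at: int = 90, step_up_at: int = 60) -> str:
    """Map a 1-99 propensity score to approve / step_up / decline."""
    if not 1 <= req.fraud_score <= 99:
        raise ValueError("score must be between 1 and 99")
    if req.fraud_score >= decline_at:
        return "decline"   # likely fraudulent provisioning attempt
    if req.fraud_score >= step_up_at:
        return "step_up"   # e.g., send a one-time passcode to the cardholder
    return "approve"       # provision the token

print(decide(TokenProvisionRequest("req-123", fraud_score=72)))  # step_up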

A Year In, the Outlook for Generative AI in FS

Courtesy of Gabriel Hopkins, FinExtra

Just over a year ago, ChatGPT launched. The excitement, anxiety and optimism associated with the new AI shows little sign of abating. In November OpenAI CEO Sam Altman was removed from his position, only to return some days later. Rishi Sunak hosted world leaders at the UK’s AI Safety Summit, interviewing Elon Musk in front of a gathering of world leaders and tech entrepreneurs. Behind the scenes, AI researchers are rumoured to be close to even more breakthroughs.

What does it all mean for those industries that want to benefit from AI but are unsure of the risks?

Some form of machine learning – what we used to call AI – has been around for a century. Since the early 1990s, those tools have been a key operational element of some banking, government, and corporate processes, while being notably absent from others.

So why the uneven adoption? Generally, that’s down to risk. AI tools are great for tasks like fraud detection where well-established and tested algorithms can do things that analysts simply can’t by reviewing vast swathes of data in milliseconds. That has become the norm, particularly because it is not essential to understand each and every decision in detail.

Other processes have been more resistant to change. Usually, that’s not because an algorithm couldn’t do better, but rather because – in areas such as credit scoring or money laundering detection – the potential for unexpected biases to creep in is unacceptable. That is particularly acute in credit scoring, where a loan or mortgage could be declined due to non-financial characteristics – including racial biases.

While the adoption of older AI techniques has been progressing year after year, the arrival of Generative AI, characterised by ChatGPT, has changed everything. The potential for the new models – both good and bad – is huge, and commentary has divided accordingly. What is clear is that no organisation wants to miss out on the upside. Despite the talk about risks with Generative and Frontier models, 2023 has been brimming with excitement about the revolution ahead. Read more


Global Banking Regulator Wants Tougher Criteria for Giving Stablecoins Preferential Risk Treatment

Courtesy of Sandali Handagama and Camomile Shumba, CoinDesk

The Basel Committee for Banking Supervision wants to tighten requirements that allow stablecoins to qualify as less risky than unbacked cryptocurrencies like bitcoin.

HIGHLIGHTS

  • The Basel Committee for Banking Supervision proposed tightening the criteria governing stablecoins.
  • The regulator wants to ensure that stablecoins’ reserve assets have the short-term maturity, high credit quality and low volatility that allow them to meet holders’ expectations for redemption.

The Basel Committee for Banking Supervision (BCBS) wants to impose stricter criteria for allowing stablecoins to be treated as less risky than unbacked cryptocurrencies such as bitcoin.

In a consultative document published Thursday, the global banking regulator proposed 11 standards for stablecoins, cryptocurrencies whose value is supposed to be pegged to a specific asset such as the dollar, euro or gold. To qualify for so-called Group 1b consideration, stablecoin reserve assets have to meet a range of criteria including having a short-term maturity, high credit quality and low volatility. The consultation runs until March 28.

“The reserves assets that are used to cover redemptions can pose various risks that call into question the ability of the stablecoin issuer to meet holders’ expectations of redemption on demand,” the paper said.

The standard-setter has so far taken a tough stance on crypto, recommending the maximum possible risk weight of 1,250% for free-floating digital assets like bitcoin, which means banks have to hold capital to match their exposure. Banks are also not allowed to allocate more than 2% of their core capital to these riskier assets. The BCBS will not be making any changes to these standards, it said in a statement.

However, cryptos with “effective stabilization mechanisms” – which covers stablecoins – qualify for “preferential Group 1b regulatory treatment.” This means they are subject to “capital requirements based on the risk weights of underlying exposures as set out in the existing Basel Framework,” instead of the tougher requirements set for bitcoin and other cryptocurrencies.

Right now, stablecoins must be “redeemable at all times” to qualify for this preferential regulatory treatment. This ensures “only stablecoins issued by supervised and regulated entities that have robust redemption rights and governance are eligible for inclusion,” the BCBS has said. Read more

Dec. 8, 2023: Digital Finance & Privacy Articles


U.S. Crypto Industry Lobby Spending on Track for New Record In 2023

Courtesy of Hannah Lang, Reuters

The cryptocurrency industry was on track to hit a new record for federal lobbying spending, after a year in which firms scrambled to repair their reputations and advance friendly legislation, according to data provided to Reuters by nonprofit research group OpenSecrets.

Crypto companies spent $18.96 million in the first three quarters of 2023 on lobbying, compared with $16.1 million during the same period in 2022. That was despite last year’s spectacular meltdown of crypto exchange FTX, which had been a top-ten spender. Last year, companies including FTX spent nearly $22 million on lobbying in total.

Coinbase (COIN.O), the largest U.S. crypto exchange, led the pack again, spending $2.16 million, followed by Foris DAX, which operates Crypto.com, the Blockchain Association and Binance Holdings.

“Our goal is to engage directly with policymakers, build relationships and bridge the education gap to build a commonsense regulatory framework,” said Kristin Smith, CEO of the Blockchain Association, in a statement.

Crypto companies have been expanding in Washington, in part to try to mend their reputations following a string of scandals last year, including the collapse of FTX, whose former CEO Sam Bankman-Fried had been a familiar presence in Washington. He was found guilty of fraud last month by a jury in a Manhattan federal court.

Crypto firms have also been trying to combat growing regulatory scrutiny, especially from the U.S. Securities and Exchange Commission which says the industry has been flouting its rules. Lobbying escalated after the SEC sued Coinbase and Binance in June for allegedly failing to register tokens, claims they deny. Read more


APIs: The Silent Fintech Security Concern

Courtesy of Tony Zerucha, FinTech Nexus

A quarterly report published by integrated app and API security platform Wallarm gives granular attention to a little-discussed but critical security concern for fintechs – their APIs. The reports are developed from publicly available sources.

Wallarm co-founder and CEO Ivan Novikov said his goal for the reports is to estimate the scope of the threats and to group them into sensible sections. This helps CISOs and cybersecurity managers measure the dangers and build risk models. Each quarter, the Wallarm team analyzes every available incident, combines it with additional information and enriches it.

Novikov said that focus produces real-time analysis with better insights than other reports published less frequently. It also identifies some new threat groups that can likely be attributed to the proliferation of API use.

Leaks from APIs are an emerging threat

Injections were by far the top issue in the quarter. Their 59 known occurrences represent 25% of the 239 traced actions. Injections occur when someone sends dangerous API commands via a user input field. Authentication flaws rank second with 37. This involves identity verification failures. Cross-site issues are third with 30.
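
The Wallarm report does not include exploit code, but the injection category it ranks first is easy to illustrate. The Python sketch below (illustrative, not from the report) contrasts an API handler that splices a user input field directly into SQL with one that binds it as a parameter, using only the standard library’s sqlite3 module:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (holder TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 250.0)")

def lookup_vulnerable(holder: str):
    # BAD: the user input field is spliced into the query string, so a value
    # like "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        f"SELECT holder, balance FROM accounts WHERE holder = '{holder}'"
    ).fetchall()

def lookup_safe(holder: str):
    # GOOD: a bound parameter is always treated as data, never as SQL.
    return conn.execute(
        "SELECT holder, balance FROM accounts WHERE holder = ?", (holder,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks both accounts
print(lookup_safe(payload))        # []

The same shape applies to command and NoSQL injection: anything assembled from user input by string concatenation is a candidate for the attack class Wallarm counts here.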

API leaks make up more than 10% of incidents. They’ve hit Netflix, open-source software providers and enterprise software firms. Novikov said API leaks are a recently discovered issue.

There are two types of APIs, and one specifically affects fintechs: open APIs for banking. Novikov said institutions are interested in two things, the first being tracking where their financial data travels. This includes personally identifiable information and internal bank account information. They need to know if it gets siphoned off somewhere it shouldn’t.

“If you notice that the internal banking account numbers are connected as a routing number, (criminals) can do many things,” Novikov said. “They can run completely different fraud schema. If you remember the movies with James Bond, they say, ‘I know your account number in Switzerland’, it’s exactly the same thing.”

These data pieces could be private access tokens used to talk to your API. They could be certificates you issued to a partner bank that were compromised. Every party you share a key with is responsible for it, but you are responsible for the open data. Read more


Director of Financial Technology and Access Charles Vice’s Written Testimony Before Subcommittee on Digital Assets, Financial Technology, and Inclusion

Chairman Hill, Ranking Member Lynch, and members of the subcommittee, thank you for inviting me to discuss the efforts of the National Credit Union Administration (NCUA) to encourage innovation in financial technology. I am Charles A. Vice, Director of the NCUA’s Office of Financial Technology and Access.

Introduction and Background

I began my career with the Federal Deposit Insurance Corporation in 1990, serving as an examiner for 18 years. In 2008, I was appointed Commissioner of the Kentucky Department of Financial Institutions, a post I held for 14 years before joining the NCUA on January 1, 2023.

Having worked with financial regulators for over 33 years, I understand the financial services industry’s vital role in the U.S. economy. The NCUA insures deposits at federally insured credit unions, protects the members who are not only consumers but also owners of credit unions, and charters and regulates federal credit unions. The strength of the credit union industry is based on the number and diversity of credit unions that meet the financial needs of their members. Safe, fair, and affordable access to financial services is necessary to ensure that local, state, and national economies grow and thrive.

During my three decades of experience, I have witnessed the resiliency of the credit union and banking industries in the face of challenges such as Y2K, the Great Recession, natural disasters, and the COVID-19 pandemic. I have also seen how technology can be both a boon and a bane. On one hand, technology can improve efficiency, facilitate better communication with members, and offer round-the-clock services. On the other hand, technology presents risks that must be managed, monitored, and mitigated. The NCUA understands this fine balance.

NCUA Office of Financial Technology and Access

The NCUA Office of Financial Technology and Access identifies barriers, challenges, and opportunities credit unions face in adopting and using technology to provide financial products and services to their members. Recently, the NCUA Board adopted the financial innovation rule, which provides additional flexibility for federally insured credit unions to use advanced technologies and opportunities offered by the financial technology sector. During the notice and comment period, the proposed rule received supportive comments from the public and was finalized in September 2023. Read more


AICPA Develops Criteria to Increase Transparency for Stablecoins

Courtesy of Dave Kovaleski, Financial Regulation News

The Association of International Certified Professional Accountants (AICPA) has developed criteria to help increase transparency around stablecoins.

Stablecoins have gained prominence for their role in trading, but there hasn’t been consistency in the information available to token holders for stablecoins. These new criteria aim to remedy this. This framework for issuing stablecoins, backed by fiat currency, lays out how to report relevant information to stakeholders and will provide the basis for attestation services around this asset class. By creating a standardized framework, the AICPA seeks to reduce inconsistencies in measuring and reporting issued tokens and available assets backing those tokens.

“The AICPA is excited to develop the first available framework for reporting on stablecoins, and to be at the forefront of bringing transparency and consistency to the digital assets space,” said Ami Beers, senior director, assurance and advisory innovation at the AICPA & CIMA (Chartered Institute of Management Accountants). “We’re hopeful that these criteria will serve as the basis for evaluating the sufficiency of reserves that back stablecoins in attestation services that practitioners provide to their clients.”

The proposed criteria aim to provide transparency, not only benefiting token issuers but also token holders, regulators, and the wider industry.

“When developing this framework, we focused on the needs of the stakeholders which include consistency, comparability, and transparency of information, which will ultimately drive trust in these types of digital assets,” Jay Schulman, chair of the Attestation Subgroup of the AICPA Digital Assets Working Group, and principal at RSM US, said. “We’re looking forward to integrating comments we receive into the final document, creating a robust tool for practitioners performing work in this emerging area of accounting and finance.”

Once these guidelines are finalized and issued by the AICPA, the criteria can be used by practitioners when conducting an attestation engagement.

However, to ensure the criteria meet industry needs, the AICPA Assurance Services Executive Committee is seeking public feedback on the proposed stablecoin criteria. Written comments must be submitted before Jan. 29, 2024.


Dec. 1, 2023: Digital Finance & Privacy Articles


Binance Penalties Include a Number of Crypto Industry Firsts

The Treasury Department’s FinCEN is imposing its first-ever monitorship on the cryptocurrency exchange

Courtesy of Mengqi Sun, Wall Street Journal

U.S. regulators’ settlement with the largest cryptocurrency exchange marked a new era in its enforcement efforts in the nascent sector.

Some of the penalties imposed on Binance are a first for a cryptocurrency company, with regulators employing powerful measures typically used in the past to rein in major financial institutions.

Binance’s penalties include record-breaking civil fines for the Treasury Department, the first monitorship imposed by the Treasury’s Financial Crimes Enforcement Network and the first personal liability charge against a chief compliance officer by the Commodity Futures Trading Commission.

Binance Chief Executive Changpeng Zhao, also known as CZ, on Tuesday stepped down and pleaded guilty to violating U.S. anti-money-laundering laws. Binance also pleaded guilty and agreed to pay fines totaling $4.3 billion to settle claims from multiple agencies.

The fines include $3.4 billion to be paid to FinCEN over violations of U.S. anti-money-laundering laws and another $968 million to the Office of Foreign Assets Control for violations of U.S. sanctions laws. Both penalties are the largest in each unit’s history, eclipsing fines that were imposed on major financial institutions in the past.

As part of the settlement with FinCEN, Binance has agreed to retain an independent compliance monitor for five years, the first time such a penalty has been employed by the anti-money-laundering enforcement agency.

The CFTC also imposed charges and fines against Binance’s former chief compliance officer, Samuel Lim, an action a CFTC commissioner said should serve as a message to the crypto sector about their compliance programs. Read more


California’s Pioneering Digital Financial Assets Law: A New Era in Crypto Compliance

Courtesy of FinTech Global

Until now, cryptocurrency companies in California have operated without a license, but this is set to change with the introduction of the Digital Financial Assets Law (DFAL), signed into law by Governor Newsom in October.

This places California as the third state, following New York and Louisiana, to implement a licensing regime for cryptocurrency activities. The DFAL stands out as perhaps the most stringent virtual currency licensing law yet, encompassing a broad spectrum of provisions including licensure, disclosures, customer protections, stablecoins, exchange-specific sections, and robust enforcement powers. It also interestingly extends to gaming publishers under certain conditions.

The DFAL’s impact on the cryptocurrency sector is expected to be significant. RegTech company Alessa has created an in-depth guide for the new law, its requirements, and the broader implications for the digital assets industry.

Overview of DFAL

The DFAL’s reach is extensive, covering any individual or entity involved in “digital financial assets business activity” with California residents. To engage in such activities, unless exempted, one must obtain a license from the California Department of Financial Protection and Innovation (DFPI) and adhere to various prudential requirements, recordkeeping rules, and disclosure obligations. Issuing a digital asset alone does not necessitate licensure unless it’s redeemable for legal tender, bank credit, or another digital asset.

California’s law is largely in line with New York’s BitLicense regulation, with some notable additions specific to digital assets.

Understanding the definitions in the DFAL is crucial. They range from defining an applicant, a covered person, digital financial assets, to digital financial asset business activities. The latter includes a broad range of activities, from exchanging and transferring digital financial assets to holding electronic precious metals on behalf of another.

Significantly, the DFAL exempts several categories from its requirements, including banks, certain credit unions, trust companies, payment processors, registered broker-dealers, entities regulated by the Commodity Futures Trading Commission (CFTC), and more. There’s also a public interest exemption allowing the DFPI discretion in applying the law. Read more


OpenAI’s Custom Chatbots Are Leaking Their Secrets

Released earlier this month, OpenAI’s GPTs let anyone create custom chatbots. But some of the data they’re built on is easily exposed.

Courtesy of Matt Burgess, Wired

You don’t need to know how to code to create your own AI chatbot. Since the start of November—shortly before the chaos at the company unfolded—OpenAI has let anyone build and publish their own custom versions of ChatGPT, known as “GPTs”. Thousands have been created: A “nomad” GPT gives advice about working and living remotely, another claims to search 200 million academic papers to answer your questions, and yet another will turn you into a Pixar character.

However, these custom GPTs can also be forced into leaking their secrets. Security researchers and technologists probing the custom chatbots have made them spill the initial instructions they were given when they were created, and have also discovered and downloaded the files used to customize the chatbots. People’s personal information or proprietary data can be put at risk, experts say.

“The privacy concerns of file leakage should be taken seriously,” says Jiahao Yu, a computer science researcher at Northwestern University. “Even if they do not contain sensitive information, they may contain some knowledge that the designer does not want to share with others, and [that serves] as the core part of the custom GPT.”

Along with other researchers at Northwestern, Yu has tested more than 200 custom GPTs, and found it “surprisingly straightforward” to reveal information from them. “Our success rate was 100 percent for file leakage and 97 percent for system prompt extraction, achievable with simple prompts that don’t require specialized knowledge in prompt engineering or red-teaming,” Yu says.

Custom GPTs are, by their very design, easy to make. People with an OpenAI subscription are able to create the GPTs, which are also known as AI agents. OpenAI says the GPTs can be built for personal use or published to the web. The company plans for developers to eventually be able to earn money depending on how many people use the GPTs.

To create a custom GPT, all you need to do is message ChatGPT and say what you want the custom bot to do. You need to give it instructions about what the bot should or should not do. A bot that can answer questions about US tax laws may be given instructions not to answer unrelated questions or answers about other countries’ laws, for example. You can upload documents with specific information to give the chatbot greater expertise, such as feeding the US tax-bot files about how the law works. Connecting third-party APIs to a custom GPT can also help increase the data it is able to access and the kind of tasks it can complete. Read more
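
GPTs themselves are built through the ChatGPT interface rather than a public API, but OpenAI’s Assistants API, announced at the same November event, combines the same ingredients: instructions, uploaded files, optional tools. The sketch below shows those ingredients explicitly; it assumes the v1.x openai Python SDK and its beta endpoints as they stood in late 2023, and the file name and model choice are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The uploaded file is exactly the kind of artifact researchers coaxed
# custom bots into disclosing.
tax_file = client.files.create(file=open("us_tax_guide.pdf", "rb"),
                               purpose="assistants")

assistant = client.beta.assistants.create(
    name="US Tax Helper",
    model="gpt-4-1106-preview",
    # These instructions play the role of a custom GPT's system prompt,
    # the other secret that simple adversarial prompts were shown to extract.
    instructions=("Answer questions about US tax law only. "
                  "Decline questions about other countries' laws."),
    tools=[{"type": "retrieval"}],
    file_ids=[tax_file.id],
)
print(assistant.id)

Because the model can read both the instructions and the file at inference time, carefully phrased prompts can get it to repeat them, which is the failure mode the researchers demonstrated.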


Bitcoin Group: Taking Steps Against Money-Laundering, Terrorist Financing

Courtesy of Rachel More, Reuters

Germany’s Bitcoin Group (ADE.DE) said on Wednesday it was taking measures to improve its internal control system, after the financial regulator BaFin ordered its subsidiary futurum bank to remedy shortcomings on money-laundering and terrorist financing.

“The Bitcoin Group expressly points out that there are currently no indications of violations of money laundering and terrorist financing laws within the Group,” the company said in a statement.

The company said it had already taken measures in the current financial year to meet regulatory requirements and that it aimed “to remedy the identified deficiencies in a timely manner”.

On Tuesday, BaFin identified “severe deficits” at futurum bank involving its internal security measures, its fulfillment of due diligence obligations and its system for reporting suspicious activity.

“We are actively working with BaFin to quickly address the criticized weaknesses in our internal processes, which have not kept pace with the company’s growth in recent years,” Bitcoin Group Chief Executive Marco Bodewein said in the statement.

Nov. 17, 2023: Digital Finance & Privacy Articles


British Man Cannot Be Extradited to U.S. Over Fake Cryptocurrency Scheme, Court Rules

Courtesy of Sam Tobin, Reuters

A British man wanted in the U.S. on money laundering and wire fraud charges linked to the fake cryptocurrency OneCoin on Thursday won his battle against extradition, with London’s High Court ruling he should face prosecution in Britain.

Christopher Hamilton, 64, was indicted by a New York grand jury in 2019 over his alleged role in a $4 billion Ponzi scheme that defrauded around 3.5 million people worldwide. He has previously denied wrongdoing. OneCoin co-founder Karl Greenwood was jailed in New York for 20 years for fraud and money laundering in September.

Its other founder, Ruja Ignatova, who prosecutors say is also known as the “Cryptoqueen”, is on the FBI’s 10 most-wanted list and remains at large. Hamilton’s extradition to the U.S. was approved last year, but his appeal against that decision was upheld on Thursday.

Judge Victoria Sharp said most of the alleged money laundering by Hamilton took place in the United Kingdom and that British police and prosecutors should investigate if Hamilton can face criminal charges in his home country.

The Crown Prosecution Service (CPS) had previously said Hamilton should be prosecuted in the U.S. rather than the UK.

City of London Police had closed its investigation in 2019 but, Sharp said, it had not identified “potentially critical incriminating evidence” later discovered by U.S. authorities that can now be used by British police.

“The consequence of the appellant’s success on this appeal is not that he secures impunity. It is that he should be answerable to the law in the UK rather than the U.S.”, Sharp concluded. A CPS spokesperson said in a statement: “We are carefully considering the High Court’s judgment.”

Hamilton’s lawyers at Sonn Macmillan Walker declined to comment. City of London Police did not immediately respond to a request for comment.


International Crypto Tax Evasion Deal Agreed

Courtesy of FinExtra

Nearly 50 countries – including the US, UK, Brazil and Japan – have signed up to new standards starting in 2027 designed to combat cryptocurrency tax evasion.

The OECD’s Crypto-Asset Reporting Framework will require crypto platforms to share taxpayer information with tax authorities. The UK government says that the standard could potentially recoup hundreds of millions of pounds in lost revenue.

“Today we are sending out a strong message that we will not allow criminals to use crypto to avoid paying their fair share,” says Financial Secretary to the Treasury, Victoria Atkins.

The 48 signatories to the agreement encouraged other countries to join them. China, India and Russia are among the major crypto markets that have not signed on.


IMF Chief Urges More Proactive Push for Central Bank Digital Currencies

Courtesy of Marc Jones, Reuters

The head of the International Monetary Fund has urged countries to make a more proactive push to develop central bank digital currencies (CBDC). Eleven countries, including a number in the Caribbean, and Nigeria, have already launched CBDCs. Around 120 others are exploring them, although progress and approaches differ widely and a few have even abandoned the idea altogether.

“We may be at a point where the public sector needs to offer a little more guidance,” IMF Managing Director Kristalina Georgieva said in a speech in Singapore. “Not to crowd out, not to disrupt,” she added. “But to act as a catalyst, to ensure safety and efficiency – and to counter fragmentation.”

She made her remarks as the IMF published the first instalment of a “virtual handbook” on CBDCs, designed to help countries with the design and set-up process and ensure that the new technologies are globally interoperable.

Supporters say CBDCs will modernise payments with new functionality and provide an alternative to physical cash, which seems in terminal decline. But questions remain as to why they represent an advance when current systems are already capable of many of the proposed benefits, and countries such as Nigeria that have already launched CBDCs are seeing very low uptake among the public.

Georgieva said that with technology advancing so rapidly, countries needed to push ahead with development now to avoid getting caught out in future. “If anything, we need to raise another sail to pick up speed,” she said, likening the efforts to a nautical journey. “The world is changing faster than most imagined”.


Readout of FinCEN’s Participation in Harvard Kennedy School Event on Role of Cryptocurrency in Financing Terrorism

On November 9, a senior official from the Financial Crimes Enforcement Network (FinCEN) contributed to discussions on the role of cryptocurrency as a source of financing for terrorism during an event at the Harvard Kennedy School’s Mossavar-Rahmani Center for Business and Government. Participants discussed the analytical tools and anti-money laundering/countering the financing of terrorism authorities available to prevent the use of cryptocurrency for illicit purposes. Engagements with external stakeholders play a critical role in the U.S. Department of the Treasury’s work to identify and mitigate terrorist financing.

As part of a whole-of-government effort, the U.S. Department of the Treasury is taking all steps necessary to deny the ability of illicit actors to raise and use funds worldwide for terrorist activities. Under Secretary of the Treasury for Terrorism and Financial Intelligence Brian E. Nelson recently convened a roundtable with money services businesses to highlight Treasury actions to counter illicit finance, including in the virtual asset ecosystem, and to hear the group’s perspectives on techniques used by terrorist groups like Hamas to raise and move funds. And earlier this month, FinCEN hosted a FinCEN Exchange focused on the threat posed by the illicit use of convertible virtual currency in light of Hamas’ brutal terrorist attacks in Israel. Additionally, in late October, FinCEN issued an alert to aid financial institutions in identifying and reporting suspicious activity relating to financing Hamas.

FinCEN appreciates the critical support that financial institutions provide to law enforcement and national security agencies in fighting illicit activities through their suspicious activity reporting, and strongly encourages all financial institutions to register under USA PATRIOT Act Section 314(b) and to form associations to engage in voluntary information sharing. The U.S. Department of the Treasury will continue to use all available tools to identify and stop terrorist financing funding channels.

Nov. 10, 2023: Digital Finance & Privacy Articles


California Enacts Landmark Crypto Licensing Law

Courtesy of Jeremy M. McLaughlin & Jennifer L. Crowder, K&L Gates

To date, crypto companies have been able to operate in California without a license, but that will change effective July 2025 under the state’s newly-enacted “Digital Financial Assets Law” (the Law), signed by Governor Newsom on 13 October. The Law is California’s first comprehensive framework to regulate the digital asset market in the state, including specific provisions for stablecoins.

Under the Law, unless exempted, businesses must obtain a license and comply with various prudential requirements, recordkeeping rules, and disclosure requirements to engage in (or hold themselves out as engaging in) “digital financial asset business activity” with or on behalf of a California resident (as defined). The term digital financial asset is defined as digital mediums of exchange, units of account, or stores of value. The definition of digital financial asset business activity—i.e., those activities that trigger the licensing requirement—is expansive: “(1) [e]xchanging, transferring, or storing a digital financial asset or engaging in digital financial asset administration, whether directly or through an agreement with a digital financial asset control services vendor[;] (2) [h]olding electronic precious metals or electronic certificates representing interests in precious metals on behalf of another person or issuing shares or electronic certificates representing interests in precious metals[;] (3) [e]xchanging one or more digital representations of value used within one or more online games, game platforms, or family of games for either of the following: (A) [a] digital financial asset offered by or on behalf of the same publisher from which the original digital representation of value was received [or] (B) [l]egal tender or bank or credit union credit outside the online game, game platform, or family of games offered by or on behalf of the same publisher from which the original digital representation of value was received.”

Some of the Law’s requirements are similar to existing money transmitter licensing requirements—the laws under which most states regulate digital asset activity—including surety bond requirements, net worth requirements, and recordkeeping obligations. Notably, however, there are several consequential provisions that relate to digital assets specifically.

Broad Enforcement Powers
The Law grants the Department of Financial Protection and Innovation (DFPI) broad and, frankly, worryingly vague enforcement power. DFPI can bring enforcement proceedings against an entity that “has engaged, is engaging, or is about to engage in digital financial asset business activity.” There is no further guidance on how close in time an entity must be to qualify as “about to” engage in any of the enumerated activities. Absent some limiting factor, the broad language of this provision could capture entities that are in the early-to-mid stages of developing a product. In his signing statement, Governor Newsom urged DFPI to engage in thoughtful rulemaking to, among other things, clarify ambiguities in the Law.

Robust Disclosure Requirements
The Law mandates that licensees provide extensive consumer disclosures prior to engaging in digital financial asset business activity. These pre-activity disclosures must address 10 different categories of information, including: (1) a schedule of fees and charges that may be assessed, including how they will be calculated and when they will be imposed; (2) whether the product is covered by certain insurance protections; and (3) a description of error resolution rights and liability. In some instances, a licensee must also provide a per-transaction confirmation that contains prescribed information. Read more


Fed’s Barr Says Private Crypto Stablecoins Pose Financial Risk

Courtesy of Katanga Johnson, Bloomberg News

The Federal Reserve’s top bank watchdog said crypto stablecoins could amount to private money that might be destabilizing for the US financial system if left unchecked. “There is interest in strong, federal regulation of stablecoins that makes sure the Federal Reserve can approve, regulate and enforce against stablecoin issuers, including wallets,” Michael Barr, vice chair for supervision, told attendees Tuesday at the DC Fintech Week event.

Barr was reiterating the central bank’s concern about private-industry crypto tokens pegged to assets like the US dollar and their potential to disrupt the broader financial world. “We need a strong framework,” he said. “It’s better if Congress can decide the rules of the road.” The Fed continues to study technologies that would underlie a digital currency backed by the central bank, Barr told the audience. He previously has said the Fed wouldn’t move ahead without consent from Congress and the executive branch.

Meanwhile, Nellie Liang, the under secretary for domestic finance at the Treasury Department, said at the event that it’s “hard to say” what role unbacked cryptocurrencies and stablecoins will play in the financial system going forward. Liang did, however, say she sees a use case for distributed ledger technology in general. “The potential for that to bring efficiencies to payment and settlement seems pretty high,” Liang said.

Barr, who as a Fed governor votes on monetary policy, said the agency remains committed to taming US inflation. “It is really critical that we continue to do the work necessary to make sure we get inflation down to 2%,” he said. Another top banking regulator, Michael Hsu, the acting comptroller of the currency, made a distinction between crypto, which he said is plagued by chicanery, and tokenization, which promises genuine efficiencies.

Problem Solver
While the crypto world is retail-oriented, largely driven by the hope of making some money, Hsu said “it still remains replete with fraud, scams and hacks, and some of the largest players remain unregulated.” Tokenization, by contrast, is focused on solving an actual problem in finance, particularly with settlements. “There’s risk, there’s friction, there’s fees” in current set-ups, Hsu said in an interview with Bloomberg Television’s Romaine Bostick at the Washington event. “Tokenization holds the promise. It simplifies, if it’s done right.”

As for bank stability, Hsu said investors needn’t worry about the vulnerabilities that led to the industry’s March convulsions, because his agency has stepped up supervision.


U.S. Banking Watchdog Hsu Says Tokenization Promising, But Crypto Full of Fraud

Courtesy of Jesse Hamilton, CoinDesk

Michael Hsu, acting head of the U.S. Office of the Comptroller of the Currency, the agency that oversees banks, is excited about the possibilities of tokenization to solve settlement problems. Tokenization of assets, he said, could be the answer to the risky complexities of settling the movement of funds and securities.

“Tokenization is focused on solving an actual problem, and that problem is settlement,” Hsu said at the DC Fintech Week event in Washington. “This is boring back-office stuff, but it’s super, super important.”

Any time an asset changes hands in the financial world, the transaction typically passes through multiple entities and checks of its validity before it’s cleared and settled to officially land in the hands of the recipient. At any of those layers – most of which carry their own expenses that may be added to what the customer pays – the transaction has some risk of failure.

“Tokenization holds the promise to collapse that and to simplify it, if it’s done right,” he said.

His OCC is so engaged with the idea of the tokenization of real-world financial assets and liabilities that it’s hosting a Feb. 8, 2024, all-day discussion on the topic at its Washington headquarters. But when it comes to the rest of the crypto space, Hsu remains suspicious.

“There seems to be more and more of a divide between crypto on one hand and tokenization,” he said. Crypto, he said, “tends to be driven by the hope for speculative gain.” “It still remains replete with frauds, scams and hacks,” Hsu said.


Security Measures for Protecting Mobile Banking Users

Courtesy of Sam Coventry, PoundSterling

The digital era has ushered in the convenience of mobile banking, allowing users to manage their finances at their fingertips. However, with this convenience comes the responsibility of safeguarding sensitive financial data.

The statistics show that in 2022 there were 1,829 reported cyber incidents in the finance sector, with an average cost per data breach of $4.45 million. So the security of banking applications remains a serious concern.

From this post, you’ll discover why financial mobile app security matters to users of software solutions as well as to businesses distributing them. You’ll also explore 11 key safety measures that will help build a secure application for your customers.

Why Is Protecting Mobile Banking Apps from Vulnerabilities Crucial?
Protecting mobile banking apps from vulnerabilities isn’t just about ensuring a smooth flow of financial transactions. It’s also about preserving the trust of millions of users and maintaining the integrity of the financial ecosystem. Here are some of the reasons indicating the utmost importance of adopting mobile banking security solutions:

  • Financial safety: Vulnerabilities can lead to unauthorized access, leading to potential financial loss for users.
  • Data protection: Personal and financial data can be exploited if not adequately protected, risking user privacy.
  • Reputation of financial institutions: Breaches can tarnish the reputation of banks, leading to a loss of trust and clientele.
  • Regulatory compliance: Many jurisdictions have strict regulations to ensure data protection, and non-compliance can result in hefty penalties.
  • Prevention of fraud: Secure apps reduce the chances of fraudulent activities, protecting both the user and the institution.

If you don’t have in-house expertise in creating secure software solutions for fintech, consider building a banking app with a trusted service provider. Outsourcing mobile banking application development, along with the assurance of its high-level security, is the right call for most companies. Not only is it cost-effective compared to building and running an in-house department, but it also brings external experience to your organization.

Now, let’s proceed with learning 11 key security measures and banking security standards. Implementing them promises to raise the security of your software solution to a whole new level, increasing your competitiveness and fostering trust between the organization and the users of its services. Read more
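
To make the data-protection point above concrete: one measure that commonly appears on such lists is encrypting sensitive data before it is stored on the device. Below is a minimal, illustrative Python sketch using the cryptography library’s Fernet recipe; it is a generic stand-in, not one of the article’s 11 measures.

    # Illustrative only: encrypt a record at rest. In a real mobile app the key
    # would come from the platform keystore, not be generated inline like this.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # hypothetical: fetch from a secure keystore instead
    cipher = Fernet(key)

    record = b'{"account": "12345678", "balance": 1042.17}'
    stored = cipher.encrypt(record)  # the ciphertext is what actually lands on disk
    assert cipher.decrypt(stored) == record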

 

Nov. 3, 2023: Digital Finance & Privacy Articles


How Americans View Data Privacy

The role of technology companies, AI, and regulation – plus personal experiences with data breaches, passwords, cybersecurity, and privacy policies

Courtesy of Colleen McClain, Michelle Faverio, Monica Anderson and Eugenie Park, Pew Research Center

In an era where every click, tap or keystroke leaves a digital trail, Americans remain uneasy and uncertain about their personal data and feel they have little control over how it’s used. This wariness is even ticking up in some areas like government data collection, according to a new Pew Research Center survey of U.S. adults conducted May 15-21, 2023.

Today, as in the past, most Americans are concerned about how companies and the government use their information. But there have been some changes in recent years:

Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)

The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.

Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).

We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.

In addition to the key findings covered on this page, the three chapters of this report provide more detail on:

Role of social media, tech companies, and government regulation

Trust in social media CEOs
Americans have little faith that social media executives will responsibly handle user privacy. Some 77% of Americans have little or no trust in leaders of social media companies to publicly admit mistakes and take responsibility for data misuse.

And they are no more optimistic about the government’s ability to rein them in: 71% have little to no trust that these tech leaders will be held accountable by the government for data missteps.

Related Report: Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information


Banks Adapt to Survive the Digital Shift

Courtesy of Fintech Nexus

As customer sentiment turns toward digitalization and ease of use, embedded finance has grown to meet customers’ needs.

Experiencing an uptick in interest, fintechs and big techs have risen to the challenge. In an EY survey from March 2023, 97% of respondents said they felt the key to the success of a financial product is the ability to meet customers’ requirements in real time. In the same study, 70% of respondents felt that the majority of financial products in the future would be offered via non-financial platforms as they shift to customers’ point of need.

Within this landscape, banks are at risk of being left behind. Often synonymous with roadblocks and clunky infrastructure, banking has traditionally fallen short of its clients’ needs. With technology driving much of customers’ daily lives, fintech and big tech have been placed at the forefront of responding to the shift. Customers now expect personalized, digital experiences that ease interaction with their finances.

“When the young generations, who were born with phones in their hands, become clients, they will have much more of a focus on the technology,” says Schuyler Weiss, CEO of Alpian, a Swiss neobank, in a study commissioned by Temenos. “If you do not have modern technology, they will not bank with you. It doesn’t matter how long you’ve been around.”

The shift to a customer-centric digital ecosystem has blurred the lines between financial service offerings. Fintechs and tech-led brands have built solutions that sidestep direct engagement with banks. As finance becomes more embedded, banks are faced with a challenge. Staying within the confines of a brick-and-mortar incumbent institution could limit their ability to serve customers in the digital landscape.

Embedded Finance: Easing the Way for a Bank-Centric Digital Ecosystem

The year 2023 showed a high percentage of bank executives incorporating fintech partnerships and acquisitions into their long-term strategy.

“New technology and customer demands are the top two trends expected to impact banking in the next five years,” said Jonathan Birdwell, Global Head of Policy and Insights at Economist Impact. “To maintain their direct connection with the consumer, banks are recognizing that they must become true digital ecosystems.”

The Temenos study showed that banks’ focus on creating a digital ecosystem had brought them to consider the development of their own digital products as well as third-party offerings. One in every five respondents said they are prioritizing building their own banking super app. Read more


ServiceNow Data Exposure: A Wake-Up Call for Companies

Courtesy of The Hacker News

Earlier this week, ServiceNow announced on its support site that misconfigurations within the platform could result in “unintended access” to sensitive data. For organizations that use ServiceNow, this security exposure is a critical concern that could have resulted in major data leakage of sensitive corporate data. ServiceNow has since taken steps to fix this issue.

This article analyzes the issue in full, explains why this critical application misconfiguration could have had serious consequences for businesses, and outlines the remediation steps companies would have needed to take absent the ServiceNow fix. (Even so, it is recommended to double-check that the fix has closed the organization’s exposure.)

In a Nutshell

ServiceNow is a cloud-based platform used for automating IT service management, IT operations management, and IT business management for customer service, as well as HR, security operations, and a wide variety of additional domains. This SaaS application is considered to be one of the top business-critical applications due to its infrastructural nature, extensibility as a development platform, and access to confidential and proprietary data throughout the organization.

Simple List is an interface widget that pulls data stored in tables and uses it in dashboards. The default configuration for Simple List allows the data in those tables to be accessed remotely by unauthenticated users. These tables include sensitive data, including content from IT tickets, internal classified knowledge bases, employee details, and more.
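
The exposure described above lived in the widget layer, but the underlying question (can an anonymous user read this table?) is easy to test. Here is a minimal sketch assuming ServiceNow’s standard Table API path; the instance name is a placeholder, and this generic check is our illustration rather than the researchers’ method.

    import requests

    def publicly_readable(instance: str, table: str) -> bool:
        """True if the table answers an unauthenticated read request."""
        url = f"https://{instance}.service-now.com/api/now/table/{table}?sysparm_limit=1"
        resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
        # 200 with records back means anonymous read access; 401/403 means auth is enforced.
        return resp.status_code == 200 and bool(resp.json().get("result"))

    for t in ("kb_knowledge", "incident"):  # the kinds of tables cited as sensitive
        print(t, publicly_readable("example-instance", t))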

These misconfigurations have actually been in place since the introduction of Access Control Lists in 2015. To date, there have been no reported incidents as a result. However, considering the recent publication of the data leakage research, leaving the issue unresolved could have exposed companies more than ever.

This exposure was the result of just one default configuration — and there are hundreds of configurations covering access control, data leakage, malware protection, and more that must be secured and maintained. Organizations using an SSPM (SaaS Security Posture Management) solution, like Adaptive Shield, can more easily identify risky misconfigurations and see whether they are compliant or non-compliant.

Inside the ServiceNow Misconfigurations

It’s important to reiterate that this issue was not caused by a vulnerability in ServiceNow’s code but by a configuration that exists within the platform. Read more


Generative AI: Vast Majority of Companies Concerned About Sharing Data with Third Parties, Cloudera Research Reveals

Courtesy of Cloudera, Inc.

Companies Worry It Will Be Like the Wild West When It Comes to Data Sharing for Generative AI

New research from Cloudera, the data company for trusted enterprise artificial intelligence (AI), has revealed that more than half of the organizations in the US (53%) currently use Generative AI technology and an additional 36% are in the early stages of exploring AI for potential implementation in the next year. But more than eight-in-ten decision makers for data strategy and management surveyed (84%) are concerned about sharing data with third parties for training or fine-tuning of Generative AI models, alluding to the perception of a still untamed, Wild West-like environment when it comes to data privacy, security, and compliance.  In addition, almost all respondents (95%) believe that full control of data during AI model training is key to trusting the AI outputs.

“Generative AI has taken center stage in boardroom discussions – Whilst analytical AI products have been worked on for decades, ChatGPT has accelerated Gen AI innovation and the road to human level performance has shortened across every industry,” said Abhas, Chief Strategy Officer at Cloudera. “Yet there are concerns regarding trust, compliance, authorization, and intellectual property. Organizations are apprehensive about the potential exposure of training models using publicly available data and/or receiving erroneous responses from AI models that have NOT been trained with relevant enterprise context. Our survey results confirm our understanding that data moats are real and organizations who have been successful in creating trusted and secure data sources will have an advantage in producing higher fidelity outputs with Generative AI applications.”

The survey polled 500 IT decision makers (ITDMs) and data scientists in the US regarding their organization’s status and plans for Generative AI. The results of the study “2023 Evolving Trends: Data, Analytics & AI” were published at the data conference Evolve New York on November 2.

Chatbots most relevant use case for Generative AI
Enhancement of customer communication with chatbots or other tools (55%), support for product development (44%), and concept development (44%) are cited as the main benefits generative AI offers organizations. Also named are support for data analysis (34%), software development (32%) or the automation of activities and processes (28%).

“The success of these initial use cases, such as chat Q&A, text summarization, and co-pilot productivity enhancements, relies on bringing the models to the data, at the point of its creation and origination, and not the data to the models! For example, a large financial institution is currently making 4 million decisions a day by processing all data through their trusted AI Lakehouse,” said Abhas. Read more

 

Oct. 27, 2023: Digital Finance & Privacy Articles


UK-US Data Bridge: An Extension to EU-US Data Privacy Framework

Courtesy of FinExtra

The UK government has published the Data Protection (Adequacy) (United States of America) Regulations 2023 (SI 2023/1028), the UK-US Data Bridge Regulations, which adopted an adequacy decision for the US (the UK-US Data Bridge) and will come into force on 12 October 2023.

The UK-US Data Bridge recognises the US as offering an adequate level of data protection where the transfer is to a US organisation that (i) is listed on the EU-US Data Privacy Framework (DPF), and (ii) participates in the UK Extension to the DPF.

On July 10, 2023, the European Commission adopted its adequacy decision for the DPF. The decision concluded that the DPF ensures an adequate level of protection for transferring personal data from the European Union to the United States. The UK-US Data Bridge is an extension of the DPF which was discussed in our prior updates.

What are the advantages of using the DPF and the UK-US Data Bridge?
Leveraging the DPF, recognised as an adequacy decision, provides organisations with a streamlined approach to data transfers. Within this context, companies that participate in the DPF are automatically deemed safe for data reception from the UK.

One of the prime benefits of the UK-US Data Bridge, built upon the DPF framework, is that participating organisations are exempted from the need to conduct transfer impact assessments (TIAs) or institute supplementary measures. In contrast, if companies rely on Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs), they are still mandated to implement supplementary measures. The UK-US Data Bridge facilitates a seamless transfer of data back and forth between the two territories. Furthermore, as the data protection landscape evolves, customers increasingly expect companies to actively participate in such data transfer frameworks, enhancing trust and compliance.

Are there any challenges?
However, not all is without complications. Both the Information Commissioner’s Office (ICO) and EU privacy activists have commented on the UK-US Data Bridge and the DPF.

The ICO noted in its opinion on the UK-US Data Bridge Regulations that there are areas that could pose risks to UK data subjects if the protections identified are not properly applied. The opinion identifies several potential issues that could serve as a basis to question the UK-US Data Bridge, including the following…Read more


FCC Aims to Investigate the Risk of AI-Enhanced Robocalls

Courtesy of Devin Coldewey, TechCrunch

As if robocalling wasn’t already enough of a problem, the advent of easily accessible, realistic AI-powered writing and synthetic voice could supercharge the practice. The FCC aims to preempt this by looking into how generated robocalls might fit under existing consumer protections.

A Notice of Inquiry has been proposed by Chairwoman Jessica Rosenworcel to be voted on at the agency’s next meeting. If the vote succeeds (as it is almost certain to), the FCC would formally look into how the Telephone Consumer Protection Act empowers them to act against scammers and spammers using AI technology.

But Rosenworcel was also careful to acknowledge that AI represents a potentially powerful tool for accessibility and responsiveness in phone-based interactions.

“While we are aware of the challenges AI can present, there is also significant potential to use this technology to benefit communications networks and their customers—including in the fight against junk robocalls and robotexts. We need to address these opportunities and risks thoughtfully, and the effort we are launching today will help us gain more insight on both fronts,” she said in a statement.

Any industry that involves a lot of voice, like customer service, is likely already looking into how automation and generative AI can be used to augment human agents’ effectiveness. Instead of offering a canned response, for instance, a call center employee could have an AI consult a knowledge base and provide a script customized to a customer’s exact experience. Or an AI-powered triage system could improve the laborious “If you are calling for this, press 1… for this, press 2…” process that few enjoy.
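
As a toy illustration of that knowledge-base pattern, here is a minimal sketch; it is a generic example of “consult a knowledge base, then customize,” and every name in it is hypothetical rather than drawn from the article.

    # Minimal keyword-overlap retrieval: pick the best-matching knowledge-base
    # entry and draft an agent script from it. A real system would use an LLM
    # and embeddings; the mechanics of "consult, then customize" are the same.
    KNOWLEDGE_BASE = [
        {"topic": "billing dispute", "guidance": "Verify the charge, then offer a review within 5 business days."},
        {"topic": "service outage", "guidance": "Confirm the outage region and share the current restoration estimate."},
    ]

    def draft_script(customer_issue: str) -> str:
        words = set(customer_issue.lower().split())
        best = max(KNOWLEDGE_BASE, key=lambda e: len(words & set(e["topic"].split())))
        return f"I can help with your {best['topic']}. {best['guidance']}"

    print(draft_script("I was billed twice and want to dispute the charge"))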

But the same technologies that could make a tedious job more efficient, or an interface more intuitive, could be deployed in other ways to trick or inconvenience people. One can imagine (and indeed some likely don’t have to imagine) robocalls catering to one’s profession, age and location — the kind of tailored scams that took time to craft before but can now be automated.

It’s an emerging threat, and the FCC is ostensibly the cop on the beat; while it has hit robocallers with record fines before (though these are not always collected), it needs to stay ahead of the game, and this inquiry is intended to help it do that.

Specifically, Rosenworcel said that the effort would look at:

  • How AI technologies fit into the Commission’s statutory responsibilities under the Telephone Consumer Protection Act (TCPA);
  • If and when future AI technologies fall under the TCPA;
  • How AI impacts existing regulatory frameworks and future policy formulation;
  • If the Commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources; and,
  • What next steps, if any, are necessary to advance this inquiry.

If it sounds a little woolly, just remember that these inquiry-type efforts are what the agency and others like it rely on when performing actual rulemaking and justifying themselves in court.


SEC Exam Priorities Target AML

Courtesy of Brian N. Kearney, Ballard Spahr, Money Laundering News

Priorities Echo Prior Alerts and Enforcement Actions

The SEC’s Division of Examinations (the “Division”) released on October 16 a report on its “Examination Priorities” (the “Report”) for fiscal year 2024.  This release occurred earlier than in prior years, which the Report’s prefatory message characterizes as an example of the Division’s “intention to provide more transparency” and “to move forward together with investors and industry to promote compliance.”

The Report

The Report highlights four major areas of focus for the Division’s examinations in the coming year, which it terms “risk areas impacting various market participants”:

  1. Anti-money laundering (“AML”);
  2. Information security and “operational resilience”;
  3. Crypto and emerging financial technologies (“fintech”); and
  4. Regulation systems compliance and integrity (“SCI”).

As to AML, the Report first rehearses the requirement of the Bank Secrecy Act (“BSA”) for broker-dealers: namely, that they establish AML programs tailored to their unique risk profile – their location, size, customer base, menu of products and services, and method of delivery of those products and services. The Report further notes that such AML programs must be reasonably designed to achieve compliance with the BSA and related regulations, must undergo independent testing of their viability, and must include customer due diligence procedures and ongoing transaction monitoring – including, where appropriate, filing of Suspicious Activity Reports (“SARs”) with the Financial Crimes Enforcement Network (“FinCEN”).  Although the Report also references “certain registered investment companies,” investment advisers as a group are not (yet) subject to the BSA.

With that general foundation, the Report enunciates the Division’s 2024 priorities in examination of AML programs:

  • Whether the program is appropriately tailored to a firm’s business model and associated AML risks;
  • Whether the firm conducts independent testing of its AML program;
  • Whether the firm has an adequate customer identification program (which includes identifying the beneficial owners of legal entities);
  • Whether the firm meets its SAR filing obligations;
  • If applicable, whether the firm has adequate policies and procedures regarding oversight of financial intermediaries; and
  • Whether the firm ensures compliance with Office of Foreign Assets Control (“OFAC”) sanctions.

Prelude

While the emphasis is important, nothing here is actually new.  The SEC published an alert on July 31, 2023, outlining deficiencies the Division has observed in broker-dealers’ compliance with AML requirements.  Consistent with the Report, the alert focuses on deficiencies the Division has observed with regard to independent testing of broker-dealers’ AML programs, personnel training, and identification and verification of customers and their beneficial owners.

Read more

Oct. 20, 2023: Digital Finance & Privacy Articles


Digital Sovereignty is About the Fight to Control Data in the AI Era

How data privacy concerns drive users to seek technological countermeasures to reshape digital dynamics.

Courtesy of Brian Jackson, Spiceworks

In the last week of September, the News Media Alliance blitzed Capitol Hill to lobby for updated copyright protections for the AI era.

The alliance represents 2,000 publishers, from Vox Media to the smaller regional papers in the U.S., and it’s taking the position that any unlicensed use of content created by its members and journalists by generative AI companies equates to infringement of copyright, according to an Axios report. The media industry is coming to grips with the threat of its content being used to train AI algorithms that will disrupt their role as information providers. In short, if anyone can simply ask ChatGPT for details about current or historical events, why would they bother searching for it on a news publisher’s site? Without website traffic, the news media’s business model would fall apart.

It’s just the latest example of creators banding together to push back against AI companies’ approach to training large language models (LLMs), with training sets composed of massive amounts of data scraped from the web and other sources. Authors George R.R. Martin and John Grisham are plaintiffs in one lawsuit, and comedian Sarah Silverman is involved in another. Software developers have a class action lawsuit targeted at GitHub Copilot, a coding assistant trained on the open-source code stored on the site. Getty Images, visual artists, and countless others wrapped up in class action lawsuits are also joining the fray. The battleground for the future of copyright will play out in courts over the next several months, with massive repercussions for how AI companies are allowed to build their algorithms in the future.

The lingering question about whether AI companies violate copyright is casting a pall over an otherwise hot market. AI-focused firms are valued at billions of dollars and draw huge interest from businesses and consumers. Microsoft took a step to keep the legal challenges from scaring off customers by making its Copilot Copyright Commitment, promising that Microsoft will defend its commercial customers if a third party sues them for copyright infringement related to the use of Microsoft Copilot, an Office 365 AI feature that’s powered by OpenAI’s GPT-4 algorithm. Read more


OPINION: How Clear and Effective Crypto Regulations are Born

Courtesy of Zac Townsend, Coindesk

While the past year has seen numerous setbacks for crypto regulatory efforts, the industry’s collective labor will soon bear fruit, Meanwhile CEO Zac Townsend writes. The development of blockchain-based digital assets, and the multi-billion dollar market that has formed around them, has proved to be a tremendous challenge for global legislators and regulators alike.

The digital economy is continuing to surge forward — making it an urgent necessity for a proper legal framework to be created, one that strikes a balance between fostering innovation and maintaining the integrity of our financial system.

The key to achieving balanced rules and regulations lies in good faith actors following the rules where they are clear and pushing for clarity where they are opaque, all while ensuring that both industry participants and regulators alike work together to create a robust framework that promotes innovation and safeguards consumers.

It may seem that the last few years have been anything but productive in fulfilling this mission, but nonetheless, progress is being made. Well-intentioned actors within the crypto industry have worked tirelessly to comply with the legitimately outlined rules while also advocating for the much-needed clarity in areas where none exist. Their efforts, however, have been historically met with a degree of shrouded hostility from regulatory bodies, particularly the U.S. Securities and Exchange Commission (SEC).

The SEC has time and time again argued that there is no ambiguity in the rules, and that the existing laws are effective and clear when applied to digital assets. In the words of SEC Chairman Gary Gensler, “Some in the crypto industry have called for greater ‘guidance’ with respect to crypto tokens. For the past five years, though, the commission has spoken with a pretty clear voice here.”

However, for the industry writ large, including public companies like Coinbase that have done their best to comply with the aforementioned existing rules, the only thing that is clear is that the current framework falls flat in protecting investors and companies alike. Read more


Maine Advocates Call for Strict Digital Privacy Protections, But Businesses Object

A legislative committee held a public hearing Tuesday on proposals to restrict the consumer information that technology companies can collect and sell.

Courtesy of Randy Billings, Portland Press Herald

Business groups argued against passage of a statewide digital privacy law during a public hearing Tuesday, saying the proposed limits on the amount of sensitive information they can collect and sell could lead to costly lawsuits and end popular customer loyalty programs.

Maine’s attorney general and other supporters said the law is needed to prevent companies from using and selling personal information without consent. Planned Parenthood of Northern New England said the protection is needed for people from states that don’t allow abortions or gender-affirming care who come to Maine to seek health care they need.

Members of the Legislature’s Judiciary Committee, which is meeting in between legislative sessions, did not make any decisions about how to move forward following Tuesday’s public hearing. The sponsors of competing bills said they would continue to work toward a compromise that could be supported by businesses and consumers.

Corporations such as Google, Meta and Amazon collect vast amounts of digital information about people based on their online shopping, viewing and reading habits. Data brokers collect information that is not otherwise public and sell it to companies that can use the information to sell products or change consumers’ behavior.

Michael Kebede, policy counsel for the ACLU of Maine, said technology companies can use the data to influence people “really in any way the highest bidder wants to modify our behavior.”

“The touchstone of behavior modifications depends on amassing as much data as possible from everything that’s connected to the internet,” Kebede said. “That means companies that are not interested in behavioral modification or data gathering … are massively incentivized to share our data behind our backs so larger, better-resourced firms can modify our behavior.” Read more


California Governor Approves Strict Crypto Regulatory Framework For 2025

The legislation — known as the Digital Financial Assets Law — will mandate individuals and businesses engaged in digital asset activities to obtain a regulatory license, similar to banks and money transmitters.

Courtesy of Assad Jafri, CryptoSlate


California Governor Gavin Newsom has given the green light to a new cryptocurrency regulation bill that aims to establish a stricter regulatory framework for crypto businesses — set to take effect in July 2025.

The legislation — known as the Digital Financial Assets Law — will mandate individuals and businesses engaged in digital asset activities to obtain a Department of Financial Protection and Innovation (DFPI) license if they want to continue operating in California.

Digital Financial Assets Law

The new regulation builds upon the state’s existing money transmission laws, which currently prohibit banking and transfer services from operating without a valid license issued by the DFPI commissioner.

The Digital Financial Assets Law introduces additional measures by empowering the DFPI to impose rigorous audit requirements on cryptocurrency firms and obliging them to maintain comprehensive financial records.

Specifically, the bill stipulates that licensees must maintain records for a period of at least five years following the date of any activity. These records must include a detailed general ledger updated at least monthly, encompassing all assets, liabilities, capital, income, and expenses of the licensee.

Failure to adhere to these requirements will result in enforcement measures against non-compliant firms.

Newsom shifts stance amid evolving regulatory landscape

Approval of the crypto regulation bill marks a significant shift from Governor Newsom’s previous perspective on the matter. Read more

Oct. 13, 2023: Digital Finance & Privacy Articles


Generative Finance, or GeFi for Short

We’ve had DeFi, CeFi, ReFi and more, but now it’s time for GeFi.

Courtesy of Chris Skinner, TheFinanser.com

If those phrases confuse you, we have Decentralised Finance versus Centralised Finance; we have Regenerative Finance, and now we have Generative Finance, or GeFi for short.

What is GeFi? It’s the integration of artificial intelligence (AI) into finance. This all started with ChatGPT which, just to remind you, stands for Chat Generative Pre-trained Transformer (GPT). GPT is a large language model (LLM) framework for generative artificial intelligence.

Wow. I’m tired already with so many TLAs (Three Letter Acronyms) although, just to point this out, I’ve only used two TLAs (GPT and LLM).

Either way, my test is always to go back to the person on the Clapham omnibus. Do you remember that test? The person on the Clapham omnibus is used by the courts in English law to decide whether a party has acted as a reasonable person would. So, let’s imagine this person on a bus in London and ask them if they understand any of the above. The answer is probably no. Do they need to? The answer is also no, as the idea is to offer a service to the person on the Clapham omnibus that makes their life better and easier.

The theme of GeFi is that you can integrate intelligence into everything you do with money. Today, we have got part of the way there. Most challenger banks offer integration of transactions with Google Maps for example, so you can see where you were when you bought that cappuccino. Tomorrow, it will go far further.

For example, you are travelling. Lots of foreign exchange transactions. Your GeFi app tells you that you are being irresponsible and could have saved $10 if you had paid in local rather than your home country’s currency. Imagine how this could change things? Read more
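
The arithmetic behind that hypothetical $10 nudge is easy to sketch. The numbers below are invented for illustration: the terminal’s dynamic currency conversion (DCC) rate carries a markup over the card network’s rate, and paying in the local currency avoids it.

    def dcc_savings(price_local: float, network_rate: float, dcc_rate: float) -> float:
        """Savings from paying in the local currency instead of accepting the
        terminal's dynamic currency conversion at a marked-up rate."""
        paid_with_dcc = price_local * dcc_rate      # terminal converts for you
        paid_in_local = price_local * network_rate  # card network converts later
        return paid_with_dcc - paid_in_local

    # e.g., a 500-unit local purchase, network rate 1.06, DCC rate 1.08
    print(f"{dcc_savings(500, 1.06, 1.08):.2f}")    # -> 10.00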


Data Privacy Law Seen as Needed Precursor to AI Regulation

The U.S. effort lags Europe in the transparency of companies’ personal data use

Courtesy of Gopal Ratnam, Roll Call

While artificial intelligence appears to be a shiny new bauble full of promises and perils, lawmakers in both parties acknowledge that they must first resolve a less trendy but more fundamental problem: data privacy and protection.

With dozens of hearings on data privacy held in the past five years, lawmakers in both chambers have proposed several bills, but Congress has enacted no federal standard as dickering over state-preemption has stymied any advances.

“I agree that data privacy is going to be the foundation of that trust that everyone believes” is essential to widespread adoption of AI systems, Sen. John Hickenlooper, D-Colo., said in an interview. “We have got to know who owns what data and that the use of data is not harmful.”

Hickenlooper, chairman of the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security, said full committee Chair Maria Cantwell, D-Wash., “feels the same commitment to making sure that at some point, perhaps not this year, but with a sense of urgency, we get a data privacy bill because that is going to underpin so much of what’s going to happen in terms of creating trust in AI.”

A spokeswoman for the Senate Commerce Committee didn’t respond to questions about Cantwell’s plans. Rep. Cathy McMorris Rodgers, R-Wash., who chairs the House Energy and Commerce Committee, echoed those sentiments when she addressed the fifth annual Data Privacy Conference USA in Washington, D.C., last week.

“As CEOs travel to Washington, D.C., to discuss issues like artificial intelligence, I worry lawmakers might lose focus on what should be the foundation for any AI efforts, which is establishing comprehensive protections on the collection, transfer and the storage of our data,” Rodgers told the gathering.

Sean Kelly, a spokesman for Rodgers, said she’s working with lawmakers to stress the importance of data privacy. Read more


Opinion: Privacy Remains Sticking Point in America’s Ongoing CBDC Debate

A CBDC in the US could erode financial privacy and “upend the commercial banking system,” witnesses argued at a Thursday hearing

Courtesy of Ben Strack, Blockworks

Debate among members of Congress — as well as financial and law professionals — about a central bank digital currency continued just days after an anti-CBDC bill was introduced.

Some of the witnesses testifying before the House Subcommittee on Digital Assets, Financial Technology and Inclusion Thursday argued that a CBDC could erode privacy and upend the commercial banking system. Another said it offers a chance to bolster financial security on public rails.

A wrongly-structured system of money is “perhaps the biggest existential threat to Western civilization,” US Rep. Warren Davidson, R-Ohio, said during the hearing. But Rep. Stephen Lynch, D-Mass., argued during the session that a government-issued digital dollar could be designed to promote financial inclusion and protect privacy while streamlining payments.

Lynch noted that there has been “fear mongering” — fueled in part by the crypto industry — about a CBDC being weaponized as a tool for government surveillance or control. Those narratives can shut down discussions, Lynch warned, as other countries make progress toward potentially implementing such a digital currency.

China’s digital yuan pilot is ongoing while Russia is experimenting with a digital ruble. The European Central Bank has planned a digital euro pilot that could lead to a possible launch in 2028. Although concerns about data privacy and government surveillance are real, Lynch said, a CBDC can be designed in a way to protect personal data while also including features to counter money laundering and terrorist financing.

“It is counterintuitive that my colleagues should be raising concerns about data privacy while thousands of private companies — domestic and foreign — are surveilling, aggregating and selling consumer data each and every day,” the Massachusetts Democrat added. “As policymakers, we should be asking questions about how a digital dollar could be designed to maximize privacy and prevent exploitation of personal data.” Read more


Fiserv and Plaid’s Landmark Collaboration: Transforming Secure Data Sharing in Finance

Courtesy of FinTech Global

Fiserv, a global juggernaut in payments and financial services technology, has teamed up with Plaid, a pioneering data network steering the digital financial era.

Addressing the escalating consumer demand to access their financial data with flexibility, this alliance promises secure and dependable data sharing through application programming interfaces (APIs).

Fiserv, a Fortune 500 entity, is committed to reshaping the way money and information mobilise the world. Specialising in payments, financial technology, and a plethora of other services, Fiserv stands tall as a beacon of innovation in digital banking, e-commerce, and more.

On the flip side, Plaid’s global data network underpins tools that millions trust for a sound financial life. By simplifying payments, transforming lending, and spearheading the fight against fraud, Plaid’s influence stretches across FinTech giants, Fortune 500 firms, and major banks.

In this groundbreaking collaboration, customers banking with Fiserv’s extensive network of nearly 3,000 bank and credit union clients can seamlessly connect to over 8,000 apps and services on the Plaid network via AllData Connect from Fiserv. This venture is unmatched in its magnitude, broadening the horizon for direct data sharing between financial institutions and their associated third parties.

AllData Connect from Fiserv, an integral part of the AllData® Aggregation suite, empowers credential-free data sharing. It ensures consumers can willingly share financial data with third-party apps without handing over their login details. By validating consumers directly with their banks or credit unions, AllData Connect issues tokens that third parties use to access and refresh consumer data. Beyond Plaid, Fiserv also collaborates with Akoya, Finicity, and MX to facilitate data sharing.
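
In outline, the token pattern described there is simple. The sketch below is a hedged illustration of the general flow; none of the names or calls come from Fiserv or Plaid. The consumer authenticates directly with the bank, the bank mints a token, and the third party only ever sees the token.

    import secrets

    TOKENS: dict[str, str] = {}  # token -> account id (the bank's record of grants)

    def bank_verify(account_id: str, password: str) -> bool:
        return True  # stand-in for the institution's real authentication

    def authorize(account_id: str, password: str) -> str:
        """Consumer proves identity directly to the bank; the bank mints a token."""
        if not bank_verify(account_id, password):   # credentials stay with the bank
            raise PermissionError("authentication failed")
        token = secrets.token_urlsafe(32)
        TOKENS[token] = account_id
        return token                                # only this reaches the third party

    def fetch_data(token: str) -> dict:
        """Third-party app presents the token, never the consumer's password."""
        account_id = TOKENS[token]                  # KeyError if unknown or revoked
        return {"account": account_id, "balance": 123.45}

    token = authorize("acct-001", "correct horse battery staple")
    print(fetch_data(token))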

Fiserv President of Digital Payments Matt Wilcox said, “Our partnership with Plaid allows banks and credit unions to empower consumers to access their financial information beyond the financial institution, while maintaining their trusted role at the center of people’s financial lives. By facilitating access to a broad range of capabilities and experiences through third-party apps and services we are charting a course towards an open finance ecosystem that prioritizes data privacy, consumer access, and choice.”

Aly Yarris, Financial Access Partnerships at Plaid, remarked, “Financial institutions regardless of size, location, or capital should be able to power these digital experiences for their consumers via APIs. We’re proud to partner with Fiserv to bring secure, reliable API connectivity to thousands of financial institutions on behalf of their many customers.”

Oct. 6, 2023: Digital Finance & Privacy Articles


Joint Statement on the U.S.-UK Financial Regulatory Working Group

The eighth official meeting of the U.S.-UK Financial Regulatory Working Group took place in Washington, D.C.

Officials and senior staff from HM Treasury and the U.S. Department of the Treasury were joined by representatives from independent regulatory agencies, including the Bank of England, Financial Conduct Authority, Board of Governors of the Federal Reserve System, Commodity Futures Trading Commission, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and Securities and Exchange Commission. Agency participation varied across themes with participants expressing views on issues in their respective areas of responsibility.

The Working Group meeting focused on several key themes, including:

  1. economic and financial stability outlook
  2. international banking issues
  3. developments in the non-bank sector
  4. climate-related financial risks and sustainable finance
  5. international engagement and
  6. digital finance

The meeting opened with a broad discussion of the UK and U.S. economic and financial stability outlook, with participants taking stock of current economic trends and market conditions and considering broader global factors.

On banking issues, participants offered an overview of developments in their domestic banking systems. The Working Group acknowledged work being led by the Financial Stability Board (FSB) and Basel Committee on Banking Supervision in response to events in March 2023, including the FSB’s forthcoming report on preliminary lessons learned in resolution. Discussions were held on each jurisdiction’s banking regulation, including resolution-related updates, with participants discussing the importance of implementing reforms consistent with Basel III, and of ongoing dialogue among international partners when implementing these reforms. Read more


CTA Round-Up: FinCEN Proposes Extended CTA Filing Deadline, Revised Reporting Form, and Privacy Act Exemption; Expands CTA FAQs; and Requests Comments on FinCEN Identifier

Courtesy of Peter D. Hardy, Scott Diamond, Siana Danch & Kaley Schafer, Money Laundering News

The Financial Crimes Enforcement Network (“FinCEN”) has issued a flurry of publications relating to the Corporate Transparency Act (“CTA”).  They pertain, in part, to a proposed extension of the filing deadline for certain reports of Beneficial Ownership Information (“BOI”); a proposed revision to the BOI reporting form; and expanded FAQs.  We discuss each in turn.

First, on September 28, 2023, FinCEN proposed a rule to extend the filing deadline for reports of BOI by entities created or registered on or after the CTA’s effective date of January 1, 2024.  Specifically, the proposed rule would extend the filing deadline from 30 to 90 days for both domestic and foreign entities created or registered on or after January 1, 2024 and before January 1, 2025.  Written comments to this proposed rule are due on October 30, 2023.  The deadline for pre-existing entities, which must file BOI reports by January 1, 2025, would not be affected by this proposed rule.

FinCEN has explained in the Federal Register that this extension “will provide new reporting companies additional time to obtain the information necessary to complete their initial BOI reports,” and “will give reporting companies more time to resolve questions that may arise in the process of completing their initial BOI reports.”  This proposed delay is not very surprising. FinCEN has been criticized regarding its slow roll-out of the CTA regulations and any guidance, given the looming January 1, 2024 effective date – including by members of Congress, who pressed FinCEN in June 2023 for more public clarity on exactly how the CTA will apply.
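
For concreteness, the proposed timeline reduces to a small date calculation. The sketch below rests on stated assumptions: the 30-day baseline for entities created on or after January 1, 2025 is implied by the proposal rather than spelled out above.

    from datetime import date, timedelta

    def boi_filing_deadline(created: date) -> date:
        """Initial BOI filing deadline under FinCEN's proposed rule, as summarized above."""
        if created < date(2024, 1, 1):
            return date(2025, 1, 1)                 # pre-existing entities
        if created < date(2025, 1, 1):
            return created + timedelta(days=90)     # proposed 90-day window
        return created + timedelta(days=30)         # baseline 30 days (assumption)

    print(boi_filing_deadline(date(2024, 3, 15)))   # -> 2024-06-13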

Second, on September 29, 2023, FinCEN proposed a revised BOI reporting form.  Written comments are due October 30, 2023.  As FinCEN acknowledges in the Federal Register, the previously proposed BOI reporting form was roundly criticized:

  • Notably, commentators were uniformly critical of the checkboxes that would allow a reporting company to indicate if certain information about a beneficial owner or company applicant is “unknown,” or if the reporting company is unable to identify information about a beneficial owner or company applicant….
  • A significant number of these comments expressed concern that the checkboxes would incorrectly suggest to filers that it is optional to report required information, and that reporting companies need not conduct a diligent inquiry to comply with their reporting obligations.  These commentators requested that FinCEN remove such checkboxes. Read more


The Digitization Cycle: Preparing for the Next Wave of Innovation

Courtesy of Yeojin Kim, Sekou Bermiss, and Subriena Persaud, Filene Research Institute

Executive Summary

While digital innovation drives competition and shapes the landscape in a profound manner within the consumer finance sector, credit unions are slow to digitize their activities and offerings. In this brief, we diagnose the state of credit unions’ digitization activities and recommend strategies credit unions can implement to brace for the next wave of innovation.

Credit Union Implications

Considering the cycle of digitization, credit unions now need to prepare for the next wave of digitization. To date, product digitization has been largely nonexistent within the credit union system. However, in the coming years, the innovation focus is likely to shift towards product digitization and product differentiation. Credit unions’ current vast effort focused on distribution digitization is a good starting point for future innovative activities. The recommendation is to progressively expand their distribution channels through Information and Communication Technologies (ICTs) to allow members to better interact with their financial instruments and to provide seamless access points for all members.

As a part of a series of reports on innovation within the credit union system, this report provides a foundational framework for credit unions to utilize in building their digitization strategy, and serves as a first step in their innovation journey. Download the full brief to assess where your credit union is in the digitization cycle and stay tuned for the additional innovation reports to help you continue to develop and enhance your digital innovation strategy. Read more (login required)


Crypto Regulation Does Not Matter in Some SBF Trial Allegations, DOJ Says

The government also pushed back against SBF’s argument it was “common” for crypto companies to misappropriate funds.

Courtesy of Katherine Ross, Blockworks

The trial is barely underway, and yet the US government and Sam Bankman-Fried’s legal team continue to file letter after letter in an unending back and forth.

Essentially, both the US government and Sam Bankman-Fried’s team are working to iron out what arguments may or may not be used in front of the jury once the selection has been finalized.

Even Judge Lewis Kaplan, who is overseeing the case, asked both the prosecution and defense if he should expect “another midnight filing” before he adjourned court on Tuesday.

In the prosecution’s most recent letter, the US government’s lawyers pushed back against arguments from Sam Bankman-Fried’s legal team. They disputed that the regulatory status of crypto in the US is relevant because the government alleged that SBF misappropriated FTX funds.

“The Government has not alleged that there are any laws or regulations prohibiting cryptocurrency exchanges from using funds originating in customer deposits for their own purposes — as is commonly done by financial institutions such as banks and digital payment platforms — or providing any relevant guidance as to what may be done with customer deposits,” Bankman-Fried’s team argued in a letter Monday. Read more