AI in Banking and Finance, June 15, 2024

This semi-monthly column highlights news, government reports, NGO/IGO papers, industry white papers, academic papers and speeches on AI’s fast-paced impact on the banking and finance sectors. Links are provided chronologically to the primary sources and, where available, to free alternate versions.


FedScoop, June 14, 2024. Financial regulators have ‘insufficiently’ addressed hedge funds’ use of AI, report says. SEC and CFTC oversight of how investment vehicles use AI to inform trading decisions is lacking and poses a serious risk to market stability, the majority staff of the Senate Homeland Security and Governmental Affairs Committee warned Friday. In a 45-page report, shared exclusively with FedScoop and prepared under HSGAC Chair Gary Peters, D-Mich., staffers found that the combination of recent regulatory actions and the current federal legal framework “insufficiently address the evolving uses of AI in the financial services sector, including by hedge funds in trading decisions.” “The report also finds that the risks of future impacts resulting from AI use in investment decisions go beyond potential harm to individual investors and could have an impact on wider financial stability,” the authors wrote. “Use of more complex approaches to AI in the financial sector, absent any uniform basic guiding principles and industry-wide rules, increases risks to both individual investors and markets.” The surging prevalence of AI across the financial services sector has been especially pronounced among hedge funds, investment vehicles responsible for more than $5 trillion in assets under management in the country.

Business Insider [unpaywalled], June 11, 2024 – Here’s what payment companies like Visa, Block, and PayPal are paying AI workers to combat fraud and improve customer experience. The payments industry, which facilitates transactions via credit cards and digital wallets, has long relied on large amounts of data to operate. In recent years, these companies have used this data to power all parts of the business, from fraud prevention to underwriting for buy-now-pay-later products. Now, generative AI is promising to harness that data in new ways. At Visa, one AI tool that detects fraud prevented $27 billion in fraudulent purchases in 2022, the company told Business Insider. Discover uses generative AI tools to arm its customer support staff with faster and more complete answers to customers’ questions, while other firms like Capital One have built up an AI research infrastructure that’s stacking up AI patents. In the payments industry, much of the promise comes from generative AI’s ability to quickly summarize trends and information. This can be useful for back-end operations, like assisting customer-service staff, while also enabling front-end applications that give customers insights into their spending or even allow them to make purchases entirely via a chatbot.

Fintech, June 11, 2024 – AI in Banking Presents Both Risks and Opportunities. Despite the promised benefits of improved customer support and personalization, the use of artificial intelligence (AI) in banking also introduces several disadvantages and risks, including data privacy and security issues as well as fraud risks, a new survey by Glassbox found. The study, which polled 1,000 US consumers aged 21+ in May 2024, found that 60% of respondents believe AI in banking presents equal parts benefit and risk. Notably, 47% of respondents identified security risks as their primary concern regarding AI in banking. Security is considered an extreme priority in digital banking by over half of the respondents, with 90% stating that the security of personal information is important or extremely important. This underscores the urgent need for banks to prioritize security and reliability in their digital services. There is also a clear demand for transparent and proactive communication about AI use and related security measures, with 85% of consumers expecting proactive communication from their banks. Moreover, more than half of the respondents indicated they would switch banks if they were victims of AI-related fraud.

Newswire, June 5, 2024. IBM Study: Banking and Financial Markets CEOs are betting on generative AI to stay competitive, yet workforce and culture challenges persist. New findings from the IBM (NYSE: IBM) Institute for Business Value revealed that banking and financial markets (BFM) CEOs are facing workforce and culture challenges as they act quickly to implement and scale generative AI across their organizations. The findings are part of an annual global cross-industry study that surveyed more than 3,000 CEOs from over 30 countries and 26 industries, including 297 BFM CEOs representing retail, corporate, commercial and investment banks and financial markets. The survey found that generative AI is perceived as the key to unlocking competitiveness: 57% of BFM CEOs surveyed stated that gaining a competitive advantage in the sector will depend on who has the most advanced generative AI. The findings also revealed that CEOs are navigating complex issues around culture in the era of AI. 59% of surveyed BFM CEOs stated that cultural change within a business is more important than overcoming technical challenges when becoming a data-driven business, with 65% also believing success with AI will depend more on people’s adoption than the technology itself.

Geekwire, June 4, 2024. The rise of AI at JPMorgan Chase — and how Jamie Dimon used a chatbot to prepare for Elon Musk. JPMorgan Chase CEO Jamie Dimon doesn’t need to be sold on the value of AI. The largest U.S. bank has been working on artificial intelligence for more than a decade. It employs more than 2,000 machine learning and AI experts and data scientists worldwide, with more than 400 AI use cases already deployed in areas such as marketing, fraud, and risk, as detailed in Dimon’s latest letter to shareholders. AI scours the $10 trillion that JPMorgan moves through the world’s financial systems every day, looking for signs of potential fraud. Voice recognition confirms the ID of customers when they call about their accounts. AI assists bankers and customer-service reps, and eventually, Dimon says, AI agents will solve customer problems. “AI is real,” Dimon said during a briefing with reporters at a hotel conference room in Seattle on Monday evening, after spending the day with JPMorgan tech leaders in meetings with Microsoft and Amazon. “I don’t know exactly the pace at which AI is going to change everything, but it will change a lot.”

American Banker, BankThink, June 4, 2024. Senior bank leaders must articulate a clear strategy for adopting AI

Reuters, June 5, 2024. Yellen To Warn of ‘Significant Risks’ From Use of AI in Finance – U.S. Treasury Secretary Janet Yellen will warn that the use of AI in finance could lower transaction costs but carries “significant risks,” according to excerpts from a speech to be delivered on Thursday. In the remarks to a Financial Stability Oversight Council and Brookings Institution AI conference, Yellen says AI-related risks have moved toward the top of the regulatory council’s agenda. “Specific vulnerabilities may arise from the complexity and opacity of AI models, inadequate risk management frameworks to account for AI risks and interconnections that emerge as many market participants rely on the same data and models,” Yellen says in the excerpts. She also notes that concentration among the vendors that develop AI models and provide data and cloud services may introduce risks that could amplify existing third-party service provider risks. “And insufficient or faulty data could also perpetuate or introduce new biases in financial decision-making,” according to Yellen. [NOTE: see the webcast of the 2024 Conference on Artificial Intelligence & Financial Stability, June 6-7, 2024]


Hartzog, Woodrow, Two AI Truths and a Lie (May 24, 2024). 26 Yale Journal of Law and Technology, Special Issue (forthcoming 2024). Available at SSRN. Industry will take everything it can in developing Artificial Intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good. Given these inevitabilities, lawmakers will need to change their usual approach to regulating technology. Procedural approaches requiring transparency and consent will not be enough. Merely regulating use of data ignores how information collection and the affordances of tools bestow and exercise power. A better approach involves duties, design rules, defaults, and dead ends. This layered approach will more squarely address dangerous extraction, harmful normalization, and adversarial self-dealing to better ensure that deployments of AI advance the public good.

NBER. The Simple Macroeconomics of AI. Daron Acemoglu. Working Paper 32487. DOI 10.3386/w32487. Issue date: May 2024. This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate their macroeconomic effects.
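The Hulten-style aggregation described in the abstract reduces to simple arithmetic: the aggregate TFP gain is roughly the (GDP-weighted) share of tasks affected by AI times the average task-level cost saving. A minimal sketch, where the input numbers are hypothetical placeholders chosen only so the product lands near the ~0.66% figure quoted above (the paper derives its own exposure and cost-saving estimates):

```python
def tfp_gain(task_share: float, avg_cost_saving: float) -> float:
    """Back-of-the-envelope aggregate TFP gain in the spirit of Hulten's
    theorem: the fraction of tasks impacted times the average task-level
    cost saving. Both inputs are expressed as fractions (0.05 = 5%)."""
    return task_share * avg_cost_saving

# Hypothetical placeholders: ~4.6% of tasks (GDP-weighted) exposed to AI,
# ~14.4% average cost saving on those tasks.
share = 0.046
saving = 0.144
print(f"Implied aggregate TFP gain: {tfp_gain(share, saving):.4%}")
```

The point of the sketch is that the macroeconomic estimate is only as good as the two micro inputs, which is why the paper's later correction for hard-to-learn tasks lowers the prediction.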

NBER. Artificial Intelligence and the Skill Premium. David E. Bloom, Klaus Prettner, Jamel Saadaoui & Mario Veruete. Working Paper 32430. DOI 10.3386/w32430. How will the emergence of ChatGPT and other forms of artificial intelligence (AI) affect the skill premium? To address this question, we propose a nested constant elasticity of substitution production function that distinguishes among three types of capital: traditional physical capital (machines, assembly lines), industrial robots, and AI. Following the literature, we assume that industrial robots predominantly substitute for low-skill workers, whereas AI mainly helps to perform the tasks of high-skill workers. We show that AI reduces the skill premium as long as it is more substitutable for high-skill workers than low-skill workers are for high-skill workers.
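One way to write down such a nested CES structure — an illustrative sketch with assumed notation, which may differ from the paper's exact nesting, share parameters, and treatment of traditional physical capital (omitted here for brevity):

```latex
% Low-skill labor L nested with industrial robots R (elasticity eps_l);
% high-skill labor H nested with AI capital A (elasticity eps_h);
% the two composites combined in an outer CES nest (elasticity sigma).
\[
\tilde{L} = \Big[\lambda\, L^{\frac{\varepsilon_{\ell}-1}{\varepsilon_{\ell}}}
  + (1-\lambda)\, R^{\frac{\varepsilon_{\ell}-1}{\varepsilon_{\ell}}}\Big]^{\frac{\varepsilon_{\ell}}{\varepsilon_{\ell}-1}},
\qquad
\tilde{H} = \Big[\mu\, H^{\frac{\varepsilon_{h}-1}{\varepsilon_{h}}}
  + (1-\mu)\, A^{\frac{\varepsilon_{h}-1}{\varepsilon_{h}}}\Big]^{\frac{\varepsilon_{h}}{\varepsilon_{h}-1}},
\]
\[
Y = \Big[\gamma\, \tilde{L}^{\frac{\sigma-1}{\sigma}}
  + (1-\gamma)\, \tilde{H}^{\frac{\sigma-1}{\sigma}}\Big]^{\frac{\sigma}{\sigma-1}},
\qquad
\text{skill premium} = \frac{w_H}{w_L} = \frac{\partial Y / \partial H}{\partial Y / \partial L}.
\]
```

In this assumed notation, the abstract's condition reads roughly as $\varepsilon_{h} > \sigma$: growth in AI capital $A$ lowers $w_H/w_L$ when AI is more substitutable for high-skill labor than low-skill labor is for high-skill labor.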


2024 Conference on Artificial Intelligence & Financial Stability. June 6-7, 2024. The Financial Stability Oversight Council (FSOC), in partnership with the Brookings Institution, will host a two-day conference on Artificial Intelligence (AI) and Financial Stability. AI in financial services has grown rapidly. Innovations in AI can offer many benefits, such as reducing costs and improving efficiencies, but they can also introduce or exacerbate risks to the financial system. This conference will be an opportunity for the public and private sectors to convene to discuss potential systemic risks posed by AI in financial services, to explore the balance between encouraging innovation and mitigating risks, and to share insights on effective oversight of AI-related risks to financial stability. The first day of the conference will be held at the U.S. Department of the Treasury on June 6, 2024, and the second day at the Brookings Institution on June 7, 2024. A live webcast of the conference is available to the public.

Posted in: AI in Banking and Finance, Cybercrime, Cybersecurity, Economy, Education