Washington Post, January 13, 2024. AI fears creep into finance, business and law. From Davos to Wall Street to the U.S. Supreme Court, AI risks are top of mind. In the past week, the Financial Industry Regulatory Authority (FINRA), the securities industry self-regulator, labeled AI an “emerging risk” and the World Economic Forum in Davos, Switzerland, released a survey that concluded AI-fueled misinformation poses the biggest near-term threat to the global economy. Those reports came just weeks after the Financial Stability Oversight Council in Washington said AI could result in “direct consumer harm” and Gary Gensler, the chairman of the Securities and Exchange Commission (SEC), warned publicly of the threat to financial stability from numerous investment firms relying on similar AI models to make buy and sell decisions. “AI may play a central role in the after-action reports of a future financial crisis,” he said in a December speech.
Bloomberg, January 11, 2024. AI Tool Helps Fix Faulty Trades Amid Shift to Faster Settlement Times. Technology identifies failed trades, offers possible solutions. Finance firms are grappling with move to shorter settlements. As the financial industry grapples with the shift to shorter settlement times, banks and broker-dealers will soon have a new artificial-intelligence tool to fix and prevent trades that go awry during the settlement process.
Bloomberg, January 11, 2024. OpenAI Signs Up 260 Businesses for Corporate Version of ChatGPT. Startup launched ChatGPT Enterprise barely four months ago. Product is part of push to boost revenue from AI chatbot.
NextGov/FCW, January 10, 2024. DOD’s AI adoption efforts are starting to pay off, Pentagon official says – The Department of Defense is working to embrace artificial intelligence technologies by empowering new offices and better harnessing the technical prowess of the commercial sector, although early use cases of the emerging tools are still primarily limited to more mundane tasks, a Pentagon official said during an event hosted by the Center for Strategic and International Studies on Tuesday. Michael C. Horowitz, deputy assistant secretary of defense for Force Development and Emerging Capabilities — or FDEC — said “we’ve launched initiatives designed to improve our [AI] adoption capacity, and I think we’re really starting to see them pay off.” Horowitz referenced a variety of Pentagon decisions that he said “have been disruptive, but are important for prioritizing AI and autonomy.” This includes efforts to streamline department oversight, as well as initiatives to better embrace the technical capabilities of the private sector. “We’ve created organizations like the Office of Strategic Capital to better work with companies, including potentially in the AI and autonomy space,” Horowitz said, while also pointing to the Pentagon’s April 2023 decision to require the Defense Innovation Unit to report directly to the Defense Secretary as an acknowledgment of the need to more effectively adopt commercial technologies.
Better Markets, January 10, 2024. SEC’s Approval of a Bitcoin Crypto ETF Is an Historic Mistake That Will Harm Investors, Markets, and Financial Stability WASHINGTON, D.C.— Dennis M. Kelleher, Co-founder, President, and CEO of Better Markets issued the following statement in response to the U.S. Securities and Exchange Commission’s (SEC) approval of a Bitcoin ETF: “With the flagrantly lawless crypto industry crashing and burning due to a mountain of arrests, criminal convictions, bankruptcies, lawsuits, scandals, massive losses, and millions of investor and customer victims, who would have thought that the SEC would come to its rescue by approving a trusted and familiar investment vehicle that will enable the mass marketing of a known worthless, volatile, and fraud-filled financial product to Main Street Americans. Bitcoin and crypto are worse than the chips you can buy at a casino because at least the casino is regulated; the spot Bitcoin market is not and that’s what the ETF is going to be pricing. There will be no SEC regulation or policing of Bitcoin. “Make no mistake about it: the SEC’s actions today are not required by or supported by the facts or law, including not required by the Grayscale court decision, as we spelled out in our comment letters here and here. The court in Grayscale merely said that the SEC failed to sufficiently explain its prior rejection. The SEC could — and should have — rejected the ETF applications and better detailed why it did so, importantly including a showing that “as much as 77.5% of the total trading volume on unregulated exchanges was due to wash trading” and as much as 95% of Bitcoin trading “could be due to wash trading.” “That’s why this approval of a Bitcoin ETF is an historic mistake that will not only unleash crypto predators on tens of millions of investors and retirees but will also likely undermine financial stability. It will be interpreted and spun as a de facto SEC – if not U.S. 
government – endorsement of crypto generally. The crypto industry’s marketing machinery can be expected to claim or imply that this decision legitimizes crypto as a safe and appropriate investment for hardworking Main Street retail investors and those saving for retirement. However, the SEC’s action today has changed nothing about this worthless financial product: Bitcoin and crypto still have no legitimate use; remain the preferred product of speculators, gamblers, predators, and criminals; and continue to be cesspools of fraud, manipulation, and criminality.
Wall Street Journal, January 9, 2024. FINRA flags concerns about AI, crypto, cybersecurity – The Financial Industry Regulatory Authority, Wall Street’s self-regulator, has classified artificial intelligence as an “emerging risk” in its annual regulatory report, saying that deploying AI in the industry could affect virtually all aspects of a broker-dealer’s operations. Finra said firms looking to deploy such technology should focus on the regulatory implications of doing so, particularly in areas such as anti-money-laundering, public communication and cybersecurity, among others, and in model risk management, including testing and data integrity and governance. The financial industry’s use of AI, either in-house or through third parties, can provide efficiencies that help member firms better serve customers, Ornella Bergeron, a senior vice president in member supervision with Finra’s risk monitoring program, said in a podcast accompanying the report. The use of AI comes with concerns over accuracy, privacy and bias, she added. Bergeron said Finra has seen member firms so far take a careful approach to using AI. “In risk monitoring, we have been actively engaging with firms to better understand their current initiatives, as well as their future plans related to generative AI and large language models,” said Bergeron. “And honestly, from what we’ve been hearing so far, firms are being very cautious and they’re being very thoughtful when considering the use of AI tools as well as before deploying these technologies.” AI poses an emerging risk to the financial industry, according to a report from the Financial Industry Regulatory Authority. FINRA urges firms using AI to focus on regulatory implications, especially regarding cybersecurity, money laundering and public communication. FINRA also says that cybersecurity remains a top concern and that it is monitoring compliance risks related to cryptocurrency.
Emerging Risk: Artificial Intelligence. Artificial Intelligence (AI) technology is rapidly evolving, most recently with the emergence of generative AI tools. As in other industries, broker-dealers and other financial services industry firms are exploring and deploying these technologies—either with in-house solutions or through third parties—to create operational efficiencies and better serve their customers. While these tools may present promising opportunities, their development has been marked by concerns about accuracy, privacy, bias and intellectual property, among others. As member firms continue to consider the use of new technologies, including generative AI tools, they should be mindful of how these technologies may implicate their regulatory obligations.
The use of AI tools could implicate virtually every aspect of a member firm’s regulatory obligations, and firms should consider these broad implications before deploying such technologies. Member firms may consider paying particular attention to the following areas when considering their use of AI:
- Anti-Money Laundering
- Books and Records
- Business Continuity
- Communications With the Public
- Customer Information Protection
- Model Risk Management (including testing, data integrity and governance, and explainability)
- SEC Regulation Best Interest
- Vendor Management
In addition to existing rules and regulatory obligations, member firms should be mindful that the regulatory landscape may change as this area continues to develop. For additional guidance, member firms may also consider:
FINRA’s FinTech Key Topics Page
2023 FINRA Annual Conference: Leveraging Regulatory Technology For Your Firm
FINRA Report: Artificial Intelligence (AI) in the Securities Industry (June 2020)
FINRA Unscripted Podcast: AI Virtual Conference: Industry Views on the State of Artificial Intelligence (November 24, 2020)
National Institute of Standards and Technology (NIST): Artificial Intelligence Risk Management Framework (AI RMF 1.0) (January 2023)
FT.com, January 7, 2024. Deloitte is rolling out a generative artificial intelligence chatbot to 75,000 employees across Europe and the Middle East to create PowerPoint presentations and write emails and code in an attempt to boost productivity. The Big Four accounting and consulting firm first launched the internal tool, called “PairD”, in the UK in October, in the latest sign of professional services firms rushing to adopt AI. However, in a sign that the fledgling technology remains a work in progress, staff were cautioned that the new tool may produce inaccurate information about people, places and facts. Users have been told to perform their own due diligence and quality assurance to validate the “accuracy and completeness” of the chatbot’s output before using it for work, said a person familiar with the matter. Unlike rival firms, which have teamed up with major market players such as ChatGPT maker OpenAI and Harvey, Deloitte’s AI chatbot was developed internally by the firm’s AI institute. The rollout highlights how the professional services industry is increasingly adopting generative AI to automate tasks…
IMF, January 14, 2024. Gen-AI: Artificial Intelligence and the Future of Work. Mauro Cazzaniga; Florence Jaumotte; Longji Li; Giovanni Melina; Augustus J Panton; Carlo Pizzinelli; Emma J Rockall; Marina Mendes Tavares. Artificial Intelligence (AI) has the potential to reshape the global economy, especially in the realm of labor markets. Advanced economies will experience the benefits and pitfalls of AI sooner than emerging market and developing economies, largely due to their employment structure focused on cognitive-intensive roles. There are some consistent patterns concerning AI exposure, with women and college-educated individuals more exposed but also better poised to reap AI benefits, and older workers potentially less able to adapt to the new technology. Labor income inequality may increase if the complementarity between AI and high-income workers is strong, while capital returns will increase wealth inequality. However, if productivity gains are sufficiently large, income levels could surge for most workers. In this evolving landscape, advanced economies and more developed emerging markets need to focus on upgrading regulatory frameworks and supporting labor reallocation, while safeguarding those adversely affected. Emerging market and developing economies should prioritize developing digital infrastructure and digital skills.
Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks. Xianzhi Li, Samuel Chan, Xiaodan Zhu, Yulong Pei, Zhiqiang Ma, Xiaomo Liu, Sameena Shah. October 10, 2023. The most recent large language models (LLMs) such as ChatGPT and GPT-4 have shown exceptional capabilities of generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation. How effective are such models in the financial domain? Understanding this basic question would have a significant impact on many downstream financial analytical tasks. In this paper, we conduct an empirical study and provide experimental evidence of their performance on a wide variety of financial text analytical problems, using eight benchmark datasets from five categories of tasks. These datasets span a range of financial topics and sub-domains such as stock market analysis, financial news, and investment strategies. We report both the strengths and limitations of ChatGPT and GPT-4 by comparing them with the state-of-the-art domain-specific fine-tuned models in finance, e.g., FinBert (Araci, 2019) and FinQANet (Chen et al., 2022a), as well as recently released domain-specific pretrained models such as BloombergGPT (Wu et al., 2023). We hope our study can help understand the capability of the existing models in the financial domain and facilitate further improvements.
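The comparison the paper describes reduces to scoring each model's predictions against gold labels with standard classification metrics such as accuracy and macro-F1. A minimal, self-contained sketch of that scoring step (the headlines' sentiment labels and the model predictions below are hypothetical, not drawn from the paper's benchmark datasets):

```python
# Score a model's labels against gold labels, as in LLM-vs-fine-tuned
# benchmark comparisons. All data here is illustrative.

def accuracy(gold, pred):
    """Fraction of examples where the prediction matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical sentiment labels for five financial headlines.
gold = ["positive", "negative", "neutral", "positive", "negative"]
llm_pred = ["positive", "neutral", "neutral", "positive", "negative"]

print(accuracy(gold, llm_pred))  # 0.8
print(macro_f1(gold, llm_pred))
```

The same two functions can score any of the paper's classification-style tasks; only the label sets and prediction sources change between ChatGPT, GPT-4, and fine-tuned baselines like FinBert.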
What does AI mean for cybersecurity? In 2023, the Aspen Institute’s US and Global Cybersecurity Groups led a working group with cybersecurity leaders in industry, government, and civil society. The goal was to think through what the uses and misuses of artificial intelligence (AI) could mean for cybersecurity in terms of both risks and benefits. We began in the future. Our discussions with the working group involved articulating and defining two extreme, yet realistic, scenarios: a “good place” in which these emerging technology tools disproportionately benefit cybersecurity defenders and a “bad place” in which the benefits instead disproportionately go to the attacker. After establishing both possible worlds, these experts then developed recommendations, “carrots and sticks,” for nudging society toward the “good place.” With the insights from these scenarios, Aspen Digital is excited to unveil Envisioning Cyber Futures with AI, a resource that defines the forces that can drive the world in either direction and, more importantly, that provides a roadmap for what both the private and public sector can do to get to the “good place.”
IMF, December 2023. The Macroeconomics of Artificial Intelligence. ERIK BRYNJOLFSSON, GABRIEL UNGER. Economists have a poor track record of predicting the future. And Silicon Valley repeatedly cycles through hope and disappointment over the next big technology. So a healthy skepticism toward any pronouncements about how artificial intelligence (AI) will change the economy is justified. Nonetheless, there are good reasons to take seriously the growing potential of AI—systems that exhibit intelligent behavior, such as learning, reasoning, and problem-solving—to transform the economy, especially given the astonishing technical advances of the past year. AI may affect society in a number of areas besides the economy—including national security, politics, and culture. But in this article, we focus on the implications of AI on three broad areas of macroeconomic interest: productivity growth, the labor market, and industrial concentration. AI does not have a predetermined future. It can develop in very different directions. The particular future that emerges will be a consequence of many things, including technological and policy decisions made today. For each area, we present a fork in the road: two paths that lead to very different futures for AI and the economy. In each case, the bad future is the path of least resistance. Getting to the better future will require good policy—including:
- Creative policy experiments
- A set of positive goals for what society wants from AI, not just negative outcomes to be avoided
- Understanding that the technological possibilities of AI are deeply uncertain and rapidly evolving and that society must be flexible in evolving with them
IMF, December 2023. Scenario Planning for an A(G)I Future. AI may be on a trajectory to surpass human intelligence; we should be prepared. Artificial intelligence (AI) is rapidly advancing, and the pace of progress has accelerated in recent years. ChatGPT, released in November 2022, surprised users by generating human-quality text and code, seamlessly translating languages, writing creative content, and answering questions in an informative way, all at a level previously unseen. Yet in the background, the foundation models that underlie generative AI have been advancing rapidly for more than a decade. The amount of computational resources (or, in short, “compute”) used to train the most cutting-edge AI systems has doubled every six months over the past decade. What today’s leading generative AI models can do was unthinkable just a few years ago: they can deliver significant productivity gains for the world’s premier consultants, for programmers, and even for economists (Korinek 2023).
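The compute trend the article cites compounds quickly: one doubling every six months is 20 doublings over a decade, a roughly million-fold increase in training compute. A quick back-of-the-envelope check:

```python
# Training compute for cutting-edge AI systems has doubled roughly every
# six months over the past decade (per the IMF article). That compounds to:
years = 10
doublings = years * 2           # one doubling per six-month period
growth_factor = 2 ** doublings  # total multiplicative increase
print(growth_factor)            # 1048576 -- roughly a million-fold
```

This million-fold scaling over ten years is the quantitative backdrop for the article's point that today's generative models were "unthinkable just a few years ago."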
IMF, December 2023. Rebalancing AI. The drive toward automation is perilous—to support shared prosperity, AI must complement workers, not replace them. Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more exuberant estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year. All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity. Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?
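The gap between the two cited forecasts is worth making concrete. Assuming the $7 trillion figure corresponds to a 7 percent boost on roughly $100 trillion of global output (an assumption for illustration; the article does not state the GDP base), the arithmetic looks like this:

```python
# Goldman Sachs scenario: a one-time 7% level boost to global GDP.
# Assumes a ~$100 trillion global GDP base (illustrative, not from the article).
world_gdp = 100e12
goldman_boost = 0.07 * world_gdp
print(goldman_boost / 1e12)      # 7.0 -- i.e. $7 trillion in additional output

# "Explosive growth" scenario: output rising 30% per year compounds to
# roughly a 13.8x increase over the same ten-year horizon.
explosive_factor = 1.30 ** 10
print(round(explosive_factor, 1))  # 13.8
```

The contrast — a 7 percent level gain versus nearly fourteen-fold growth — illustrates just how far apart the mainstream and "industry insider" scenarios the article describes really are.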
European Securities and Markets Authority, TRV article: Artificial intelligence in EU securities markets, ESMA50-164-6247. The use of artificial intelligence (AI) in finance is under increasing scrutiny from regulators and supervisors interested in examining its development and the related potential risks. This article contributes by providing an overview of AI use cases across securities markets in the EU and assessing the degree of adoption of AI-based tools. In asset management, an increasing number of managers leverage AI in investment strategies, risk management and compliance. However, only a few of them have developed a fully AI-based investment process and publicly promote the use of AI. In trading, AI models allow traders, brokers, and financial institutions to optimise trade execution and post-trade processes, reducing the market impact of large orders and minimising settlement failures. In other parts of the market, some credit rating agencies, proxy advisory firms and other financial market participants also use AI tools, mostly to enhance information sourcing and data analysis. Overall, although AI is increasingly adopted to support and optimise certain activities, this does not seem to be leading to a fast and disruptive overhaul of business processes. A widespread use of AI comes with risks. In particular, increased uptake may lead to the concentration of systems and models among a few ‘big players’. These circumstances warrant further attention and monitoring to continue ensuring that AI developments and the related potential risks are well understood and taken into account.