This semi-monthly column by Sabrina I. Pacifici highlights news, government reports, industry white papers, academic papers and speeches on the subject of AI’s fast-paced impact on the banking and finance sectors. The chronological links provided are to the primary sources, and as available, indicate links to alternate free versions. Each entry includes the publication name, date published, article title and abstract. Four highlights from this week: European Central Bank Is Experimenting With a New Tool: A.I.; UM expert testifies on the dangers of AI in banking; 80% of Large Enterprise Finance Teams Will Use Internal AI Platforms by 2026; and Five Use Cases for CFOs with Generative AI – Q&A with Alex Bant.
The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.
If you spend any time online, you probably have some idea that the digital ad industry is constantly collecting data about you, including a lot of personal information, and sorting you into specialized categories so you’re more likely to buy the things they advertise to you. But in a rare look at just how deep, and weird, the rabbit hole of targeted advertising gets, Investigative Data Journalist Jon Keegan and Visualizations Engineer Joel Eastwood of The Markup analyzed a database of 650,000 of these audience segments, newly unearthed on the website of Microsoft’s ad platform Xandr. The trove of data indicates that advertisers could also target people based on sensitive information such as being “heavy purchasers” of pregnancy test kits, having an interest in brain tumors, being prone to depression, visiting places of worship, or feeling “easily deflated” or that they “get a raw deal out of life.”
Nicole A. Cooke, Augusta Baker Endowed Chair and a Professor at the School of Library and Information Science at the University of South Carolina, identifies the significant and socially charged work of librarians who are defending the rights of readers and writers in the battles raging across the U.S. over censorship, book challenges and book bans. Cooke states, “As long as there have been book challenges, there have been those who defend intellectual freedom and the right to read freely. Librarians and library workers have long been crucial players in the defense of books and ideas.” At the 2023 annual American Library Association Conference, scholar Ibram X. Kendi praised library professionals and reminded them that “if you’re fighting book bans, if you’re fighting against censorship, then you are a freedom fighter.”
Iantha Haight writes that her library recently hosted a guest speaker, David Wingate, a professor in BYU’s computer science department who does research on large language models, for a faculty lunch and learn. The entire presentation was fascinating, but the most intriguing part for her and many of the law faculty in attendance was the idea that generative AI systems will become so good they will be able to replace human subjects in answering research surveys. How? Generative neural networks trained on huge amounts of data (terabytes and even petabytes) ingest enough information about people that they can answer survey questions as if they were members of the survey population.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: It’s Their Content, You’re Just Licensing it; Understanding the NIST Cybersecurity Framework; Here’s how Google Maps cracked down on fake contributions last year; and Clearview AI scraped 30 billion images from Facebook and gave them to cops.
Manhattan grand jury votes to indict Donald Trump, showing he, like all other presidents, is not an imperial king
Following news that a Manhattan grand jury had voted to indict Donald Trump, CNN’s John Miller announced on Thursday evening, March 30, 2023: “I am told by my sources that this is 34 counts of falsification of business records, which is probably a lot of charges involving each document, each thing that was submitted, as a separate count.” Prof. Shannon Bow O’Brien, a presidency scholar, takes on the concept of the imperial presidency: “Throughout history, many presidents have pushed the boundaries of power for their own personal preferences or political gain. However, Americans do have the right to push back and hold these leaders accountable to the country’s laws. Presidents have never been monarchs. If they ever act in that manner, I believe that the people have to remind them of who they are and whom they serve.”
This guide by Marcus P. Zillman is a selected list of free and fee-based (some require subscriptions) people-finding resources from a range of providers. A significant number of free sources on this subject are drawn from public records obtained by a group of companies that initially offer free information to establish your interest, after which a more extensive report requires a fee. It is important to note that there can be many errors in these data, including the inability to correctly de-duplicate individuals who share common names. Also note that each service targets a different mix of identifying data, such as: name, address, date of birth, phone numbers, email addresses, relatives, education, employment, criminal records, social media accounts and income. As we conduct research throughout the day, it is useful to employ both impromptu and planned searches about individuals that are referenced.
Professor Nicole Gillespie and Research Fellows Caitlin Curtis, Javad Pool and Steven Lockey discuss their new 17-country study, involving over 17,000 people, which reveals how much and in what ways we trust AI in the workplace, how we view the risks and benefits, and what is expected for AI to be trusted. They find that only one in two employees are willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted.
In a recent paper, Prof. Chantelle Gray coined the term “algopopulism” to describe algorithmically aided politics. The political content in our personal feeds does not merely represent the world and politics to us; it creates new, sometimes “alternative”, realities. It changes how we encounter and understand politics, and even how we understand reality itself.