This new bi-monthly column by Sabrina I. Pacifici highlights news, reports, government and industry documents, and academic papers on AI’s fast-paced impact on many facets of the global financial system. Links are provided chronologically to the primary sources and, where available, to alternate free versions. Each entry includes the publication name, date published, article title, abstract, and tags. Pacifici is also compiling a regularly updated list of actionable subject-matter resources at the end of each column.
Kevin Novack, digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the increasing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence, and even change, the behaviors of impressionable younger generations; Instagram’s role in enabling body shaming and the mocking of others; and, more recently, the overwhelming impact of ChatGPT and the fascination with, and growing use of, thousands of apps and services built on OpenAI. Novack asks: but can you trust it? He responds: probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user’s perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.
Is the implementation of generative AI simply a new flavor of outsourcing? How does this digital revolution reflect on our interpretation of the American Bar Association’s (ABA) ethical guidelines? How can we ensure that we maintain the sacrosanct standards of our profession as we step into this exciting future? Josh Kubicki, Business Designer, Entrepreneur, University of Richmond School of Law Professor, presents a starting point to explore potential ethics considerations surrounding the use of generative AI.
As a technology ethics educator and researcher, Casey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.
In the fourth article in his series, Jerry Lawson advises us on creating compelling presentations. If the audience is not understood, not engaged, and not brought into the conversation, he warns, the session usually dies on the vine. Asking the audience questions is one way to improve your training sessions.
Jordan Furlong writes that the legal profession is about to go through what manufacturing already has. In the next few years, legally trained generative AI will replace lawyer labour on a scale we’ve never seen before. An enormous amount of lawyer activity consists of researching, analyzing, writing, developing arguments, critiquing counter-claims, and drafting responses. A machine has now come along that does most of these things much faster than we do. Today, the machine needs lawyers to carefully review its efforts; within two years, Furlong doubts it will.
Why is poor legal writing so prevalent? Jerry Lawson identifies three key reasons: fear, time, and lack of skills, and charts a course to tackle the skills issue directly.
Jordan Furlong, Legal Sector Analyst and Forecaster, presents an engaging and actionable plan for figuring out how law firms are going to work in the future. Furlong states this will occupy countless partnership meetings, conference agendas, and consulting engagements all over the legal industry throughout the next several years. Don’t worry if you don’t have all the answers — nobody else does, either, he says. We’re all just getting started. What he suggests, though, is that figuring out what law firms are going to become requires first letting go of what they used to be. A good start toward accomplishing that would be to abandon the antiquated titles and categories into which we’ve been cramming law firm personnel for the last hundred years.
The premise of this article by COO and legal technologist Kenneth Jones is that individual capabilities and excellence (either legal or technical) standing alone are not enough to ensure long-term, sustainable success. No superstar technologist or lawyer is equipped to do it all, as there are too many specialties and functional roles which need to be filled. Rather, a better approach is to construct team-based, cross-functional units that offer greater operational efficiency while building in layers of redundancy that reduce the potential for surprises, errors, or disruption. This comprehensive and actionable guide validates deploying the cross-functional team approach across the enterprise.
Patrick J. McKenna is an internationally recognized author, lecturer, strategist and seasoned advisor to the leaders of premier law firms. McKenna’s deep dive into law firm strategic planning delivers a detailed guide on the major errors to circumvent to establish a winning competitive position going forward.