Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program, and Julie Bays, OBA Practice Management Advisor, who help attorneys use technology and other tools to manage their offices efficiently, recommend that now is a good time to experiment with specific AI-powered tools, and they suggest the best techniques for using them.
Legaltech Hub’s Nicola Shaver discusses why it is time to level-set about advanced AI: it can’t do everything. Or, more practically, a large language model can’t replace all of the other technology you already have. One of the main reasons is the importance of an interface and a built-out user experience (UX) that offers a journey through the system aligned with the way users actually work. There are other reasons a large language model (LLM) won’t replace all of your technology (one of which is that advanced AI is simply unnecessary for doing everything), but this article focuses on UX.
Jerry Lawson recommends the new book, Design Your Law Practice: Using Design Thinking To Get Next Level Results, to any law firm or lawyer interested in innovation that will make their practice more profitable and attract more clients.
Will Generative AI destroy law firms? Jordan Furlong argues this may only occur if lawyers are too fixed in their ways to see the possibilities that lie beyond who we’ve always been and what we’ve always done.
Elizabeth Southerland writes that Jerry Lawson’s essay Plain English for Lawyers: The Way to a C-Level Executive’s Heart has some good ideas about the best ways to communicate with senior executives. However, there is a key imperative it does not address: the purpose of an executive summary is to boil the message down to a few sentences that tell the leader what they want to know.
This new bi-monthly column by Sabrina I. Pacifici highlights news, reports, government and industry documents, and academic papers on AI’s fast-paced impact on many facets of the global financial system. The chronological links are to the primary sources and, where available, to free alternate versions. Each entry includes the publication name, date published, article title, abstract, and tags. Pacifici is also compiling a list of actionable subject matter resources at the end of each column that will be updated regularly.
Kevin Novack, a digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the increasing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence and even change the behaviors of impressionable next gens, Instagram’s role in enabling body shaming and the mocking of others, and, more recently, the overwhelming impact of ChatGPT and the fascination with and growing use of thousands of apps and services built on OpenAI. Novack asks: but can you trust it? He responds: probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.
Is the implementation of generative AI simply a new flavor of outsourcing? How does this digital revolution reflect on our interpretation of the American Bar Association’s (ABA) ethical guidelines? How can we ensure that we maintain the sacrosanct standards of our profession as we step into this exciting future? Josh Kubicki, Business Designer, Entrepreneur, University of Richmond School of Law Professor, presents a starting point to explore potential ethics considerations surrounding the use of generative AI.
As a technology ethics educator and researcher, Carey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.
In the fourth article in his series on presentations, Jerry Lawson offers advice on creating compelling presentations: if the audience is not understood, not engaged, and not brought into the conversation, the session usually dies on the vine. Asking the audience questions is one way to improve your training sessions.