LLRX June 2023 Issue

Articles and Columns for June 2023

  • Brevity is the Soul of Profit: What Lawyers Need to Know About Executive Summaries – Elizabeth Southerland writes that Jerry Lawson’s essay Plain English for Lawyers: The Way to a C-Level Executive’s Heart has some good ideas about the best ways to communicate with senior executives. However, there is a key imperative that is not addressed: the purpose of an executive summary is to boil a matter down to a few sentences that tell the leader what they want to know.
  • AI and the Financial System – June 28, 2023 – This new monthly column by Sabrina I. Pacifici highlights news, reports, government and industry documents, and academic papers on the subject of AI’s fast paced impact on many facets of the global financial system. The chronological links provided are to the primary sources, and as available, indicate links to alternate free versions. Each entry includes the publication name, date published, article title, abstract and tags. Pacifici is also compiling a list of actionable subject matter resources at the end of each column that will be updated regularly.
  • Is A Crypto Collapse Inevitable? – Jerry Lawson and Elizabeth Southerland identify technical reasons why the crypto bubble is bursting, including the fact that theoretically unbreakable encryption schemes like those underpinning blockchain have proven to be less than impermeable in practice, as users of Coinbase discovered upon losing fortunes. Attackers go after the weakest link in the chain, usually the way in which the algorithm is implemented. Perhaps more ominously, the emergence of a new class of devices called quantum computers threatens to eat the algorithms that underlie crypto for lunch.
  • The Digital Psychology of Persuasion – Kevin Novack, digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the increasing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence and even change the behaviors of impressionable next gens, Instagram’s role in enabling body shaming and the mocking of others, and more recently the overwhelming impact of ChatGPT and the fascination with and growing use of thousands of apps and services built on OpenAI. Novack asks – but can you trust it? And responds – probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.
  • Conspiracy theories aren’t on the rise – we need to stop panicking – Several polls in the past couple of years (including from Ipsos, YouGov and most recently Savanta on behalf of Kings College Policy Institute and the BBC) have examined the kinds of conspiratorial beliefs people hold. The findings have led to a lot of concern and discussion. There are several revealing aspects of these polls. Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School, is interested in what claims are considered conspiratorial and how these are phrased. But she is also interested in the widespread belief that conspiracy theories are apparently on the rise, thanks to the internet and social media. Is this true, and how concerned should we really be about conspiracy theories?
  • Iantha Haight writes that her library recently hosted a guest speaker, David Wingate, a professor in BYU’s computer science department who does research on large language models, for a faculty lunch and learn. The entire presentation was fascinating, but the most intriguing part for her and many of the law faculty in attendance was the idea that generative AI systems will become so good they will be able to replace human subjects in answering research surveys. How? Generative neural networks trained on huge amounts of data—terabytes and even petabytes—ingest enough information about people that they can answer survey questions as if they were members of the survey population.
  • Examining The Inner Workings of Law Firm Leadership – We all know that the law firm leader’s job is unlike any other in the firm. One way of envisioning its multiple responsibilities is to map them by the constituencies one must address. Today’s leader must be an ambassador to the outside world as well as chief cheerleader, challenger of the status quo, and an implementer of their partners’ collective aspirations within the firm. Patrick J. McKenna, McKenna Associates Inc., together with Michael Rynowecer, President of The BTI Consulting Group, distributed a survey containing over 35 questions to a group of some 250 law firm leaders. Their data uncovered some surprising and insightful findings. For example, they found that for 56% of today’s firm leaders, irrespective of firm size, this is a full-time commitment, with a total of 81% reporting that they “perceive the challenges that they face as being far more complex than a few years back” and 13% even freely admitting the challenges were “almost overwhelming at times.” Perhaps surprising to some, it is not an exaggeration to state that we have leaders of America’s largest firms managing hundred-million to billion dollar businesses, all too often thrust into the role with 67% of them having no clear job description and one in five reporting it to be a “pretty much sink or swim” exercise. Ironically, having served as an office managing partner, or even as a practice or industry group leader, seemed to have minimal value in preparing one for taking on the responsibility of leading the entire firm.
  • Suicide Hotlines Promise Anonymity. Dozens of Their Websites Send Sensitive Data to Facebook – Reporter Colin Lecher and Data Journalist Jon Kreeger discuss how websites for mental health crisis resources across the country—which promise anonymity for visitors, many of whom are at a desperate moment in their lives—have been quietly sending sensitive visitor data to Facebook. Dozens of websites tied to the national mental health crisis 988 hotline, which launched last summer, transmit the data through a tool called the Meta Pixel, according to testing conducted by The Markup. That data often included signals to Facebook when visitors attempted to dial for mental health emergencies by tapping on dedicated call buttons on the websites.
  • How AI could take over elections and undermine democracy – Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School, and Lawrence Lessig, Professor of Law and Leadership, Harvard University, pose the question: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?” Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters. Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election. While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.
  • Pete Recommends – Weekly highlights on cyber security issues, June 30, 2023 – Four highlights from this week: Amazon delays virtual care service’s unveiling after senators raised privacy concern; FBI launches national ‘swatting’ database amid rising incidents; How Do Some Companies Get Compromised Again and Again?; and Does ChatGPT Save My Data? OpenAI’s Privacy Policy Explained.
  • Pete Recommends – Weekly highlights on cyber security issues, June 24, 2023 – Four highlights from this week: How Your New Car Tracks You; Democratic senators concerned Amazon health platform ‘harvesting consumer health data from patients’; How generative AI is creating new classes of security threats; and US cyber ambassador says China can win on AI, cloud.
  • Pete Recommends – Weekly highlights on cyber security issues, June 17, 2023 – Four highlights from this week: The dos and don’ts of using home security cameras that see everything; Social Engineering And The Disinformation Threat In Cybersecurity; The US Is Openly Stockpiling Dirt on All Its Citizens; and The Expert’s Guide to Online Privacy in 2023.
  • Pete Recommends – Weekly highlights on cyber security issues, June 11, 2023 – Four highlights from this week: Top 5 Most Common Text Message Scams & How to Avoid Them; From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You; Service Rents Email Addresses for Account Signups; and FTC Slams Amazon with $30.8M Fine for Privacy Violations Involving Alexa and Ring.
  • Pete Recommends – Weekly highlights on cyber security issues, June 3, 2023 – Four highlights from this week: New EPIC Report Sheds Light on Generative A.I. Harms; Don’t Store Your Money on Venmo, U.S. Govt Agency Warns (or PayPal); FTC Says Ring Employees Illegally Surveilled Customers, Failed to Stop Hackers from Taking Control of Users’ Cameras; and Twitter withdraws from EU’s disinformation code as bloc warns against hiding from liability.

LLRX.com® – the free web journal on law, technology, knowledge discovery and research for Librarians, Lawyers, Researchers, Academics, and Journalists. Founded in 1996.

Posted in: KM