Professor Aarushi Bhandari was taken aback when she learned that not a single student had heard that the Writers Guild of America had reached a deal with the Alliance of Motion Picture and Television Producers, or AMPTP, after a nearly 150-day strike. This historic deal includes significant raises, improvements in health care and pension support, and – unique to our times – protections against the use of artificial intelligence to write screenplays. Across online media platforms, the WGA announcement on Sept. 24, 2023, ended up buried under headlines and posts about the celebrity duo of Taylor Swift and Chiefs tight end Travis Kelce. To Bhandari, this disconnect felt like a microcosm of the entire online media ecosystem.
Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to fabricate entire cases and then submitted them to the court. After that episode, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations. Law librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur, how we can effectively evaluate new products, and how to identify lower-risk uses.
Human factors engineer James Intriligator makes a clear and important distinction for researchers: unlike a search engine, which retrieves static, stored results, ChatGPT never copies, retrieves or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer. Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.
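The context-retention behavior Intriligator describes can be sketched in a few lines: in chat-style LLM APIs, the client typically resends the full message history with every turn, so earlier questions and answers condition later replies. The `ask` helper and `toy_generate` stand-in below are illustrative, not any vendor's actual API; a real model would generate original text conditioned on the whole history.

```python
# Minimal sketch of how chat context accumulates across turns.
# `ask` and `toy_generate` are hypothetical names for illustration.

def ask(history, prompt, generate):
    """Append the user's prompt, generate a reply conditioned on the
    ENTIRE history so far, and append the reply so later turns see it."""
    history.append({"role": "user", "content": prompt})
    reply = generate(history)  # every earlier turn is passed in again
    history.append({"role": "assistant", "content": reply})
    return reply

def toy_generate(history):
    # Stand-in for a real LLM call: a real model would produce new text
    # each time, shaped by everything already in `history`.
    return f"(answer informed by {len(history) - 1} earlier messages)"

history = []
ask(history, "Summarize the WGA deal.", toy_generate)
# The second question only makes sense because turn one is still in context:
ask(history, "Now shorten that to one sentence.", toy_generate)
```

This is why iterative prompting works: each refinement request is answered against the accumulated conversation, not in isolation.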
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy; Microsoft AI team accidentally leaks 38TB of private company data; California legislature passes ‘Delete Act’ to protect consumer data; and Starlink lost over 200 satellites in two months.
Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program, and Julie Bays, OBA Practice Management Advisor, aid attorneys in using technology and other tools to efficiently manage their offices. They recommend that now is a good time to experiment with specific AI-powered tools, and they suggest the best techniques for using them.
Legaltech Hub’s Nicola Shaver discusses why it is time to level-set about advanced AI: it can’t do everything. Or perhaps more practically, a large language model can’t replace all of the other technology you already have. One of the main reasons for this is the importance of an interface and a built-out user experience (UX) that offers a journey through the system aligned with the way users actually work. There are other reasons a large language model (LLM) won’t replace all of your technology (one being that advanced AI is simply unnecessary for many tasks), but this article focuses on UX.
Late last week, Google announced that something called the Privacy Sandbox has been rolled out to a “majority” of Chrome users and will reach 100% of users in the coming months. But what is it, exactly? The new suite of features represents a fundamental shift in how Chrome will track user data for the benefit of advertisers. Erica Mealy explains that instead of third-party cookies, Chrome can now tap directly into your browsing history to gather information on advertising “topics.” Understanding how it works – and whether you want to opt in or out – is important, since Chrome remains the most widely used browser in the world, with a 63% market share as of May 2023.
In his weekly roundup of privacy and cybersecurity developments, Pete Weiss highlights articles and information on the increasingly complex ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy; Artificial Intelligence’s Use and Rapid Growth Highlight Its Possibilities and Perils; How To Stop Facebook Using Your Personal Data To Train AI; and CBP Tells Airports Its New Facial Recognition Target is 75% of Passengers Leaving the US.
Articles and Columns for August 2023
The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. This article by Sean Harrington critically evaluates the advent and fine-tuning of Law-Specific LLMs, such as those offered by Casetext, Westlaw, and Lexis. Unlike generalized models, these specialized LLMs draw from databases enriched with authoritative legal resources, ensuring accuracy and relevance. Harrington highlights the importance of advanced prompting techniques and the innovative utilization of embeddings and vector databases, which enable semantic searching, a critical aspect in retrieving nuanced legal information. Furthermore, the article addresses the ‘Black Box Problem’ and explores remedies for transparency. It also discusses the potential of crowdsourcing secondary materials as a means to democratize legal knowledge. In conclusion, this article emphasizes that Law-Specific LLMs, with proper development and ethical considerations, can revolutionize legal research and practice, while calling for active engagement from the legal community in shaping this emerging technology.
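The semantic searching Harrington describes rests on embeddings: documents and queries are mapped to numeric vectors so that conceptually related texts land near each other, letting a search match meaning rather than keywords. The sketch below illustrates the idea with tiny, invented 3-number vectors and a plain cosine-similarity ranking; it is not how Casetext, Westlaw, or Lexis actually implement retrieval, where model-generated vectors with hundreds of dimensions are stored in a vector database.

```python
import math

# Toy corpus: the "embeddings" here are invented for illustration only.
# Note the third entry shares no keywords with the first, yet its vector
# is close, because the underlying legal concepts are related.
docs = {
    "adverse possession of land":    [0.9, 0.1, 0.0],
    "statute of limitations, torts": [0.1, 0.8, 0.2],
    "squatters' rights":             [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, k=2):
    """Rank documents by similarity of their embedding to the query's."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query like "claiming ownership through long occupation" might embed
# near the first and third documents despite different wording:
query = [0.85, 0.15, 0.05]
print(semantic_search(query))
```

The keyword-free match is the point: a Boolean search for "occupation" would miss both top results, while vector proximity surfaces them.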