Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: How to Make Your Web Searches More Secure and Private; OpenAI’s Custom Chatbots Are Leaking Their Secrets; Inside the Operation to Bring Down Trump’s Truth Social; and Hamas-Linked Group Revives SysJoker Malware, Leverages OneDrive.
Health care organizations, the federal government, academics, and various entities within the medical sector maintain a plethora of sites specific to health care issues. This guide by Marcus P. Zillman focuses on Healthcare Search Engines and Selected Bots and includes 7 Health Forums Online for Expert Support. Zillman’s guide incorporates both Eastern and Western medical practices.
Why Google, Bing and other search engines’ embrace of generative AI threatens $68 billion SEO industry
Dr. Ravi Sen discusses how Google, Microsoft and others boast that generative artificial intelligence tools like ChatGPT will make searching the internet better than ever for users. For example, rather than having to wade through a sea of URLs, users will be able to just get an answer combed from the entire internet. There are also concerns about the rise of AI-fueled search engines, such as the opacity over where information comes from, the potential for “hallucinated” answers and copyright issues. But Sen argues that one further consequence may be the destruction of the US$68 billion search engine optimization industry that companies like Google helped create.
Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to fabricate entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations than others. Law Librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur, how we can effectively evaluate new products, and how to identify lower-risk uses.
Human factors engineer James Intriligator makes a clear and important distinction for researchers: that unlike a search engine, with static and stored results, ChatGPT never copies, retrieves or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer. Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.
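The context-carrying, iterative exchange Intriligator describes can be sketched in a few lines of plain Python. The `ChatSession` class and its `fake_model` stand-in below are purely illustrative (not any real chat API): the point is that every new prompt is sent along with the full history, so earlier turns shape later answers.

```python
# Minimal sketch of a chat session that carries context forward.
# fake_model is a hypothetical stand-in for an LLM; a real model
# would generate each answer anew from the whole message history.

def fake_model(history):
    """Pretend generator: its output depends on everything said so far."""
    first_topic = next((m["content"] for m in history if m["role"] == "user"), "")
    return f"(answer informed by {len(history)} prior messages; opening topic: {first_topic!r})"

class ChatSession:
    def __init__(self):
        self.history = []  # every prompt and reply accumulates here

    def ask(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        reply = fake_model(self.history)  # the model sees the whole conversation
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = ChatSession()
chat.ask("Explain semantic search")
second = chat.ask("Now shorter, please")  # refines the earlier answer in context
```

Because the second reply is generated against the accumulated history, the user's follow-up ("Now shorter, please") only makes sense in light of the first turn, which is exactly the iterative shaping Intriligator highlights.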
Four highlights from this week’s Pete Weiss privacy and cybersecurity roundup: New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy; Microsoft AI team accidentally leaks 38TB of private company data; California legislature passes ‘Delete Act’ to protect consumer data; and Starlink lost over 200 satellites in two months.
Legaltech Hub’s Nicola Shaver discusses why it is time to level-set about advanced AI: it can’t do everything. Or perhaps more practically, a large language model can’t replace all of the other technology you already have. One of the main reasons for this is the importance of an interface and a built-out user experience (UX) that offers a journey through the system that is aligned with the way users actually work. There are other reasons a large language model (LLM) won’t replace all of your technology (one of which being advanced AI is simply unnecessary to do all things), but this article will focus on UX.
Late last week, Google announced that something called the Privacy Sandbox had been rolled out to a “majority” of Chrome users, and will reach 100% of users in the coming months. But what is it, exactly? The new suite of features represents a fundamental shift in how Chrome will track user data for the benefit of advertisers. Erica Mealy explains that instead of third-party cookies, Chrome can now tap directly into your browsing history to gather information on advertising “topics.” Understanding how it works – and whether you want to opt in or out – is important, since Chrome remains the most widely used browser in the world, with a 63% market share as of May 2023.
The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. This article by Sean Harrington critically evaluates the advent and fine-tuning of Law-Specific LLMs, such as those offered by Casetext, Westlaw, and Lexis. Unlike generalized models, these specialized LLMs draw from databases enriched with authoritative legal resources, ensuring accuracy and relevance. Harrington highlights the importance of advanced prompting techniques and the innovative utilization of embeddings and vector databases, which enable semantic searching, a critical aspect in retrieving nuanced legal information. Furthermore, the article addresses the ‘Black Box Problem’ and explores remedies for transparency. It also discusses the potential of crowdsourcing secondary materials as a means to democratize legal knowledge. In conclusion, this article emphasizes that Law-Specific LLMs, with proper development and ethical considerations, can revolutionize legal research and practice, while calling for active engagement from the legal community in shaping this emerging technology.
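The embedding-and-vector-database approach Harrington describes boils down to ranking documents by the geometry of their vectors rather than by keyword overlap. A minimal sketch follows, with hand-made toy vectors standing in for a real embedding model and vector database (the document names, vectors, and `semantic_search` helper are all illustrative, not any vendor's API):

```python
import math

# Toy "embeddings": in a real system these would come from an embedding
# model and live in a vector database; here they are hand-made 3-d vectors
# where geometric closeness stands in for semantic similarity.
DOCS = {
    "adverse possession elements": [0.9, 0.1, 0.0],
    "hearsay exceptions overview": [0.1, 0.9, 0.2],
    "squatter's rights doctrine":  [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, k=2):
    """Return the k documents whose vectors point most like the query's."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector near the "property law" region matches adverse possession
# and squatter's rights, even though the phrases share no keywords.
query = [0.85, 0.15, 0.05]
results = semantic_search(query)
```

This is the sense in which semantic searching retrieves "nuanced" material: the hearsay document is excluded not because of missing keywords but because its vector points elsewhere.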
Four highlights from this week’s Pete Weiss privacy and cybersecurity roundup: Artificial Intelligence: Key Practices to Help Ensure Accountability in Federal Use; Don’t get scammed by fake ChatGPT apps: Here’s what to look out for; Apple Employees Forbidden From Using ChatGPT; and How to Enable Advanced Data Protection on iOS, and Why You Should.