
Pete Recommends – Weekly highlights on cyber security issues, June 24, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: How Your New Car Tracks You; Democratic senators concerned Amazon health platform ‘harvesting consumer health data from patients’; How generative AI is creating new classes of security threats; and US cyber ambassador says China can win on AI, cloud.

Subjects: AI, Cybercrime, Cybersecurity, Data Mining, Healthcare, Intellectual Property, KM, Legal Research, Privacy, RSS Newsfeeds

The Digital Psychology of Persuasion

Kevin Novack, digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the increasing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence and even change the behaviors of impressionable next gens; Instagram’s role in enabling body shaming and the mocking of others; and, more recently, the overwhelming impact of ChatGPT and the fascination with and growing use of thousands of apps and services built on OpenAI. Novack asks – but can you trust it? And responds – probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.

Subjects: Communication Skills, Ethics, Internet Trends, KM, Social Media

Conspiracy theories aren’t on the rise – we need to stop panicking

Several polls in the past couple of years (including from Ipsos, YouGov and most recently Savanta on behalf of the King’s College London Policy Institute and the BBC) have been examining the kinds of conspiratorial beliefs people hold. The findings have led to a lot of concern and discussion. There are several revealing aspects of these polls. Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School, is interested in which claims are considered conspiratorial and how these are phrased. But she is also interested in the widespread belief that conspiracy theories are on the rise, thanks to the internet and social media. Is this true, and how concerned should we really be about conspiracy theories?

Subjects: Climate Change, Education, Internet Trends, KM, Social Media

Suicide Hotlines Promise Anonymity. Dozens of Their Websites Send Sensitive Data to Facebook

Reporter Colin Lecher and data journalist Jon Keegan discuss how websites for mental health crisis resources across the country – which promise anonymity for visitors, many of whom are at a desperate moment in their lives – have been quietly sending sensitive visitor data to Facebook. Dozens of websites tied to the national 988 mental health crisis hotline, which launched last summer, transmit the data through a tool called the Meta Pixel, according to testing conducted by The Markup. That data often signaled to Facebook when visitors attempted to call for help in a mental health emergency by tapping dedicated call buttons on the websites.

Subjects: Big Data, Health, Healthcare, KM, Legal Research, Social Media

Pete Recommends – Weekly highlights on cyber security issues, June 11, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Top 5 Most Common Text Message Scams & How to Avoid Them; From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You; Service Rents Email Addresses for Account Signups; and FTC Slams Amazon with $30.8M Fine for Privacy Violations Involving Alexa and Ring.

Subjects: AI, Communications, Congress, Cybercrime, Cybersecurity, Email Security, Healthcare, KM, Legal Research, Privacy, Social Media

How AI could take over elections and undermine democracy

Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School, and Lawrence Lessig, Professor of Law and Leadership, Harvard University, pose the question: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?” Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters. Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election. While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

Subjects: AI, Communications Law, Congress, Constitutional Law, KM, Legal Research, Social Media

Pete Recommends – Weekly highlights on cyber security issues, June 3, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: New EPIC Report Sheds Light on Generative A.I. Harms; Don’t Store Your Money on Venmo, U.S. Govt Agency Warns (or PayPal); FTC Says Ring Employees Illegally Surveilled Customers, Failed to Stop Hackers from Taking Control of Users’ Cameras; and Twitter withdraws from EU’s disinformation code as bloc warns against hiding from liability.

Subjects: AI, Cybercrime, Cybersecurity, Economy, Financial System, KM, Privacy, Spyware, United States Law

LLRX May 2023 Issue

Articles and Columns for May 2023

Is using Generative AI just another form of outsourcing? – Is the implementation of generative AI simply a new flavor of outsourcing? How does this digital revolution reflect on our interpretation of the American Bar Association’s (ABA) ethical guidelines? How can we ensure that we maintain the sacrosanct standards of …

Subjects: KM

Is using Generative AI just another form of outsourcing?

Is the implementation of generative AI simply a new flavor of outsourcing? How does this digital revolution reflect on our interpretation of the American Bar Association’s (ABA) ethical guidelines? How can we ensure that we maintain the sacrosanct standards of our profession as we step into this exciting future? Josh Kubicki⁠, Business Designer, Entrepreneur, University of Richmond School of Law Professor, presents a starting point to explore potential ethics considerations surrounding the use of generative AI.

Subjects: AI, Cybersecurity, Ethics, KM, Legal Ethics, Legal Marketing

AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’

As a technology ethics educator and researcher, Casey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Subjects: AI, Cyberlaw, Education, Ethics, Human Rights, KM, Legal Ethics, Technology Trends