Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Sen. Wyden Releases Documents Confirming the NSA Buys Americans’ Internet Browsing Records; Inside a Global Phone Spy Tool Monitoring Billions; AT&T is trying to kill all landlines in California, which would have devastating effects; and the Continued Threat to Personal Data: Key Factors Behind the 2023 Increase.
Professor Aarushi Bhandari was taken aback when she learned that not a single student had heard that the Writers Guild of America had reached a deal with the Alliance of Motion Picture and Television Producers, or AMPTP, after a nearly 150-day strike. This historic deal includes significant raises, improvements in health care and pension support, and – unique to our times – protections against the use of artificial intelligence to write screenplays. Across online media platforms, the WGA announcement on Sept. 24, 2023, ended up buried under headlines and posts about the celebrity duo of Taylor Swift and Chiefs tight end Travis Kelce. To Bhandari, this disconnect felt like a microcosm of the entire online media ecosystem.
Kevin Novack, digital strategist and CEO with extensive experience digitizing disparate collections at the Library of Congress, discusses the increasing importance of acknowledging and incorporating social proof into your marketing strategies to showcase the power of your brands and services. The recent wave of digital tools built to influence decisions has come under increasing scrutiny as we have learned they may not be all that trustworthy. Examples include TikTok and its power to influence and even change the behaviors of impressionable younger generations, Instagram’s role in enabling body shaming and the mocking of others, and more recently the overwhelming impact of ChatGPT and the fascination with and growing use of thousands of apps and services built on OpenAI. Novack asks: but can you trust it? And responds: probably about as much as you can trust all online listings and crowdsourced input, which are the sources of GPT’s recommendations. From the user perspective, discerning fact from fiction when interacting with your organization is only becoming more critical.
Several polls in the past couple of years (including from Ipsos, YouGov and most recently Savanta on behalf of King’s College London’s Policy Institute and the BBC) have been examining the kinds of conspiratorial beliefs people hold. The findings have led to a lot of concern and discussion. There are several revealing aspects of these polls. Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School, is interested in what claims are considered conspiratorial and how these are phrased. But she is also interested in the widespread belief that conspiracy theories are on the rise, thanks to the internet and social media. Is this true, and how concerned should we really be about conspiracy theories?
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Privacy Guides – Search Engines; The true numbers behind deepfake fraud; 6 riskiest medical devices for cybersecurity; and ‘As an AI language model’: the phrase that shows how AI is polluting the web.
Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University writes: AI tools can help us create content, learn about the world and (perhaps) eliminate the more mundane tasks in life – but they aren’t perfect. They’ve been shown to hallucinate information, use other people’s work without consent, and embed social conventions, including apologies, to gain users’ trust. For example, certain AI chatbots, such as “companion” bots, are often developed with the intent to have empathetic responses. This makes them seem particularly believable. Despite our awe and wonder, we must be critical consumers of these tools – or risk being misled. Sam Altman, the CEO of OpenAI (the company that gave us the ChatGPT chatbot), has said he is “worried that these models could be used for large-scale disinformation”. As someone who studies how humans use technology to access information, so am I.
ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities – and concerned by its potential to cause disruption in various sectors. A much less discussed implication is the privacy risk ChatGPT poses to each and every one of us. Just yesterday, Google unveiled its own conversational AI called Bard, and others will surely follow. Technology companies working on AI have well and truly entered an arms race. Uri Gal identifies a significant issue absent from the current hype: this technology is fuelled by our personal data.
Experts grade Facebook, TikTok, Twitter, YouTube on readiness to handle midterm election misinformation
Professors Dam Hee Kim, Anjana Susarla and Scott Shackelford are experts on social media. They were asked to grade how ready Facebook, TikTok, Twitter and YouTube are to handle misinformation and disinformation in the upcoming election cycle. Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but they vary in their approaches and effectiveness, and the result promises to be another jarring challenge to democracy in America.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Why your organization should plan for deepfake fraud before it happens; FTC Sues Broker Kochava Over Geolocation Data Sales; Google Chrome Bug Lets Sites Silently Overwrite System Clipboard Content; and Chrome extensions with 1.4 million installs steal browsing data.
Don’t be too quick to blame social media for America’s polarization – cable news has a bigger effect, study finds
Homa Hosseinmardi and a group of researchers from Stanford University, the University of Pennsylvania and Microsoft Research tracked the TV news consumption habits of tens of thousands of American adults each month from 2016 through 2019. They discovered four aspects of news consumption that, when taken together, paint an unsettling picture of the TV news ecosystem.