Pete Recommends – Weekly highlights on cyber security issues, July 8, 2023

Subject: 3 Billion Chrome Users Will See This Privacy Sandbox Pop-Up
Source: Gizmodo
https://gizmodo.com/3-billion-chrome-users-are-about-to-see-this-privacy-sa-1850595391

Google Chrome users will soon see a pop-up when they update their browser to version 115, initiating the first phase of Google’s years-long Privacy Sandbox project. The prompt, which describes the changes as “Enhanced ad privacy in Chrome,” is the first stage in Google’s astonishingly complicated plan to kill third-party cookies. Some users are seeing it already, but the prompt won’t hit everyone at the same time. According to Google, the pop-up starts rolling out on a larger scale in mid-July and will reach every browser gradually over the following weeks.

Let’s be clear, though: what you’re looking at here is three new ways for companies to track you online. It’s more private than the way you were tracked in the past, but if you would like to be tracked less, you’ll want to turn this off. If you really don’t like being spied on, you can go even further and disable third-party cookies right now. And if privacy is very important to you, you probably shouldn’t be using Chrome in the first place: it’s a browser made by a gigantic advertising data company that made $224 billion in ad money in 2022. There is a long list of alternatives.
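For a sense of what one of those “new ways” looks like from a website’s side, here is a minimal, hypothetical sketch of how an embedded ad script might query the Topics API, one of the Privacy Sandbox mechanisms shipping with Chrome 115. The method name and result fields follow Chrome’s public documentation, but the feature check, types, and logging here are illustrative assumptions, not Google’s or any ad network’s actual code.

```ts
// Hypothetical illustration: an ad script asking Chrome's Topics API which
// interest categories the browser has inferred from recent browsing.
// Assumes Chrome 115+, a secure context, and a user who has not opted out
// via the "Enhanced ad privacy in Chrome" settings described above.

interface BrowsingTopic {
  topic: number;            // numeric ID in the Topics taxonomy
  taxonomyVersion: string;
  modelVersion: string;
  configVersion: string;
  version: string;
}

async function readInferredInterests(): Promise<BrowsingTopic[]> {
  // Feature-detect: the method only exists in browsers that ship the Topics API.
  const doc = document as Document & {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  };
  if (!doc.browsingTopics) {
    return []; // unsupported browser, or the feature is unavailable
  }
  return doc.browsingTopics(); // up to a few coarse topics per recent week
}

readInferredInterests().then((topics) =>
  console.log("Topics visible to this ad script:", topics),
);
```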


Subject: Barred From Grocery Stores by Facial Recognition
Source: NYT via Yahoo! News
https://news.yahoo.com/barred-grocery-stores-facial-recognition-120312610.html

Use of facial recognition technology by police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

But its use by retailers has drawn criticism as a disproportionate response to minor crimes. Individuals have little way of knowing they are on a watchlist or how to appeal. In a legal complaint last year, Big Brother Watch, a civil society group, called it “Orwellian in the extreme.” …‘Mistakes Are Rare but Do Happen’…


Subject: How secure are voice authentication systems really?
Source: University of Waterloo News
https://uwaterloo.ca/news/media/how-secure-are-voice-authentication-systems-really

Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries. Voice authentication – which allows companies to verify the identity of their clients via a supposedly unique “voiceprint” – has increasingly been used in remote banking, call centers and other security-critical scenarios.

“When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server,” said Andre Kassis, a Computer Security and Privacy PhD candidate and the lead author of a study detailing the research.

“For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted.”
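In essence, the enrolment phrase yields a stored feature vector, and later attempts are scored against it. The toy sketch below illustrates that enrol-and-compare flow; the band-average “feature extractor” and the 0.8 similarity threshold are placeholders invented for illustration, not the design of any real voice authentication product.

```ts
// Toy illustration of voiceprint enrolment and verification. A real system
// extracts embeddings with a trained speaker-recognition model and layers
// anti-spoofing checks on top; everything below is a simplified stand-in.

type Voiceprint = number[]; // fixed-length feature vector derived from audio

// Placeholder "feature extractor": mean amplitude in coarse segments.
function extractVoiceprint(samples: number[], bands = 16): Voiceprint {
  const bandSize = Math.ceil(samples.length / bands);
  const print: Voiceprint = [];
  for (let b = 0; b < bands; b++) {
    const slice = samples.slice(b * bandSize, (b + 1) * bandSize);
    print.push(slice.reduce((sum, s) => sum + s, 0) / (slice.length || 1));
  }
  return print;
}

function cosineSimilarity(a: Voiceprint, b: Voiceprint): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1);
}

// Enrolment: extract a voiceprint from the enrolment phrase and store it.
function enroll(enrolmentAudio: number[]): Voiceprint {
  return extractVoiceprint(enrolmentAudio);
}

// Verification: score the new phrase against the stored voiceprint.
function verify(stored: Voiceprint, challengeAudio: number[]): boolean {
  return cosineSimilarity(stored, extractVoiceprint(challengeAudio)) >= 0.8; // illustrative threshold
}

// Usage with synthetic buffers standing in for recorded audio:
const randomAudio = () => Array.from({ length: 16000 }, () => Math.random() - 0.5);
const storedPrint = enroll(randomAudio());
console.log("access granted:", verify(storedPrint, randomAudio()));
```

Real deployments add spoofing countermeasures on top of this matching step, which is what the attack described below is designed to evade.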

After the concept of voiceprints was introduced, malicious actors quickly realized they could use machine learning-enabled “deepfake” software to generate convincing copies of a victim’s voice using as little as five minutes of recorded audio.

The Waterloo researchers have developed a method that evades spoofing countermeasures and can fool most voice authentication systems within six attempts. They identified the markers in deepfake audio that betray it as computer-generated and wrote a program that removes these markers, making the audio indistinguishable from authentic recordings.


Subject: US authorities warn on China’s new counter-espionage law
Source: The Register
https://www.theregister.com/2023/07/03/china_espionage_law_update_warning/

The United States’ National Counterintelligence and Security Center (NCSC) has warned that China’s updated Counter-Espionage law – which came into effect on July 1 – is dangerously ambiguous and could pose a risk to global business. The NCSC publishes non-classified bulletins titled “Safeguarding Our Future” on an ad hoc schedule to “provide a brief overview of a specific foreign intelligence threat, as well as impacts of that threat and steps for mitigation.”

On June 30 it issued a new one [PDF] titled “US Business Risk: People’s Republic of China (PRC) Laws Expand Beijing’s Oversight of Foreign and Domestic Companies.” The first item discussed is China’s recently revised Counter-Espionage Law, on grounds it “Expands the definition of espionage from covering state secrets and intelligence to any documents, data, materials, or items related to national security interests, without defining terms.”

That vagueness, the Center argues, means “Any documents, data, materials, or items could be considered relevant to PRC national security due to ambiguities in the law” and adds up to potential “legal risks or uncertainty for foreign companies.”

Should you therefore think hard before reading email from your China office, or Chinese partners?


Subject: Google Says It’ll Scrape Everything You Post Online for AI
Source: Gizmodo
https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486

An update to Google’s privacy policy suggests that the entire public internet is fair game for its AI projects. Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

One of the less obvious complications of the post-ChatGPT world is the question of where data-hungry chatbots sourced their information. Companies including Google and OpenAI scraped vast portions of the internet to fuel their robot habits. It’s not at all clear that this is legal, and the next few years will see the courts wrestle with copyright questions that would have seemed like science fiction a few years ago. In the meantime, the phenomenon already affects consumers in some unexpected ways.


Subject: Canadian, U.S. authorities issue updated cybersecurity advisory on malware
Source: UPI.com
https://www.upi.com/Top_News/World-News/2023/07/06/authorities-issue-cyber-security-hacker-warning-truebot/7481688684757/

July 6 (UPI) — Canadian and American authorities on Thursday issued an advisory to organizations in both countries about new variants of malware. Hackers have been leveraging “newly identified Truebot malware variants against organizations” in both Canada and the United States, the U.S. Cybersecurity and Infrastructure Security Agency said in a statement.

“Truebot is a botnet used by malicious cyber groups to collect and exfiltrate sensitive data from its target victims for financial gain,” the Canadian agency said in a statement.

Previously known variants were mainly delivered by email. The latest versions instead exploit a vulnerability in the Netwrix Auditor application.

CISA also issued guidance on how to deal with the threat, including applying appropriate vendor patches.


Subject: Threads—Exactly How Private Is Meta’s New Twitter Challenger?
Source: Forbes
https://www.forbes.com/sites/kateoflahertyuk/2023/07/07/threads-exactly-how-private-is-metas-new-twitter-challenger/

Here’s everything you need to know about data collection and privacy on Threads, including what happens to your Instagram account if you decide to delete the new app.

How Much Data Does Threads Collect?

Threads collects a lot of data. According to the privacy label, which I sourced from Apple’s App Store, data linked to you includes your location, health and fitness information, browsing history, “sensitive information” and search history. It also includes your contact information, such as physical address, email and phone number.

Data including your search history and browsing history can also be used for advertising and marketing and “personalisation”—in other words, showing you ads. Threads isn’t showing ads just yet, but make no mistake, with this amount of lucrative data available, it will.

How Bad Is Threads Privacy, Compared to Instagram?

On the face of it, Threads data collection is a horror show, but it isn’t any worse than Instagram. “At first glance the privacy nutrition label of Threads looks awful, but as soon as you compare it to Instagram’s label, you figure out that Instagram isn’t any better,” says security researcher Tommy Mysk.

The only difference is that Threads does not use any collected data to track users. “On the other hand, Instagram discloses that it uses contact information and other identifiers to track users,” Mysk explains.


Subject: Scammers using AI voice technology to commit crimes
Source: Help Net Security
https://www.helpnetsecurity.com/2023/07/07/ai-voice-cloning-scams/

The usage of platforms like Cash App, Zelle, and Venmo for peer-to-peer payments has surged significantly, with scams increasing by over 58%. Additionally, there has been a corresponding 44% rise in scams stemming from the theft of personal documents, according to IDIQ.

AI voice technology

The report also highlights the rise of AI voice scams as a significant trend in 2023. AI voice technology enables scammers to create remarkably realistic voices and convincingly imitate family members, friends and other trusted individuals.

“AI voice cloning scams are the scariest thing I have seen in the last 20 years,” said Scott Hermann, CEO of IDIQ and a cybersecurity and financial expert.

Protecting against AI voice cloning scams

Ways the public can help protect themselves from these scams:

  • Being cautious of unsolicited offers, requests, and calls
  • Always verifying identities, including having a family “password”
  • Using strong cybersecurity practices, including unique passwords, multi-factor authentication and VPN
  • Protecting and monitoring personal information
  • Educating themselves on the latest scams and trends as new scams continue to arise


Posted in: Big Data, Computer Security, Cybersecurity, Email Security, Government Resources, KM, Privacy, Social Media