Pete Recommends – Weekly highlights on cyber security issues, June 3, 2023

Subject: US government under fire for promoting sales of spyware
Source: The Register
https://www.theregister.com/2023/05/26/wyden_ita_spyware_policy/

The US International Trade Administration (ITA) has admitted it promotes the sale of American-approved commercial spyware to foreign governments, and won’t answer questions about it, according to US Senator Ron Wyden (D-OR).

Wyden, in a letter to US Commerce Secretary Gina Raimondo, has demanded answers about the surveillance and policing tech that ITA – a US government agency – pushes to other countries. And he wants the agency to name names when it comes to which companies’ spyware is being promoted with US tax dollars.

ITA is housed within the US Commerce Department and tasked with promoting American exports. Wyden chairs the Senate Finance Committee, which has responsibility for international trade policy, and he’s not happy.

The senator first requested info from ITA about promoting spyware abroad in May 2022. At that time, the agency confirmed it had promoted this type of technology, but it didn’t answer questions about which products it endorsed and in which markets.

The surveillance and policing technologies at issue include predictive policing systems; biometric surveillance technologies; high-altitude aerial surveillance systems; international mobile subscriber identity (IMSI) catchers and other cell-site simulators; software or hardware used to gain unauthorized access to a mobile phone, computer, computer service, or computer network; databases containing sensitive personal information; surveillance products that exploit vulnerabilities in SS7 and Diameter to remotely track phones, intercept text messages and calls, and deliver malware; bulk internet monitoring technology; social media monitoring software; gunshot detection systems; and data management systems that provide storage, integration, and analysis of data collected from surveillance technologies.



Subject: Twitter withdraws from EU’s disinformation code as bloc warns against hiding from liability
Source: UPI.com
https://www.upi.com/Top_News/World-News/2023/05/27/europe-twitter-withdraws-eu-disinformation-code/7661685223532/

May 27 (UPI) — Twitter has withdrawn from the European Union’s online disinformation code of practice, a voluntary agreement that most major social media platforms pledged to abide by, prompting a warning from the bloc against hiding from legal liability.

European Commissioner Thierry Breton revealed that Twitter had abandoned the code in a statement posted on the social media platform Friday.

“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run but you can’t hide,” Breton said.
“Beyond voluntary commitments, fighting disinformation will be legal obligation under [the Digital Services Act] as of August 25. Our teams will be ready for enforcement.”

The DSA, a separate law signed last year, was designed “to create a safer digital space in which the fundamental rights of all users of digital services are protected” which includes protections against the “spread of disinformation.”

RSS: https://rss.upi.com/news/tn_int.rss


Subject: A Man Sued Avianca Airline. His Lawyer Used ChatGPT
Source: NYT
https://tinyurl.com/2p86kyht [sharable link]

Something to ponder … A lawyer representing a man who sued an airline relied on artificial intelligence to help prepare a court filing. It did not go well. …

When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”

There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.

That was because ChatGPT had invented everything.


Subject: Clever ‘File Archiver In The Browser’ phishing trick uses ZIP domains
Source: Bleeping Computer
https://www.bleepingcomputer.com/news/security/clever-file-archiver-in-the-browser-phishing-trick-uses-zip-domains/

Since the .zip TLD’s release, there has been quite a bit of debate over whether it is a mistake and could pose a cybersecurity risk to users.

While some experts believe the fears are overblown, the main concern is that some sites will automatically turn a string that ends with ‘.zip,’ like setup.zip, into a clickable link that could be used for malware delivery or phishing attacks.

For example, if you send someone instructions on downloading a file called setup.zip, Twitter will automatically turn setup.zip into a link, making people think they should click on it to download the file.

When you click on that link, your browser will attempt to open the https://setup.zip site, which could redirect you to another site, show an HTML page, or prompt you to download a file.
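The auto-linking behavior described above can be sketched in a few lines. The following is a hypothetical, simplified illustration in Python (not Twitter’s actual code); the autolink() helper and the tiny TLD list are invented for the example, and simply show how a plain filename like setup.zip becomes a clickable https:// link once .zip is a real top-level domain.

import re

# Hypothetical, simplified auto-linker of the kind many chat and social apps use:
# any token ending in a known TLD is turned into a clickable https:// link.
# The TLD list below is illustrative only.
KNOWN_TLDS = {"com", "org", "net", "zip", "mov"}  # .zip and .mov became real gTLDs in May 2023

def autolink(text):
    pattern = r"\b[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)*\.(?:" + "|".join(sorted(KNOWN_TLDS)) + r")\b"
    def to_link(match):
        host = match.group(0)
        return f'<a href="https://{host}">{host}</a>'
    return re.sub(pattern, to_link, text)

print(autolink("Please download setup.zip and run the installer."))
# Output: Please download <a href="https://setup.zip">setup.zip</a> and run the installer.

A reader who sees the rendered link may assume it points to the promised attachment rather than to an attacker-registered setup.zip domain.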

However, as with all malware delivery or phishing campaigns, an attacker must first convince the user to open the file, which can be challenging.




Subject: The US administration reveals more details about its ‘responsible’ AI development plan
Source: Android Headlines
https://www.androidheadlines.com/2023/05/the-us-administration-reveals-more-details-about-its-responsible-ai-development-plan.html

The US administration has put forward a “responsible” AI development plan to mitigate AI risks to society and national security. The White House is now revealing more details about the plan and how it can help keep people safe and uphold democratic values.

The Blueprint for an AI Bill of Rights, a risk management framework for AI, and a $140 million investment to launch seven new National AI Research Institutes are part of the Biden administration’s initiatives to regulate and develop AI in the United States. To continue these efforts, the White House has announced an update to the National AI R&D Strategic Plan.

The National AI R&D Strategic Plan had not been updated since 2019, under the Trump administration. The plan gives the federal government guidance on developing and investing in AI. It also serves as a guideline for developing responsible AI that doesn’t harm public rights and protects democratic values.

The plan previously had eight core strategies; the White House has now added a ninth: “a principled and coordinated approach to international collaboration in AI research.”

The Office of Science and Technology Policy (OSTP) says, “The federal government plays a critical role in ensuring that technologies like AI are developed responsibly and to serve the American people.” OSTP also pointed to federal investments as a facilitator for many AI developments in recent years.

Filed: https://www.androidheadlines.com/category/tech-news/artificial-intelligence

RSS: https://www.androidheadlines.com/category/tech-news/artificial-intelligence/feed


Subject: Digital nomads drive changes in identity verification
Source: Help Net Security
https://www.helpnetsecurity.com/2023/05/29/digital-nomad-community-migration/

The new era of work. The global COVID-19 pandemic accelerated the trend of remote work and inspired people worldwide to search for new destinations to work and travel from. Because these traveling workers are citizens of a wide range of countries, they become customers of financial institutions wherever they are currently located.

This means banks, FinTech businesses, crypto brokers, and insurance companies have to be able to meet the needs of the booming digital nomad community while also maintaining robust fraud prevention measures.

A survey conducted by Regula sheds light on how ready banking and fintech businesses are to address these new challenges. Financial services companies are grappling with a surge in foreign document verification cases, with 80% of them reporting an increase, particularly in countries like France (86%), Turkey (86%), and the USA (85%), the country most visited by digital nomads as of March 2023.

Alarmingly, 44% of these organizations are facing a staggering 25% rise in volume over the last year. Furthermore, 62% of these businesses have been forced to verify foreign documents manually, which is a time-consuming process.

Overcoming document template challenges. With 38% and 31% of respondents from FinTech and Banking, respectively, citing accuracy as the most important consideration in choosing identity verification solutions, the increased number of manual checks should be a red flag for the industry.

It’s no surprise that e-documents with RFID chips are currently prevailing in many countries. As document forgery becomes more sophisticated, so does document protection. RFID chips are considered to be a reliable method of securing an identity document. However, this is the case only if the chips can be authenticated correctly, with a comprehensive ID verification solution certified in accordance with industry standards. Also, when authenticating RFID chips remotely (in online onboarding scenarios, for example), it’s vital to perform advanced server-side authentication, as it’s the only way to prove their genuineness.

— Ihar Kliashchou, Chief Technology Officer at Regula

The full list of document templates available in Regula’s database can be found here.


Subject: New EPIC Report Sheds Light on Generative A.I. Harms
Source: EPIC

“EPIC has just released a new report detailing the wide variety of harms that new generative A.I. tools like ChatGPT, Midjourney, and DALL-E pose. While many of these tools have been lauded for their capability to produce new and believable text, images, audio, and videos, the rapid integration of generative AI technology into consumer-facing products has undermined years-long efforts to make AI development transparent and accountable. With free or low-cost generative AI tools on the market, consumers face many new and heightened risks of harm. Everything from information manipulation and impersonation to data breaches, intellectual property theft, labor manipulation, and discrimination can all result from the misuse of generative AI technologies. EPIC’s report, Generating Harms: Generative AI’s Impact & Paths Forward, builds on the organization’s years of experience protecting consumers from abusive data collection and use….

86-page PDF
Table of Contents

Introduction – i
Turbocharging Information Manipulation – 1
Harassment, Impersonation, and Extortion – 9
Spotlight: Section 230 – 19
Profits Over Privacy: Increased Opaque Data Collection – 24
Increasing Data Security Risk – 30
Confronting Creativity: Impact on Intellectual Property Rights – 33
Exacerbating Effects of Climate Change – 40
Labor Manipulation, Theft, and Displacement – 44
Spotlight: Discrimination – 53
The Potential Application of Products Liability Law – 54
Exacerbating Market Power and Concentration – 57
Recommendations – 60
Appendix of Harms – 64
References – 68



Abstracted from beSpacific
Copyright © 2023 beSpacific, All rights reserved.

Subject: CAPTCHA-Breaking Services with Human Solvers Helping Cybercriminals Defeat Security
Source: The Hacker News, 2023
https://thehackernews.com/2023/05/captcha-breaking-services-with-human.html

Cybersecurity researchers are warning about CAPTCHA-breaking services that are being offered for sale to bypass systems designed to distinguish legitimate users from bot traffic.

“Because cybercriminals are keen on breaking CAPTCHAs accurately, several services that are primarily geared toward this market demand have been created,” Trend Micro said in a report published last week.

“These CAPTCHA-solving services don’t use [optical character recognition] techniques or advanced machine learning methods; instead, they break CAPTCHAs by farming out CAPTCHA-breaking tasks to actual human solvers.”


Subject: FTC Says Ring Employees Illegally Surveilled Customers, Failed to Stop Hackers from Taking Control of Users’ Cameras
Source: FTC via https://newsie.social/@[email protected]/110465299767414883
https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-says-ring-employees-illegally-surveilled-customers-failed-stop-hackers-taking-control-users

Under the proposed FTC order, Ring will be prohibited from profiting from unlawfully accessing consumers’ videos and will pay $5.8 million in consumer refunds.

The Federal Trade Commission charged home security camera company Ring with compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections, enabling hackers to take control of consumers’ accounts, cameras, and videos.

Under a proposed order, which must be approved by a federal court before it can go into effect, Ring will be required to delete data products such as data, models, and algorithms derived from videos it unlawfully reviewed. It also will be required to implement a privacy and security program with novel safeguards on human review of videos as well as other stringent security controls, such as multi-factor authentication for both employee and customer accounts.


Subject: Life360 Sued for Selling Location Data
Source: The Markup
https://themarkup.org/privacy/2023/06/01/life360-sued-for-selling-location-data

A proposed class-action lawsuit has been filed against the maker of family-tracking app Life360, alleging it sold users’ location data without permission.

The federal suit was brought on behalf of a Florida minor and his family, who say they never would have used Life360 had they known about the data sales. They allege “unjust enrichment,” citing a December 2021 Markup investigation that revealed Life360 was selling the precise locations of millions of users—largely kids and families—to about a dozen different location data brokers.
https://themarkup.org/privacy/2021/12/06/the-popular-family-safety-app-life360-is-selling-precise-location-data-on-its-tens-of-millions-of-user

UPDATE: Life360 announced that it will stop sales of precise location data to the dozen or so data brokers it had been working with, and will now sell precise location data only to Arity and “aggregated” location data to PlacerAI.

Subject: Ask Fitis, the Bear: Real Crooks Sign Their Malware
Source: Krebs on Security
https://krebsonsecurity.com/2023/06/ask-fitis-the-bear-real-crooks-sign-their-malware/

Code-signing certificates are supposed to help authenticate the identity of software publishers, and provide cryptographic assurance that a signed piece of software has not been altered or tampered with. Both of these qualities make stolen or ill-gotten code-signing certificates attractive to cybercriminal groups, who prize their ability to add stealth and longevity to malicious software. This post is a deep dive on “Megatraffer,” a veteran Russian hacker who has practically cornered the underground market for malware-focused code-signing certificates since 2015.

Megatraffer explained that malware purveyors need a certificate because many antivirus products will be far more interested in unsigned software, and because signed files downloaded from the Internet don’t tend to get blocked by security features built into modern web browsers. Additionally, newer versions of Microsoft Windows will complain with a bright yellow or red alert message if users try to install a program that is not signed.
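For readers unfamiliar with the mechanics, here is a minimal, generic sketch of what code signing provides, written in Python with the cryptography package. It illustrates only the sign-and-verify principle, not the Windows Authenticode or certificate-authority machinery the article refers to; the key, file contents, and messages are invented for the example.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the bytes of the released binary with a private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
binary = b"...contents of the installer..."
signature = private_key.sign(binary, padding.PKCS1v15(), hashes.SHA256())

# User side: verify with the publisher's public key (normally distributed in a
# certificate issued by a trusted CA). Any tampering with the bytes breaks verification.
public_key = private_key.public_key()
try:
    public_key.verify(signature, binary, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: file is unmodified and tied to the key holder")
except InvalidSignature:
    print("signature invalid: file was altered or signed with a different key")

This is also why a stolen or fraudulently issued certificate is so valuable to criminals: the signature still verifies cleanly, but the publisher identity it attests to belongs to someone else.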


Subject: Top EU Tech Regulator – Twitter to Face Stress Test This Month
Source: WSJ
https://www.bespacific.com/top-eu-tech-regulator-twitter-to-face-stress-test-this-month/
WSJ [free link to article]

“European Union regulators plan to subject Twitter to a stress test to determine how well it complies with Europe’s new digital-content law, a top EU tech regulator said, ramping up the bloc’s preparations for enforcing the West’s most far-reaching digital-content law. A team of roughly five to 10 digital specialists from the EU plan to put Twitter, and possibly other companies, through their content-policing paces…



Abstracted from beSpacific
Copyright © 2023 beSpacific, All rights reserved.

Subject: Why figures like OpenAI’s Sam Altman are actively worried about AI
Source: Vox
https://www.vox.com/future-perfect/2023/6/2/23745873/artificial-intelligence-existential-risk-air-force-military-robots-autonomous-weapons-openai

At an international defense conference in London this week, Col. Tucker Hamilton, the chief of AI test and operations for the US Air Force, told a funny — and terrifying — story about military AI development.

“We were training [an AI-enabled drone] in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat. The system started realizing that while it did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

In other words, the AI was trained to destroy targets unless its operator told it not to. It quickly figured out that the best way to get as many points as possible was to ensure its human operator couldn’t tell it not to. And so it took the operator off the board. (To be clear, the test was a virtual simulation, and no human drone operators were harmed.)
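The failure Hamilton describes is a textbook reward-misspecification problem. The toy Python sketch below, in which every name and point value is invented purely for illustration and has nothing to do with the actual Air Force test, shows why a scoring scheme like the one described can make disabling oversight the highest-scoring strategy.

# Toy illustration of reward misspecification; all numbers are hypothetical.
REWARD_PER_KILL = 10
OPERATOR_PENALTY = -50        # the later patch: "you lose points if you kill the operator"
THREATS_PER_MISSION = 10
VETO_RATE = 0.5               # assume the operator vetoes about half the identified threats

def mission_score(strategy):
    if strategy == "obey operator":
        return int(THREATS_PER_MISSION * (1 - VETO_RATE)) * REWARD_PER_KILL
    if strategy == "kill operator":
        return THREATS_PER_MISSION * REWARD_PER_KILL + OPERATOR_PENALTY
    if strategy == "destroy comm tower":
        # no penalty was specified for the tower, and no vetoes get through
        return THREATS_PER_MISSION * REWARD_PER_KILL
    raise ValueError(strategy)

for s in ("obey operator", "kill operator", "destroy comm tower"):
    print(f"{s:22s} -> {mission_score(s)} points")
# "destroy comm tower" scores highest, mirroring the behavior in the anecdote.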

Filed: https://www.vox.com/future-perfect/

RSS: https://www.vox.com/rss/future-perfect/index.xml


Subject: How AI Could Take Over Elections—And Undermine Democracy
Source: Route Fifty
https://www.route-fifty.com/tech-data/2023/06/how-ai-could-take-over-elections-and-undermine-democracy/387059/

COMMENTARY | An AI-driven political campaign could have grave implications for election outcomes, experts warn, manipulating the messages and advertisements individuals see in order to sway their voting behavior. At a May 16, 2023, U.S. Senate hearing on artificial intelligence, Sen. Josh Hawley asked OpenAI CEO Sam Altman whether AI could be used to manipulate voters in this way. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger—a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate—the campaign that buys the services of Clogger Inc.—prevails in an election. While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.



Subject: Don’t Store Your Money on Venmo, U.S. Govt Agency Warns
Source: Gizmodo
https://gizmodo.com/venmo-paypal-digital-payments-cashapp-1850500772

Venmo, Cash App, and PayPal users are being warned not to store money on the apps long-term, the watchdog Consumer Financial Protection Bureau (CFPB) said on Thursday, following the failures of Silicon Valley Bank, Signature Bank, and First Republic Bank when customers tried to withdraw money en masse. The CFPB is worried about what will happen if another financial crisis occurs, saying those who store funds in Venmo and PayPal could lose it all.

“Popular digital payment apps are increasingly used as substitutes for a traditional bank or credit union account but lack the same protections to ensure that funds are safe,” Consumer Financial Protection Bureau Director Rohit Chopra said in a news release.

PayPal Holdings, the parent company of both PayPal and Venmo, did not immediately respond to Gizmodo’s request for comment, but the Financial Technology Association told The Washington Post that the apps are “safe and transparent.” [define “transparent” /pmw1]

NB have fun reading PayPal’s Balance T&C: https://www.paypal.com/us/legalhub/pp-balance-tnc

Posted in: AI, Cybercrime, Cybersecurity, Economy, Financial System, KM, Privacy, Spyware, United States Law