Pete Recommends – Weekly highlights on cyber security issues, February 17, 2024

Subject: CMS revises policy on texting patient orders
Source: Becker’s Health IT

CMS updated its policy for clinical teams texting patient orders. The Feb. 8 memorandum summary states: “Texting patient information and the texting of patient orders among members of the health care team is permissible, if accomplished through a HIPAA compliant secure texting platform (STP) and in compliance with the Conditions of Participation (CoPs).”

The policy is updated from a 2018 memorandum prohibiting clinicians from texting patient orders due to patient privacy and security concerns. CMS said when the policy was originally developed, many hospitals didn’t have secure texting platforms to integrate information into medical records.


Latest articles on Cybersecurity:

Fitch: Credit hit ‘unlikely’ from Lurie Children’s cyberattack
How long can hospitals survive a tech blackout?

Feds warn of new ransomware gang targeting healthcare

Subject: ExpressVPN bug has been leaking some DNS requests for years
Source: Bleeping Computer

ExpressVPN has removed the split tunneling feature from the latest version of its software after finding that a bug exposed the domains users were visiting to configured DNS servers. The bug was introduced in ExpressVPN Windows versions 12.23.1 – 12.72.0, published between May 19, 2022, and Feb. 7, 2024, and only affected those using the split tunneling feature.

Instead of routing them through ExpressVPN’s own servers, the bug caused some DNS queries to be sent to the DNS server configured on the computer, usually a server at the user’s ISP, allowing that server to track a user’s browsing habits.

Having a DNS request leak like the one disclosed by ExpressVPN means that Windows users with active split tunneling potentially expose their browsing history to third parties, breaking a core promise of VPN products.
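The leak matters because plain DNS is unencrypted. As a minimal sketch (assuming Python 3; the transaction ID and resolver address below are arbitrary examples, not anything from ExpressVPN’s software), the wire format of a DNS query shows that the domain being looked up travels in cleartext, so whichever server receives it — such as an ISP resolver — can log it:

```python
import socket
import struct

def build_dns_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (QTYPE 1 = A record, QCLASS 1 = IN).

    Note the domain labels are embedded as-is: there is no encryption,
    which is why a leaked query reveals browsing activity.
    """
    header = struct.pack(
        ">HHHHHH",
        0x1234,  # transaction ID (arbitrary for illustration)
        0x0100,  # flags: standard query, recursion desired
        1, 0, 0, 0,  # 1 question; no answer/authority/additional records
    )
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def send_query(domain: str, resolver: str, timeout: float = 2.0) -> bytes:
    """Send the query over UDP port 53 to a specific resolver; return the raw reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_dns_query(domain), (resolver, 53))
        reply, _ = s.recvfrom(512)
        return reply
```

Running `send_query("example.com", "8.8.8.8")` sends exactly such a packet; the point of the leak is that, with split tunneling active, queries like this went to the OS-configured resolver rather than through the VPN tunnel.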



Subject: TikTok’s attempt to stall DMA antitrust rules rejected by EU court
Source: The Verge

TikTok’s attempt to stall the EU from designating it as a “gatekeeper” — companies with platforms powerful enough that they must follow strict Digital Markets Act (DMA) antitrust rules — has been rejected by a court. Bloomberg reports that the EU’s General Court has dismissed owner ByteDance’s request for an interim measure that would effectively buy TikTok some more time to implement the regulations, finding that the company “failed to demonstrate the urgency” required.
Although TikTok is appealing the EU’s gatekeeper designation, the bloc has not yet reached a final decision on the appeal. ByteDance asked for an interim measure in December so it would not have to comply with the regulations before the EU decided the outcome of the appeal. Today’s decision is a rejection of that request, meaning that TikTok will have to comply, at least temporarily, with DMA rules that take effect in March, even if the EU later upholds the appeal.

“ByteDance has not shown that there is a real risk of disclosure of confidential information or that such a risk would give rise to serious and irreparable harm,” judges said.

TikTok’s status as a gatekeeper means the platform will join other large tech companies like Apple, Meta, Amazon, and Google in making a series of changes for EU users, including allowing third-party businesses access to their services and requiring consent for personalized advertising. It also means that TikTok, like every other gatekeeper company, faces fines of millions of euros if it ever breaks DMA rules.


Subject: Inside the Underground Site Where ‘Neural Networks’ Churn Out Fake IDs
Source: 404 Media

An underground website called OnlyFake is claiming to use “neural networks” to generate realistic-looking photos of fake IDs for just $15, radically disrupting the marketplace for fake identities and cybersecurity more generally. This technology, which 404 Media has verified produces fake IDs nearly instantly, could streamline everything from bank fraud to laundering stolen funds.

In our own tests, OnlyFake created a highly convincing California driver’s license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted. The photo even gives the appearance that the ID card is lying on a fluffy carpet, as if someone has placed it on the floor and snapped a picture, which many sites require for verification purposes. 404 Media then used another fake ID generated by this site to successfully step through the identity verification process on OKX, a cryptocurrency exchange that has recently appeared in multiple court records because of its use by criminals.

[more info for registered/paid subscribers … I am not /pmw1]

Subject: Drone surveillance case in Michigan Supreme Court tests privacy rights

A case in the Michigan Supreme Court is raising new questions about the right to privacy: Can a township’s unmanned drone surveil a homeowner’s property without violating the Fourth Amendment?

But the township flew the drone without getting a warrant first – and Mr. Maxon and his legal team say that infringed his constitutional right against unreasonable searches.

Mr. Maxon said the township had also used drones to enforce zoning rules on others in the community.

The township has said the drone images were of areas beyond the Maxons’ home and hence not protected by the Constitution’s Fourth Amendment, which guarantees “the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches” without probable cause.

The Maxons’ attorney rejected that logic.

Subject: I Stopped Using Passwords. It’s Great—and a Total Mess
Source: WIRED

Passkeys are here to replace passwords. When they work, it’s a seamless vision of the future. But don’t ditch your old logins just yet.

I use a password manager to generate and store all the login details for the 337 accounts I’ve made—from pizza delivery and airlines to social media and online shopping—over more than a decade online. However, using a password manager compulsively and having hundreds of strong passwords likely puts me in the minority…

For the past month, I’ve been converting as many of my accounts as possible—around a dozen for now—to use passkeys and start the move away from the password for good. Spoiler: When passkeys work seamlessly, it’s a glimpse of a more secure future for millions, if not billions, of people, and a reinvention of how we sign in to websites and services. But getting there for every account across the internet is still likely to prove a minefield and take some time.

Subject: 5 Steps to Improve Your Security Posture in Microsoft Teams
Source: Bleeping Computer [Sponsored content by Adaptive Shield]

The cybersecurity risks of SaaS chat apps, such as Microsoft Teams or Slack, often go underestimated. Employees feel secure when communicating on apps that are connected to their corporate network. It’s exactly this misplaced trust in intra-organizational messaging that opens the door to sophisticated attacks by criminal threat actors. By contacting employees who are off guard in SaaS chat apps, threat actors can conduct phishing campaigns, launch malware attacks, and employ sophisticated social engineering tactics.

These sophisticated tactics make it challenging for security teams to detect threats. Employees also lack education when it comes to cybersecurity awareness around messaging apps, as cyber training mainly focuses on phishing via email.

Microsoft Teams is susceptible to a growing number of incidents, as its massive user base makes it an attractive target for cybercriminals.

In the most recently reported case, AT&T Cybersecurity discovered phishing conducted against its Managed Detection and Response (MDR) customers over Microsoft Teams in a DarkGate malware attack.

This article will shed light on the sources of this attack, draw parallels with previously identified vulnerabilities, and provide actionable remediation steps to fortify your organization against threats of this nature.



Subject: Your AI Girlfriend Is a Data-Harvesting Horror Show
Source: Gizmodo

Lonely on Valentine’s Day? AI can help. At least, that’s what a number of companies hawking “romantic” chatbots will tell you. But as your robot love story unfolds, there’s a tradeoff you may not realize you’re making. According to a new study from Mozilla’s *Privacy Not Included project, AI girlfriends and boyfriends harvest shockingly personal information, and almost all of them sell or share the data they collect. “To be perfectly blunt, AI girlfriends and boyfriends are not your friends,” said Misha Rykov, a Mozilla Researcher, in a press statement. “Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

One of the more striking findings came when Mozilla counted the trackers in these apps, little bits of code that collect data and share them with other companies for advertising and other purposes. Mozilla found the AI girlfriend apps used an average of 2,663 trackers per minute, though that number was driven up by Romantic AI, which called a whopping 24,354 trackers in just one minute of using the app.

NB see also:

Not only does Google explicitly warn users not to give Gemini any sensitive information they wouldn’t want a human reviewer to read, but Google is also retaining many of your questions to help make their tools better. In fact, everything you tell Gemini might be kept by the company for up to three years—even if you delete all your information from the app.

… Google explains in the latest version of Gemini’s privacy policy.

“We now have the ability to mine the dataset for ourselves, as well as to partner with other groups,” Wojcicki said in an interview with Wired. “It’s a real resource that we could apply to a number of different organizations for their own drug discovery.”

Subject: Gmail could start rejecting suspicious emails even before they reach your inbox
Source: Android Central

What you need to know

  • New email sender guidelines are set to reduce the risk of phishing.
  • The changes are set to impact bulk emails being sent to personal Gmail accounts.
  • Senders of 5,000 emails or more will be required to authenticate their outgoing messages.

Significantly, these new rules will only impact bulk emails sent to personal Gmail accounts, not messages sent to Google Workspace accounts. A bulk sender is “any email sender that sends 5,000 messages or more to personal Gmail accounts within a 24-hour period.”
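Under the guidelines, bulk senders must authenticate their mail with SPF and DKIM and publish a DMARC policy (at minimum p=none). As a rough illustration — assuming you have already fetched a domain’s SPF and DMARC TXT records, and noting that real validation is far more involved than string inspection — a sketch of the headline check might look like:

```python
def meets_bulk_sender_basics(spf_txt, dmarc_txt):
    """Rough check of fetched TXT record strings against Google's headline
    bulk-sender requirements: an SPF record plus a DMARC record carrying
    a policy tag (at minimum p=none). Illustrative only; it ignores DKIM,
    alignment, one-click unsubscribe, and spam-rate requirements.
    """
    has_spf = bool(spf_txt) and spf_txt.strip().startswith("v=spf1")
    has_dmarc = bool(dmarc_txt) and dmarc_txt.strip().startswith("v=DMARC1")
    tags = {}
    if has_dmarc:
        # DMARC records are semicolon-separated tag=value pairs.
        for part in dmarc_txt.split(";"):
            if "=" in part:
                key, value = part.split("=", 1)
                tags[key.strip()] = value.strip()
    return has_spf and has_dmarc and "p" in tags
```

For example, a domain publishing `v=spf1 include:_spf.example.net ~all` and `v=DMARC1; p=none; rua=mailto:reports@example.com` (hypothetical records) would pass this basic check, while a domain missing either record would not.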


Subject: SEC chair: Existing financial law can be applied to AI regulatory debate
Source: Nextgov/FCW

“We at the SEC take investor education very seriously,” he said. “The robo-advising platforms and brokerage platforms affirmatively put educational material on their site. What their obligation is, is to make sure that when they do that, it’s accurate and not misleading.”

He clarified that it is likely not within the SEC’s purview, drawing on existing securities law, to require financial firms to offer education if they decide not to. But should investment firms use AI and machine learning models to aid in certain decisions, Gensler said they should abide by basic disclosures with clients.

“Investor protection requires that the humans who deploy a model …put in place appropriate guardrails,” he said. “If you deploy a model…you’ve got to make sure that it complies with the law.”

Despite the relatively emerging nature of AI in certain industries, Gensler maintained that disclosure obligations regarding the use of AI systems still broadly apply. This encompasses disclosing risks and benefits.

One potential conflict comes in the form of predictive data analytics built on certain algorithms being used to optimize outcomes either for the investor or for a given broker. Gensler said the problem would lie in an AI system prioritizing the platform or brokerage ahead of the customer.

Subject: Prudential Financial breached in data theft cyberattack
Source: BleepingComputer

[h/t Sabrina] Prudential Financial has disclosed that its network was breached last week, with the attackers stealing employee and contractor data before being blocked from compromised systems one day later.

This leading global financial services Fortune 500 company manages roughly $1.4 trillion in assets, and it provides insurance, retirement planning, as well as wealth and investment management services to over 50 million customers across the United States, Asia, Europe, and Latin America.

As the second-largest life insurance company in the U.S., it employs 40,000 people worldwide and reported revenues of more than $50 billion in 2023.

The personal information of over 320,000 Prudential customers whose data had been handled by third-party vendor Pension Benefit Information (PBI) was exposed in May 2023 after the Clop cybercrime gang breached PBI’s MOVEit Transfer file sharing platform.

“Our investigation determined that the following types of information related to you were present in the server at the time of the event: name, address, date of birth, phone number, and Social Security number,” PBI said at the time.




Subject: Email extortion data breach affects 2.4 million patients; FBI seeks victims
Source: Becker’s Health IT

Nearly 2.4 million patients of Oklahoma City-based Integris Health were caught up in a data breach in which the alleged hackers sent extortion emails directly to some of them. The health system reported Feb. 6 that it determined an “unauthorized party” had accessed or stolen patient data Nov. 28. Integris also said it learned Dec. 24 that a group claiming responsibility for the hack was reaching out to patients.

“We encourage anyone receiving such communications to NOT respond or contact the sender, or follow any of the instructions, including accessing any links,” the statement said. Integris told HHS that 2.39 million individuals were affected by the breach.

Subject: ‘AI Washing’ Is a Risk Amid Wall Street’s Craze, SEC Chief Gensler Says
Source: Business Insider — Markets

  • SEC chief Gary Gensler said companies must be clear about how they’re using AI.
  • Gensler highlighted the potential risks of AI to financial stability, warning of systemic risks from the tech.
  • His warnings come as Wall Street’s craze for AI investments shows no sign of stopping.

Companies looking to cash in on the hype around artificial intelligence must avoid luring investors with grandiose or false claims about the impact of the nascent technology on their business, Securities and Exchange Commission chairman Gary Gensler said this week. Gensler said corporations are obligated to disclose operational, legal, and competitive risks associated with AI, along with the relevance of the technology for their business.

“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims from the Professor Hills of the day,” Gensler went on to say, referencing the movie “The Music Man”, in which a grifter comes to town and swindles the public. “If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.”

Setting aside the market frenzy, Gensler also warned about the potential risks of AI to financial stability, emphasizing “systemic” risks in a future financial crisis. His rationale underscores the interconnected nature of financial markets, which can be further exacerbated by the widespread use of the same underlying AI models.

Gensler has reiterated his concerns about AI risks on multiple occasions. At the New York Times’ DealBook conference last year, he warned of companies’ growing reliance on a few dominant AI models, and he echoed this sentiment again in a December interview with The Messenger.

Subject: What is the Digital Markets Act, and what does it mean for North American consumers?
Source: Android Central

The Digital Markets Act is legislation by the European Union that seeks to regulate digital gatekeepers—large online platforms that have a substantial impact on the market. These gatekeepers, often synonymous with major tech corporations, possess significant market power and control key access points for businesses and consumers.

The DMA was approved in late 2022 and went into effect, for the most part, in early 2023. It has provisions that regulate some hardware, but mostly, it is designed to make sure software platforms are treating everyone — including competitors — in a fair way. The software ecosystem and platform(s) that make modern smartphones appealing are going to change, hopefully, for the better.

There are four key provisions of the DMA. You can read the full set of rules and regulations here, but for consumers, these stand out.

Subject: OpenAI blocks state-sponsored hackers from using ChatGPT
Source: BleepingComputer

OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT. The AI research organization took action against specific accounts associated with the hacking groups that were misusing its large language model (LLM) services for malicious purposes after receiving key information from Microsoft’s Threat Intelligence team.

In a separate report, Microsoft provides more details on how and why these advanced threat actors used ChatGPT.

Activity associated with the following threat groups was terminated on the platform:

  1. Forest Blizzard (Strontium) [Russia]: Utilized ChatGPT to conduct research into satellite and radar technologies pertinent to military operations and to optimize its cyber operations with scripting enhancements.
  2. Emerald Sleet (Thallium) [North Korea]: Leveraged ChatGPT for researching North Korea and generating spear-phishing content, alongside understanding vulnerabilities (like CVE-2022-30190 “Follina”) and troubleshooting web technologies.
  3. Crimson Sandstorm (Curium) [Iran]: Engaged with ChatGPT for social engineering assistance, error troubleshooting, .NET development, and developing evasion techniques.
  4. Charcoal Typhoon (Chromium) [China]: Interacted with ChatGPT to assist in tooling development, scripting, comprehending cybersecurity tools, and generating social engineering content.
  5. Salmon Typhoon (Sodium) [China]: Employed LLMs for exploratory inquiries on a wide range of topics, including sensitive information, high-profile individuals, and cybersecurity, to expand their intelligence-gathering tools and evaluate the potential of new technologies for information sourcing.

“Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards,” the company added.


Posted in: AI, Cybercrime, Cybersecurity, Email Security, Federal Legislative Research, Financial System, Firewalls, Healthcare, Legal Research, Privacy, Social Media, Spyware