Pete Recommends – Weekly highlights on cyber security issues, April 30, 2023

Subject: Privacy Guides – Search Engines
Source: Privacy Guides
https://www.bespacific.com/privacy-guides-search-engines/
Recommended Search Engines

“Use a search engine that doesn’t build an advertising profile based on your searches. The recommendations here are based on the merits of each service’s privacy policy. There is no guarantee that these privacy policies are honored. Consider using a VPN or Tor if your threat model requires hiding your IP address from the search provider.”
[Note – Privacy Guides is a non-profit, socially motivated website that provides information for protecting your data security and privacy.]
Abstracted from beSpacific
Copyright © 2023 beSpacific, All rights reserved.

Subject: 6 riskiest medical devices for cybersecurity
Source: Becker’s Hospital Review
https://www.beckershospitalreview.com/cybersecurity/6-riskiest-medical-devices-for-cybersecurity.html

Internet-connected medical devices that feed patient data into EHRs are expected to explode in use in the coming years, presenting a cybersecurity challenge to hospitals and health systems. Six types of devices carry the greatest risk, according to an analysis by cybersecurity firm Armis published April 17.


Subject: Intel Let Google Cloud Hack Its New Secure Chips and Found 10 Bugs
Source: WIRED
https://www.wired.com/story/intel-google-cloud-chip-security/

To protect its Confidential Computing cloud infrastructure and gain critical insights, Google leans on its relationships with chipmakers.

Google Cloud and Intel released results today from a nine-month audit of Intel’s new hardware security product: Trust Domain Extensions (TDX). The analysis revealed 10 confirmed vulnerabilities, including two that researchers at both companies flagged as significant, as well as five findings that led to proactive changes to further harden TDX’s defenses. The review and fixes were all completed before the production of Intel’s fourth-generation Intel Xeon processors, known as “Sapphire Rapids,” which incorporate TDX.

The project is part of Google Cloud’s Confidential Computing initiative, a set of technical capabilities to keep customers’ data encrypted at all times and ensure that they have full access controls.

After years of scrambling to remediate the security fallout from design flaws in the processor feature known as “speculative execution,” chipmakers have invested more in advanced security testing. For TDX, Intel’s in-house hackers conducted their own audits, and the company also put TDX through its security paces by inviting researchers to vet the hardware as part of Intel’s bug bounty program.

Additionally, as part of the collaboration, Google worked with Intel to open source the TDX firmware, low-level code that coordinates between hardware and software. This way, Google Cloud customers and Intel TDX users around the world will have more insight into the product.


Subject: Your Messaging Service Should Not Be a DEA Informant
Source: EFF
https://www.eff.org/deeplinks/2023/04/your-messaging-service-should-not-be-dea-informant
A new U.S. Senate bill would require private messaging services, social media companies, and even cloud providers to report their users to the Drug Enforcement Administration (DEA) if they find out about certain illegal drug sales. This would lead to inaccurate reports and turn messaging services into government informants.

The bill, named the Cooper Davis Act, is likely to result in a host of inaccurate reports and in companies sweeping up innocent conversations, including discussions about past drug use or treatment. While explicitly not required, it may also give internet companies an incentive to conduct dragnet searches of private messages to find protected speech that is merely indicative of illegal behavior.
Most troubling, this bill is a template for legislators to try to force internet companies to report their users to law enforcement for other unfavorable conduct or speech….

Filed: https://www.eff.org/deeplinks

Subject: Violent extremists are increasingly sharing tactics for attacking power stations, DHS warns
Source: CNN Politics
https://www.cnn.com/2023/04/24/politics/dhs-violent-extremists-power-stations/index.html

(CNN) Domestic violent extremists have, over the last year, increasingly shared tactics with each other for using guns to attack electric power stations, a move that likely escalates the threat to US critical infrastructure, according to a Department of Homeland Security bulletin obtained by CNN.

Following multiple high-profile attacks on US power substations last year, extremists have stepped up sharing of “online messaging and operational guidance promoting attacks against this sector,” says the DHS bulletin, which was distributed to US critical infrastructure operators on Monday.

The information and tactics shared by extremists online include “detailed diagrams, simplified tips for enhancing operational security, and procedures for disabling key components of substations and transformers,” DHS warned.

“Electric utilities have always taken substation security seriously, primarily because of safety,” said Patrick C. Miller, the CEO of Oregon-based Ampere Industrial Security, which works with utilities to boost their security.

“What hasn’t been considered as much, until recently, has been firearms, ballistics, drones and other threats from outside of the substation fence,” Miller told CNN, adding that it “takes some planning and investment to move to a more secure posture.”


Subject: U.S. regulators warn they already have the power to go after A.I. bias
Source: CNBC
https://www.cnbc.com/2023/04/25/us-regulators-warn-they-already-have-the-power-to-go-after-ai-bias.html

  • Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias and they plan to use it.
  • In a joint announcement from the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.

Still, the regulators acknowledged there’s also room for Congress to act.

The warning comes as Congress is grappling with how it should take action to protect Americans from potential risks stemming from AI. The urgency behind that push has increased as the technology has rapidly advanced with tools that are readily accessible to consumers, like OpenAI’s chatbot ChatGPT. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced he’s working toward a broad framework for AI legislation, indicating it’s an important priority in Congress.

For example, the CFPB is looking into so-called digital redlining, or housing discrimination that results from bias in lending or home-valuation algorithms, according to Rohit Chopra, the agency’s director. CFPB also plans to propose rules to ensure AI valuation models for residential real estate have safeguards against discrimination.

FTC Chair Lina Khan added that the agency stands ready to hold companies accountable for claims about what their AI technology can do, noting that enforcement against deceptive marketing has long been part of the FTC’s expertise.

Filed: https://www.cnbc.com/technology/


Subject: ‘As an AI language model’: the phrase that shows how AI is polluting the web
Source: The Verge
https://www.theverge.com/2023/4/25/23697218/ai-generated-spam-fake-user-reviews-as-an-ai-language-model

The phrase is a common disclaimer used by ChatGPT and reveals where AI is being used to generate spam, fake reviews, and other forms of low-grade text.

A big worry about the rise of AI language models is that the internet will soon be subsumed in a tidal wave of automated spam. So far, these predictions have not yet come to pass (if they prove true at all), but we are seeing early signs that tools like ChatGPT are being used to power bots, generate fake reviews, and stuff the web with low-grade textual filler.

If you want proof, try searching Google or Twitter for the phrase “as an AI language model.” When talking to OpenAI’s ChatGPT, the system frequently uses this expression as a disclaimer, usually when it’s asked to generate banned content or give an opinion on something subjective and particularly human. Now, though, “as an AI language model” has become a shibboleth for machine learning spam, revealing where people have set up automated bots or copied and pasted AI content without paying attention to the output.

Search for the phrase on Twitter, for example, and you’ll find countless examples of malfunctioning spambots. (Though it’s worth noting that the most recent results tend to be jokes, with growing awareness of the phrase turning it into something of a meme.)

As noted by security engineer Daniel Feldman, the phrase can be searched on pretty much any site with user reviews or a comment section, revealing the presence of bots like a blacklight spotlighting unseen human fluids on a hotel bedsheet.
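The telltale phrase is easy to turn into a crude filter of your own. As a hedged illustration (this is not a method The Verge describes; the helper name and sample strings below are hypothetical), a few lines of Python can flag user reviews or comments that contain the phrase verbatim:

```python
import re

# The boilerplate disclaimer that ChatGPT-style models often emit verbatim.
TELLTALE = re.compile(r"\bas an ai language model\b", re.IGNORECASE)

def flag_suspect_reviews(reviews):
    """Return the reviews that contain the telltale phrase (likely unedited AI output)."""
    return [text for text in reviews if TELLTALE.search(text)]

sample = [
    "Great blender, arrived quickly and works as described.",
    "As an AI language model, I cannot form a personal opinion of this product.",
]
print(flag_suspect_reviews(sample))  # flags only the second review
```

A match is only a heuristic: as the article notes, many recent hits on Twitter are jokes, so a flagged string warrants human review rather than automatic removal.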

Filed: https://www.theverge.com/web

RSS: https://www.theverge.com/rss/web/index.xml


Subject: Google’s New Authenticator Isn’t End-to-End Encrypted: Test
Source: Gizmodo
https://gizmodo.com/google-authenticator-two-factor-not-end-encrypted-1850377102

A new two-factor authentication tool from Google isn’t end-to-end encrypted, which could expose users to significant security risks, a test by security researchers found.

Google’s Authenticator app provides unique codes that website logins may ask for as a second layer of security on top of passwords. On Monday, Google announced a long-awaited feature, which lets you sync Authenticator to a Google account and use it across multiple devices. That’s great news, because in the past, you could end up locked out of your account if you lost the phone with the authentication app installed.

But when app developers and security researchers at the software company Mysk took a look under the hood, they found the underlying data isn’t end-to-end encrypted.

You can use Google Authenticator without tying it to your Google account or syncing it across devices, which avoids this issue. Unfortunately, that means it might be best to avoid a useful feature that users spent years clamoring for. “The bottom line: although syncing 2FA secrets across devices is convenient, it comes at the expense of your privacy,” Mysk wrote. “We recommend using the app without the new syncing feature for now.”

“If Google servers were compromised, secrets would leak,” Mysk said. Adding insult to injury, QR codes involved with setting up two-factor authentication also contain the name of the account or service (Amazon or Twitter, for example). “The attacker can also know which accounts you have. This is particularly risky if you’re an activist and run other Twitter accounts anonymously.”
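To see why synced, unencrypted 2FA data is so sensitive, it helps to look at what a provisioning QR code actually encodes. The sketch below is illustrative only and makes assumptions: it uses the third-party pyotp library and made-up account details, not Google’s actual sync format. The point it demonstrates is that the otpauth:// URI behind a setup QR code carries the shared secret plus the issuer and account name in the clear, so anyone who can read that data can both generate valid codes and see which services it protects.

```python
import pyotp  # third-party TOTP library, used here purely for illustration

# Hypothetical enrollment: the kind of data a 2FA provisioning QR code encodes.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret, name="activist@example.org", issuer="Twitter")

uri = totp.provisioning_uri()
print(uri)  # otpauth://totp/Twitter:activist%40example.org?secret=...&issuer=Twitter

# Anyone who obtains this URI (or an unencrypted synced copy of the secret) can:
parsed = pyotp.parse_uri(uri)
print(parsed.issuer, parsed.name)  # learn which account/service it protects
print(parsed.now())                # and mint valid one-time codes
```

That is the exposure Mysk describes: with end-to-end encryption, Google’s servers would hold only ciphertext and could reveal neither the secret nor the account label.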

“End-to-End Encryption (E2EE) is a powerful feature that provides extra protections, but at the cost of enabling users to get locked out of their own data without recovery,” said Christiaan Brand, group product manager at Google. “To ensure that we’re offering a full set of options for users, we have also begun rolling out optional E2EE in some of our products, and we plan to offer E2EE for Google Authenticator in the future.” Brand posted a Twitter thread with more details.


Subject: NIST releases draft post-quantum encryption document
Source: GCN
https://gcn.com/emerging-tech/2023/04/nist-releases-draft-post-quantum-encryption-document/385639/

The latest step in post-quantum cryptography guidance is helping organizations identify where current public-key algorithms will need to be replaced, as the National Institute of Standards and Technology continues its push to fortify U.S. digital networks ahead of the maturity of quantum computing.

A new draft document previews NIST’s current post-quantum cryptography guidance and solicits public comment on it.

Current goals outlined in the working draft include helping entities locate where and how public key algorithms are utilized in encryption schemes, developing a strategy to migrate these algorithms to quantum-resilient substitutes and performing interoperability and performance testing.

“Organizations are often unaware of the breadth and scope of application and functional dependencies on public-key cryptography within their products, services and operational environments,” the draft document reads.

A major theme of the document is to help organizations understand the security architecture in their networks so that they firmly grasp where post-quantum security measures will need to be implemented and where to prioritize modernization. NIST also aims to compile a definitive inventory of software vendors to support post-quantum cryptography migration.
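One concrete slice of that inventory work is simply enumerating which public-key algorithms externally facing services present. The sketch below is a minimal, hypothetical example, not a procedure from the NIST draft: the hostnames are placeholders, and starting from TLS certificates is my own assumption. It pulls a server’s certificate and reports whether its key is RSA or elliptic-curve, the classical algorithms the migration guidance targets.

```python
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, ed25519, rsa


def certificate_key_algorithm(host: str, port: int = 443) -> str:
    """Fetch a server's TLS certificate and name its public-key algorithm."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der_cert)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size}"        # classical, quantum-vulnerable
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECDSA ({key.curve.name})"  # classical, quantum-vulnerable
    if isinstance(key, ed25519.Ed25519PublicKey):
        return "Ed25519"                    # classical, quantum-vulnerable
    return type(key).__name__


# Placeholder inventory targets; a real inventory would also cover internal
# services, code signing, VPNs, and anything else relying on public-key crypto.
for host in ["example.com", "example.org"]:
    print(host, certificate_key_algorithm(host))
```

A scan like this only scratches the surface of what the draft asks for, but it illustrates the general pattern: discover where classical public-key algorithms are in use today so they can later be mapped to quantum-resilient replacements.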

The agency has also announced partnerships with 12 private sector companies, including Amazon Web Services and Microsoft, to help develop quantum-resilient algorithms and implement them nationwide.


Subject: The true numbers behind deepfake fraud
Source: Help Net Security
https://www.helpnetsecurity.com/2023/04/27/deepfake-identity-fraud/

The use of artificial intelligence can result in the production of deepfakes that are becoming more realistic and challenging to differentiate from authentic content, according to Regula.

Companies view fabricated biometric artifacts such as deepfake videos or voices as genuine menaces, with about 80% expressing concern. In the United States, this apprehension appears to be the highest, with approximately 91% of organizations believing it to be an escalating danger.

AI-generated deepfakes – The increasing accessibility of AI technology poses a new threat: it may become easier for individuals with malicious intent to create deepfakes, amplifying the threat to businesses and individuals alike.

“AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.,” says Ihar Kliashchou, CTO at Regula.

“Currently, it is difficult or even impossible to create deepfakes that display expected dynamic behavior, so verifying the liveliness of an object can give you an edge over fraudsters. In addition, cross-validating user information with biometric checks and recent transaction checks can help ensure a thorough verification process,” Kliashchou continued.

At the same time, advanced identity fraud is not only about AI-generated fakes. 46% of organizations globally experienced synthetic identity fraud in the past year. Also known as “Frankenstein” identity fraud, this is a scam in which criminals combine real and fake ID information to create entirely new, artificial identities, typically used to open bank accounts or make fraudulent purchases.

The banking sector is the most vulnerable to this kind of identity fraud: 92% of companies in the industry perceive synthetic fraud as a real threat, and 49% have recently encountered this scam.

Nowadays, to prevent the majority of current identity fraud, companies should enable document verification in addition to comprehensive biometric checks.


Subject: EU names 19 large tech platforms that must follow Europe’s new Internet rules
Source: Ars Technica
https://www.bespacific.com/eu-names-19-large-tech-platforms-that-must-follow-europes-new-internet-rules/

Ars Technica: “The European Commission will require 19 large online platforms and search engines to comply with new online content regulations starting on August 25, European officials said. The EC specified which companies must comply with the rules for the first time, announcing today that it “adopted the first designation decisions under the Digital Services Act.” Five of the 19 platforms are run by Google, specifically YouTube, Google Search, the Google Play app and digital media store, Google Maps, and Google Shopping. Meta-owned Facebook and Instagram are on the list, as are Amazon’s online store, Apple’s App Store, Microsoft’s Bing search engine, TikTok, Twitter, and Wikipedia. These platforms were designated because they each reported having over 45 million active users in the EU …

Posted in: Cybercrime, Cyberlaw, Cybersecurity, Government Resources, Internet Trends, Legal Research, Privacy, Search Engines, Search Strategies, Social Media