Subject: Privacy Guides – Search Engines
Source: Privacy Guides
https://www.bespacific.com/privacy-guides-search-engines/
Recommended Search Engines
Copyright © 2023 beSpacific, All rights reserved.
Source: Becker’s Hospital Review
Internet-connected medical devices that feed patient data into EHRs are expected to explode in use in the coming years, presenting a cybersecurity challenge to hospitals and health systems. These devices carry the greatest risks, according to an analysis by cybersecurity firm Armis published April 17.
To protect its Confidential Computing cloud infrastructure and gain critical insights, Google leans on its relationships with chipmakers.
Google Cloud and Intel released results today from a nine-month audit of Intel’s new hardware security product: Trust Domain Extensions (TDX). The analysis revealed 10 confirmed vulnerabilities, including two that researchers at both companies flagged as significant, as well as five findings that led to proactive changes to further harden TDX’s defenses. The review and fixes were all completed before the production of Intel’s fourth-generation Intel Xeon processors, known as “Sapphire Rapids,” which incorporate TDX.
The project is part of Google Cloud’s Confidential Computing initiative, a set of technical capabilities to keep customers’ data encrypted at all times and ensure that they have full access controls.
After years of scrambling to remediate the security fallout from design flaws in the processor feature known as “speculative execution,” chipmakers have invested more in advanced security testing. For TDX, Intel’s in-house hackers conducted their own audits, and the company also put TDX through its security paces by inviting researchers to vet the hardware as part of Intel’s bug bounty program.
Additionally, as part of the collaboration, Google worked with Intel to open source the TDX firmware, low-level code that coordinates between hardware and software. This way, Google Cloud customers and Intel TDX users around the world will have more insight into the product.
Source: EFF Deeplinks
Most troubling, this bill is a template for legislators to try to force internet companies to report their users to law enforcement for other unfavorable conduct or speech… https://www.eff.org/deeplinks
Source: CNN Politics
(CNN) Domestic violent extremists have in the last year increasingly shared tactics with each other on using guns to attack electric power stations in a move that likely escalates the threat to US critical infrastructure, according to a Department of Homeland Security bulletin obtained by CNN.
Following multiple high-profile attacks on US power substations last year, extremists have stepped up sharing of “online messaging and operational guidance promoting attacks against this sector,” says the DHS bulletin, which was distributed to US critical infrastructure operators on Monday.
The information and tactics shared by extremists online include “detailed diagrams, simplified tips for enhancing operational security, and procedures for disabling key components of substations and transformers,” DHS warned.
“Electric utilities have always taken substation security seriously, primarily because of safety,” said Patrick C. Miller, the CEO of Oregon-based Ampere Industrial Security, which works with utilities to boost their security.
“What hasn’t been considered as much, until recently, has been firearms, ballistics, drones and other threats from outside of the substation fence,” Miller told CNN, adding that it “takes some planning and investment to move to a more secure posture.”
- Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias and they plan to use it.
- In a joint announcement from the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.
Still, the regulators acknowledged there’s also room for Congress to act.
The warning comes as Congress is grappling with how it should take action to protect Americans from potential risks stemming from AI. The urgency behind that push has increased as the technology has rapidly advanced with tools that are readily accessible to consumers, like OpenAI’s chatbot ChatGPT. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced he’s working toward a broad framework for AI legislation, indicating it’s an important priority in Congress.
For example, the CFPB is looking into so-called digital redlining, or housing discrimination that results from bias in lending or home-valuation algorithms, according to Rohit Chopra, the agency’s director. CFPB also plans to propose rules to ensure AI valuation models for residential real estate have safeguards against discrimination.
Khan added the FTC stands ready to hold companies accountable for their claims of what their AI technology can do, adding enforcing against deceptive marketing has long been part of the agency’s expertise.
Source: The Verge
The phrase “As an AI language model” is a common disclaimer used by ChatGPT and reveals where AI is being used to generate spam, fake reviews, and other forms of low-grade text.
Search for the phrase on Twitter, for example, and you’ll find countless examples of malfunctioning spambots. (Though it’s worth noting that the most recent results tend to be jokes, with growing awareness of the phrase turning it into something of a meme.)
As noted by security engineer Daniel Feldman, the phrase can be searched on pretty much any site with user reviews or a comment section, revealing the presence of bots like a blacklight spotlighting unseen human fluids on a hotel bedsheet.
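A filter like the search Feldman describes can be sketched in a few lines. The phrase list and sample reviews below are illustrative assumptions, not part of any real moderation pipeline:

```python
# Minimal sketch: flag user-submitted text containing a telltale
# AI-disclaimer phrase. Phrase list and sample data are illustrative.
TELLTALE_PHRASES = [
    "as an ai language model",
    "as a large language model",
]

def looks_like_bot_text(text: str) -> bool:
    """Return True if the text contains a known AI-disclaimer phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

reviews = [
    "Great hotel, very clean rooms.",
    "As an AI language model, I cannot provide a personal opinion, "
    "but this product has many features.",
]
flagged = [r for r in reviews if looks_like_bot_text(r)]
print(len(flagged))  # 1
```

A simple substring match like this catches only the laziest spam; as the article notes, growing awareness of the phrase already makes many hits jokes rather than bots.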
A new two-factor authentication tool from Google isn’t end-to-end encrypted, which could expose users to significant security risks, a test by security researchers found.
Google’s Authenticator app provides unique codes that website logins may ask for as a second layer of security on top of passwords. On Monday, Google announced a long-awaited feature, which lets you sync Authenticator to a Google account and use it across multiple devices. That’s great news, because in the past, you could end up locked out of your account if you lost the phone with the authentication app installed.
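The codes such apps generate are standard time-based one-time passwords (TOTP, RFC 6238), derived from a shared secret and the current time. A minimal sketch, using the RFC's published test secret rather than any real credential:

```python
# Minimal RFC 6238 TOTP sketch: derive a 6-digit code from a base32
# shared secret and the current Unix time, as authenticator apps do.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6,
         now=None) -> str:
    """Compute a time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```

The shared secret is the sensitive part: anyone who obtains it can generate the same codes forever, which is why how Google syncs those secrets matters.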
But when app developers and security researchers at the software company Mysk took a look under the hood, they found the underlying data isn’t end-to-end encrypted.
You can use Google Authenticator without tying it to your Google account or syncing it across devices, which avoids this issue. Unfortunately, that means it might be best to avoid a useful feature that users spent years clamoring for. “The bottom line: although syncing 2FA secrets across devices is convenient, it comes at the expense of your privacy,” Mysk wrote. “We recommend using the app without the new syncing feature for now.”
“If Google servers were compromised, secrets would leak,” Mysk said. Adding insult to injury, QR codes involved with setting up two-factor authentication also contain the name of the account or service (Amazon or Twitter, for example). “The attacker can also know which accounts you have. This is particularly risky if you’re an activist and run other Twitter accounts anonymously.”
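The account names leak because the standard 2FA enrollment QR code encodes an otpauth:// URI whose label, issuer, and secret are all plaintext. A small sketch of what such a URI carries, using a made-up example rather than real credentials:

```python
# Illustrative sketch: parse a 2FA enrollment otpauth:// URI to show
# that the account label, issuer, and secret are stored in plaintext.
# The URI below is a fabricated example, not real credentials.
from urllib.parse import parse_qs, unquote, urlparse

def parse_otpauth(uri: str) -> dict:
    """Extract the account label, issuer, and secret from an otpauth URI."""
    parsed = urlparse(uri)
    params = parse_qs(parsed.query)
    return {
        "type": parsed.netloc,                     # "totp" or "hotp"
        "label": unquote(parsed.path.lstrip("/")), # names the account
        "issuer": params.get("issuer", [""])[0],
        "secret": params.get("secret", [""])[0],
    }

uri = ("otpauth://totp/Twitter:activist%40example.com"
       "?secret=JBSWY3DPEHPK3PXP&issuer=Twitter")
info = parse_otpauth(uri)
print(info["label"])   # Twitter:activist@example.com
print(info["issuer"])  # Twitter
```

If these URIs are synced to a server without end-to-end encryption, whoever can read the server's data can read both the secrets and the account names attached to them.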
“End-to-End Encryption (E2EE) is a powerful feature that provides extra protections, but at the cost of enabling users to get locked out of their own data without recovery,” said Christiaan Brand, group product manager at Google. “To ensure that we’re offering a full set of options for users, we have also begun rolling out optional E2EE in some of our products, and we plan to offer E2EE for Google Authenticator in the future.” Brand posted a Twitter thread with more details.
The latest step in post-quantum cryptography guidance is helping organizations identify where current public-key algorithms will need to be replaced, as the National Institute of Standards and Technology continues its push to fortify U.S. digital networks ahead of the maturity of quantum computing.
A new draft document previews—and solicits public commentary on—NIST’s current post-quantum cryptography guidance.
Current goals outlined in the working draft include helping entities locate where and how public key algorithms are utilized in encryption schemes, developing a strategy to migrate these algorithms to quantum-resilient substitutes and performing interoperability and performance testing.
“Organizations are often unaware of the breadth and scope of application and functional dependencies on public-key cryptography within their products, services and operational environments,” the draft document reads.
A major theme of the document is to help organizations understand the security architecture in their networks so that they firmly grasp where post-quantum security measures will need to be implemented and where to prioritize modernization. NIST also aims to compile a definitive inventory of software vendors to support post-quantum cryptography migration.
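The inventory-and-triage step the draft describes can be sketched as a simple classification pass over discovered key usages. The algorithm lists and findings below are illustrative assumptions, not NIST's:

```python
# Hedged sketch of a cryptographic inventory triage: bucket discovered
# public-key usages by whether the algorithm is quantum-vulnerable.
# Algorithm lists and findings are illustrative, not authoritative.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}
QUANTUM_RESILIENT = {"CRYSTALS-KYBER", "CRYSTALS-DILITHIUM",
                     "FALCON", "SPHINCS+"}  # NIST's 2022 PQC selections

def triage_inventory(findings):
    """Bucket discovered key usages by migration priority."""
    report = {"migrate": [], "keep": [], "unknown": []}
    for f in findings:
        alg = f["algorithm"].upper()
        if alg in QUANTUM_VULNERABLE:
            report["migrate"].append(f["location"])
        elif alg in QUANTUM_RESILIENT:
            report["keep"].append(f["location"])
        else:
            report["unknown"].append(f["location"])
    return report

findings = [
    {"location": "web-tls-cert", "algorithm": "RSA"},
    {"location": "vpn-key-exchange", "algorithm": "ECDH"},
    {"location": "code-signing-pilot", "algorithm": "CRYSTALS-Dilithium"},
]
print(triage_inventory(findings)["migrate"])  # ['web-tls-cert', 'vpn-key-exchange']
```

In practice the hard part is the discovery itself, since, as the draft notes, organizations rarely know everywhere public-key cryptography is embedded in their products and environments.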
The agency then announced partnerships with 12 private sector companies to help develop quantum-resilient algorithms and implement them nationwide, including Amazon Web Services and Microsoft.
Source: Help Net Security
The use of artificial intelligence can result in the production of deepfakes that are becoming more realistic and challenging to differentiate from authentic content, according to Regula.
Companies view fabricated biometric artifacts such as deepfake videos or voices as genuine menaces, with about 80% expressing concern. In the United States, this apprehension appears to be the highest, with approximately 91% of organizations believing it to be an escalating danger.
AI-generated deepfakes – The increasing accessibility of AI technology poses a new threat: it may become easier for individuals with malicious intent to create deepfakes, amplifying the threat to businesses and individuals alike.
“AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.,” says Ihar Kliashchou, CTO at Regula.
“Currently, it is difficult or even impossible to create deepfakes that display expected dynamic behavior, so verifying the liveliness of an object can give you an edge over fraudsters. In addition, cross-validating user information with biometric checks and recent transaction checks can help ensure a thorough verification process,” Kliashchou continued.
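The layered verification Kliashchou outlines can be sketched as an all-checks-must-pass policy. The check names, threshold, and session fields below are assumptions for illustration, not Regula's product:

```python
# Illustrative sketch of layered identity verification: no single
# signal decides; face liveness, document liveness, and a recent-
# transaction cross-check must all pass. Names and thresholds are
# made up for illustration.

def verify_identity(session, checks):
    """Pass only if every anti-fraud check succeeds."""
    return all(check(session) for check in checks)

checks = [
    lambda s: s.get("face_liveness_score", 0.0) >= 0.9,
    lambda s: s.get("document_liveness_ok", False),
    lambda s: s.get("recent_transaction_match", False),
]

session = {
    "face_liveness_score": 0.95,
    "document_liveness_ok": True,
    "recent_transaction_match": False,  # cross-check fails
}
print(verify_identity(session, checks))  # False
```

The point of the layering is that a deepfake good enough to fool one detector still has to clear dynamic liveness checks and cross-validation against independent records.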
At the same time, advanced identity fraud is not only about AI-generated fakes. 46% of organizations globally experienced synthetic identity fraud in the past year. Also known as “Frankenstein” identity fraud, this is a scam in which criminals combine real and fake ID information to create entirely new, artificial identities, typically used to open bank accounts or make fraudulent purchases.
Unsurprisingly, the banking sector is the most vulnerable to this kind of identity fraud: 92% of companies in the industry perceive synthetic fraud as a real threat, and 49% have recently encountered the scam.
To prevent the majority of current identity fraud, companies should combine document verification with comprehensive biometric checks.
Source: Ars Technica
“The European Commission will require 19 large online platforms and search engines to comply with new online content regulations starting on August 25, European officials said. The EC specified which companies must comply with the rules for the first time, announcing today that it “adopted the first designation decisions under the Digital Services Act.” Five of the 19 platforms are run by Google, specifically YouTube, Google Search, the Google Play app and digital media store, Google Maps, and Google Shopping. Meta-owned Facebook and Instagram are on the list, as are Amazon’s online store, Apple’s App Store, Microsoft’s Bing search engine, TikTok, Twitter, and Wikipedia. These platforms were designated because they each reported having over 45 million active users in the EU …