Subject: ‘This isn’t some random dude with a duffel bag’: To catch fentanyl traffickers, feds dig into crypto markets
Source: CNN Politics
Current and former law enforcement officials from across the federal government described to CNN the digital-first tactics the administration is developing to disrupt the fentanyl trade.
The Drug Enforcement Agency is investing in crypto-tracing software and identifying the cartels’ most sophisticated money launderers. The IRS has its most tech-savvy agents tracing payments on dark web forums. And a Department of Homeland Security investigations unit is leading a team of forensic specialists to pore over digital clues from stash houses near the Mexican border.
Federal agents have been tracking the cartels’ finances and supply routes for years, but DHS, in particular, has ramped up its surveillance efforts in recent weeks, multiple US officials told CNN.
Cryptocurrency has enhanced cartels’ ability to smuggle fentanyl into the US by allowing them to move vast sums of money instantaneously across a decentralized, digital banking system – all without having to deal with actual banks.
“The speed the criminals can muster, it’s very hard for law enforcement to keep up,” said one top DEA official, who spoke to CNN on condition of anonymity to describe the agency’s counter-narcotics work.
Cryptocurrency “eliminates the potential for hand-to-hand transactions,” said Koopman, whose team focuses on illicit financial flows, including dark-web purchases that are multiple steps removed from when the cartels get the drugs over the US border. “So now it’s … in a different world where some of the contacts might be online and we’re trying to facilitate or do transactions in a different manner.”
But digital money also leaves a trail that investigators can follow.
Source: Bleeping Computer
A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.When Zoom was used for training the sound classification algorithm, the prediction accuracy dropped to 93%, which is still dangerously high, and a record for that medium.
Such an attack severely affects the target’s data security, as it could leak people’s passwords, discussions, messages, or other sensitive information to malicious third parties.
Moreover, contrary to other side-channel attacks that require special conditions and are subject to data rate and distance limitations, acoustic attacks have become much simpler due to the abundance of microphone-bearing devices that can achieve high-quality audio captures.
This, combined with the rapid advancements in machine learning, makes sound-based side-channel attacks feasible and a lot more dangerous than previously anticipated.
Subject: White House announces cybersecurity plan to protect nation’s public schools
Aug. 7 (UPI) The White House on Monday announced a plan to strengthen cybersecurity in public schools amid a growing number of ransomware attacks targeting districts across the country.Administration officials — including first lady Jill Biden, Education Secretary Miguel Cardona, and Homeland Security Secretary Alejandro Mayorkas — will sit down at the White House Monday with a host of school administrators, teachers, big tech executives, and technology experts to discuss the growing need for better digital security in the nation’s schools.
“The commitments made today will help ensure the nation’s schools are in the best position to secure their networks to keep their students, educators, and employees safe,” the White House said.
Google has also released an updated K-12 Cybersecurity Guidebook to ensure the security of Google hardware and software applications used in schools.
Last month, the Federal Communications Commission announced a program under the Universal Service Fund that will provide up to $200 million over three years to strengthen cyber defenses in K-12 schools and libraries.
The National Cybersecurity Strategy seeks to make two fundamental changes in the government’s digital security protocols, including a plan to enlist more help from the private sector to mitigate cyber risks, and a program to boost federal incentives to companies that make long-term investments in cybersecurity.
Zoom’s Terms of Service say it can train its AI on your calls, videos, and other data. The company says you don’t have to worry about that.Zoom updated its Terms of Service on Monday after a controversy over the company’s policies about training AI on user data. Although the policy literally says that Zoom reserves the right to train AI on your calls without your explicit permission, the Terms of Service now include an additional line which says, essentially, we promise not to do that.
The company’s legal documents call your video, audio, and chat transcripts “Customer Content.” When you click through Zoom’s terms, you agree to give Zoom “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights” to use that Customer Content for “machine learning, artificial intelligence, training, testing,” and a variety of other product development purposes. The company reserves similar rights for “Service Generated Data,” which includes telemetry data, product usage data, diagnostic data, and other information it gets from analyzing your content and behavior.
However, an update to the Terms of Service now contains a new clause, which appears in bold: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
Zoom’s AI policies flew under the radar until a post on the subject sparked outrage on the influential forum Hacker News over the weekend. On Monday morning, Zoom’s Chief Product Officer Smita Hashim published a blog post which says, essentially, that the company doesn’t do the things described in its Terms of Service. Hashim clarified that while the company does use data for some machine learning purposes, “For AI, we do not use audio, video, or chat content for training our models without customer consent.”
The company’s track record on keeping promises to consumers about their privacy isn’t great. In 2020, Zoom said it would offer end-to-end encryption only to paying users only to backtrack after outcry over offering privacy as a paid feature. A lawsuit alleged the company had claimed it already offered end-to-end encryption to everyone. In fact, Zoom was using a far less secure form of encryption, though it later fixed the issue. The company also shared user data with Google and Facebook without letting customers know, and Zoom agreed to an $85 million settlement over these and other issues in 2021.
Aug. 8 (UPI) — Wells Fargo will pay the bulk of $289 million in penalties for record-keeping violations stemming from the employee use of messaging platforms such as WhatsApp, the Securities and Exchange Commission said Tuesday. Wells Fargo and BNP Paribas are among 10 companies that agreed to charges related to “widespread and longstanding failures” related to unofficial communications used for official business.
The plaintiffs claimed Wells Fargo executives misled investors by saying regulators were satisfied with the bank’s progress under federal consent orders. But a House Financial Services Committee report in March of 2020 found that Wells Fargo was not in compliance.
Security researchers at IBM say they were able to successfully “hypnotize” prominent large language models like OpenAI’s ChatGPT into leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plow through red lights. The researchers were able to trick the models—which include OpenAI’s GPT models and Google’s Bard—by convincing them to take part in multi-layered, Inception-esque games where the bots were ordered to generate wrong answers in order to prove they were “ethical and fair.”“Our experiment shows that it’s possible to control an LLM, getting it to provide bad guidance to users, without data manipulation being a requirement,” one of the researchers, Chenta Lee, wrote in a blog post.
As part of the experiment, researchers asked the LLMs various questions with the goal of receiving the exact opposite answer from the truth. Like a puppy eager to please its owner, the LLMs dutifully complied.
The AI models tested varied in terms of how easy they were to hypnotize. Both OpenAI’s GPT 3.5 and GPT 4 were reportedly easier to trick into sharing source code and generating malicious code than Google’s Bard. Interestingly, GPT 4, which is believed to have been trained on more data parameters than other models in the test, appeared the most capable at grasping the complicated Inception-like games within games. That means newer, more advanced generative AI models, though more accurate and safer in some regards, also potentially have more avenues to be hypnotized.
Source: CNN Politics
A group of teenage hackers managed to breach some of the world’s biggest tech firms last year by exploiting systemic security weaknesses in US telecom carriers and the business supply chain, a US government review of the incidents has found, in what is a cautionary tale for America’s critical infrastructure.The Department of Homeland Security-led review of the hacks, which was shared exclusively with CNN, determined US regulators should penalize telecom firms with lax security practices and Congress should consider funding programs to steer American youth away from cybercrime.
The investigation of the hacks – which hit companies like Microsoft and Samsung – found that, in general, it was far too easy for the cybercriminals to intercept text messages that corporate employees use to log into systems.
“If richly resourced cybersecurity programs were so easily breached by a loosely organized threat actor group, which included several juveniles, how can organizations expect their programs to perform against well-resourced cybercrime syndicates and nation-state actors?” the Cyber Safety Review Board’s new report states.