Pete Recommends – Weekly highlights on cyber security issues, September 7, 2019

Subject: Threat of mass shootings give rise to AI-powered cameras
Source: AP via WHYY
https://whyy.org/articles/threat-of-mass-shootings-give-rise-to-ai-powered-cameras/

AI is transforming surveillance cameras from passive sentries into active observers that can identify people, suspicious behavior and guns, amassing large amounts of data that help them learn over time to recognize mannerisms, gait and dress. If the cameras have a previously captured image of someone who is banned from a building, the system can immediately alert officials if the person returns.

Retailers can spot shoplifters in real-time and alert security, or warn of a potential shoplifter. One company, Athena-Security, has cameras that spot when someone has a weapon. And in a bid to help retailers, it recently expanded its capabilities to help identify big spenders when they visit a store.

AI cameras have already been tested by some companies to evaluate consumers’ facial expressions to determine if they’re having a pleasant or unpleasant shopping experience and improve customer service, according to the Center for Democracy and Technology, a Washington nonprofit that advocates for privacy protections. Policy counsel Joseph Jerome said companies may someday use the cameras to estimate someone’s age, which might be useful for liquor stores, or facial-expression analysis to aid in job interviews.

The power of the systems has sparked privacy concerns.

sample RSS feed: https://whyy.org/secondary-categories/public-safety/feed/


Subject: Opinion: Worker-protection laws aren’t ready for an automated future
Source: Pennsylvania Capital-Star
https://www.penncapital-star.com/commentary/worker-protection-laws-arent-ready-for-an-automated-future-opinion/

Science fiction has long imagined a future in which humans constantly interact with robots and intelligent machines. This future is already happening in warehouses and manufacturing businesses. Other workers use virtual or augmented reality as part of their employment training, to assist them in performing their job or to interact with clients. And lots of workers are under automated surveillance from their employers.

All that automation yields data that can be used to analyze workers’ performance. Those analyses, whether done by humans or software programs, may affect who is hired, fired, promoted and given raises. Some artificial intelligence programs can mine and manipulate the data to predict future actions, such as who is likely to quit their job, or to diagnose medical conditions.

If your job doesn’t currently involve these types of technologies, it likely will in the very near future. This worries me – a labor and employment law scholar who researches the role of technology in the workplace – because unless significant changes are made to American workplace laws, these sorts of surveillance and privacy invasions will be perfectly legal.

The laws about discrimination based on computer algorithms are unclear, just as other technologies stretch employment laws and regulations well beyond their clear applications. Without an update to the rules, more workers will continue to fall outside traditional worker protections – and may even be unaware how vulnerable they really are.

Jeffrey Hirsch is the Geneva Yeargan Rand Distinguished Professor of Law at the University of North Carolina at Chapel Hill. He wrote this piece for The Conversation, where it first appeared.



Subject: Just How Bad is Amazon’s Banned Products Problem?
Source: Gizmodo
https://gizmodo.com/just-how-bad-is-amazons-banned-products-problem-1837778839

The average American purportedly has more trust in Amazon than their own government, which makes recent reports on thousands of potentially unsafe products making their way onto the company’s online marketplace particularly terrifying.

More than 4,100 products available on the site—everything from motorcycle helmets to children’s toys—were “declared unsafe by federal agencies, are deceptively labeled or are banned by federal regulators” according to an investigation by the Wall Street Journal last week. On Friday, Wired similarly reported that several of the top-selling listings for signal boosters on Amazon lacked federal certification, which is kind of important for a device that has the potential to seriously mess with nearby cell towers. Gizmodo reached out to Amazon and will update this article with the company’s response.

Just to give you a snapshot of how pervasive Amazon’s issue is, the Journal’s findings came from a roughly four-month period alone.

The majority of the Amazon listings detailed in these reports came from the many thousands of third-party vendors on Amazon. Despite being responsible for most of the sales on the site, these types of sellers have been a constant headache for Amazon when it comes to keeping crappy bootleg products off its marketplace.


Subject: Protect Yourself From Healthcare Rip-Offs
Source: Consumer Reports
https://www.consumerreports.org/scams-fraud/protect-yourself-from-healthcare-rip-offs/

Avoid scams and fight back against surprise medical bills.

It might begin with what seems like a helpful phone call. Someone who says he’s from Medicare offers you a medical device to ease knee pain or suggests a DNA test to assess your cancer risk. All he needs is to confirm your Medicare number.

But this might be a plan to defraud you—or Medicare—by sending you a device or getting you to take a test you don’t need, and then billing Medicare for it. In fact, scams like this cost the system about $52 billion in 2017, driving up prices for everyone. The caller might also steal your medical identity, using the info you supply to impersonate you at a doctor’s office or even gain access to your bank account.

“Unfortunately, fraud is widespread,” says Christina Tetreault, senior policy counsel for Consumer Reports. And scams that target older adults are particularly common. Some charges that are technically legal can feel like a rip-off, too—like unexpected medical bills from providers you thought accepted your insurance or for services you thought were covered. Learning about health scams and rip-offs in advance can help you avoid them. Here’s what you should know about a few common cons.



Subject: What you need to know about ‘deepfakes’ and the 2020 election
Source: NPR via WHYY
https://whyy.org/npr_story_post/what-you-need-to-know-about-fake-video-audio-and-the-2020-election/

Security experts have warned about the prospect of a new era of high quality faked video or audio, which some commentators worry could have deeply corrosive effects on U.S. democracy.

Here’s what you need to know. What are “deepfakes?”…That’s the nickname given to computer-created artificial videos or other digital material in which images are combined to create new footage that depicts events that never actually happened. The term originates from the online message board Reddit…

site RSS https://whyy.org/feed/


Subject: Beware of web beacons that can secretly monitor your email
Source: via beSpacific
https://www.bespacific.com/beware-of-web-beacons-that-can-secretly-monitor-your-email/

Legal By the Bay – Joanna L. Storey: “A twist in the recent prosecution of a Navy SEAL charged with killing a prisoner in Iraq in 2017 brought to the forefront an ethics issue that has been squarely addressed by several jurisdictions, but not yet in California: the unethical surreptitious tracking of emails sent to opposing counsel using software embedded in a logo or other image. Also known as a web beacon, the tracking software is an invisible image no larger than a pixel that is placed in an email and, once activated, monitors such actions as when the email was opened, for how long, how many times, where, and whether the email was forwarded. The sender’s goal may be to determine how seriously you are considering a settlement demand that he attached to an email – the more you view the email, the more you may be inclined to accept the demand. Or, the sender may want to know to where you forward the email (e.g., you may forward the email to a client whose location is unknown to opposing counsel)….”


Subject: Fraud Alert: Genetic Testing Scam
Source: Office of Inspector General | U.S. Department of Health and Human Services
https://oig.hhs.gov/fraud/consumer-alerts/alerts/geneticscam.asp

The U.S. Department of Health and Human Services Office of Inspector General is alerting the public about a fraud scheme involving genetic testing. Genetic testing fraud occurs when Medicare is billed for a test or screening that was not medically necessary and/or was not ordered by a Medicare beneficiary’s treating physician.

Scammers are offering Medicare beneficiaries “free” screenings or cheek swabs for genetic testing to obtain their Medicare information for identity theft or fraudulent billing purposes. Fraudsters are targeting beneficiaries through telemarketing calls, booths at public events, health fairs, and door-to-door visits.

Related Material: Senior Medicare Patrol’s Information on Genetic Testing Fraud


Subject: Study finds Big Data eliminates confidentiality in court judgements
Source: swissinfo via beSpacific
https://www.bespacific.com/study-finds-big-data-eliminates-confidentiality-in-court-judgements/

swissinfo: “Swiss researchers have found that algorithms that mine large swaths of data can eliminate anonymity in federal court rulings. This could have major ramifications for transparency and privacy protection. This is the result of a study by the University of Zurich’s Institute of Law, published in the legal journal “Jusletter” and shared by Swiss public television SRF on Monday. The study relied on a “web scraping technique” or mining of large swaths of data. The researchers created a database of all decisions of the Supreme Court available online from 2000 to 2018 – a total of 122,218 decisions. Additional decisions from the Federal Administrative Court and the Federal Office of Public Health were also added…”


sample RSS feed:
https://www.swissinfo.ch/service/eng/rssxml/sci-tech/rss


Subject: Facebook to make deepfake videos with paid actors to help flag fakes
Source: Business Insider
https://www.businessinsider.com/facebook-creating-deepfake-videos-with-paid-actors-deepfake-challenge-2019-9

  • Facebook announced the Deepfake Detection Challenge on Thursday.
  • The social networking company said it will contribute $10 million to fund research and prizes to help detect and combat deepfake videos, which use AI to create footage that makes it seem like a person said something they didn’t, or appeared in a video they weren’t in.
  • Facebook said it will hire paid actors to create a library of facial characteristics and traits, which will help the industry detect deepfake videos.

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” Facebook Chief Technology Officer Mike Schroepfer said in a blog post on Thursday.


Subject: Amazon’s Ring camera raises civil liberties concerns: U.S. senator
Source: Reuters via Yahoo
https://news.yahoo.com/amazons-ring-camera-raises-civil-211456191.html

In a letter to Amazon Chief Executive Jeff Bezos, Markey said sharing information from Ring’s at-home camera systems with police departments “could easily create a surveillance network that places dangerous burdens on people of color” and stoke “racial anxieties” in communities where it works with law enforcement.

Markey, the ranking member on the Senate Commerce Subcommittee on Security, said he was “alarmed to learn that Ring is pursuing facial recognition technology” and that Amazon was marketing its facial recognition technology Rekognition to police departments.

Facial recognition technology has been shown to disproportionately misidentify people of color. In a 2018 American Civil Liberties Union study, Rekognition incorrectly matched 28 members of Congress, including Markey, to a database of 25,000 publicly available arrest photos.

Posted in: AI, Civil Liberties, Court Resources, Cybersecurity, Health, KM, Legal Research, Privacy, Social Media