- Apple launched a feature called Shortcuts in 2018, an app that lets you write scripts for the iPhone.
- One widely shared shortcut is called Police, which records police interactions and texts a predetermined contact that you’ve been pulled over.
- It also sends a video of the encounter to your contact.
- The creator says the shortcut can be adapted for other situations.
Once the shortcut is installed and configured, you just have to say, for example, “Hey Siri, I’m getting pulled over.” The program then pauses any music that’s playing, turns down the iPhone’s brightness, and turns on Do Not Disturb mode.
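The shortcut’s behavior is essentially a fixed sequence of device actions triggered by a voice phrase. This toy Python sketch only models that sequence; all function and state names are hypothetical (real Shortcuts are built in Apple’s visual editor and run inside iOS, not in Python):

```python
# Toy model of the "I'm getting pulled over" shortcut's action sequence.
# All names are hypothetical; this only illustrates the order of operations.

def run_pulled_over_shortcut(phone):
    """Apply the shortcut's actions to a dict representing phone state."""
    phone["music_playing"] = False       # pause any playing music
    phone["brightness"] = 0.0            # dim the screen
    phone["do_not_disturb"] = True       # silence notifications
    phone["recording_video"] = True      # start recording the encounter
    phone["sent_messages"].append(       # alert the predetermined contact
        ("emergency_contact", "I'm being pulled over; location attached.")
    )
    return phone

phone = {
    "music_playing": True,
    "brightness": 0.8,
    "do_not_disturb": False,
    "recording_video": False,
    "sent_messages": [],
}
phone = run_pulled_over_shortcut(phone)
```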
In the battle to counter the modern-day scourge of weaponised online disinformation, Taiwan has locked on to a winning strategy – “nerd immunity”.

Building public awareness and “inoculating” the 23-million-strong population against fake news from China has been one of the chief goals of Audrey Tang, Taiwan’s ground-breaking digital minister.
Sitting just 80 miles from China, Taiwan has been on the frontlines not only of the coronavirus pandemic but an “infodemic” of online disinformation. But the bombardment of Chinese state-sponsored influence has also made it a world leader in identifying and tackling disinformation.
“When the majority of the population have this exposure and this inoculation, this builds nerd immunity,” Ms Tang, one of the world’s top open source software developers, told the Sunday Telegraph in an interview last week.
“It’s the Taiwan model. Just like we fight the coronavirus with no lockdown, we fight the infodemic with no takedown,” said the former hacker who, in 2016, made history as Taiwan’s youngest ever, and first transgender, minister at 35.
Last year, the Taiwan Fact-Checker Centre, an independent non-profit body linked to the Poynter Institute’s International Fact-Checking Network, alerted the public to a social media post falsely claiming Hong Kong “thugs” were being offered money to kill the police.
Source: Business Insider via Yahoo
- IBM, Amazon, and Microsoft have all committed to not sell facial recognition to law enforcement at least temporarily.
- While activists have been campaigning for the companies to do this for years, the Black Lives Matter movement appears to have tipped the scale.
- As facial recognition becomes more widely used to catch criminals, illegal immigrants, or terrorists, there is mounting concern about how the technology might be abused.
- Each company has made subtly different promises about its sales ban.
It’s something of a U-turn: activists and academics have for years been urging companies not to sell facial recognition to law enforcement, on the grounds that it exacerbates racial injustice.
The argument long put forward by civil rights groups and AI experts is that facial recognition disproportionately affects people of color in two ways.
Firstly, like any policing tool deployed by systemically racist societies or institutions, it will inevitably be used to target people of color more often.
Secondly, the data used to build facial recognition software ingrains it with racial bias: because the training datasets are often predominantly made up of pictures of white men, the resulting systems are more likely to misidentify women and people of color, which in turn leads to more wrongful arrests.
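The disparity is easy to miss if you only look at aggregate accuracy; a toy calculation shows how a system can look accurate overall while failing badly for an under-represented group. All numbers below are invented for illustration:

```python
# Invented illustration: per-group false-match counts from a hypothetical
# face-matching evaluation. Aggregate accuracy hides the disparity.

eval_counts = {
    # group: (total comparisons, false matches)
    "lighter-skinned men":  (8000, 40),   # 0.5% error rate
    "darker-skinned women": (500, 50),    # 10% error rate
}

def error_rates(counts):
    """Return per-group error rates and the overall (aggregate) rate."""
    per_group = {g: errs / total for g, (total, errs) in counts.items()}
    total = sum(t for t, _ in counts.values())
    errs = sum(e for _, e in counts.values())
    return per_group, errs / total

per_group, overall = error_rates(eval_counts)
# Overall error is ~1.1%, yet the smaller group's error rate is 10% --
# twenty times higher than the majority group's.
```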
Source: Business Insider
- Activist group NYC Mesh is using New York City’s network of traffic cameras to track incidents of police brutality, as first reported by Vice.
- The city’s Department of Transportation makes footage available to the public, but only in real time.
- NYC Mesh has made a public archive of the footage “to make it easier for the public to identify police misconduct,” the group said in a blog post.
- Amid numerous incidents of police using force against largely peaceful protesters, the public has increasingly been calling for stronger oversight of law enforcement and accountability for those who abuse their power.
NYC Mesh, a group of activists who operate a free, community-owned internet service, has started archiving footage from the city’s vast network of traffic cameras and making it available to the public.
The tool works by saving an image from each camera whenever the feed updates (typically every one to 30 seconds) and then publishing the images in batches, the blog post explained. Currently, the archive only includes feeds from Manhattan and Brooklyn; NYC Mesh is working to expand it to the other boroughs, though the group has reportedly run into challenges doing so.
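The core logic described above (save a frame only when the feed actually updates, then publish in batches) can be sketched roughly as follows; this is a hypothetical reconstruction, not NYC Mesh’s actual code:

```python
import hashlib

def archive_new_frames(frames, archive=None):
    """Keep a frame only when its content differs from the last saved one,
    mimicking 'save an image whenever the feed updates'."""
    if archive is None:
        archive = []
    last_digest = hashlib.sha256(archive[-1]).hexdigest() if archive else None
    for frame in frames:
        digest = hashlib.sha256(frame).hexdigest()
        if digest != last_digest:        # feed updated: save this frame
            archive.append(frame)
            last_digest = digest
    return archive

# The same image repeated (feed not yet refreshed) is stored only once.
feed = [b"img-a", b"img-a", b"img-b", b"img-b", b"img-b", b"img-c"]
archive = archive_new_frames(feed)
```

In a real deployment the frames would be fetched by polling each camera’s public image URL, with the de-duplicated results uploaded in periodic batches.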
Source: The Washington Post
The theft of top-secret computer hacking tools from the CIA in 2016 was the result of a workplace culture in which the agency’s elite computer hackers “prioritized building cyber weapons at the expense of securing their own systems,” according to an internal report prepared for then-director Mike Pompeo and his then-deputy, Gina Haspel, now the director.
The breach — allegedly by a CIA employee — was discovered a year after it happened, when the information was published by WikiLeaks, in March 2017. The anti-secrecy group dubbed the release “Vault 7,” and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA’s history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency’s techniques.
The task force noted that it could not determine the precise size of the breach because the CIA hacking team did not require monitoring of who used its network, but concluded that the employee stole as much as 34 terabytes of information, or about 2.2 billion pages.
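Those two figures are mutually consistent; a quick sanity check shows they imply roughly 15 kilobytes per page, a plausible size for a page of text:

```python
# Sanity check: 34 terabytes spread over ~2.2 billion pages works out to
# roughly 15 KB per page.
total_bytes = 34 * 10**12          # 34 TB (decimal terabytes)
pages = 2.2 * 10**9                # ~2.2 billion pages
bytes_per_page = total_bytes / pages
```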
Source: Business Insider
- North Korea has a cyber army of about 7,000, trained to find secrets, disrupt critical infrastructure, and steal money to circumvent sanctions.
- These cyberattacks are often difficult to pin on North Korea because they originate in countries like China and Russia, and a counterattack is almost impossible because of North Korea’s rudimentary internet.
- North Korea’s likely next targets are critical US infrastructure like power plants, dams, and electrical grids.
North Korea has been cultivating and investing in an elite cyber force under the control of its military, the Korean People’s Army, and the Reconnaissance General Bureau — Kim Jong Un’s clandestine security apparatus. It’s estimated to comprise about 7,000 people, trained extensively in specialized domestic programs, including at North Korean universities.
As more states and cities explore reopening amid the coronavirus pandemic, Apple and Google have struggled to keep up with the flood of third-party contact-tracing apps in their respective app stores. Now, New York Attorney General Letitia James is urging both companies to impose stricter protections for consumers.

James has sent letters to each company urging them to restrict existing and future third-party contact-tracing apps from collecting and storing a user’s personal health information. She also called on the companies to clarify the difference between apps developed by public health authorities and those made by other developers, who may not have consumers’ best interests in mind.
“As businesses open back up and Americans venture outdoors, technology can be an invaluable tool in helping us battle the coronavirus,” James said in a statement. “But some companies may seek to take advantage of consumers and use personal information to advertise, mine data, and unethically profit off this pandemic. Both Apple and Google can be invaluable partners in weeding out these bad actors and ensuring consumers are not taken advantage of by those seeking to capitalize on the fear around this public health crisis.”
Apple and Google’s own exposure-notification framework bars participating apps from serving ads or collecting location data, but that’s not necessarily true for third-party contact-tracing apps. Last week, a Wall Street Journal report revealed that several third-party apps currently available for iOS and Android not only allow advertising, but also share location data with third parties. With no uniform standards in place, Apple and Google have become de facto regulators of these Covid-19 apps. Despite efforts by both companies to keep shady apps out of their stores, some are still available for download.
There are now more reasons than ever to hide faces and protect the privacy of people and bystanders in photos and videos being shared to social media. But some approaches are more effective than others, as researchers from Duke University have demonstrated with a new tool that can turn faces obscured through pixelation into sharp, realistic images.
The typical approach to increasing the resolution of an image is to start with the low-res version and use intelligent algorithms to predict and add detail in order to artificially generate a high-res version. But because a low-res image can lack significant detail, fine features are often lost in the process, resulting in an overly soft, smoothed-out appearance, particularly with faces. The approach a team of researchers from Duke University has developed, called Pulse (Photo Upsampling via Latent Space Exploration), tackles the problem in an entirely different way, taking advantage of the startling progress made with machine learning in recent years: rather than adding detail to the low-res image, it searches the latent space of a generative model for realistic high-resolution faces that match the pixelated original when scaled back down.
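The shape of that search can be shown with a deliberately tiny sketch. Here a fixed random linear map stands in for the generative model (Pulse actually uses a face GAN) and plain random search stands in for a real optimizer; everything is a toy, but the loop is the same idea: find a latent vector whose generated image downscales to match the pixelated input.

```python
import random

random.seed(0)

# Stand-in "generator": a fixed random linear map from an 8-dim latent
# vector to a 16-pixel "image". A real system like Pulse uses a face GAN.
G = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]

def generate(z):
    return [sum(g * x for g, x in zip(row, z)) for row in G]

def downscale(img):
    """Average-pool the 16-pixel image down to 4 'low-res' pixels."""
    return [sum(img[i:i + 4]) / 4 for i in range(0, 16, 4)]

def loss(z, target):
    """How far the downscaled generated image is from the pixelated input."""
    return sum((a - b) ** 2 for a, b in zip(downscale(generate(z)), target))

target_lowres = [random.gauss(0, 1) for _ in range(4)]  # the pixelated input

# Random search over the latent space: keep whichever latent vector yields
# a high-res image whose downscaled version best matches the input.
best_z = [random.gauss(0, 1) for _ in range(8)]
best_loss = initial_loss = loss(best_z, target_lowres)
for _ in range(2000):
    z = [x + 0.1 * random.gauss(0, 1) for x in best_z]
    candidate = loss(z, target_lowres)
    if candidate < best_loss:
        best_z, best_loss = z, candidate
```

The key consequence is also visible here: the search returns *a* plausible high-res image consistent with the low-res input, not a recovery of the original.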
It’s an unorthodox approach that would likely have been unheard of a decade ago, but the technologies that make deepfake videos so lifelike and believable have evolved so quickly over just the past few years that they’re helping researchers tackle existing problems from entirely new angles. The high-res images Pulse creates aren’t quite perfect yet; there’s a noticeable difference between the photos it generates and the real high-res photos the researchers captured for testing purposes, which limits the tool’s uses in its current form. But as the tool improves, it could potentially be used to help unlock the secrets of the universe: our best photos of atoms are blurry blobs at best, and eventually Pulse might even help reveal razor-sharp images of the building blocks of our existence.
Video chat service Zoom will finally add end-to-end encryption to calls placed by users on the free version of its service, weeks after announcing it would only be available to premium users who shelled out for the privilege.

In April, Zoom faced a shareholder lawsuit alleging that the service boasted it used true end-to-end encryption, which prevents malicious parties from intercepting the contents of a conversation unless they have access to one of the devices involved. In reality it only used less-secure transport encryption, which allowed Zoom to monitor the content of calls and who was participating in them.
This annoyed some people in high places, not least Senators Sherrod Brown and Richard Blumenthal, who chastised the company for shoddy security practices. It didn’t help that a later report indicated that some of its encryption keys were being generated by servers in China, where they could theoretically end up in the hands of state authorities. At the same time, Zoom was surging in new users thanks to the coronavirus pandemic and had proven vulnerable to “Zoombombing,” in which trolls hijack video calls and broadcast gore and sexual imagery.
Zoom announced in May that it would implement end-to-end encryption, but only for users paying for its $14.99 monthly plan. Then in June, it had to walk back remarks from CEO Eric Yuan, who had said Zoom couldn’t offer the encryption to free-tier users because that might get in the way of cooperation with police and the FBI (the company had to reassure angry users that authorities would need to present a warrant before accessing any data). Per Bloomberg, after two petitions requesting that Zoom extend the stronger encryption to all users gathered 70,000 signatures, the company has finally caved and will roll out the feature to everyone, paying or not.
In a blog post on Wednesday, Yuan wrote that the company had sought the advice of “civil liberties organizations, our CISO council, child safety advocates, encryption experts, government representatives, our own users, and others” while making its determination. The tradeoff is that all Zoom users seeking to use the higher level of security will now need to verify their identity with a text message or another method.
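The distinction the lawsuit turned on (transport versus end-to-end encryption) can be illustrated with a deliberately simplified model. This sketch uses a one-time-pad toy cipher, nothing like Zoom’s actual cryptography: the point is only that with end-to-end encryption the key lives solely on the participants’ devices, so the relaying server sees nothing but ciphertext, whereas with transport encryption the server itself holds keys and can read calls in transit.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad: the same XOR both encrypts and decrypts.
    For illustration only; real systems use vetted ciphers."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meeting audio frame"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

# End-to-end: Alice encrypts on her device, the server relays opaque
# bytes it cannot decrypt, and Bob decrypts on his device.
ciphertext = xor_cipher(message, key)    # all the relay server ever sees
received = xor_cipher(ciphertext, key)   # what Bob recovers
```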
Source: Reuters via beSpacific