
Privacy

privacy
Privacy c0mmando 8 months ago 53%
Binance delists Monero https://simplifiedprivacy.com/binance/

Huge price impact:

* XMR down 30%
* Withdrawals supported until May 20th
* Some swap sites temporarily can’t handle XMR, such as ExCh
* Some nodes are having issues connecting

If ever there was a time to test your faith, this is it. Stay strong. Stay calm. Stay rational. Given that privacy is a fundamental human need, XMR will always have value. Flash back to Bitcoin when Silk Road shut down. Would you have sold then?

The teams doing atomic swaps have made good progress. Re-read our previous post on Twisting Monero’s Flaws. We said then: “Monero has the most vibrant peer-to-peer markets. If Monero is banned, recent technology developments with two-way atomic swaps will allow it to continue globally regardless. The Particl team has developed bi-directional atomic swaps, Elizabeth Binks has developed ETH-XMR atomic swaps, and DarkFi has a unique and completely private method as well.”

As you get anxious watching the numbers drop, you might be tempted to make a quick decision. According to Psychology Today, “When we are anxious, we naturally seek comfort and control over the situation. Some social psychologists believe that the state of our feelings provides a useful source of information for making decisions.” But ironically, the reason you bought XMR in the first place is that the government made you nervous. The very anxiety you experience today is what XMR seeks to cure: uncertainty over a politically and economically unstable situation, the threat of a CBDC, a budget that can only be repaired by printing money, the loss of human rights and privacy. The anxiety you feel today, the urge to get out, is what US dollar holders will feel in the future. USD is the real shitcoin. It’s Monero’s stable and resilient code that can give you comfort in a world of dramatic emotion.

Today we are watching regulators turn their crosshairs on the tools of freedom. But do you believe they will stop here? Do you believe this is their last bad act? Remember that Monero has value in the face of true tyranny. So if you believe we’re going in that direction, this might be the greatest buy-the-dip opportunity of all time.

1
1
privacy
Privacy c0mmando 8 months ago 98%
Microsoft Edge Sucks Up Chrome Data Without Permission reclaimthenet.org

Most of those who use Microsoft products – Windows, and the Edge browser included – must by now be well conditioned to accept that the software they run and the data they believe they own isn’t really something they control.

And perhaps this is why a “feature” as astonishing as Edge automatically importing tabs open in Google Chrome – even when the Microsoft browser’s import tool is disabled – has been known for months without getting fixed (assuming it was a bug). But it looks like a “bug” in the company’s thinking, one of many – not exactly a software one. It’s definitely a feature, after all.

Both Edge and Chrome are based on the same Chromium engine, which should make the “operation” easier; and Microsoft and Google are birds of a feather when it comes to invasive and controversial practices and behavior toward end-users. And when they feel they have to be, they’re not particularly nice to each other, either. Some reports suggest that this “tab-stealing” feature is in fact “just” Microsoft’s way of trying to steal users from Chrome and get them to, willy-nilly, switch to Edge.

A Verge [reporter](https://www.theverge.com/24054329/microsoft-edge-automatic-chrome-import-data-feature) – a Windows and Chrome (and occasionally Edge) user – described the ordeal, the gist of which is that the tabs left open in their default Chrome browser got imported into Edge after a Windows update and a reboot. No surprise that the user was not prompted to consent to any of this. Here’s the Windows/Edge experience summed up in one sentence: “I didn’t even realize I was using Edge at first, and I was confused why all my (imported from Chrome) tabs were suddenly logged out.” And sure enough, the option at edge://settings/profiles/importBrowsingData was set to disable automatic access to “recent browsing data” – i.e., the Borg-like assimilation of open Chrome tabs by Edge.

But resistance is certainly not futile: use open-source operating systems and browsers, and root (deepest-level permissions and decision-making) is all yours.

Meanwhile, those (re)installing Windows these days will, at that stage, learn something about why the browser kerfuffle is happening. Reads an installation prompt: “With your confirmation, Microsoft Edge will regularly bring in data from other browsers available on your Windows device. This data includes your favorites, browsing history, cookies, autofill data, extensions, settings, and other browsing data.”

With your confirmation – or, as users are reporting, without it. It’s Microsoft after all, and the tech dinosaur felt no urgency in responding to relevant media queries.

55
6
privacy
Privacy c0mmando 8 months ago 100%
New York Judge Rejects Madison Square Garden’s Bid To Dismiss Biometric Privacy Case Involving Facial Recognition reclaimthenet.org

A New York judge has [denied](https://docs.reclaimthenet.org/Gross-v-Madison-Square-Garden-jan-9-2024.pdf) (PDF) Madison Square Garden Entertainment’s motion to dismiss a [biometric privacy lawsuit](https://reclaimthenet.org/madison-square-garden-is-sued-facial-recognition). The litigation revolves around a contentious policy, enacted by MSGE, which deployed facial recognition technology to prohibit certain attorneys from gaining entry into the entertainment giant’s renowned venues.

The lawsuit had previously survived MSGE’s initial attempt to dismiss it. The entertainment firm once again finds itself rebuffed in the District Court for the Southern District of New York, despite raising multiple arguments for a dismissal.

At the core of the dispute lies MSGE’s use of facial scanning systems. They were initially installed to deter individuals deemed dangerous, but the company made adjustments to the dataset on June 22. Following these changes, the system [started pinpointing specified lawyers](https://reclaimthenet.org/new-york-investigates-msg-facial-recognition-entry), on the grounds that these attorneys – typically tied to firms currently in litigation against MSGE – might be present on MSGE premises for lawsuit-related duties.

The suit will move forward, as ruled by the presiding judge, focusing on whether MSGE’s tactics violate the city’s Biometric Identifier Information Protection Law. Even though the judge accepted MSGE’s rationale for dismissing the plaintiffs’ claims of civil rights violations and unjust enrichment, the alleged breach of the city’s biometrics statute remains an open question. The judge found credence in the plaintiffs’ assertion that MSGE could “profit” from executing face scans on the specified attorneys, which might contravene the city’s biometric policy. This argument contends that any profit accrued by the company from these scans – attributed to dissuading litigation and hence curbing MSGE’s significant legal costs – poses a potential breach of the law.

13
0
privacy
Privacy c0mmando 8 months ago 100%
Russia To Expand Its Surveillance Network, Plans To Tap Into Private Surveillance Networks, Rollout Facial Recognition reclaimthenet.org

Russia is advancing towards a China-like extensive surveillance system. The Perm region is the first to mandate that private video camera owners integrate their devices into a regional surveillance network, a practice poised to be replicated nationwide. The initiative, driven by [a decree from Perm’s Governor Dmitry Makhonin](https://archive.ph/1ajue), took effect on January 25. This move aligns with President Vladimir Putin’s martial law declaration in Ukraine’s occupied territories in October 2022, granting regional governors augmented powers to ensure the “security” of their areas.

Russia’s citizen monitoring has intensified since its invasion of Ukraine. Authorities are increasingly scrutinizing social media and employing surveillance cameras to track both activists and ordinary citizens. Moscow recently trialed facial recognition traffic lights. Alexander Bykov, head of Moscow’s State Traffic Safety Inspectorate, has even suggested that providing biometric data should be obligatory.

Facial recognition is a critical element in Russia’s surveillance strategy. It has been used for detaining opposition activists and identifying individuals who ignored military draft summons, with arrests reported in subways and train stations. Sergey Suchkov, CEO of NtechLab, reports that facial recognition is operational in 62 regions, contributing to the Ministry of Digital Development’s “Data Economy” project, which aims to compile a comprehensive profile of citizens’ activities.

Currently, private cameras are inaccessible to regional authorities, and only half of the 1.2 million street cameras are state-owned, as reported by the Digital Development department in November 2023. A major goal is to centralize street surveillance, with private cameras playing a significant role.

8
0
privacy
Privacy freedomPusher 8 months ago 87%
Google Cache is being killed off www.engadget.com

cross-posted from: https://sopuli.xyz/post/8702045

> (⚠ Enshittification warning: the linked article has a cookie wall; just click “reject” and the article appears)
>
> Google is ending public access to the cache of sites it indexes. AFAICT, these are the consequences:
>
> * People getting different treatment due to their geographic location will lose the cache they used as a remedy for access exclusion.
> * People getting different treatment due to having a defensive browser will lose access.
> * The 12ft.io service, which serves those who suffer access inequality, will be rendered useless.
> * Google will continue to include paywalls in search results, but now consumers of Google search results will be led to a dead end.
> * The #InternetArchive #WaybackMachine will take on the full burden of global archival.
> * Consumers will lose a very useful tool for circumventing web enshittification.
>
> Websites treat the Google crawler like a first-class citizen. Paywalls give Google unpaid, junk-free access. Then Google search results direct people to a website that treats humans differently (worse). So Google users are led to sites they cannot access. The heart of the problem is access inequality: Google effectively refers people to sites that are not publicly accessible.
>
> I do not want to see search results I cannot access. Google cache was the equalizer that neutralized that problem. Now that problem is back in our face.

(cross-posting to privacy forums because cache access enables privacy seekers to reach content that otherwise requires them to step outside of Tor)

12
3
privacy
Privacy freedomPusher 8 months ago 100%
EDPB launches website auditing tool for GDPR compliance https://edpb.europa.eu/news/news/2024/edpb-launches-website-auditing-tool_en

cross-posted from: https://sopuli.xyz/post/8557194

> This is a FOSS tool that enables people to check a website for #GDPR compliance.

3
0
privacy
Privacy c0mmando 8 months ago 100%
Atlas of Surveillance atlasofsurveillance.org

Documenting Police Tech in Our Communities with Open Source Research. Explore 12,090 data-points in the U.S. collected by hundreds of researchers.

9
0
privacy
Privacy c0mmando 8 months ago 100%
EFF adds a Street Level Surveillance hub so Americans can check spying sls.eff.org

Welcome to the Field Guide to Police Surveillance. EFF’s Street-Level Surveillance project shines a light on the surveillance technologies that law enforcement agencies routinely deploy in our communities. These resources are designed for advocacy organizations, journalists, defense attorneys, policymakers, and members of the public who often are not getting the straight story from police representatives or the vendors marketing this equipment.

Whether it’s phone-based location tracking, ubiquitous video recording, biometric data collection, or police access to people’s smart devices, law enforcement agencies follow closely behind their counterparts in the military and intelligence services in acquiring privacy-invasive technologies and getting access to consumer data. Just as analog surveillance historically has been used as a tool for oppression, we must understand the threat posed by emerging technologies to successfully defend civil liberties and civil rights in the digital age.

The threats these surveillance technologies pose to privacy are enormous, as law enforcement agencies at all levels of government use them to compile vast databases filled with our personal information or gain access to devices that can lay bare the intricacies of our daily lives. Use of these surveillance technologies can infringe on our constitutional rights, including the right to speak and associate freely under the First Amendment and to be free from unlawful search and seizure under the Fourth Amendment. Law enforcement also tends to deploy surveillance technologies disproportionately against marginalized communities. These technologies are prone to abuse by rogue officers, and can be subject to error or vulnerability, causing damaging repercussions for those who interact with the criminal justice system.

The resources contained in this hub bring together years of research, litigation, and advocacy by EFF staff and our allies, and will continue to grow as we obtain more information.

23
1
privacy
Privacy c0mmando 8 months ago 98%
Amazon to sunset tool that let law enforcement obtain footage from Ring doorbells therecord.media

Amazon announced Wednesday that it will make it harder for police departments to ask for footage generated from customers’ Ring video doorbells and surveillance cameras. The practice had long been under fire from civil liberties groups and [some politicians](https://www.markey.senate.gov/news/press-releases/senator-markeys-probe-into-amazon-ring-reveals-new-privacy-problems).

Eric Kuhn, who helms the company’s Neighbors platform, said Ring’s controversial “Request for Assistance” (RFA) function allowing law enforcement to ask for and obtain user footage will no longer appear in the Neighbors app. Kuhn said the company will remove the RFA button, deep-sixing a program that had let police obtain footage from users on a voluntary basis. Moving forward, law enforcement and fire departments will be forced to obtain a warrant to ask for the footage or produce evidence of an emergency unfolding in real time, [according to Bloomberg News](https://www.bloomberg.com/news/articles/2024-01-24/amazon-s-ring-to-stop-letting-police-request-video-from-users), which first reported the development.

Kuhn wrote a [blog post](https://blog.ring.com/about-ring/ring-announces-new-neighbors-app-features-sunsets-request-for-assistance-post/) saying that Amazon is “sunsetting the Request for Assistance (RFA) tool” and explaining the company’s decision. “Public safety agencies like fire and police departments can still use the Neighbors app to share helpful safety tips, updates, and community events,” Kuhn wrote. “They will no longer be able to use the RFA tool to request and receive video in the app.” The blog post noted that public safety agency posts remain public and that users will still be able to view their messages on the Neighbors app feed and on a given agency’s profile. Kuhn’s post did not indicate why Ring is removing the RFA tool other than to say it is responding to customer feedback.

A spokesperson for Ring told Bloomberg that the company had opted instead to invest more heavily in new Neighbors app experiences that are more in line with Amazon’s strategy for the product. The spokesperson said the Neighbors app will focus more on connecting communities, referring to new programs, also announced Wednesday, that will let consumers post clips and better inform fellow users about community happenings on a block-by-block basis. “In addition to making it easier to share different kinds of videos, photos, and stories, we’re also introducing a new feature that makes it easier for Ring customers to enjoy the most popular Ring videos from across the country,” the blog post said. It said the new tool, “Best of Ring,” will debut in the coming weeks and will be an “in-app tile featuring a curated selection of our favorite Ring videos.”

Google also has recently curtailed law enforcement access to its product features, saying last month that it would no longer allow police to use its Google Maps location history feature to investigate crimes.

Ring, which debuted in 2013 and was acquired by Amazon in 2018, has long been controversial on privacy grounds. The Electronic Frontier Foundation (EFF), a leader in drawing attention to Ring’s practices, celebrated Wednesday’s announced change, but said it remains “deeply skeptical” about how law enforcement and Ring will decide which footage should be disclosed without a warrant due to alleged emergencies. “This is a step in the right direction, but has come after years of cozy relationships with police and irresponsible handling of data (for which they reached a settlement with the FTC),” EFF said in a [blog post](https://www.eff.org/deeplinks/2024/01/ring-announces-it-will-no-longer-facilitate-police-requests-footage-users). “Ring has been forced to make some important concessions—but we still believe the company must do more.”

51
3
privacy
Privacy c0mmando 8 months ago 100%
750m Indian mobile subscribers’ info for sale on dark web www.theregister.com

Indian infosec firm CloudSEK last week claimed it found records describing 750 million Indian mobile network subscribers on the dark web, with two crime gangs offering the trove of data for just $3,000. CloudSEK named CYBO CREW affiliates CyboDevil and UNIT8200 as the vendors of the 1.8TB trove, which contains mobile subscribers’ names, phone numbers, addresses, and Aadhaar details.

During CloudSEK’s investigation of the trove, the threat actors claimed to have “obtained the data through undisclosed asset work within law enforcement channels” rather than as a result of a leak from Indian telcos. CloudSEK said its initial analysis found that the leak affects all major telecom providers.

“The leak of Personally Identifiable Information (PII) poses a huge risk to both individuals and organizations, potentially leading to financial losses, identity theft, reputational damage, and increased susceptibility to cyber attacks,” stated CloudSEK.

16
0
privacy
Privacy c0mmando 8 months ago 96%
Privacy Companies Push Back Against EU Plot To End Online Privacy reclaimthenet.org

An urgent appeal has been relayed to ministers across the European Union by a consortium of tech companies, issuing a grave warning against backing a proposed regulation that uses child sexual abuse as a pretext to jeopardize the security of internet services relying on end-to-end encryption and end privacy for all citizens.

A total of 18 organizations – predominantly providers of encrypted email and messaging services – have voiced concerns about the proposed regulation from the European Commission (EC), singling out the “detrimental” effects on children’s privacy and security and the possible dire repercussions for cybersecurity.

Made public on January 22, 2024, this shared [open letter](https://tuta.com/blog/open-letter-encryption-eu) argues that the EC’s draft provision, known as “Chat Control,” which mandates the comprehensive scanning of encrypted communications, may create cyber vulnerabilities that expose citizens and businesses to increased risk. Further complicating the issue, the letter also addresses a stalemate among member states, the EC, and the European Parliament, who have not yet reconciled differing views on the proportionality and feasibility of the EC’s mass-scanning strategy in addressing child safety concerns.

Among the signatories are Proton, an encrypted email service from Switzerland; Tuta Mail and NextCloud, specializing in email and cloud storage respectively; as well as Element, a provider of encrypted communications and collaboration services. Together, they implore EU leaders to consider a more balanced version of the mandate, as suggested by the European Parliament, which experts argue would be more potent and efficient than mass scanning of encrypted services.

The EC’s proposed version of the regulation pushes tech companies to inject “backdoors” or leverage “client-side scanning” to scrutinize the content of all encrypted communications for evidence of child sexual abuse. However, these companies are forceful in their conviction that despite its purpose of combating crime, the mechanism could be swiftly exploited by offenders, “compromising security for everyone.” The application of client-side scanning – comparing “hash values” of messages against a “hash value” database of unlawful content residing on personal devices – has met stiff critique from the security community.

In defiance of the EU’s strong stance on data protection, which paved the way for ethical, privacy-centric tech companies to flourish in the European market, these tech firms believe the EC’s proposal could contradict other EU regulations like the Cyber Resilience Act (CRA) and the Cybersecurity Act, which encourage the application of end-to-end encryption to counter cyber risks.

The tech firms propose alternatives to mandatory scanning that they believe are more effective and prioritize data protection and security. They argue an approach aligned with the European Parliament’s proposals provides a robust framework for child protection. Moreover, they warn of the danger of such scanning technology being misused by oppressive regimes to squash political dissidents. They conclude that while they are not resistant to solutions as such, they stress the importance of devising strategies closely aligned with the European Parliament’s proposals.

In a statement to Reclaim The Net, Matthias Pfau, founder of Tuta, adds that legislation “to scan every chat message and every email would create a backdoor – one that could and will be abused by criminals.”
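To make the mechanism under dispute concrete: client-side scanning, in its simplest form, hashes content on the user’s device before encryption and checks it against a blocklist of hashes of known unlawful material. The sketch below is an illustrative toy using exact cryptographic hashes – real deployments use perceptual hashes (e.g. PhotoDNA-style) so that near-duplicates also match, and the blocklist contents here are hypothetical:

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of known unlawful content.
# (Real systems use perceptual hashes, not plain cryptographic hashes,
# and the database is distributed to devices or queried remotely.)
BLOCKLIST = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

def flag_before_encryption(message: bytes) -> bool:
    """Return True if the message's hash matches the blocklist.

    This check runs on the client *before* end-to-end encryption is
    applied, which is exactly why critics argue it hollows out the
    security guarantee: the content is inspected in plaintext.
    """
    return hashlib.sha256(message).hexdigest() in BLOCKLIST
```

The toy also illustrates the critics’ brittleness argument: flipping a single byte of the content changes the cryptographic hash entirely and evades the match, which is why real schemes resort to fuzzier perceptual hashing, with its own false-positive risks.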

30
0
privacy
Privacy c0mmando 8 months ago 98%
NSA Confirms Purchasing Data on American Citizens' Internet Behavior, Circumventing the Need for Warrants reclaimthenet.org

The NSA’s long history of often legally sketchy mass surveillance continues, despite some of the agency’s activities getting exposed more than a decade ago by whistleblower Edward Snowden. Now, the National Security Agency has had to reveal, in response to a senator’s questions, that it is, as one report put it, “sidestepping” warrants by buying people’s information put on sale by data brokers.

This came to light in an exchange of letters between Senator Ron Wyden and several top security officials. And this time – because the NSA’s own interests were at stake – he has been able to reveal the information he obtained. Wyden’s January 25 letter to Director of National Intelligence Avril Haines contained a fairly straightforward request: US intelligence agencies should only buy Americans’ data “that has been obtained in a lawful manner.” We obtained a copy of the letter for you here.

With the implication that something entirely different is happening, the senator went on to explain what: if these agencies went to communications companies themselves for the data, that would require a court order. Instead, Wyden continued, they go the roundabout way to get information (like location data) taken from people’s phones – collected via apps, and finally ending up with commercial brokers, who sell it to the likes of the NSA. And this particular agency is also buying “Americans’ domestic internet metadata.” In other words, a comprehensive, yet legally questionable, mass surveillance scheme.

Wyden “reinforced” his letter to Haines by attaching NSA Director General Paul Nakasone’s December response to one of his earlier queries – a back-and-forth that has been going on for almost three years, he says, and concerned other agencies as well and their practice of data acquisition. But once he said he would block the Senate confirmation of Nakasone’s successor, the information he received finally “got cleared” for release, and pretty quickly.

Nakasone confirmed the practice, and then went on to justify it by saying it only pertains to “records” of online traffic, rather than “emails and documents.” He said what the NSA purchases is “netflow data” from connections where “one or both” ends are in the US. And why? It is “critical,” wrote Nakasone, in “protecting US defense contractors from cyber threats.”

144
43
privacy
Privacy c0mmando 8 months ago 100%
Florida Law Banning Children From Social Media and Pushing Digital ID and Online Identity Verification, Advances reclaimthenet.org

As part of a move that, at least on the surface, is aimed at protecting the privacy and well-being of young adolescents, Florida is taking legislative steps towards restricting social media access for children under sixteen. The bill, advanced in the state legislature on Wednesday, mandates rigorous age verification and calls for existing accounts of underage users to be deleted, along with any stored personal information. We obtained a copy of the bill for you [here](https://docs.reclaimthenet.org/florida-social-media-id-bill.pdf) (PDF).

A bipartisan majority in the Florida House has passed the bill with a vote of 106-13. It now awaits the Republican-majority Senate. The legislation calls for age verification through an independent third party unrelated to the social media platform, ensuring that children under 16 can’t open new accounts. Drawn up to address platforms with potential “addictive, harmful, or deceptive design features,” the legislation is aimed at deterring persistent or compulsive usage influenced by digital design. In support of the legislation, Republican state lawmaker and bill co-sponsor Fiona McFarland equated social media designed to trigger dopamine releases to “digital fentanyl.”

Florida’s move comes against the backdrop of growing concern over the impact of social media on kids’ mental health and well-being. Last year, US Surgeon General [Vivek Murthy](https://reclaimthenet.org/us-surgeon-general-vivek-murthy-suggests-joe-rogan-should-be-censored) issued a warning on the potential harms of social media for children and adolescents, calling for more research into the area.

The implementation of online age verification systems, increasingly tied to the rollout of digital IDs, has sparked a significant debate about the balance between internet safety and free speech. These systems are designed to verify the age of users, ostensibly to protect younger audiences from inappropriate content or to ensure compliance with legal age restrictions. However, they often necessitate the use of digital IDs, which can include personal information like name, age, and sometimes even location. This shift towards digital IDs for age verification raises concerns about the erosion of online anonymity and the ability to speak under a pseudonym.

44
1
privacy
Privacy c0mmando 8 months ago 100%
Popular iPhone Apps Abuse Notifications To Scoop Up Your Data reclaimthenet.org

Many iPhone applications have been found exploiting iOS’s push notifications to secretly harvest user data, a practice that potentially aids in building tracking profiles. This revelation was made by mobile researcher Mysk, who added that these apps can bypass Apple’s stringent restrictions on background app activity, posing a privacy threat to users. While Apple’s App Store review guidelines explicitly prohibit such data collection practices, apps have been found engaging in it rather surreptitiously. According to [Mysk](https://twitter.com/mysk_co/status/1750502700112916504), many well-known apps with large user bases clandestinely send user details back to their servers upon receiving or acknowledging push notifications, raising concerns about privacy.

Apple’s iOS operating system is designed to minimize security risks and resource consumption by preventing apps from running in the background. Essentially, an app’s execution is suspended and eventually terminated when not in use. However, a system introduced in iOS 10 enables apps to launch silently in the background to process incoming notifications before they are displayed on the device. Once this processing completes, the app is terminated again.

Mysk’s research unearthed extensive abuse of this feature, with apps using it as an opportunity to send data back to their servers. The transmitted information varies by app, but typically includes data like system uptime, language settings, available storage, battery status, device model, and screen brightness. This collected data can potentially be leveraged to create tracking profiles. In December, it was uncovered that several governments were [seeking access to push notification records from Apple’s and Google’s servers for espionage on users](https://reclaimthenet.org/apple-reveals-governments-use-app-notifications-to-surveil-users).

21
1
privacy
Privacy c0mmando 8 months ago 100%
COVID-19 Testing Firm Exposed 1.3 Million Records Including Patient Names, Email Addresses, Dates of Birth, and Passport Numbers reclaimthenet.org

During the COVID-19 pandemic, data protection and privacy concerns took center stage as governments and organizations worldwide implemented various overreaching measures to control the spread of the virus. One of the most significant and controversial of these measures was the introduction of vaccine passports and health passes. These digital certificates indicated an individual’s vaccination status and, in many cases, were required for access to public spaces, travel, and certain services. This move marked a drastic shift in the way personal health information was used and shared, raising serious concerns about data privacy and surveillance. Aside from that, constant mandatory testing opened up new ways for people’s data to be collected and shared.

CoronaLab, one of the largest Dutch COVID-19 test providers, apparently exposed a passwordless database to the internet. A total of 1.3 million sets of coronavirus testing records were potentially compromised, but thus far no party has claimed responsibility for the oversight. The database contained an alarming variety of vital personal information, including patient names, passport numbers, email addresses, and other data.

According to [The Register](https://www.theregister.com/2024/01/24/dutch_covid_testing_firm_ignored_warnings/), the data leak was discovered by Jeremiah Fowler, a credible source known for detecting breaches. Fowler found 118,441 test certificates, 660,173 testing samples, 506,663 appointment logs, and several internal files on the open internet, which, if obtained by a nefarious actor, could lead to significant privacy infringement. “Criminal[s] could potentially reference test dates, locations, or other insider information that only the patient and the laboratory would know,” Fowler commented.

Believed to be linked to CoronaLab, a subsidiary of the Amsterdam-based Microbe & Lab, the exposed database paints a troubling picture of negligence. CoronaLab is listed by the US Embassy in the Netherlands as a recommended commercial COVID-19 test provider.

32
0
privacy
Privacy c0mmando 8 months ago 100%
Wrongful Arrest Raises (Another) Alarm: Invasive Facial Recognition Technology’s Flaws Exposed in $10M Lawsuit reclaimthenet.org

Harvey Eugene Murphy Jr, a 61-year-old man, is launching a legal battle against Macy’s and EssilorLuxottica, Sunglass Hut’s parent company, alleging a misidentification by facial recognition technology led to his unlawful arrest. Murphy’s lawsuit asserts that owing to a flawed criminal identification by a low-quality camera image, he spent days unjustly incarcerated where he underwent horrific physical and sexual violence. In January 2022, a robbery at a Houston-based Sunglass Hut led to the theft of merchandise worth thousands. However, Murphy’s legal counsel insists that Murphy was living in California, not Texas, during that time frame. Murphy’s lawsuit details how an EssilorLuxottica staff member in cooperation with Macy’s used facial recognition software to single out him as the thief. Following the allegation, a team member from EssilorLuxottica claimed to have identified one of two burglars involved in the heist using this technology, directing the Houston police department to halt its ongoing investigation. In addition to this accusation, the employee shared that Murphy was potentially associated with two more theft cases, based on the same software. On returning to Texas, Murphy was soon arrested upon his identity notification to a DMV clerk, as a warrant had been issued for his arrest concerning an aggravated theft case. According to a [Guardian report](https://www.theguardian.com/technology/2024/jan/22/sunglass-hut-facial-recognition-wrongful-arrest-lawsuit), after experiencing wrongful imprisonment in the local county jail and later being transferred to the Harris County jail, his charges were dropped as his alibi was certified by his defense attorney and prosecutor. Nevertheless, the alleged horrifying physical and sexual assault he underwent hours before his release in jail left him severely traumatized. “That was sort of terrifying,” Murphy expressed. 
He remained in the same cell as his alleged attackers until his release, constantly overwhelmed with intense anxiety while waiting for help. Murphy knew nothing of the role facial recognition technology had played in the investigation until his attorney, Daniel Dutko, informed him. Dutko discovered that footage collected by the Sunglass Hut employee was shared with Macy’s, and it was Macy’s staff who identified Murphy. Dutko insists that facial recognition software is the sole explanation for Murphy’s false identification. The use of facial recognition technology by corporations, in cooperation with law enforcement, as seen in Murphy’s case, also raises significant privacy concerns. The idea that an individual can be tracked, identified, and potentially criminalized based on surveillance footage underscores a worrying trend toward a surveillance state. This situation is troubling not only for the individuals who might be misidentified but also for society at large, as it raises questions about the erosion of personal privacy and autonomy. Murphy’s ordeal also highlights the lack of transparency and accountability in the use of these technologies: he only learned of the software’s role in his case after the fact.

26
0
privacy
Privacy c0mmando 8 months ago 75%
Religious Groups Criticize Government Surveillance of “Religious Texts” Transactions reclaimthenet.org

The Epoch Times has found that religious leaders and advocates for religious freedom are expressing significant concern. This comes after information surfaced that, as part of their investigation into January 6, 2021, federal officials encouraged banks and other financial entities to [surveil customer accounts using search terms including “religious texts,” “MAGA,” and “Trump.](https://reclaimthenet.org/feds-asked-banks-to-search-transactions-related-to-maga)” The criticism mounted among those fighting for religious liberty, as these searches were conducted without judicially authorized warrants. Tony Perkins, the head of the Family Research Council, a non-profit organization centered around traditional values, found such actions by the Biden administration appalling. “This is beyond alarming,” Perkins told The Epoch Times. “If we did a word search in history of the type of activities the Biden administration is engaged in, it would return words like ‘KGB,’ ‘totalitarian,’ ‘repressive,’ ‘anti-democratic,’ and ‘grave threat to freedom.’” Mat Staver, Founder and Chairman of Liberty Counsel, another non-profit organization championing religious liberty, warned against the inherent danger of such governmental overreach in a society renowned for its freedom of speech. He laid bare the realities of such oppressive actions, typically associated with autocratic governments looking to suppress dissenting voices. According to [the report](https://www.theepochtimes.com/us/religious-liberty-experts-outraged-that-feds-asked-banks-to-search-customer-purchases-5568600), leaders of other organizations expressed similar sentiments. The First Liberty Institute’s Kelly Shackelford termed federal encroachment on freedom of speech through financial institutions monitoring the purchase of religious texts “outrageous,” adding that such actions posed a significant threat to all freedoms.
His sentiments were echoed by Jeremy Tedesco of the Alliance Defending Freedom, who labeled the government’s collaboration with financial institutions to flag their customers for activities pertaining to the exercise of constitutional rights as “terrifying.” According to a statement by Representative Jim Jordan, Chairman of the House Judiciary Committee, the surveillance was orchestrated by the Department of the Treasury’s Office of Stakeholder Integration and Engagement in collaboration with the FBI. His letters to Noah Bishoff, former director of FinCEN and now anti-money laundering officer for Plaid Inc, and Peter Sullivan, senior private sector partner for the FBI, asked them to testify about the searches. “According to this analysis, FinCEN warned financial institutions of ‘extremism’ indicators that include ‘transportation charges, such as bus tickets, rental cars, or plane tickets, for travel areas with no apparent purpose,’ or ‘the purchase of books (including religious texts) and subscriptions to other media containing extremist views,’” Jordan wrote. “In other words, FinCEN used large financial institutions to comb through the private transactions of their customers for suspicious charges on the basis of protected political and religious expression.”

4
0
privacy
Privacy c0mmando 8 months ago 34%
Donald Trump Pledges to Block the Introduction of a Central Bank Digital Currency in the United States if He Wins Election reclaimthenet.org

Donald Trump, former president of the United States and a leading contender to return to the role, has made it clear that the creation of a central bank digital currency (CBDC) in the US will not occur on his watch should he be elected back into office. He considers such a move a potential infringement on citizens’ liberties, equating it to the government gaining total control of their finances. In a recent campaign speech in Portsmouth, New Hampshire, Trump made his anti-CBDC stance abundantly clear. He vowed to disallow the development of a CBDC, arguing that any such state-backed digital currency would only enable the federal government to exert complete dominance over citizens’ monetary assets. This, to him, is a direct threat to freedom. Though once known for his opposition to Bitcoin and the broader cryptocurrency ecosystem, a stance he has since reversed, Trump extended the same disapproval to CBDCs, vowing to nip any such development in the bud if he earns the title of US President once again. “As your President, I will never allow the creation of a central bank digital currency. Such a currency will give a federal government, our federal government absolute control over your money,” Trump said. “They could take your money, you won’t even know it’s gone.” Trump’s sentiments strike a chord with Florida Governor Ron DeSantis, a fellow presidential hopeful who also pledged to quash any developments toward a CBDC if he succeeds in the race. Robert F. Kennedy Jr, another contender for the top seat, echoed these sentiments back in July 2023. CBDCs could potentially lead to a dystopian future, marked by a significant erosion of financial privacy and an unprecedented concentration of power.
The ability of governments to track all financial transactions in real time raises concerns about total surveillance and the potential for financial censorship. This level of oversight could allow authorities to block transactions, freeze accounts, or alter balances, undermining the concept of personal asset security. Additionally, the shift to a CBDC-dominated system could eliminate the anonymity provided by cash transactions, leading to a society where anonymous spending is impossible. The implementation of social credit systems, where financial capabilities are tied to government-approved behavior, could further erode individual freedoms. Meanwhile, other nations continue to not only explore but embrace the idea of sovereign digital currencies, an approach divergent from that of the United States. The Bank of Russia even anticipates wide adoption of its own CBDC, known as the digital ruble, by 2025, with the pilot having commenced in August 2023.

-7
5
privacy
Privacy c0mmando 8 months ago 90%
UN Chief António Guterres Advocates for "Sustainable Development Goals" (Including Digital ID) and Enhanced Data Sharing at WEF 2024 reclaimthenet.org

UN Secretary-General António Guterres is in Davos these days for the annual World Economic Forum (WEF) meeting, and [his address](https://yewtu.be/watch?v=Fl8VaC9uR8U&t=800s) at one of the panels has been in keeping with the meeting’s agenda(s) – but also, clearly, with those recently forcefully pushed by the world organization. Among the schemes Guterres spoke about are the UN’s Global Digital Compact and the Sustainable Development Goals. The first consists of several proposals, [including digital ID](https://reclaimthenet.org/united-nations-development-program-urges-governments-to-push-digital-id) linked to people’s bank accounts, while the second, overarching one, which enjoys the support of some of the world’s most powerful countries, also involves digital ID, and the UN’s “vision” of disinformation moderation (also known as censorship). Guterres said the Global Digital Compact would be a major contributor to closing what he called “the digital connectivity gap.” Referring to the overall project as multi-stakeholder, the UN chief noted that “AI” would play a role in building the public and private sector’s capability for a “networked governance model.” More data sharing seems to be at the heart of all this, and in order to keep control over the way “AI” is used in the future, Guterres and his team want to see governments and (private) tech companies work together. All these initiatives will feature at the “Summit of the Future” this coming September. One idea voiced by Guterres is to bring globalist organizations – such as the G20, international financial institutions, and the UN itself – even closer together. A recent UN Policy Brief discusses the complex “pyramid” of initiatives, in which until now something called “Our Common Agenda” (of which the Global Digital Compact is one mechanism) was designed to accelerate the Sustainable Development Goals.
Now there is also the move to bring the G20 and others into this play – likened in some reports to an economy-oriented counterpart to the UN’s Security Council. The fear here concerns the effect this may have on the international banking system – and, in the process, on people’s financial liberties. As for the Global Digital Compact, it looks like yet another dystopian iteration of an idea that is cropping up all over the world in different forms. With digital ID as an unavoidable component, it would create a centralized – and therefore easily controlled – network of citizens.

8
0
privacy
Privacy c0mmando 8 months ago 29%
Surveillance Overreach: Federal Investigators Asked Banks To Search Transactions Related to “MAGA,” “TRUMP” reclaimthenet.org

A revelation by the House Judiciary Committee has raised concerns about federal investigators’ intrusion into individuals’ private financial transactions. According to the committee, federal investigators instructed banks to filter customer transactions using politically related terms such as “MAGA” and “TRUMP.” This move, part of an investigation into the events of January 6, 2021, was [exposed by the House Judiciary Committee](https://judiciary.house.gov/media/press-releases/federal-government-flagged-transactions-using-terms-maga-and-trump-financial). The potential alarm does not end here, as even purchases of “religious texts” are seen as indicative of extremism in the eyes of the investigators. The committee’s report further revealed that the investigators suggested banks filter transactions using keywords such as names of popular merchandise stores including Dick’s Sporting Goods and Bass Pro Shops, among others. The meticulous oversight undertaken by the House Judiciary Committee and its subcommittee on the Weaponization of the Federal Government is focused on federal law enforcement’s acquisition of information about US citizens without legal procedures as well as its engagement with the private sector. Documents acquired by the committee, according to Chair Jim Jordan, point to actions from the Treasury Department’s Office of Stakeholder Integration and Engagement in the Strategic Operations of the Financial Crimes Enforcement Network (FinCEN). Following the events of January 6th, the department provided search terms to banks to identify transactions on behalf of federal law enforcement. These materials suggested generic keywords to search Zelle payment messages amidst other instructions. According to Jordan, FinCEN essentially relied on large financial institutions to scrutinize their customers’ private transactions for suspicious activities that might stem from protected political and religious expression. 
The committee has called Noah Bishoff, the former director of FinCEN, to appear for a formal interview on this issue. As Jordan highlights, pervasive financial surveillance of American citizens’ private transactions, coordinated with federal law enforcement, raises serious concerns about FinCEN’s attitude towards fundamental civil liberties. We obtained a copy of the letter the committee sent Bishoff for you [here](https://docs.reclaimthenet.org/jordan-bishoff-letter-rtn.pdf). Jordan also requested an interview with Peter Sullivan, the senior private sector partner for outreach in the Strategic Partner Engagement Section at the FBI. The concern arose after receiving testimony indicating that Bank of America [voluntarily provided the FBI](https://reclaimthenet.org/bank-of-americas-handing-over-of-data-fbi-invesigated) with a list of individuals who made transactions in the Washington, D.C. area using a bank card within a specific period in 2021.

-7
4
privacy
Privacy c0mmando 8 months ago 30%
Unlocking Privacy: Paying on Amazon with Monero https://twitter.com/atMonezon/status/1749084246289838178

cross-posted from: https://monero.town/post/1871686

> **💸 Why Monezon?**
>
> - Anonymous Transactions: Say goodbye to tedious KYC procedures! Spend your Monero on real-life goods without compromising your privacy.
>
> **🚚 Flexible Delivery Options:**
>
> - Ship Anywhere: Monezon allows you to have Amazon goods delivered to any Amazon Locker, personal address, or relay point of your choice. Enjoy the convenience of personalized delivery options.
>
> **🔧 How it Works:**
>
> - Place Your Order: Submit your Amazon order through Monezon using monero and receive a unique passphrase.
> - Executor Network: Your order is broadcasted to available executors (total order cost only, no personal details).
> - Order Execution: An executor accepts your order and places it on Amazon.
> - Secure Communication: Utilize end-to-end encrypted chat with your executor using the provided passphrase.
> - Keep Track: Get a tracking code to ensure that your order has been sent.
> - Confirmation & Payment: Once both parties confirm the delivery, the executor receives payment in monero, completing the seamless transaction.
>
> Monezon has got a new update, and we're looking to improve your experience based on your feedback!
>
> Check out our updated Frequently Asked Questions (F.A.Q) section for some quick insights. Feel free to ask us anything, and we'll be more than happy to assist you.
>
> [Monezon](https://monezon.com)
>
> [Twitter](https://twitter.com/atMonezon)
>
> **We are currently welcoming executors! Slide in our DMs if interested in earning monero!**
>
> *Note: Monezon is an independent platform and is not affiliated with Amazon.*

-4
0
privacy
Privacy c0mmando 8 months ago 100%
China warns of AirDrop de-anonymization flaw www.theregister.com

In June 2023 China made a typically bombastic announcement: operators of [short-distance ad hoc networks](https://www.theregister.com/2023/06/08/beijing_targets_bluetooth_and_wifi/) must ensure they run according to proper socialist principles, and ensure all users divulge their real-world identities. The announcement targeted techs like running Wi-Fi hotspots from smartphones and Apple's AirDrop, as both allow the operation of peer-to-peer networks. AirDrop allows users to accept incoming files from unknown parties and was [reportedly](https://www.nytimes.com/2022/10/24/business/xi-jinping-protests.html) used to share anti-government material during China's long and strict COVID-19 lockdowns. China understands that Apple considers AirDrop's peer-to-peer links a feature, not a bug. But Chinese netizens know they're always being watched in the name of national security, and many welcome it. Which is why Chinese authorities last week [admitted](https://sfj.beijing.gov.cn/sfj/sfdt/ywdt82/flfw93/436331732/index.html) that the use of AirDrop is considered problematic, after police found inappropriate material being shared on the Beijing subway using the protocol. "Because AirDrop does not require an Internet connection to be delivered, this behavior cannot be effectively monitored through conventional network monitoring methods, which has become a major problem for the public security organs to solve such cases," states an article posted by Beijing's municipal government. The piece goes on to describe an assessment by the Beijing Wangshendongjian Forensic Appraisal Institute, which found that the techniques Apple uses to anonymize AirDrop users' identities are easily circumvented: identifiable information is only hashed, and a technique called a "rainbow table" allows access to the relevant information in plain text.
Chinese netizens are therefore on notice that their attempts to share material critical of Beijing can be observed, meaning the 2023 pronouncement about such networks is now more than a theoretical warning. Netizens now know that AirDrop can lead to nasty consequences. Infosec academic Matthew Green [analyzed](https://blog.cryptographyengineering.com/2024/01/11/attack-of-the-week-airdrop-tracing/) the Chinese post, along with research on AirDrop published in 2019 by academics from Germany's Technische Universität Darmstadt, and concluded the protocol is leaky and the Chinese Institute's assertions are entirely plausible – if an Apple ID or phone number can be guessed by an attacker. "The big question in exploiting this vulnerability is whether it's possible to assemble a complete list of candidate Apple ID emails and phone numbers," wrote Green, a cryptographer and professor at Johns Hopkins University. The extent of surveillance in China means gathering candidate info would not be vastly difficult. Green's post details ways in which actors could create lists of target credentials. AirDrop users are therefore at risk, in China or anywhere else.
Green suggests "a bizarre high-entropy Apple ID that nobody will possibly guess" as one way to protect yourself. "Apple could also reduce their use of logging," he wrote, before suggesting that Cupertino could fix this issue by using a robust version of a cryptographic technique called "Private Set Intersection." "But this is not necessarily an easy solution, for reasons that are both technical and political," he observed. "It's worth noting that Apple almost certainly knew from the get-go that their protocol was vulnerable to these attacks — but even if they didn't, they were told about these issues back in May 2019 by the Darmstadt folks. It's now 2024, and Chinese authorities are exploiting it. So clearly it was not an easy fix." Green then speculated that even if Apple can fix the issue, it might not want to undertake repairs given it earns around 20 percent of its revenue in China. That revenue is already at risk: in 2023 Beijing discouraged use of the iPhone by government employees and suggested buying made-in-China phones as a more patriotic choice. "Hence there is a legitimate question about whether it's politically wise for Apple to make a big technical improvement to their AirDrop privacy, right at the moment that the lack of privacy is being viewed as an asset by authorities in China. Even if this attack isn't really that critical to law enforcement within China, the decision to 'fix' it could very well be seen as a slap in the face," Green wrote.
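The weakness the researchers describe is generic to hashing low-entropy identifiers: a phone number space is small enough that an attacker can simply precompute the hash of every candidate and look up any hash they observe. The sketch below illustrates the principle with plain SHA-256 over a toy number range; it is not AirDrop's actual wire format, and the number space and identifier format are invented for illustration.

```python
import hashlib

def hash_id(identifier: str) -> str:
    """Hash a contact identifier, as a naive anonymization scheme might."""
    return hashlib.sha256(identifier.encode()).hexdigest()

# The attacker precomputes hashes for every candidate phone number in a
# toy 100,000-number space -- enumerable, which is the core weakness.
rainbow = {hash_id(f"+1555{n:07d}"): f"+1555{n:07d}" for n in range(100_000)}

# A hash observed over the air is reversed by a dictionary lookup.
observed = hash_id("+15550042817")
print(rainbow.get(observed))  # recovers the plaintext number
```

A real rainbow table trades memory for recomputation with hash chains, but the lesson is the same: hashing only protects an identifier when the space of possible inputs is too large to enumerate, which is why Green suggests a high-entropy Apple ID as a stopgap.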

1
0
privacy
Privacy c0mmando 8 months ago 100%
FTC wins first settlement banning sale of location data www.theregister.com

The US Federal Trade Commission has secured its first data broker settlement agreement, prohibiting X-Mode Social from sharing or selling sensitive location data. In its complaint, the FTC accused X-Mode, which sold its assets to successor firm Outlogic in 2021, of selling raw non-anonymized location data collected through its own apps and an SDK for embedding in third-party applications. The X-Mode SDK has been found in [hundreds of apps](https://www.theregister.com/2021/02/03/location_tracking_report_xmode_sdk/) downloaded billions of times on both Apple and Android devices. "By securing a first-ever ban on the use and sale of sensitive location data, the FTC is continuing its critical work to protect Americans from intrusive data brokers and unchecked corporate surveillance," chair Lina Khan said of the settlement. According to the FTC [complaint](https://www.ftc.gov/system/files/ftc_gov/pdf/X-Mode-Complaint.pdf) [PDF], X-Mode/Outlogic has for years collected and sold data associated with mobile advertising IDs, which can easily be matched to an individual mobile device to figure out what locations an individual has visited. If that sounds familiar, it's the same allegations the FTC leveled against data broker Kochava when it filed a [complaint](https://www.theregister.com/2022/08/29/ftc_sues_kochava/) against that company in 2022. According to the FTC's complaints against Kochava and Outlogic, data collected and sold by the companies could easily be used to link individuals to places of worship, homeless and domestic violence shelters, addiction facilities, reproductive health clinics, and other sensitive locations. The threat of [data misuse](https://www.theregister.com/2023/05/19/abortion_data_tracking_cases/) by governments and individuals since the overturning of Roe vs Wade has made the collection of this data type an even more pressing issue to address. 
Per the [settlement](https://www.ftc.gov/system/files/ftc_gov/pdf/X-Mode-D%26O.pdf) [PDF], Outlogic will be required to delete all data it previously collected and to honor opt-out requests. The FTC said the company had not previously asked users for consent to have their location data collected. Additionally, Outlogic will be required to maintain a list of sensitive locations for which it won't gather data, and must implement procedures to ensure buyers of its location data can't associate what they've purchased with sensitive locations. "The FTC's action against X-Mode makes clear that businesses do not have free license to market and sell Americans' sensitive location data," Khan said.

1
0
privacy
Privacy c0mmando 8 months ago 100%
Canada: Police Say Protect the Privacy of Thieves, Don’t Post Videos of Them Stealing Packages reclaimthenet.org

Quebec’s provincial police force, the Sûreté du Québec (SQ), has advised residents against publicly sharing surveillance videos that capture thieves stealing their packages. In recent years, the phenomenon of “porch piracy” has become an increasingly troubling aspect of urban and suburban life. As online shopping has surged in popularity, so too has the opportunity for thieves to swipe unattended packages from doorsteps and porches. The rise of porch piracy correlates directly with the boom in e-commerce. With millions of packages being delivered daily, many are left unattended for hours, making them easy targets for thieves. The convenience that online shopping provides to consumers also creates a vulnerability that these criminals exploit. Thieves have reportedly followed delivery vehicles and swooped in to seize packages left unattended, according to Montreal West Public Security Councilor Lauren Small-Pennefather. As package thefts continue to rise, homeowners are utilizing online platforms to expose thieves, seek community support, and sometimes even shame the perpetrators. The trend of uploading videos of package thefts has gained significant momentum. Home security cameras, doorbell cams, and other surveillance devices are capturing clear footage of these crimes. Victims share these videos on social media platforms like Facebook, X, and Nextdoor to warn neighbors and others. But now Quebec police are telling people not to upload the videos to social media. Their reasoning? To protect the privacy of the thieves. “You cannot post the images yourself because you have to remember, in Canada, we have a presumption of innocence and posting that picture could be a violation of private life,” SQ communications officer Lt. Benoit Richard warned, according to a report aired on CTV.

1
0
privacy
Privacy c0mmando 8 months ago 100%
UK Police Have Been Secretly Using Passport Database for Facial Recognition Since 2019 reclaimthenet.org

It has come to light that UK police forces have been using facial recognition technology to conduct extensive searches within the nation’s passport database, which comprises 46 million British passport holders. This clandestine operation has been ongoing since at least 2019, [The Telegraph and Liberty Investigates have found](https://www.telegraph.co.uk/news/2024/01/05/police-facial-recognition-searches-passport-database-mps/). Passport photos are collected for a specific purpose: to verify the identity of individuals for international travel. When these photos are repurposed for a facial recognition database by law enforcement, it constitutes a significant invasion of privacy. People do not expect their passport photos to be used for surveillance or policing purposes, which can lead to a feeling of constant monitoring and a loss of anonymity in public spaces. Misuse can lead to a chilling effect on free speech and assembly, as individuals might fear being unjustly targeted due to their presence in these databases. When individuals provide their photos for passports, they do not explicitly consent to their use in law enforcement databases, nor have they typically been arrested for a crime, which is often the grounds for collecting biometric data from a suspect. Law enforcement’s use of passport databases for facial recognition scanning turns all passport holders into a potential suspect pool, raising major civil liberties concerns. This lack of consent is a fundamental issue, as it bypasses individuals’ rights to control how their personal data is used. The use of these photos without explicit permission for law enforcement purposes can be seen as a violation of personal autonomy and rights. Concerns have been raised over privacy and data protection following this disclosure.
The Information Commissioner’s Office (ICO), through its spokesman, has voiced intentions to address these concerns with the Home Office, emphasizing the need to align facial recognition technology’s usage with data protection principles. The Home Office’s Freedom of Information (FOI) Request data reveals that, in the first nine months of 2023 alone, over 300 facial recognition searches were conducted on the UK passport database. This practice has raised alarms among several MPs and privacy watchdogs. John Edwards, the information commissioner, through his spokesman, has indicated the ICO’s engagement with the Home Office on this matter, highlighting the “importance of transparency” and the potential implications for data protection.

22
0
privacy
Privacy c0mmando 9 months ago 100%
The Digital ID Rollout Is Becoming a Hacker's Dream reclaimthenet.org

Governments and corporations around the world are showing great enthusiasm for either already implementing, or planning to implement, some form of digital ID. Ironically, these efforts are presented to citizens as not only making their lives easier through convenience, but also keeping the personal data contained within these digital IDs safer in a world teeming with malicious actors. Opponents have been warning about serious privacy implications, but also dispute the claim that data security actually gets improved. It would appear they are right – at least according to a [report by a cybersecurity firm](https://www.resecurity.com/blog/article/cybercriminals-launched-leaksmas-event-in-the-dark-web-exposing-massive-volumes-of-leaked-pii-and-compromised-data) issued after the hacker attacks around the Christmas holiday, now dubbed “Leaksmas.” Not only governments but hackers too love digital IDs and huge amounts of personal information all neatly gathered in one place – and, judging by what’s been happening recently, in many instances sitting there readily available to them. Hackers have expressed this love by making digital ID data their primary focus, the firm, Resecurity, said in its report. Resecurity claims this is a clear fact, discernible from analyzing the data dumps that started appearing on the dark web after the Christmas-time “digital smash-and-grabs.” In numbers: a staggering 50 million records containing personally identifiable information have surfaced on the dark web. The reason so many stolen datasets made it to the black digital market all at once appears to be “technicalities” related to the time window during which most of the data remains “sellable.” Breaking down that 50 million number, Resecurity said that 22 million records were stolen from a telecommunications company in Peru, including what’s known there as DNIs – national IDs.
According to [reports](https://www.biometricupdate.com/202401/leaksmas-report-calls-digital-id-primary-focus-of-hackers), it is hard to overestimate how devastating this event could be if the DNIs end up in the wrong hands. The DNI is the sole ID document recognized by the authorities in Peru for a range of things fundamental to people’s everyday lives: “judicial, administrative, commercial and civil transactions,” as one article put it. After Peru, the countries most affected are the Philippines, the US, France, and Vietnam.

28
1
privacy
Privacy c0mmando 9 months ago 86%
AI Watermarking Is Advocated by Biden’s Advisory Committee Member, Raising Concerns for Parody and Memes reclaimthenet.org

The Biden administration doesn’t seem quite certain how to do it – but it would clearly like to see AI watermarking implemented as soon as possible, despite the idea being marred by many misgivings. And that is even despite what some [reports admit](https://fedscoop.com/ai-watermarking-misinformation-election-bad-actors-congress/) is a lack of consensus on “what a digital watermark is.” Standards and enforcement regulation are also missing. As has become customary, where the government is constrained or insufficiently competent, it effectively enlists private companies. On the standards problem, these seem to be none other than tech dinosaur Adobe and China’s TikTok. It’s hardly a conspiracy theory to think the push mostly has to do with the US presidential election later this year, as watermarking of this kind can be “converted” from its original stated purpose into a speech-suppression tool. The publicly presented argument in favor is obviously not quite that, although one can read between the lines. Namely, AI watermarking is promoted as a “key component” in combating misinformation, deepfakes included. And this is where perfectly legal and legitimate genres like parody and memes could suffer from AI watermarking-facilitated censorship. Spearheading the drive, such as it is, is Biden’s National Artificial Intelligence Advisory Committee, and now one of its members, Carnegie Mellon University’s Ramayya Krishnan, admits there are “enforcement issues” – but he is still enthusiastic about the possibility of using technology that “labels how content was made.” From the Committee’s point of view, a companion AI tool would be the cherry on top. However, there’s still no actual cake. Different companies are developing watermarking that falls into three categories: visible, invisible (i.e., visible only to algorithms), and based on cryptographic metadata. 
And while supporters continue to tout watermarking as a great way to detect and remove “misinformation,” experts are at the same time pointing out that “bad actors,” who are their own brand of experts, can easily remove watermarks – or, adding another layer to the complication of fighting “misinformation” windmills – create watermarks of their own. At the same time, insisting that manipulated content is somehow a new phenomenon that needs to be tackled with special tools is a fallacy. Photoshopped images, visual effects, and parody, to name but a few, have been around for a long time.
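Of the three categories mentioned, the cryptographic-metadata approach is the easiest to sketch: provenance fields are bound to the content bytes with a keyed signature, so any edit breaks verification. A minimal illustration only – the key, field names, and `sign`/`verify` helpers below are invented, not any vendor’s actual scheme:

```python
import hmac
import hashlib
import json

SECRET = b"demo-key"  # hypothetical signing key held by the content tool

def sign(content: bytes, meta: dict) -> str:
    """Bind provenance metadata to the content bytes with an HMAC tag."""
    payload = json.dumps(meta, sort_keys=True).encode() + content
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(content: bytes, meta: dict, tag: str) -> bool:
    """Check that neither the content nor its metadata was altered."""
    return hmac.compare_digest(sign(content, meta), tag)

img = b"...image bytes..."
meta = {"generator": "some-ai-model", "created": "2024-01-15"}
tag = sign(img, meta)
assert verify(img, meta, tag)                # intact content checks out
assert not verify(img + b"edit", meta, tag)  # any alteration breaks the tag
```

The sketch also illustrates the critics’ objection: nothing stops a “bad actor” from stripping the metadata and tag entirely, or from signing their own content with their own key.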

22
4
privacy
Privacy c0mmando 9 months ago 88%
EU Steams Ahead With Controversial, Centrally-Controlled Digital Euro reclaimthenet.org

The European Union (EU) has been known to waste a lot of money on wrong or even hopeless causes, and opponents of centralized digital money (CBDCs) must be hoping that the digital euro, which has just had €1.3 billion earmarked for its development, will be one of those. In fact, in announcing the move, the European Central Bank (ECB) made sure to add the disclaimer that it is “not making a commitment to launch any of the development work listed.” But for the moment, the ECB is [pushing forward with its plans](https://www.ecb.europa.eu/paym/intro/news/html/ecb.mipnews240103_1.en.html), and much earlier than observers expected – so much so that the announcement is viewed by some as a surprise. A total of five private sector partners will now receive huge contracts; in the past, Amazon was controversially involved in the e-commerce payments prototype. How a company that flouted the EU’s own data protection rules, and was fined $887 million as recently as 2021, found its way to becoming an EU “partner” on projects of this importance upset some members of the European Parliament. And they won’t be pleased to know that, although not guaranteed to continue, Amazon might easily be selected this time as well. According to the ECB statement, the recipients of the money will be tasked not only with prototyping the CBDC, but also with developing a relevant app, offline payment schemes, and “risk and fraud management.” This last “initiative” will receive €237 million, while the largest share of the funds – €662 million – will go toward creating offline payments. Regardless of how much criticism CBDCs are receiving, particularly as a power grab, supporters appear convinced that the digital euro would improve the bloc’s financial infrastructure. 
And it looks like the EU would like to keep the money “in the Big Money family”: Etonec COO and Digital Euro Association chairman Jonas Gross thinks those most likely to get the contracts are “established CBDC tech providers with offline capabilities,” Big Tech, global financial consultancies, and “smaller” (but also “larger”) software firms. Privacy concerns, and doubts about whether a CBDC could ever be viable competition to other kinds of digital payments, remain the recurring arguments offered by opponents.

7
0
privacy
Privacy CCL 9 months ago 100%
Tech group with TikTok and Meta sues Ohio over law curbing children’s social media use www.scmp.com

There is no way to actually enforce a law like this, but they'll sure use it as an excuse to increase surveillance, particularly of our youth.

20
0
privacy
Privacy c0mmando 9 months ago 89%
California Introduces New Law, Limiting and Regulating Cops’ Posting of Mugshots Online, Forcing Chosen Names and Pronouns reclaimthenet.org

California’s AB 994 is a newly instituted piece of legislation that significantly regulates the protocol for police departments and sheriff’s offices in relation to the publication of mugshots on their social media platforms. This law, effective from January 1, 2024, introduces strict controls regarding the display of mugshots, particularly emphasizing the inclusion of the arrested individual’s self-identified name and gendered pronouns. We obtained a copy of the bill for you [here](https://docs.reclaimthenet.org/20230AB994_94.pdf) (PDF). Under AB 994, the conditions allowing for the sharing of mugshots on social media by law enforcement are notably restricted. The law mandates that the person’s chosen name and chosen pronouns be used when their mugshot is uploaded on these platforms. This requirement is applicable regardless of the nature of the alleged crime. “With respect to an individual who has been arrested for any crime, this bill would require a police department or sheriff’s office, upon posting a booking photo on social media, to use the name and pronouns given by the individual arrested,” the law specifically states. Law enforcement agencies, however, retain the discretion to use other legal names or aliases in situations where it may aid in apprehending the individual, reduce imminent threats to public safety, or in other urgent scenarios. Traditionally, the booking process after an arrest involves using the person’s legal name from official documents like birth certificates or driver’s licenses. The new legislation, however, stipulates these new rules specifically for the phase when the individual’s mugshot is posted online. AB 994 also sets boundaries on the posting of mugshots for those accused of nonviolent crimes. The exceptions to this limitation include instances where the release of the image could contribute to the person’s capture, mitigate threats to others, or is ordered by a judge. 
Furthermore, the law requires the removal of all mugshots from law enforcement social media accounts within two weeks, unless it falls under the same three exceptional categories.

23
4
privacy
Privacy c0mmando 9 months ago 97%
Facebook Rolls Out "Link History" Showing How it Tracks All The Websites Users Visit reclaimthenet.org

Facebook, just like the rest of Big Tech, has historically made a great effort to track users across the internet, even when they are not logged into the platform, for data collection and, ultimately, monetary reasons. Now, [reports](https://gizmodo.com/meet-link-history-facebook-s-new-way-to-track-the-we-1851134018) say the giant has recently launched a new way to achieve this – and, notably, for the first time this type of tracking is made visible. Called Link History, the new feature is found in the Facebook app as essentially one of the permissions, and “documents” every link a user clicks while using the app. Once again fully in the vein of what Google, Microsoft, etc. are doing, Facebook says the change – putting all links in one place – is there for better user experience; and, again habitually, while the feature is not mandatory, it is on by default and “hiding” behind a pretty solid wall of an “opt-out.” Whatever the case may be, most users don’t bother jumping over that wall, allowing corporations to at once offer a choice – and in most cases have it their way. ![](https://links.hackliberty.org/pictrs/image/3b712cc5-6f8c-4bab-97b8-30410e402ec3.jpeg) In order to deactivate this in the app, users first need to be aware Link History exists, and then navigate to the appropriate setting in order to “opt out.” There is no shortage of criticism of this latest move from the privacy point of view (although the mainstream tech press curiously chooses to single out Facebook while praising Google and Apple as some sort of “privacy warriors” now). This should be viewed as part of the big (political) picture, where keeping pressure on Facebook – still the most influential social media platform – is especially important in an election year, while at the same time rightfully questioning Facebook’s (persistent) motivation for pursuing cross-site user tracking. A classic example of two things being true at the same time. 
Facebook (Meta) doesn’t exactly pretend it is working solely to make sure users “never lose a link again” and enjoy other things that benefit them. A part of Link History’s announcement spells this out: “When you allow link history, we may use your information to improve your ads across Meta technologies.” What the statement doesn’t clarify is if any of the well-known, ultra-invasive methods it uses to track users will actually change in any way with the introduction of Link History.

40
0
privacy
Privacy c0mmando 9 months ago 92%
Recovering Our Lost Free Will Online: Tools and Techniques That Are Available Now https://www.complete.org/recovering-our-lost-free-will-online-tools-and-techniques-that-are-available-now/

>As I’ve been thinking and writing about Privacy and decentralization lately, I had a conversation with a colleague this week, and he commented about how loss of privacy is related to loss of agency: that is, loss of our ability to make our own choices, pursue our own interests, and be master of our own attention.

>In terms of telecommunications, we have never really been free, though in terms of Internet and its predecessors, there have been times where we had a lot more choice. Many are too young to remember this, and for others, that era is a distant memory.

>The irony is that our present moment is one of enormous consolidation of power, and yet also one of a proliferation of technologies that let us wrest back some of that power. In this post, I hope to enlighten or remind us of some of the choices we have lost — and also talk about the ways in which we can choose to regain them, already, right now.

>I will talk about the possibilities, the big dreams that are possible now, and then go into more detail about the solutions.

11
1
privacy
Privacy c0mmando 9 months ago 84%
Visitors of Assange Allowed to Continue Lawsuit Against CIA Surveillance, Judge Rules reclaimthenet.org

A US judge has allowed four people who visited whistleblower and WikiLeaks founder Julian Assange while he was residing in the [Ecuadorian embassy in London](https://reclaimthenet.org/assange-spied-on-by-cia) to continue their legal battle against the CIA, launched in the summer of 2022. The four – journalists John Goetz and Charles Glass, and attorneys Margaret Ratner Kunstler and Deborah Hrbek – allege that the agency spied on them during the visits. District Court Judge John Koeltl made this decision in response to the CIA asking the Manhattan court to dismiss the case. We obtained a copy of the decision for you [here](https://docs.reclaimthenet.org/gov.uscourts.nysd.584750.77.0.pdf) (PDF). The filing accuses the spies of gaining access to data copied from their phones without their knowledge, and Judge Koeltl agreed that, if this occurred, it was an unconstitutional privacy violation, representing sufficient grounds for the lawsuit to proceed. Spain’s El Pais originally reported that a contractor harvested information using hidden microphones and by accessing the phones, and then turned it over to the CIA. The judge specified that Assange’s visitors had a “reasonable expectation of privacy” regarding the data on their phones, guaranteed by the Fourth Amendment (which protects against unreasonable searches and seizures). The part of the lawsuit allowed to proceed would, if the final verdict goes in favor of the plaintiffs, force the CIA to destroy whatever information it unlawfully obtained from the devices – at least in theory. But the judge dismissed the money damages request in the filing, which named then-CIA Director Mike Pompeo as the one who should pay. And Koeltl found that the CIA listening in on the plaintiffs’ conversations and getting its hands on copies of their passports did not violate any rights, contrary to the original lawsuit’s claims. 
While representatives of the court were not in the mood to make any further comments, a lawyer for the plaintiffs, Richard Roth, stated for the press that he and his clients were “thrilled” that the court did not go along with what he said is the CIA trying to silence them. Roth added that they “merely seek to expose the CIA’s attempt to carry out Pompeo’s vendetta against WikiLeaks.” According to [Politico](https://www.politico.com/news/2023/12/19/wikileaks-founder-visitors-spying-lawsuit-cia-00132552), the tactic the government could turn to now is the state-secrets privilege doctrine. This would be a workaround of sorts used in order to “shut down civil suits that implicate classified information.”

27
0
privacy
Privacy freedomPusher 9 months ago 80%
Hospital in Czech Republic considering sharing sensitive medical data with Cloudflare site to translate docs

Some people use https://libretranslate.com/ thinking they are gaining some privacy by avoiding Google Translate. That web service is proxied through Cloudflare, thus exposing potentially sensitive text to a privacy-hostile US tech giant. #Libretranslate uses words like “libre”, “free” and “open” to gain people’s trust. There is no mention of Cloudflare, which is quite deceptive. A CTO in the Czech Republic was considering using Libretranslate on sensitive medical and personal data. Yikes! His only concern was whether the Libretranslate admin kept logs of the translations. The Czech Republic is an EU member, thus subject to the GDPR. But I actually cannot think of a GDPR violation here in the general case. Everyone is free to outsource. And Europeans would likely be routed to a CF server in Europe.

3
2
privacy
Privacy c0mmando 9 months ago 98%
Google to settle class action lawsuit alleging Incognito mode does not protect user privacy therecord.media

A nearly four-year-long battle between Google and consumers in a class action lawsuit reached a preliminary settlement Tuesday over allegations that Google deceives users about their privacy when browsing in the tech giant’s so-called Incognito mode. Google and the plaintiffs are planning for a “final and definitive settlement,” according to a [joint update](https://storage.courtlistener.com/recap/gov.uscourts.cand.360374/gov.uscourts.cand.360374.1089.0.pdf) (PDF) filed with a California federal judge. The agreement comes after mediation led to a “binding term sheet” with parties expected to present their final settlement to Northern District of California Judge Yvonne Gonzalez Rogers within 60 days. The settlement follows a [ruling](https://www.documentcloud.org/documents/24242172-govuscourtscand36037410880) last Thursday denying a Google request to exclude large swaths of evidence, including all arguments regarding class-wide damages, unjust enrichment, “purported harm to plaintiffs’ peace of mind,” and testimony from a plaintiffs’ expert offering methodology standards for statutory damages. The Thursday ruling also denied a Google request to exclude evidence of “other litigation and regulations not at issue in this litigation.” Another [setback](https://therecord.media/google-loses-bid-to-throw-out-incognito-lawsuit-private-browsing) occurred in August when Gonzalez Rogers [denied](https://www.documentcloud.org/documents/24243485-govuscourtscand3603749690) a Google request for summary judgment. The plaintiffs argued that Google illegally violated the privacy of millions due to its use of cookies, analytics, and tools in apps tracking internet browsing activity despite users having gone into Incognito mode in Chrome. 
Citing Google’s Incognito Splash Screen, Chrome privacy notice and Search & Browse Privately Help page assertions about Incognito mode minimizing stored information and promising consumers they can manage whether their activities are shared, Gonzalez Rogers wrote that “a triable issue exists as to whether these writings created an enforceable promise that Google would not collect users’ data while they browsed privately.” Google declined to comment on the preliminary settlement Wednesday but issued a statement after the August ruling saying Incognito mode in Chrome “gives you the choice to browse the internet without your activity being saved to your browser or device.” “As we clearly state each time you open a new Incognito tab, websites might be able to collect information about your browsing activity during your session,” the statement said. The [original complaint](https://www.documentcloud.org/documents/24243480-govuscourtscand36037410) filed against the tech giant in 2020 held that Google “tracks and collects consumer browsing history and other web activity data no matter what safeguards consumers undertake to protect their data privacy.” “Indeed, even when Google users launch a web browser with ‘private browsing mode’ activated (as Google recommends to users wishing to browse the web privately), Google nevertheless tracks the users’ browsing data and other identifying information,” the complaint continued. Calling the tracking “surreptitious,” the plaintiffs alleged that the company used Google Analytics, Google Ad Manager, and various other application and website plug-ins to render Incognito not incognito at all. 
“Through its pervasive data tracking business, Google knows who your friends are, what your hobbies are, what you like to eat, what movies you watch, where and when you like to shop, what your favorite vacation destinations are, what your favorite color is, and even the most intimate and potentially embarrassing things you browse on the internet — regardless of whether you follow Google’s advice to keep your activities ‘private,’” the complaint alleged, saying Google’s knowledge of users’ habits is so “detailed and expansive that George Orwell could never have dreamed it.”

138
12
privacy
Privacy c0mmando 9 months ago 92%
Google's New Patent: Using Machine Learning to Identify "Misinformation" on Social Media reclaimthenet.org

Google has filed an [application](https://docs.reclaimthenet.org/US-20230385548-A1-I.pdf) with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider “misinformation” on social media. Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward. The patent’s general purpose is to identify information operations (IO), and the system is then supposed to “predict” whether there is “misinformation” in them. Judging by the explanation Google attached to the filing, it at first looks as if the company blames its own existence for the proliferation of “misinformation” – the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.” But it seems that Google is developing the tool with other platforms in mind. The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.” Machine learning itself depends on algorithms being fed large amounts of data, and there are two types of it – “supervised” and “unsupervised” – where the latter works by providing an algorithm with huge datasets (such as images or, in this case, language) and asking it to “learn” to identify what it is “looking” at. (Reinforcement learning is part of the process – in essence, the algorithm gets trained to become increasingly efficient at detecting whatever those who created the system are looking for.) The ultimate goal here would very likely be for Google to make its “misinformation detection” – i.e., censorship – more efficient while targeting a specific type of data. 
The patent indeed states that it uses neural-network language models (where neural networks form the “infrastructure” of ML). Google’s tool will classify data as IO or benign, and further aims to label it as coming from an individual, an organization, or a country. The model then predicts the likelihood of that content being a “disinformation campaign” by assigning it a score.
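The patent does not disclose model details, but the generic shape it describes – features in, an “IO vs. benign” score out, a threshold turning the score into a label – can be sketched with a toy logistic scorer (the token weights, bias, and threshold below are invented for illustration; a real model would learn them from training data):

```python
import math

# Hypothetical per-token weights a trained model might have learned;
# positive weights push the score toward the "information operation" class.
WEIGHTS = {"urgent": 1.2, "share": 0.9, "exposed": 1.1, "weather": -0.8}
BIAS = -1.0

def io_score(text: str) -> float:
    """Return a 0..1 'likelihood of disinformation campaign' score."""
    z = BIAS + sum(WEIGHTS.get(tok, 0.0) for tok in text.lower().split())
    return 1 / (1 + math.exp(-z))  # logistic function squashes z into (0, 1)

def label(text: str, threshold: float = 0.5) -> str:
    """Collapse the continuous score into an IO/benign classification."""
    return "IO" if io_score(text) > threshold else "benign"
```

The labeling step reduces to thresholding the score – which is exactly where the censorship concern lives, since both the weights and the threshold are whatever the operator chooses.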

21
3
privacy
Privacy c0mmando 9 months ago 97%
Even The EU Commission's Own Study Fails to Prove Efficacy of Extensive Surveillance of Personal Images and Videos reclaimthenet.org

The European Union (EU) might at this point be floundering for a new way to get attention regarding many of the key points that brought about its existence in the first place. But continuing to advance free-speech-and-privacy-undermining policies, unfortunately, doesn’t seem to be one of the quandaries the bloc finds itself in. In fact, the EU shows a rare display of clarity – and purpose – when dealing with these issues. Unfortunately, that does not benefit rights and freedoms, at least not as we knew them until these last few years. Known colloquially among critics as “chat control,” the EU’s CSAM (child sexual abuse material) Regulation is still passing through the figurative “bowels” of the EU (and nation-states’) legislative and decision-making system. Currently, the EU Commission is reporting on how effective voluntary application of the legislation has been, and the result has disappointed long-time critic and member of the European Parliament (MEP) from Germany, Patrick Breyer. In a [blog post](https://www.patrick-breyer.de/en/chat-control-evaluation-report-eu-commission-fails-to-demonstrate-effectiveness-of-mass-surveillance-of-intimate-personal-photos-and-videos/), Breyer explains that while the formal premise (once again) is simply to bolster online protection of children, the law actually looks to make Big Tech social platforms (Facebook, Google, etc.) allow wholesale intrusive surveillance of private communications – including chat and emails. And this is to happen as a form of “pre-crime scanning” – or, as Breyer puts it, automatic and indiscriminate bulk searches of personal content. So, how effective does the EU Commission think it is? 
“The long overdue evaluation of voluntary chat control turns out to be a disaster: Provided figures on suspicious activity reports, identifications and convictions lack any proven connection to the chat control bulk scanning of private messages because NCMEC (National Center for Missing & Exploited Children) reports also result of user reports and the scanning of public posts/websites,” writes Breyer. The MEP makes the further clarifying point that a system that counts among its results the “success” of identifying (by having effectively unimpeded access to everybody’s phone) sexual content exchanged between consenting adults is “hardly a challenge and does not protect anyone from child sexual abuse.” (Clearly, here the legislation is hitting the brick wall of the limitations of algorithms, that is, automated censorship tech.) Breyer’s further point is that this form of mass surveillance in the EU is to be carried out by US companies. But “the horse of EU sovereignty” on a whole range of issues may have simply left the barn a while ago – hence, the “chat control” mechanism is simply one of many symptoms of one and the same ill.

37
0
privacy
Privacy c0mmando 9 months ago 94%
Police to be able to run face recognition searches on 50m driving licence holders www.theguardian.com

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government. Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners. Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”. Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill. Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association. Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests. Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces. 
He said: “This constitutes another example of how facial recognition surveillance is becoming extended without clear limits or independent oversight of its use. The minister highlights how such technologies are useful and convenient. That police find such technologies useful or convenient is not sufficient justification to override the legal human rights protections they are also obliged to uphold.” Access to driving licence records is controlled under regulations related to the Criminal Justice and Court Services Act 2000, which requires the police to provide a good cause relating to a contravention of mainly road traffic acts. An explanatory note to the new criminal justice bill says that “clause 21 clarifies who can access the driver data and enables regulations to provide for access to DVLA driver information for all policing or law enforcement purposes”. The policing minister, Chris Philp, made a first explicit reference to what appears to be the unsaid purpose of the legislative change during a first committee sitting of MPs scrutinising the bill on 12 December. Questioning Graeme Biggar, the director general of the National Crime Agency, Philp said: “There is a power in clause 21 to allow police and law enforcement, including the NCA, to access driving licence records to do a facial recognition search, which, anomalously, is currently quite difficult. “When you get a crime scene image from CCTV or something like that, do you agree it would be useful to be able to do a facial recognition search across DVLA records as well as the other records that can currently be accessed?” Biggar responded: “Yes, it would. It is really important for us to be able to use facial recognition more. I know that is an issue you have been championing.” The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. 
The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy. Philp is known to be an enthusiast for facial recognition technology and has encouraged the police to use it more often. The Home Office is already seeking to integrate data from the police national database (PND), the Passport Office and the EU settled status database into a single system to help police find an image match with the “click of one button”. A Home Office spokesperson said: “Clause 21 in the criminal justice bill clarifies the law around safeguarding and accountability of police force’s use of DVLA records. “It does not allow for automatic access to DVLA records for facial recognition. Any further developments would be subject to further engagement as the public would expect.” Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers. She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?” The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes. Chris Jones, the director at Statewatch, a civil liberties NGO, called on MPs to reject the controversial change. He said: “There has been no public announcement of or consultation over this plan, which will put anyone in the country with a driving licence into a permanent police lineup. Opening up civil databases to mass police searches turns everyone, a priori, into a suspect. 
More surveillance and snooping powers will not make people safer.” In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias. The force has continued to use the technology. Live facial recognition is to be deployed to match people attending Christmas markets this year against a watchlist. Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”
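The matching step the article describes – comparing the biometric measurements of a licence photo against an image picked up elsewhere – typically reduces to a nearest-neighbour search over face embeddings. A schematic sketch only: the three-dimensional vectors and licence IDs below are made up, and real systems use learned embeddings of hundreds of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings extracted from licence photos (record id -> vector).
database = {
    "licence_001": [0.9, 0.1, 0.3],
    "licence_002": [0.2, 0.8, 0.5],
}

def best_match(probe, db, threshold=0.9):
    """Return the most similar record id, or None if nothing clears the threshold."""
    ident, score = max(
        ((k, cosine(probe, v)) for k, v in db.items()), key=lambda kv: kv[1]
    )
    return ident if score >= threshold else None

print(best_match([0.88, 0.12, 0.28], database))  # prints licence_001
```

The `threshold` parameter is where the false-match problem Prof Fussey raises lives: set it too low and innocent people surface as “matches,” and demographic bias in the embedding model skews who that happens to.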

16
0
privacy
Privacy c0mmando 9 months ago 89%
Has avoiding Cloudflare become Impossible?

Nearly every website today seems to be hosted behind Cloudflare, which is really concerning for the future of privacy on the internet. Cloudflare no doubt logs, stores, and correlates network telemetry that can be used for a wide array of deanonymization attacks. Not only that, but Cloudflare acts as a man-in-the-middle for all encrypted traffic, which means that not even TLS will prevent Cloudflare from snooping on you. Their position across the internet also lends them the ability to conduct netflow and traffic correlation attacks. ~~Even my proposed solution to use archive.org as a proxy is not a valid solution, since I found out today that archive.org is also hosted behind Cloudflare...~~ *edit: I was wrong* So what options do we even have? What privacy concerns did I miss, and are there any workaround solutions?
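One small practical step is at least knowing when you're talking to Cloudflare before you send anything sensitive. Cloudflare-fronted responses typically carry telltale headers such as `CF-RAY` and `Server: cloudflare`. Here's a minimal sketch in Python that applies that heuristic; the helper names are my own, and header checks like this are only an indicator, not proof either way:

```python
# Heuristic check for Cloudflare fronting based on response headers.
# CF-RAY / CF-Cache-Status / "Server: cloudflare" are the commonly
# observed markers; their absence does not guarantee no Cloudflare.
from urllib.request import urlopen

CF_HEADER_MARKERS = {"cf-ray", "cf-cache-status"}

def looks_like_cloudflare(headers):
    """Take a mapping of header name -> value and return True if it
    carries typical Cloudflare markers (case-insensitive)."""
    names = {name.lower() for name in headers}
    if names & CF_HEADER_MARKERS:
        return True
    server = headers.get("server", headers.get("Server", ""))
    return server.lower() == "cloudflare"

def check_site(url):
    """Fetch a URL and apply the header heuristic (needs network access)."""
    with urlopen(url) as resp:
        return looks_like_cloudflare(dict(resp.headers))
```

You could run `check_site("https://example.com")` against the sites you use and decide which ones to avoid or to reach only over Tor; it won't tell you anything about netflow-level correlation, just whether the front door is Cloudflare's.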

63
63
privacy
Privacy c0mmando 9 months ago 97%
FTC proposes tougher children’s data privacy rules for first time in a decade web.archive.org

The Federal Trade Commission (FTC) is proposing new restrictions on the use and disclosure of children’s personal data and wants to make it much harder for companies to exclude children from their services if they can’t monetize their data, the agency announced Wednesday. The proposed overhaul of the Children’s Online Privacy Protection Rule (COPPA) is the first suggested update of the landmark regulation in a decade. It comes as the agency is showing new muscle in protecting children online, most notably with its recent crackdown on Meta, which it wants to prohibit from monetizing kids’ data across the board. Under the new COPPA proposal, the FTC seeks to make providers, not parents, mostly responsible for ensuring digital experiences are safe for kids, the agency said in a press release.

“Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” FTC Chair Lina Khan said in a statement. She went on to say the proposed changes — which the public will be invited to comment on for 60 days before a final rule is issued — are especially vital in an era “where firms are deploying increasingly sophisticated digital tools to surveil children.” “By requiring firms to better safeguard kids’ data, our proposal places affirmative obligations on service providers and prohibits them from outsourcing their responsibilities to parents,” Khan said.
In addition to the above changes, the new proposal:

- Bars companies from gathering more personal information than is “reasonably necessary” for a child to participate in online games and contests
- Limits companies’ ability to use kids’ personal information to send them push notifications or otherwise “nudge” them to stay online
- Prohibits educational technology companies from using kids’ data commercially and requires protections for that data
- Strengthens data security rules by requiring that operators implement a “written children’s personal information security program” to protect sensitive data
- Ensures data is retained only as long as needed for the task at hand and bars companies from keeping data for secondary purposes
- Broadens the definition of personal information to include biometric identifiers

The existing COPPA rule, originally implemented in 2000, mandates that websites and online services collecting data from children under the age of 13 notify their parents first. The rule also reins in companies’ use of kids’ data, limiting what they can collect, how long they can store it and how they must secure it. The public has historically shown keen interest in the COPPA rule: the FTC fielded more than 175,000 comments when it last asked for input on updating the rule in 2019, though it did not move forward with formal changes at the time. The rule was last updated in 2013 in an effort to better regulate children’s use of cell phones and social media. That update broadened the definition of personal information by regulating how companies could use youths’ geolocation data, photos, videos and audio recordings, as well as online tracking tools such as cookies.

Children’s privacy advocates highlighted Wednesday’s announcement as one of several recent FTC actions cracking down on big tech’s use of kids’ data. Some advocates said action is needed now more than ever as artificial intelligence feeds off data, driving demand for more of it.
“With this critical rule update, the FTC has further delineated what companies must do to minimize data collection and retention and ensure they are not profiting off of children’s information at the expense of their privacy and wellbeing,” Haley Hinkle, policy counsel at the children’s advocacy organization Fairplay, said in a statement.

73
5