Apple's fight against child pornography on the iPhone, and why it is a mass surveillance risk

15 February, by kenglenn

Apple's new operating systems will include features designed to protect minors and limit the spread of "Child Sexual Abuse Material" (CSAM), i.e. child pornography.

The child-protection measures will be implemented across three different systems: iMessage, iCloud and Siri.

For now, the new features are expected to arrive by the end of the year in the United States only, but I have no doubt that they could also reach the European Union.

As virtuous as Apple's intentions may seem, there are many risks associated with the use of the proposed technologies, and for this reason I believe that their implementation must be stopped immediately.


What is concerning about Apple's child pornography surveillance on the iPhone

The most concerning measure is probably the one involving iCloud. Apple's intention is to introduce a feature for automated scanning of photos on devices (iPhone and iPad), looking for CSAM (child sexual abuse material).

How scanning takes place

Scanning takes place in two distinct phases. The first phase is exclusively local, i.e. it runs directly on the person's device. Using an algorithm called NeuralHash, the photographs stored in memory are scanned to produce a "hash digest" (an alphanumeric identification string), which is then compared in real time against a database downloaded onto the device containing the hash digests of known child pornography content. In the event of a positive match, the system creates a "safety voucher" which is linked to the photograph.
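To make the first phase more concrete, here is a minimal sketch of on-device matching. It is purely illustrative: the names are hypothetical, and a cryptographic SHA-256 hash stands in for Apple's perceptual NeuralHash, which, unlike SHA-256, is designed to produce the same digest even after resizing, re-compression or other minor alterations of the image.

```python
import hashlib
from pathlib import Path

# Hypothetical blacklist of hash digests downloaded to the device.
# In Apple's design these would be perceptual NeuralHash digests of known
# CSAM; SHA-256 of the file bytes is used here only as a stand-in, and the
# entry below is an arbitrary placeholder.
BLACKLIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def photo_digest(photo_path: Path) -> str:
    """Stand-in for the perceptual hash computed for each stored photo."""
    return hashlib.sha256(photo_path.read_bytes()).hexdigest()

def scan_photo(photo_path: Path):
    """Return a 'safety voucher' if the photo's digest is on the blacklist."""
    digest = photo_digest(photo_path)
    if digest in BLACKLIST:
        # In the real system the voucher is encrypted so that neither the
        # device nor (below a match threshold) Apple can read it; here it is
        # a plain record for illustration only.
        return {"photo": str(photo_path), "digest": digest}
    return None
```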

When the photo is uploaded to iCloud, the second phase begins. Once a certain threshold of positive matches has been exceeded, Apple's systems are able to decrypt the "safety vouchers" and access all the information they contain. At that point the flagged photograph is examined by a team of reviewers who decide whether it is actually child pornography. If so, the authorities are notified.
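According to Apple's public description, this "threshold of positive matches" is enforced cryptographically with threshold secret sharing: each voucher carries a share of a key, and only once enough shares have been collected can the server reconstruct that key and open the vouchers. The toy Shamir-style sketch below (hypothetical names and values, no relation to Apple's actual code) shows the underlying idea: a secret becomes recoverable only when at least `threshold` shares are available.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; field large enough for a toy secret

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` so that any `threshold` of the `count` shares recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Toy example: each positive match uploads one share of the key protecting
# the vouchers; the values are arbitrary and not Apple's real parameters.
shares = make_shares(secret=123456789, threshold=3, count=10)
assert reconstruct(shares[:3]) == 123456789   # enough matches: vouchers readable
# With fewer shares than the threshold, the reconstructed value is, with
# overwhelming probability, unrelated to the real secret.
print(reconstruct(shares[:2]) == 123456789)    # almost certainly False
```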

Although promoted as a way to protect children and combat the spread of child pornography, this technology is open to abuses that would end up compromising the privacy and security of every Apple user, including the very children it is meant to protect.

Apple's reassurances after the controversy


(Updated August 11th)

Apple released a document this week explaining that the new system will not scan private photo libraries on iPhones.

Also, the matching technology will stop working if people disable their iPhone's photo library from backing up images to iCloud, a company spokesperson said.

But in reality this clarification changes nothing. It is true that the matching does not take place without iCloud, but the photos on the iPhone are still compared against the hashes downloaded to the device.

The risk of attack

There are several reasons for concern. First, there is no way to limit the use of these tools. Once such a client-side scanning system is in place, whoever controls the blacklists and the scanning and matching algorithms has the power to expand the scope and reach of these controls at will. It is enough to add certain content to the blacklist database, and every copy of that content is instantly identified and censored on every person's device, even when offline. Today it is a database of child pornography; tomorrow it could be a database of "terrorist" content (as already happens in some cases), or a database of vaguely defined "illegal material".
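Continuing the hypothetical sketch above, the point can be made in a few lines: the matching mechanism has no notion of what a blacklisted digest represents, so repurposing it is trivial for whoever controls the database.

```python
# Continuing the earlier sketch (same BLACKLIST, photo_digest and scan_photo):
# the scanner only compares digests, so whoever controls the blacklist can
# repurpose the same machinery for any category of content.
def expand_blacklist(extra_photos):
    """Add the digests of arbitrary new images to the on-device blacklist."""
    for path in extra_photos:
        BLACKLIST.add(photo_digest(path))

# From this moment on, every copy of those images is flagged by scan_photo()
# on every device that receives the updated database, CSAM or not.
```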

As if this were not enough, we must also reflect on the "closed source" nature of these systems. It is impossible for people to verify the contents of this blacklist, just as it is impossible to verify the functioning of the algorithms used to scan the contents on the device.

Finally, there are also risks of external attack. Machine learning techniques (such as GANs, Generative Adversarial Networks) can be used to attack perceptual hashing algorithms like the one reportedly used by Apple. Possible attacks vary: for example, crafting innocuous-looking images whose hashes collide with blacklisted ones, in order to flood the system with false positives or to frame a target, or subtly altering illegal images so that they evade detection.
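A real GAN-based attack is far more sophisticated than anything shown here, but the fragility it exploits can be illustrated with a toy perceptual hash. Because such hashes deliberately discard most of an image's information (so that resized or re-compressed copies still match), very different images can collide, and an attacker can craft such collisions on purpose. The "average hash" below is a simplified stand-in, not NeuralHash.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

# Two clearly different 4x4 grayscale "images"...
bright_edges = [[250, 250, 250, 250],
                [250,  10,  10, 250],
                [250,  10,  10, 250],
                [250, 250, 250, 250]]

dim_edges    = [[ 90,  90,  90,  90],
                [ 90,  20,  20,  90],
                [ 90,  20,  20,  90],
                [ 90,  90,  90,  90]]

# ...that nevertheless share the same perceptual digest.
assert average_hash(bright_edges) == average_hash(dim_edges)
```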

Surveillance and censorship risk

But beyond the risks of attack, the more serious risk is that this becomes a real-time surveillance and censorship tool serving governments around the world.

Apple has historically been subject to political pressure from the United States, as well as from China and Saudi Arabia. For example, after pressure from the US government, Apple decided not to encrypt iCloud backups, so as to leave the data accessible to the authorities. Similarly, recent investigations have revealed dangerous compromises made by Apple in China regarding the encryption of its communication and hosting services. In practice, the Chinese government today has access to every communication, photo, document, contact and location of every Chinese citizen who uses Apple devices. Finally, Apple is among the companies involved in the NSA's global mass surveillance programmes revealed by Edward Snowden in 2013. In short, there is nothing to suggest that such a powerful system cannot be abused in the years to come.

A long battle against encryption, for surveillance

Apple's proposal, however, is not a bolt from the blue. For some time now the US government, together with the other Five Eyes members (UK, Australia, Canada and New Zealand), has been trying to limit the spread of end-to-end encryption, which makes communications almost impossible to intercept.

However, end-to-end encryption is increasingly widespread and within everyone's reach, so for some years the primary objective has been to develop technologies that bypass strong encryption entirely. Client-side scanning systems (i.e. scanning directly on the device), such as the one proposed by Apple, are among the most frequently proposed and favoured measures: scanning the device's memory directly makes it possible to access content before it is encrypted and transmitted to the recipient.
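The point is easy to see in a sketch. In the hypothetical messaging flow below (all names are illustrative, and a toy XOR cipher stands in for real end-to-end encryption), the scan runs on the plaintext before encryption: what travels over the network is still encrypted end to end, yet the provider can learn about the content anyway.

```python
import hashlib

# Hypothetical digest blacklist (left empty in this demo).
BLACKLISTED_DIGESTS = set()

def client_side_scan(plaintext: bytes) -> bool:
    """Hypothetical on-device check, run on the plaintext BEFORE any encryption."""
    return hashlib.sha256(plaintext).hexdigest() in BLACKLISTED_DIGESTS

def end_to_end_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Stand-in for real end-to-end encryption (a toy XOR keystream)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def send_message(plaintext: bytes, recipient_key: bytes) -> bytes:
    if client_side_scan(plaintext):
        # The provider learns about the content even though what travels
        # over the network remains end-to-end encrypted.
        print("match: content reported to the provider before encryption")
    return end_to_end_encrypt(plaintext, recipient_key)

ciphertext = send_message(b"hello", recipient_key=b"shared-secret")
```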

At the beginning I wrote that I have no doubt this feature could also arrive in the EU. The reason is the "chatcontrol" Regulation, approved in July, which pursues exactly the same objective as Apple's system. It is a regulation I have been writing about for some time, but which for some reason has gone largely unnoticed. It provides for a derogation from the ePrivacy Directive to allow communication service providers to scan the contents of messages for child sexual abuse material. The problem is always the same: end-to-end encrypted communications cannot be scanned by anyone. So how can this be done? The best solution is precisely the one proposed by Apple.

So how can we trust an uninspectable system created expressly to bypass end-to-end encryption and to facilitate the interception of communications by governments around the world? The aim is commendable, but it does not justify the mass surveillance of hundreds of millions of people.

Surveillance must always be justified by serious indications of a crime. Conversely, mass surveillance of people in search of signs of crime is an unjustifiable reversal of our fundamental principles. No crime, however heinous, can make such a hypothesis reasonable. The European Court of Justice itself has repeatedly held that indiscriminate surveillance of communications violates fundamental rights, and that surveillance must instead be limited in time and justified by specific national security needs. Horrible as it is, child pornography is not a national security issue. And in any case, surveillance could not be applied systematically to every person (including the very children to be protected) for an indefinite time.

© All rights reserved