Apple iOS 15.2: this is how your iPhone will change forever!
By Tiziana Roselli


Apple's next iOS update, 15.2, looks set to be the most radical iPhone update in years, not because it includes extraordinary new features or services, but because, despite countless warnings, it includes a "shocking" change of direction for Apple and for the company's many iPhone users.

Many will remember the controversy that marked this summer's launch of the iPhone 13 and iOS 15, over Apple's plan to scan all the photos its users uploaded to iCloud and stored on their devices, looking for child sexual abuse imagery to report to the competent authorities.

Apple then backed down and the controversy faded, though the company promised to reflect on the avalanche of feedback it had received and to return, declaring its intention to press ahead regardless. And in fact, here we are talking about it again. Let's see what this is about.

Apple: safety plans on iPhone

Apple's child-safety plans included two separate updates. First, to scan users' photo libraries on their iPhones before they sync with iCloud, using on-device intelligence to match users' photos against government-accredited watchlists and report child sexual abuse material (CSAM).
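To make the mechanism concrete, here is a minimal sketch of perceptual-hash matching of the kind described above. It is only an assumption-laden illustration: Apple's NeuralHash algorithm is not public, and the watchlist hash, distance threshold and file paths below are invented for the example.

```python
from pathlib import Path

from PIL import Image
import imagehash  # pip install ImageHash Pillow

# Hypothetical watchlist of perceptual hashes of known images.
WATCHLIST = {imagehash.hex_to_hash("d1d1b9b1b1919191")}
MATCH_THRESHOLD = 8  # assumed max Hamming distance that counts as a match


def scan_library(library: Path) -> list[Path]:
    """Return the photos whose perceptual hash is close to a watchlist entry."""
    flagged = []
    for photo in library.glob("*.jpg"):
        photo_hash = imagehash.phash(Image.open(photo))
        if any(photo_hash - entry <= MATCH_THRESHOLD for entry in WATCHLIST):
            flagged.append(photo)
    return flagged
```

In Apple's proposal, matching of this general kind would run on the device itself, just before photos sync to iCloud.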

And second, to let parents enable on-device machine learning on their children's iPhones to warn them when nude or sexually explicit images are sent or received in iMessage.
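A sketch of how such a gate might look in code, with a placeholder classifier; Apple has not published its model, so the function names, threshold and behavior here are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class ScreenResult:
    explicit: bool
    score: float


def nudity_score(image_bytes: bytes) -> float:
    """Stand-in for an on-device classifier returning P(image is explicit).

    A real implementation would run a bundled ML model; returning 0.0
    keeps this sketch runnable.
    """
    return 0.0


def screen_image(image_bytes: bytes, threshold: float = 0.9) -> ScreenResult:
    """Gate an incoming or outgoing message photo through the classifier."""
    score = nudity_score(image_bytes)
    return ScreenResult(explicit=score >= threshold, score=score)
```

When an image is flagged, the shipped feature blurs it and warns the child before they choose to view or send it.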

Both proposals prompted serious warnings about their privacy and security implications. Client-side scanning of your photo library on the device breaks a dangerous, if unwritten, pact: that your own phone is not monitored in the same way as your cloud storage.

There is no real controversy over scanning photo archives in the cloud for illegal content; that has become standard practice. The iMessage change is arguably much more serious, practically if not technically, violating the very security on which iMessage has built its reputation.

Apple: problems with the two updates

Apple announced both updates in a confusing way, as the company itself has acknowledged. Although both are aimed at children's safety (an area in which much, much more must be done), there are serious problems with both.

Client-side photo scanning opens a door to government interference in what gets reported and undermines the perceived security that what happens on your iPhone stays on your iPhone. Critics hastened to emphasize that foreign and domestic governments would see this as an opportunity to search for more than CSAM.


Although there is still no sign of the CSAM scan, the latest iOS 15.2 developer beta has just introduced a slightly watered-down version of the iMessage update. Slightly watered down because the initial plan to notify parents of children under 13 who viewed flagged images has been removed.

Apple: nothing seems to have changed with the new update

Without external reporting, this iMessage update may seem little different from the photo categorization that already takes place on the iPhone using on-device AI. But the critical problem is that iMessage is an end-to-end encrypted platform, and with this update Apple is essentially adding monitoring to it. Yes, this initial use case is very limited. But the technical impediment to wider monitoring has been removed. Change iMessage this radically and there will be no way back.

As the EFF warned,

End-to-end encryption protects content in transport from your device to somewhere else, which in messaging is obviously someone else's device. Essentially, the two (or more) ends sit outside the end-to-end encryption: messages and any attachments must be decrypted so you can read them. This is why Apple can say that end-to-end encryption remains intact. But that nuance misses the point.
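The point the quote makes can be shown in a few lines. This minimal sketch uses PyNaCl (an assumption for illustration; iMessage's actual protocol is different and proprietary): the network only ever sees ciphertext, but each endpoint necessarily holds plaintext, and that is where any on-device scanning would run.

```python
from nacl.public import Box, PrivateKey  # pip install pynacl

alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

# Sender's end: plaintext exists here, then is sealed for transport.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"photo bytes")

# In transit: carriers and servers see only the opaque ciphertext.

# Recipient's end: the message must be decrypted to be displayed.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)

# Any classifier added to the app runs on `plaintext`, after decryption,
# so "end-to-end encryption remains intact" even while content is inspected.
```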

The messaging database on the phone is protected by the device's own security: the passcode or biometric unlock. If I have full access to your phone, I have access to its content. That's why backing up your phone to the cloud can compromise your security, as you will have seen with WhatsApp's encrypted backup update; and that's why attacks on secure messaging platforms focus on compromising an "end", not the "end-to-end".

When you open an encrypted messenger, whether iMessage, WhatsApp or Signal, you should be able to feel safe, operating within a protected area that comprises the app, the transport layer, and the receiving app on the other side. As soon as the app includes some form of monitoring, however well intentioned, everything changes.

But what actually works to protect children?

Let's be very clear: we need better measures on social media and communication platforms to protect children. Reporting functions, such as WhatsApp's, let a recipient flag content; the flagged messages are then taken out of the secure platform and sent to moderators.

Nothing happens automatically. A specific user action triggers the breach of security, which is no different from taking a screenshot. Apple's solution is automated and operates without any direct user intervention.
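The difference is easy to state in code. In this hypothetical sketch (all names invented), the WhatsApp-style path breaks confidentiality only when the recipient, who can read the plaintext anyway, explicitly asks for it, while the automated path inspects every image with no user action at all.

```python
from typing import Callable


def forward_to_moderators(plaintext: bytes) -> None:
    """Placeholder: move already-decrypted content off the secure platform."""
    print(f"forwarded {len(plaintext)} bytes for review")


def report_message(plaintext: bytes, user_tapped_report: bool) -> None:
    """WhatsApp-style: nothing leaves the platform without a user action."""
    if user_tapped_report:
        forward_to_moderators(plaintext)


def auto_screen(plaintext: bytes, classifier: Callable[[bytes], bool]) -> None:
    """Apple-style: every image is inspected automatically. In iOS 15.2 a
    hit only blurs the image and warns the child, but the hook now exists."""
    if classifier(plaintext):
        print("flagged: blur image and warn before viewing")
```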

This update is contained in Apple's iOS 15.2 beta, but there is no confirmation of when it will be released or whether this function will reach the final version. It should not. Not until there is more debate about the door being opened and where it could lead.

Apple: Imperial College analysis

After the backlash, Apple gave assurances that it would stick to its plans while incorporating feedback. There is still no news on the client-side CSAM scan of iPhone photo libraries bound for iCloud.

But assuming that it too is still in the works, this week brought more bad news for Apple, with researchers from one of the UK's leading universities saying their new technique could beat Apple-style CSAM-detection AI 99% of the time.

The Imperial College London team discovered, as Forbes wrote, that by attacking an image's perceptual hash signature on a device, they could defeat the client-side matching.

Using detection technologies "similar to the systems proposed by Apple", the researchers said their "visually imperceptible filter" masked images so that they appeared "different to the algorithm 99.9% of the time, despite looking identical to the human eye".
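The Imperial filter itself has not been released (see below), so the following is only a toy illustration of the attack class, under stated assumptions: it adds low-amplitude random noise until the perceptual hash drifts past a matching threshold. The hash function, distance target and paths are all invented, and the researchers' optimized filter is far subtler than this brute-force loop.

```python
import numpy as np
from PIL import Image
import imagehash  # pip install ImageHash Pillow numpy


def evade_phash(path: str, target_distance: int = 16, step: float = 2.0) -> Image.Image:
    """Perturb an image until its pHash no longer matches the original's."""
    original = Image.open(path).convert("RGB")
    original_hash = imagehash.phash(original)
    pixels = np.asarray(original, dtype=np.float32)
    rng = np.random.default_rng(0)

    perturbed = original
    while imagehash.phash(perturbed) - original_hash < target_distance:
        pixels += rng.normal(0.0, step, pixels.shape)  # accumulate faint noise
        pixels = np.clip(pixels, 0.0, 255.0)
        perturbed = Image.fromarray(pixels.astype(np.uint8))
    return perturbed  # hash has drifted; a hash-based matcher misses it
```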

Forbes asked the team if they expected to defeat Apple's eventual solution. "Apple's algorithm is not public," they replied, "so we were not able to verify their claims... This research shows that what perceptual hashing aims to achieve is a difficult task with difficult trade-offs.

"None of the five algorithms we tested (including Facebook's PDQ) was robust enough against [our] attack... The technologies currently used for cloud scanning (such as PhotoDNA) are likely to be relatively easy to defeat."

The Imperial filter has not been made publicly available. But Apple's plans, publicity, defense, backtracking and reversal have put squarely into the public domain how these safety systems work and where they are vulnerable.

Apple: no news on what will happen

There is no news from Apple on what will happen next with its client-side photo scanning plans, but hopefully this latest research, on top of what has come before, will push the company to stick to cloud-side scanning, perhaps with a few Apple-style privacy improvements. There is no real controversy over that approach.

In the meantime, we can focus on what this iMessage change means in the long term, as one of the world's largest secure messaging platforms becomes the first to monitor content in this way.

The tweak removing the parental notification is important. But the fact that it was there at all means it can easily be restored, and Apple will probably come under pressure to do so. We see the same in Facebook's current CSAM reporting and how it differs from WhatsApp's.

While Facebook and Messenger can monitor content, WhatsApp cannot, and relies on metadata instead. If WhatsApp added classifiers the way Apple plans to for iMessage, that would change.

iMessage, Signal, WhatsApp and the others currently have no visibility into what is sent on their platforms. As soon as that changes, a very valid question arises: if your technology knows that a child is sending or receiving explicit images, how can you ignore your reporting obligations? And that will open the door to changes in the law to mandate such reporting whenever a platform has identified potential abuse.