Apple has announced that it will start scanning all photos uploaded to iCloud to keep Child Sexual Abuse Material (CSAM) from spreading and to identify offenders, but won't that violate our privacy?
Here's how Apple's CSAM-detection mechanism works:
In the upcoming iOS 15 update, an encrypted database of known CSAM and a matching algorithm will be built into iPhones. Before a photo is uploaded, it will be converted into a hash, a kind of "digital fingerprint" that is unique to the image but cannot be traced back to what the original photo looks like.
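To make the "digital fingerprint" idea concrete, here is a minimal sketch of a perceptual hash in Python. It is not Apple's NeuralHash (which uses a neural network); it is a toy "average hash" in which visually similar pictures produce the same bit pattern, while the bit pattern alone reveals nothing about what the pictures show. The 4x4 pixel grids are made-up example data.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness. Illustrative only."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = "".join("1" if p > mean else "0" for p in flat)
    return int(bits, 2)

# Two 4x4 grayscale "images" that differ only slightly...
photo      = [[10, 20, 200, 210], [15, 25, 195, 205],
              [30, 220, 40, 230], [35, 225, 45, 235]]
photo_edit = [[12, 22, 198, 212], [17, 27, 193, 207],
              [32, 218, 42, 228], [37, 223, 47, 233]]

# ...still produce the same fingerprint.
assert average_hash(photo) == average_hash(photo_edit)
```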
The algorithm then compares this hash against the on-device database and generates a matching score. When the photo is uploaded to iCloud, it is encrypted and sent along with this matching score. Up to this point, Apple cannot decrypt any photo!
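The comparison step can be pictured as measuring how many bits of the fingerprint differ from each entry in the known-CSAM database. The sketch below uses the same toy 16-bit fingerprints as above and made-up database values; Apple's real protocol additionally wraps this step in private set intersection so that individual match results are never visible in the clear on the device or the server.

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits where two fingerprints differ."""
    return bin(h1 ^ h2).count("1")

def matching_score(photo_hash: int, known_hashes: list[int]) -> int:
    """Smallest bit-distance to any fingerprint in the on-device database;
    0 means the photo's fingerprint is identical to a known one."""
    return min(hamming_distance(photo_hash, h) for h in known_hashes)

known_hashes = [0b0011001101010101, 0b1111000011110000]  # made-up database entries
print(matching_score(0b0011001101010101, known_hashes))  # 0  -> matches an entry
print(matching_score(0b1010101010101010, known_hashes))  # 8  -> nothing close
```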
To keep Apple from intruding on an account because of a false detection, a threshold is set on the matching results before any image can be decrypted. Apple can only decrypt potential CSAM once the number of matches on an account exceeds that threshold. If there is no related material, Apple stays locked out of every photo in iCloud!
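The threshold idea can be illustrated with textbook secret sharing: the key that would unlock the flagged images is split into shares, one share is revealed per matching upload, and the key only becomes reconstructable once enough shares accumulate. The sketch below is a standard Shamir secret-sharing construction, not Apple's actual implementation, and the threshold of 30 is just an example value.

```python
import random

PRIME = 2**127 - 1  # a large prime field for the toy example

def split_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` into `num_shares` shares; any `threshold` of them can rebuild it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def evaluate(x: int) -> int:
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME

    return [(x, evaluate(x)) for x in range(1, num_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# One share accompanies each uploaded photo that matches the database;
# the account's decryption key only becomes recoverable past the threshold.
account_key = random.randrange(PRIME)
shares = split_secret(account_key, threshold=30, num_shares=100)

assert recover_secret(shares[:30]) == account_key  # 30 matches: key recovered
assert recover_secret(shares[:29]) != account_key  # 29 matches: still locked out
```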
Some may wonder: there are misleading photos that can fool human eyes, so can Apple's algorithm tell the difference? If the algorithm isn't smart enough, will innocent accounts and photos be reviewed all the time, with privacy invaded as a result?