
BIIC Knows · AI News & Insights
Apple's New Privacy Explained
17 AUG 2021
Apple announced that it will begin scanning all photos uploaded to iCloud to stop Child Sexual Abuse Material (CSAM) from spreading and to identify offenders, but won't that violate our privacy?
 
Here's how Apple's CSAM-detection mechanism works:

In the upcoming iOS 15 update, an encrypted database and a matching algorithm will be built into iPhones to detect CSAM. Before a photo is uploaded, it is converted into a hash, a kind of "digital fingerprint" that is unique to the image but cannot be traced back to what the original photo looks like.
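To make the idea of a "digital fingerprint" concrete, the sketch below computes a simple perceptual hash (an average hash using Pillow). This is not Apple's actual NeuralHash algorithm, only a hedged stand-in showing how an image can be reduced to a short code that identifies it without revealing it.

```python
# Illustrative only: a simple average hash, NOT Apple's NeuralHash.
# It shows how a photo can be reduced to a short "fingerprint" that
# identifies the image but cannot be turned back into the picture.
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink and grayscale the image, then set one bit per pixel
    depending on whether it is brighter than the average pixel."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint when hash_size = 8

# Hypothetical usage:
# print(hex(average_hash("photo.jpg")))
```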

The algorithm then runs the detection on the hash and generates a matching result. When the photo is uploaded to iCloud, it is encrypted and uploaded together with this matching result. Up to this point, Apple cannot decrypt any photo!
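Below is a minimal sketch of the matching step, assuming the on-device database is a set of known-CSAM fingerprints and that "matching" means two hashes are close in Hamming distance. In Apple's real design this comparison happens inside a private set intersection protocol so that the device itself never learns the outcome; it is shown in the clear here only to make the idea concrete.

```python
# A hedged sketch: compare a photo's fingerprint against a database of known
# fingerprints. The max_distance value and the helper names are assumptions
# for illustration, not Apple's actual parameters or API.
def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def matches_database(photo_hash: int, known_hashes: set, max_distance: int = 4) -> bool:
    """True if the fingerprint is close enough to any database entry."""
    return any(hamming_distance(photo_hash, h) <= max_distance for h in known_hashes)

# Conceptually, the encrypted photo is then uploaded together with this
# (itself encrypted) matching result, e.g.:
# upload(encrypt(photo), encrypt(matches_database(fingerprint, database)))
```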

To prevent Apple from intruding on an account because of false detections, Apple sets a threshold on the number of matches before it is allowed to decrypt any image. Only when an account accumulates too many matching results can Apple decrypt the potential CSAM. If there is no related material, Apple remains locked out and cannot decrypt any photos in iCloud!
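The threshold idea can be sketched as a simple rule: review only becomes possible once enough uploads from one account have matched. In practice Apple enforces this with threshold secret sharing, so the decryption key literally cannot be reconstructed below the threshold; the counter and the threshold value below are only stand-ins for that cryptographic guarantee.

```python
# Toy sketch of the threshold rule. THRESHOLD is a hypothetical value;
# Apple's real system replaces this counter with threshold secret sharing,
# so nothing can be decrypted until enough matches exist.
THRESHOLD = 30  # assumed for illustration

def can_review(match_flags: list) -> bool:
    """Decryption/review of flagged images is possible only once an
    account has accumulated at least THRESHOLD matching results."""
    return sum(match_flags) >= THRESHOLD

# Example: an account with only a few matches stays locked out.
# can_review([True] * 5 + [False] * 100)  -> False
```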
 
Some may wonder: there are misleading photos that can fool the human eye, so can Apple's algorithm tell the difference? If the algorithm isn't smart enough, will innocent accounts and photos be reviewed all the time, with privacy invaded in the process?
Apple's CSAM Privacy Explained
Is Apple violating our privacy?
Article Tags
Privacy, Federated Learning, ASR, Emotion Recognition, Psychology, Healthcare, Algorithm, Edge Computing, Human Behavior, Multimedia, NLP, Signal Processing