Facebook's Mistake & AI Ethics 臉書誤判與 AI 倫理
BIIC Knows · AI 新聞新知
Last week, Facebook's algorithm labeled a video featuring Black men as a "primate video," prompting a company spokeswoman to apologize and call it an "unacceptable error."

This wasn't the first time AI has made such a disturbing mistake. Even companies with the most advanced engineering teams, including Facebook, Google, OpenAI, and Microsoft, have built algorithms with flaws that produced inappropriate results.
But where do these mistakes come from? 

The design of an algorithm and the quality of the data are both critical to the training result. When the features chosen by engineers cannot distinguish two similar yet different subjects, accuracy drops, and you may see mistakes like the one Facebook just made.
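As a toy illustration (not Facebook's actual system, and with made-up numbers), the sketch below compares a one-feature threshold classifier on a feature whose distributions separate the two classes well versus one whose distributions overlap completely:

```python
# Toy sketch: two classes that separate cleanly on a well-chosen feature
# but overlap entirely on a poorly chosen one. All numbers are illustrative.
import random

random.seed(0)

def make_sample(label):
    # "good" feature: class-dependent mean, well separated between classes
    good = random.gauss(0.0 if label == 0 else 5.0, 1.0)
    # "bad" feature: same distribution for both classes, carries no signal
    bad = random.gauss(2.0, 1.0)
    return good, bad

data = [(make_sample(y), y) for y in [0, 1] * 500]

def accuracy(feature_index, threshold):
    # One-feature threshold classifier: predict class 1 above the threshold.
    correct = sum(
        1 for (features, y) in data
        if (features[feature_index] > threshold) == (y == 1)
    )
    return correct / len(data)

print("accuracy with separable feature:", accuracy(0, 2.5))   # near 1.0
print("accuracy with overlapping feature:", accuracy(1, 2.0))  # near 0.5 (chance)
```

However good the training procedure, no threshold on the overlapping feature can do much better than chance, which is the intuition behind "features that can't separate the subjects apart."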

On the other hand, if the dataset engineers feed to the algorithm contains dirty data or is fundamentally unbalanced, the results may reflect unresolved social inequalities.
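A minimal sketch of the imbalance problem, with hypothetical numbers: if 95% of the training samples come from one group, a model that simply maximizes overall accuracy can learn the majority group's pattern and fail almost completely on the minority group.

```python
# Toy sketch (hypothetical numbers): a model that maximizes overall accuracy
# on an unbalanced dataset can perform badly on the under-represented group.
import random

random.seed(1)

# 95% of samples come from group A, only 5% from group B (the imbalance).
def make_sample():
    group = "A" if random.random() < 0.95 else "B"
    x = random.gauss(0, 1)
    # The feature-to-label rule differs between the groups, so a single rule
    # fitted to the pooled data will be dominated by group A.
    y = int(x > 0) if group == "A" else int(x <= 0)
    return group, x, y

data = [make_sample() for _ in range(10000)]

# "Model": the single rule that best fits the overall training set.
def predict(x):
    return int(x > 0)  # group A's rule wins because group A dominates

def group_accuracy(group):
    rows = [(x, y) for (g, x, y) in data if g == group]
    return sum(predict(x) == y for x, y in rows) / len(rows)

print("accuracy on majority group A:", group_accuracy("A"))  # 1.0
print("accuracy on minority group B:", group_accuracy("B"))  # 0.0
```

Overall accuracy here is about 95%, which looks fine on paper; only a per-group breakdown reveals that the minority group is served badly.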
Many countries have started to pay attention to AI ethics and to set up standards to prevent such mistakes, such as the EU's "Ethics Guidelines for Trustworthy AI". We need to be cautious about both the algorithms we design and the data we feed them!
Facebook apologized after its AI labeled Black men as primates
Article Tags
Privacy 隱私 Federated Learning 聯合式學習 ASR 語音辨識 Emotion Recognition 情緒辨識 Psychology 心理學 Healthcare 醫療 Algorithm 演算法 Edge Computing 終端運算 Human Behavior 人類行為 Multimedia 多媒體 NLP 自然語言處理 Signal Processing 訊號處理