Purely AI News: For AI professionals in a hurry
July 23, 2020
Fawkes: An AI system that puts an 'invisibility cloak' on images so that facial recognition algorithms are not able to reveal identities of people without permission
Fawkes adds carefully crafted perturbations to an image so that facial recognition models misidentify it as someone else
The growing prevalence of effective facial recognition technology presents a significant challenge to personal privacy. As Clearview.ai has shown, anyone can scrape the Internet for photos and train highly accurate facial recognition models of people without their knowledge. The same models can then be used to identify those people in future photographs. A number of US states and European governments have even recommended banning facial recognition in public spaces. The researchers warn that "Opportunities for misuse of this technology are numerous and potentially disastrous. Anywhere we go, we can be identified at any time through street cameras, video doorbells, security cameras, and personal cellphones. Stalkers can find out our identity and social media profiles with a single snap-shot. Stores can associate our precise in-store shopping behavior with online ads and browsing profiles. Identity thieves can easily identify (and perhaps gain access to) our personal accounts".

The Fawkes system, proposed by a group of researchers from the University of Chicago, helps individuals protect their photographs against unauthorized facial recognition models. Fawkes does this by introducing a cloak of pixel-level changes that are imperceptible to the human eye but mislead facial recognition models trained on the processed images into learning the wrong features for that face.
The system calculates precisely how to manipulate the pixels so that the AI-extracted features deviate as much as possible from the real features that define someone's face, while the image's appearance to the human eye remains exactly the same.
Fawkes can also take a target face image and add specific perturbations to your photo so that facial recognition models misidentify it as the target face. In tests, the team said, Fawkes offers a high level of protection against facial recognition models no matter how the models are trained. Even in cases where uncloaked photographs have already been made available to image scrapers, cloaking an image with Fawkes results in at least an 80 percent misidentification rate.
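The optimization described above can be sketched as constrained gradient descent in feature space: push the image's extracted features toward a target face while bounding how much any pixel may change. The snippet below is a minimal illustration of that idea, not the authors' implementation; Fawkes optimizes against a deep feature extractor with a perceptual budget, whereas here a toy random linear extractor and a simple per-pixel (L-infinity) budget stand in, and all names and parameters are illustrative.

```python
import numpy as np

def compute_cloak(image, target_feats, feature_fn, feature_grad_fn,
                  budget=0.05, steps=200, lr=0.01):
    """Perturb `image` so its extracted features move toward `target_feats`,
    keeping every pixel change within an L-infinity `budget` so the edit
    stays (ideally) imperceptible."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        feats = feature_fn(image + delta)
        # gradient of 0.5 * ||feats - target||^2 with respect to the pixels
        grad = feature_grad_fn(image + delta) @ (feats - target_feats)
        delta -= lr * grad                       # step toward the target face
        delta = np.clip(delta, -budget, budget)  # enforce the pixel budget
    return np.clip(image + delta, 0.0, 1.0)     # keep valid pixel values

# Toy stand-in for a deep feature extractor: a random linear map.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))                    # 64 "pixels" -> 8 features
image = rng.uniform(size=64)                    # the user's photo
target_feats = W @ rng.uniform(size=64)         # features of another face

cloaked = compute_cloak(image, target_feats,
                        feature_fn=lambda x: W @ x,
                        feature_grad_fn=lambda x: W.T)

before = np.linalg.norm(W @ image - target_feats)
after = np.linalg.norm(W @ cloaked - target_feats)
print(f"feature distance to target: {before:.2f} -> {after:.2f}")
print("max pixel change:", np.abs(cloaked - image).max())
```

The key trade-off is the budget: a larger bound moves the features further toward the target face, but makes the cloak more visible to a human viewer.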

The user will, however, have to be vigilant that no uncloaked photos are posted publicly and connected to their identity, since photos posted by friends and labeled with their name would provide a tracker model with uncloaked training data. Fortunately, on most photo-sharing sites a user can proactively "untag" themselves.



Aug. 2, 2020

Sample Factory, a new training framework for Reinforcement Learning, slashes the compute required for state-of-the-art results

July 31, 2020

Intel joins hands with researchers from MIT and Georgia Tech to work on a code improvement recommendation system, develops "An End-to-End Neural Code Similarity System"

July 25, 2020

Google's tensorflow-lite framework for deep learning is now more than 2x faster on average, using operator fusion and optimizations for additional CPU instruction sets

July 22, 2020

Researchers from Austria propose an AI system that reads sheet music from raw images and accurately aligns it to a given audio recording

July 21, 2020

WordCraft: A Reinforcement Learning environment for enabling common-sense based agents

July 20, 2020

A designer who worked on over 20 commercial projects for a year turns out to be an AI built by the Russian design firm Art. Lebedev Studio

July 19, 2020

Microsoft is developing AI to improve camera-in-display technology for natural perspectives and clearer visuals in video calls

July 18, 2020

Microsoft and Zhejiang Univ. researchers create an AI model that can sing in several languages, including Chinese and English

July 18, 2020

New event-based learning algorithm 'E-Prop' inspired by the human brain is more efficient than conventional Deep Learning

July 17, 2020

Scientists from the University of California address the false-negative problem of MRI Reconstruction Networks using adversarial techniques

July 16, 2020

A new technique of exposing DeepFakes uses the classical signal processing technique of frequency analysis

July 16, 2020

New AI model by Facebook researchers can recognize five different voices speaking simultaneously, pushing the state of the art forward

July 15, 2020

Researchers from Columbia Univ. and DeepMind propose a new framework for Taylor Expansion Policy Optimization (TayPO)

July 14, 2020

Federated Learning is finally here; Presagen's new algorithm creates higher-performing AI than traditional centralized learning

July 14, 2020

Fujitsu designs a new Deep Learning based method for dimensionality reduction inspired by compression technology

July 12, 2020

Databricks donates its immensely popular MLflow framework to the Linux Foundation

July 12, 2020

Microsoft Research restores old photos that suffer from severe degradation with a new deep learning based approach

July 12, 2020

Amazon launches a new AI based automatic code review service named CodeGuru

July 12, 2020

IBM launches new Deep Learning project: Verifiably Safe Reinforcement Learning (VSRL) framework

July 12, 2020

DevOps for ML gets an upgrade with a new open-source CI/CD library, "Continuous Machine Learning (CML)"

July 12, 2020

DeepMind's newly open-sourced Reinforcement Learning library, dm_control, packs a simple interface to common RL utilities

July 12, 2020

Learning to learn: Google's AutoML-Zero learns to evolve new ML algorithms from scratch
