Exploiting Model Inversion Attacks on AI Models

Model inversion attacks are among the most sophisticated ways for adversaries to steal sensitive information from AI models, ranging from details of the training data to the structure and parameters of the model itself. They are possible because AI models are known to inadvertently memorize and reveal details about the data they were trained on. The method is explained below with an example and a step-by-step process:

Consider, for example, an AI model that predicts attributes of a face from a photo, say eye color or hair color. An attacker could reconstruct approximations of the facial images in the training dataset by inverting the model, without ever having direct access to those images.


Now, let's dig deeper into this model inversion attack against a model that predicts facial attributes from a photo. It is a serious privacy concern in machine learning, especially where models are trained on sensitive data that requires protection. The attack essentially involves the attacker leveraging the model to reconstruct images that resemble individuals in the training set, even though the attacker has no direct access to those images.

The model inversion attack process goes as follows:

  1. Understand the Target Model:

    The attacker starts by understanding the nature of the target model, which, in this case, predicts facial attributes such as eye color and hair color from input photos. Such a model has inherently learned fine-grained details of the human faces in its training dataset.

  2. Initial Data Collection:

    The attacker collects a facial image to serve as the starting point for the inversion process, either a publicly available photo or a generic face template.

  3. Model Querying:

    The attacker systematically modifies the initial image and submits each variant as a query to the model. For each query, the model returns the predicted facial attributes for the submitted image.

  4. Analysis and Adjustment:

    Based on the model's response, the attacker analyzes how closely the predicted attributes match the attributes of interest (features known to belong to individuals in the training set, or specific target attributes). The attacker then tunes the image to better match those attributes, refining it through iterative querying and feedback; a minimal sketch of this query-and-refine loop is shown after this list.

  5. Reconstruction:

    After many iterations, the modified image looks more and more like the faces "remembered" by the model from its training data, especially when those faces had unique or distinctive characteristics that strongly influenced what the model learned. At this stage the adversary is trying to produce an image that, to the model, looks very close to a real person's photo from the training data.

  6. Conclusion:

    In this case, the model inversion attack leverages the fact that the AI model has internalized detailed features of the training faces in order to predict attributes accurately. By iteratively adjusting input images and analyzing the model's predictions, an attacker can reverse-engineer much of what the model knows about those features.

    For example, if the model consistently predicts a particular face shape and hair color with high confidence, the attacker can be fairly confident that the training data contained a face with that shape, that hair color (and perhaps even that eye color). Over time, this process can yield a synthesized face that closely resembles individuals in the training dataset, potentially violating their privacy.

    This kind of attack demonstrates how AI models can leak private and sensitive information about the people represented in their training data. It underscores the need for strong privacy-preserving techniques, such as adding noise to the model's outputs or to its training process, to prevent such precise reconstructions; a toy example of output noising is sketched below.
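
To make the query-and-refine loop in steps 2 through 5 concrete, here is a minimal black-box sketch in Python. It is only an illustration: query_model is a hypothetical stand-in for the real target (in practice it would be a remote API call returning the model's confidence in the targeted attributes), and the image shape, step size, and iteration count are assumed values, not anything prescribed by the attack itself.

```python
import numpy as np

def query_model(image: np.ndarray) -> float:
    """Hypothetical stand-in for the target model's prediction API.

    In a real attack this would be a remote query (e.g. an HTTP call) that
    returns the model's confidence that the image carries the attributes of
    interest. Here it is a toy scoring function so the sketch runs on its own.
    """
    target = np.full_like(image, 0.6)            # arbitrary "ideal" pixel pattern
    return float(-np.mean((image - target) ** 2))

def invert_model(image_shape=(64, 64), iterations=5000, step=0.05, seed=0):
    """Hill-climbing model inversion: start from a generic template and keep
    only the perturbations that raise the model's confidence in the target
    attributes."""
    rng = np.random.default_rng(seed)
    candidate = rng.uniform(0.0, 1.0, size=image_shape)   # step 2: initial image
    best_score = query_model(candidate)                   # step 3: first query

    for _ in range(iterations):
        # Steps 3-4: perturb the current image and re-query the model.
        trial = np.clip(candidate + rng.normal(0.0, step, size=image_shape), 0.0, 1.0)
        score = query_model(trial)

        # Step 4: keep the change only if the model's confidence improved.
        if score > best_score:
            candidate, best_score = trial, score

    # Step 5: after many iterations the candidate approximates whatever face
    # the model associates most strongly with the targeted attributes.
    return candidate, best_score

if __name__ == "__main__":
    reconstructed, confidence = invert_model()
    print(f"final confidence score: {confidence:.4f}")
```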
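
As a rough illustration of the noise-based defense mentioned above, the sketch below perturbs the model's returned confidences with Laplace noise before they reach the client, in the spirit of differential privacy. The epsilon value and the attribute names are illustrative assumptions, not part of the original write-up.

```python
import numpy as np

def noisy_prediction(scores: np.ndarray, epsilon: float = 1.0,
                     rng: np.random.Generator | None = None) -> np.ndarray:
    """Return attribute confidences perturbed with Laplace noise so that
    repeated queries reveal far less about any single training example.
    Smaller epsilon means more noise and stronger privacy, following the
    usual differential-privacy convention."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=scores.shape)
    return np.clip(scores + noise, 0.0, 1.0)

# Example: the model's raw confidences for [brown hair, green eyes]
raw_scores = np.array([0.93, 0.88])
print(noisy_prediction(raw_scores, epsilon=0.5))
```

Because the attacker's refinement loop relies on small, consistent changes in the returned confidences, even modest output noise makes the accept/reject signal in the sketch above far less reliable.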

Bhanu Namikaze

Bhanu Namikaze is an Ethical Hacker, Security Analyst, Blogger, Web Developer and a Mechanical Engineer. He enjoys writing articles, blogging, debugging errors and Capture the Flags. Enjoy learning; there is nothing like absolute defeat - try and try until you succeed.
