Latest Machine Learning Study Explains Background Bias in DML Metric Deep Learning by Conducting Multiple Experiments on Three Standard DML Datasets and Five Different DML Loss Functions


A computer system known as an image retrieval system is used to browse, search, and retrieve images from a large database of digital images. Feature extraction is the most crucial aspect of image retrieval: the extracted features represent an image and must make it possible to retrieve images efficiently. Deep Metric Learning (DML) is a technique used to train a neural network to map input images to a lower-dimensional embedding space so that similar images lie closer together than dissimilar ones. Unfortunately, standard DML does not address the background bias that leads to irrelevant feature extraction.
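The core DML idea above, pulling similar images together and pushing dissimilar ones apart in the embedding space, is often implemented with a triplet-style loss. The sketch below is illustrative only (NumPy, toy 2-D embeddings), not the paper's code:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: the positive (same class) must end up
    closer to the anchor than the negative (different class) by at
    least `margin`, otherwise a penalty is incurred."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the positive is near the anchor, the negative is far.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
assert triplet_loss(a, p, n) == 0.0  # well-separated triplet: no loss
assert triplet_loss(a, n, p) > 0.0   # swapped roles: loss is incurred
```

During training, gradients of this loss reshape the embedding space so that retrieval by nearest-neighbor search returns semantically similar images.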

In the literature, two main approaches attempt to overcome the problem of background bias: background augmentation and attribution regularization. These methods were designed for classification networks and cannot be used directly for DML networks. Background augmentation techniques replace the background of images used for training or inference with random images. Attribution regularization computes the attribution map of an input sample during training in order to determine which regions of an image the network focuses on. A German research team proposes a study to analyze the influence of background bias on DML using three standard datasets and five common loss functions.
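Background augmentation, as described above, amounts to a mask-based composite: keep the pixels of the object of interest and fill the rest from a random background image. A minimal sketch, assuming float images and a binary object mask (the function name and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def replace_background(image, mask, background):
    """Composite the foreground object onto a new background.

    image, background: (H, W, 3) float arrays; mask: (H, W) array that
    is 1 on the object of interest and 0 on the background.
    """
    mask3 = mask[..., None]  # broadcast the mask over the 3 channels
    return mask3 * image + (1.0 - mask3) * background

# 2x2 toy example: top row is the "object", bottom row is background.
img = np.ones((2, 2, 3))          # all-white source image
bg = np.zeros((2, 2, 3))          # all-black replacement background
m = np.array([[1.0, 1.0],
              [0.0, 0.0]])
out = replace_background(img, m, bg)
assert out[0, 0, 0] == 1.0        # object pixels kept from the image
assert out[1, 1, 0] == 0.0        # background pixels taken from bg
```

With a soft (non-binary) mask, the same formula blends the two images smoothly at the object boundary.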

The study conducted in this article aims at two major points:

1) Prove that models trained with DML are not robust against background bias.

2) Propose a data augmentation technique to remedy the problem cited above.

The dependence of trained DML models on the image background is measured using a new test setup introduced by the authors. They postulate that the more a DML model relies on the background of an image when creating an embedding, the more the embedding will vary when the background is changed. Consequently, if the model prioritizes the background, retrieval performance should noticeably suffer when the backgrounds of test images are randomly replaced. They therefore propose to create a new test dataset by combining the region of interest of each image with a background image from the photo site Unsplash, where a U-Net performs the segmentation of the object of interest.
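The authors' postulate can be probed directly: embed an image and its background-swapped counterpart and compare the two embeddings. A background-robust model should produce nearly identical vectors. The sketch below uses cosine distance and hypothetical embedding vectors purely for illustration:

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors; near 0
    means the two embeddings point in almost the same direction."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical embeddings of the same object photographed on its
# original background vs. a randomly swapped Unsplash background.
emb_original = np.array([0.6, 0.8])
emb_swapped = np.array([0.6, 0.8])  # a background-robust model barely moves
assert cosine_distance(emb_original, emb_swapped) < 1e-6

# A background-biased model shifts the embedding noticeably instead.
emb_biased = np.array([0.8, -0.6])
assert cosine_distance(emb_original, emb_biased) > 0.5
```

Averaging this distance over a test set gives a simple scalar measure of how much a model's embeddings depend on the background.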

To overcome background bias in DML, the authors apply a novel strategy, BGAugment, which performs data augmentation during training and is inspired by work on background bias in classification networks. They follow the same process used to create the test dataset. To avoid leakage, the Unsplash images selected for training augmentation are different from those used in the test set.

To validate the two postulates mentioned above, an experimental study was conducted comparing three ranking-based losses, Contrastive Loss, Triplet Loss, and Multi Similarity Loss, in addition to two classification-based losses, ArcFace Loss and Normalized Softmax Loss. Experiments were performed on three standard reference datasets for Deep Metric Learning: Cars196, CUB200, and Stanford Online Products. The results confirm that when a model is trained without BGAugment, its performance drops when confronted with images from the background-modified test dataset. On the other hand, the proposed data augmentation improves these results and makes the model more robust against background bias.
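Retrieval performance on such benchmarks is commonly reported as Recall@1: the fraction of queries whose nearest neighbor in the embedding space shares the query's label. A minimal NumPy sketch of this metric (illustrative, not the authors' evaluation code):

```python
import numpy as np

def recall_at_1(embeddings, labels):
    """Fraction of queries whose nearest neighbor (excluding the query
    itself) has the same label -- a standard DML retrieval metric."""
    # Pairwise squared Euclidean distances between all embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist, np.inf)     # a query cannot retrieve itself
    nearest = dist.argmin(axis=1)      # index of each query's neighbor
    return float((labels[nearest] == labels).mean())

# Two tight same-class clusters yield a perfect Recall@1 of 1.0.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
assert recall_at_1(emb, lab) == 1.0
```

A drop in this score on the background-swapped test set, relative to the original test set, is exactly the kind of degradation the study reports for models trained without BGAugment.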

In this paper, the authors showed that it is beneficial for retrieval applications such as object retrieval or person re-identification systems to investigate and counter background bias in DML. They claim to be the first to demonstrate how background bias affects DML models. The proposed approach reduces background bias in DML without requiring additional labeling work, model modifications, or longer inference times.

This article is written as a research summary by Marktechpost staff based on the research paper 'On Background Bias in Deep Metric Learning'. All credit for this research goes to the researchers on this project. Check out the paper and code.


Mahmoud is a PhD researcher in machine learning. He also holds a bachelor's degree in physical sciences and a master's degree in telecommunications systems and networks. His current research focuses on computer vision, stock market prediction, and deep learning. He has produced several scientific articles on person re-identification and on the robustness and stability of deep networks.

