Overcoming Background Bias in AI: University of Montpellier Unveils Cutting-Edge Image Categorization Techniques

Written by Casey Jones
Published on September 12, 2023

In a world increasingly driven by artificial intelligence and machine learning, fine-grained image categorization, the algorithmic identification and distinction of specific subclasses within a broader category, has become a crucial technological frontier. The ability of AI to see and differentiate the world with a level of detail akin to human vision offers staggering potential, from refining facial recognition algorithms to enhancing medical imaging technologies. However, like human perception, AI has its blind spots, with background bias standing as a major confounding factor in image categorization.

Designed to mimic the way the human brain processes visual information, Convolutional Neural Networks (CNNs) are a class of deep learning models widely recognized for their superior image processing capabilities. In recent years, the emergence of Vision Transformers (ViTs) has breathed fresh life into computer vision, applying transformer architectures originally designed for natural language processing to image categorization. However, both of these cutting-edge architectures often grapple with the problem of ‘background bias.’ Simply put, the context in which an object is pictured can unduly influence both human and AI perception, leading, in some cases, to gross classification errors.
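
For readers who want something concrete, here is a minimal sketch of both families in action. PyTorch and torchvision are assumed purely for illustration; the article does not specify the researchers' tooling:

```python
import torch
from torchvision import models

images = torch.rand(1, 3, 224, 224)    # a toy batch of one RGB image

cnn = models.resnet50(weights=None)    # convolutional backbone (CNN)
vit = models.vit_b_16(weights=None)    # Vision Transformer backbone (ViT)

# Both map the same image batch to class logits over 1000 ImageNet classes.
print(cnn(images).shape)  # torch.Size([1, 1000])
print(vit(images).shape)  # torch.Size([1, 1000])
```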

Enter the pioneering research by scholars at the University of Montpellier, France, who, seeking to mitigate this challenge, unveiled two techniques: Early Masking and Late Masking. Both refine the AI's focus on the object of interest by excluding or suppressing irrelevant background information.

In Early Masking, background information is removed at the image input stage, before the network processes anything, comparable to viewing an object against a blank background. The method not only simplifies the input but also forces the model to focus exclusively on the object, minimizing distractions.
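
In code, the idea reduces to multiplying the input by a binary object mask before the forward pass. The sketch below is a minimal illustration assuming PyTorch and a per-image segmentation mask (from annotations or a segmentation model); the helper name and the ResNet-50 backbone are assumptions, not the paper's exact pipeline:

```python
import torch
from torchvision import models

def early_masking(images: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Zero out background pixels so the model only ever sees the object.

    images: (B, 3, H, W) batch of RGB images
    masks:  (B, 1, H, W) binary masks, 1 = object, 0 = background
    """
    return images * masks  # broadcasting applies the mask across channels

# Any classifier works as the backbone; ResNet-50 is just an example.
model = models.resnet50(weights=None)

images = torch.rand(4, 3, 224, 224)                 # toy batch
masks = (torch.rand(4, 1, 224, 224) > 0.5).float()  # stand-in for real masks

logits = model(early_masking(images, masks))        # (4, 1000)
```

Whatever supplies the masks, the key property is that background pixels never reach the network at all.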

On the other hand, Late Masking, as its name suggests, kicks in at a later stage of processing. Once the model has initially processed the entire image, the Late Masking technique selectively excludes high-level spatial features related to the background. This strategic timing allows the AI to benefit from a broader contextual understanding before centering attention on the relevant object, thereby significantly reducing background bias.
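
A rough sketch of the same idea at the feature level, again assuming PyTorch: the full image goes through the backbone untouched, and the binary mask is downsampled and applied to the high-level spatial feature map before pooling. The class name and layer choices here are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class LateMaskingClassifier(nn.Module):
    """Process the whole image, then mask high-level spatial features
    before pooling and classification."""

    def __init__(self, num_classes: int = 200):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to (but excluding) global pooling and the
        # classification head; output is a feature map of shape (B, 2048, h, w).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Linear(2048, num_classes)

    def forward(self, images: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        feats = self.features(images)                          # (B, 2048, h, w)
        # Shrink the pixel-level mask down to the feature-map resolution.
        m = F.interpolate(masks, size=feats.shape[-2:], mode="nearest")
        feats = feats * m                                      # drop background features
        pooled = feats.flatten(2).mean(dim=2)                  # global average pool
        return self.head(pooled)

model = LateMaskingClassifier(num_classes=200)       # e.g. 200 bird classes in CUB
images = torch.rand(2, 3, 224, 224)
masks = (torch.rand(2, 1, 224, 224) > 0.5).float()
logits = model(images, masks)                        # (2, 200)
```

Because the mask is applied only after the backbone, background pixels can still inform intermediate computation; only the final spatial features are filtered, which is what lets the model retain broader context before committing to the object.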

The team meticulously evaluated these strategies using the CUB dataset (short for Caltech-UCSD Birds 200), a challenging dataset of bird images with intricate background details. The results were convincing, affirming the potential of these masking techniques to counteract the influence of backgrounds and yielding notably better subclass recognition.

This advance in resolving the contentious problem of background bias in fine-grained image categorization breaks new ground in the AI domain, raising exciting possibilities. It holds profound implications for numerous sectors reliant on advanced imaging, from healthcare to security, offering the potential to fundamentally refine, and even redefine, how AI sees, and thus understands, the world around us. As we increasingly delegate perception-based tasks to AI, it becomes ever more imperative to ensure we share a common, unfiltered picture of the world. With masking techniques such as these, it’s an AI future we can all look forward to.