Recently, with the emergence of retrieval requirements for specific individuals within the same superclass, e.g., birds, persons, and cars, the fine-grained recognition task has attracted significant attention from both academia and industry. In the fine-grained recognition scenario, inter-class differences are diverse and subtle, which makes it challenging to extract all the discriminative cues. The traditional training mechanism optimizes the overall discriminativeness of the whole feature. It may stop early once some feature elements have been trained to distinguish the training samples well, leaving other elements of the feature insufficiently trained. This results in a less generalizable feature extractor that captures only the major discriminative cues and ignores the subtle ones. Therefore, a training mechanism is needed that enforces the discriminativeness of every element in the feature so as to capture more of the subtle visual cues.
In this paper, we propose a Discrimination-Aware Mechanism (DAM) that iteratively identifies insufficiently trained elements and improves them. DAM increases the number of well-learned elements, allowing the feature extractor to capture more visual cues. In this way, a more informative representation is learned, which leads to better generalization performance.
We show that DAM
can be easily applied to both proxy-based and pair-based loss functions, and thus can be used in most
existing fine-grained
recognition paradigms. Comprehensive experiments on CUB-200-2011, Cars196, Market-1501, and MSMT17 datasets
demonstrate the
advantages of our DAM based loss over the related state-of-the-art approaches.
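
The abstract does not specify DAM's exact formulation. As a rough illustration of the general idea only (estimate a per-dimension discriminativeness score and emphasize insufficiently trained dimensions when computing a pair-based loss), the following minimal PyTorch sketch may help; the score definition, the weighting scheme, and all function names are hypothetical assumptions, not the authors' method.

# Minimal sketch of a discrimination-aware re-weighting idea; NOT the authors'
# exact DAM formulation (the abstract does not specify it). The score, the
# weighting scheme, and all names below are illustrative assumptions.
import torch
import torch.nn.functional as F


def per_element_discriminativeness(embeddings, labels):
    """Hypothetical score: for each feature dimension, how well it separates
    classes, measured as between-class variance over within-class variance."""
    dims = embeddings.size(1)
    overall_mean = embeddings.mean(dim=0)                      # (D,)
    between, within = torch.zeros(dims), torch.zeros(dims)
    for c in labels.unique():
        x_c = embeddings[labels == c]                          # (n_c, D)
        mean_c = x_c.mean(dim=0)
        between += x_c.size(0) * (mean_c - overall_mean) ** 2
        within += ((x_c - mean_c) ** 2).sum(dim=0)
    return between / (within + 1e-8)                           # (D,) higher = better trained


def dam_style_weights(embeddings, labels, temperature=1.0):
    """Up-weight dimensions whose current discriminativeness is low, so that
    the loss focuses on insufficiently trained elements."""
    score = per_element_discriminativeness(embeddings, labels)
    # Low score -> large weight; softmax keeps the weights normalized (mean 1).
    return F.softmax(-score / temperature, dim=0) * score.numel()


def weighted_pair_loss(embeddings, labels, margin=0.5):
    """A contrastive-style pair-based loss with per-dimension weights, to show
    how such a mechanism could plug into pair-based losses."""
    w = dam_style_weights(embeddings, labels).detach()          # (D,)
    diff = embeddings.unsqueeze(1) - embeddings.unsqueeze(0)    # (N, N, D)
    dist = torch.sqrt(((diff ** 2) * w).sum(-1) + 1e-8)         # weighted distances
    same = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()
    eye = torch.eye(len(labels))
    pos = (dist * same * (1 - eye)).sum() / (same - eye).sum().clamp(min=1)
    neg = (F.relu(margin - dist) * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg


if __name__ == "__main__":
    emb = F.normalize(torch.randn(32, 128), dim=1)              # toy batch of embeddings
    lbl = torch.randint(0, 8, (32,))
    print(weighted_pair_loss(emb, lbl))

Detaching the per-dimension weights keeps them as a re-weighting signal rather than a term optimized directly; whether the actual DAM does this, or how it integrates with proxy-based losses, is not stated in the abstract.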