

Boosting Independent Embeddings Robustly

Published in: Technology

  1. Deep Metric Learning / Independence in Ensembles / Online Boosting
  2. Deep metric learning learns a deeply-embedded space in which semantically similar images are close to each other and semantically dissimilar images are far apart.
  3. Motivation: with a large embedding size, discriminative ability saturates or declines due to over-fitting. Boosting Independent Embeddings Robustly (BIER) assembles several relatively independent groups of embeddings, so that BIER leverages large embedding sizes more effectively.
  4. To learn independent embeddings, BIER uses an online boosting approach.
  7. Online boosting of the CNN embedding. Ensemble output: s(x, y) = Σ_m α_m · s_m(f_m(x), f_m(y)), where f_m(x) is the m-th group embedding of image x, α_m is the pre-defined weight of each group, and s_m is the cosine similarity.
  10. Initialization of W (to ensure independence at the beginning): W is initialized under a constraint that avoids the trivial solution W = 0, and the correlation between feature vectors of different sizes can be measured.
  11. Evaluation: with the proposed initialization method, "harder" learners should be assigned larger embedding sizes.
  12. Evaluation: the "hard" learner alone already surpasses the baseline, and the ensemble provides a further boost.
  13. Their extension to TPAMI. 1. An auxiliary loss (originally used only for initializing W) penalizes correlation among group pairs; as before, the constraint avoids W = 0, and the correlation between feature vectors of different sizes can be measured. (Michael Cogswell, Faruk Ahmed, Ross B. Girshick, Larry Zitnick, Dhruv Batra, "Reducing Overfitting in Deep Networks by Decorrelating Representations," ICLR 2016.) Unlike that work, BIER does not decorrelate every entry pair of the representation, but only each group pair; correlation within the same group is not considered.
  14. Their extension to TPAMI. 2. An adversarial loss penalizes correlation.
  15. The adversarial regressor is learned by maximizing the similarity between the projection of the j-th group's output and the i-th group's output.
  16. During backpropagation, however, the gradients passed back to the learners go through a gradient-reversal operation, so the learners themselves are trained to reduce that similarity.
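Slide 2 defines the metric-learning objective: similar images close, dissimilar images far. A common loss with exactly this pull/push structure is the triplet loss, sketched below as a minimal numpy example (the function name and margin value are illustrative; the deck does not say which loss BIER's base learners use):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the gap between the anchor-positive and anchor-negative
    # distances: similar pairs are pulled together, dissimilar pushed apart.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

When the negative is already more than `margin` farther away than the positive, the loss is zero and the triplet contributes no gradient.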
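Slide 7's ensemble output is a weighted sum of per-group cosine similarities. A minimal numpy sketch of that scoring rule (function names are assumptions; only the weighted-cosine structure comes from the slide):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bier_similarity(groups_x, groups_y, alphas):
    # Ensemble output: sum over groups m of
    #   alpha_m * cosine_sim(f_m(x), f_m(y)),
    # where groups_x[m] is the m-th group embedding of image x
    # and alphas[m] is the pre-defined weight of that group.
    return sum(a * cosine_sim(gx, gy)
               for a, gx, gy in zip(alphas, groups_x, groups_y))
```

In practice the groups are non-overlapping slices of one large embedding (e.g. a 512-d vector split into 96/160/256), so the ensemble costs no extra forward passes.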
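Slide 13's auxiliary loss penalizes correlation between group pairs, and works even when the two groups have different sizes because their cross-covariance is a d1 x d2 matrix. A minimal numpy sketch of one such penalty (the function name and the squared cross-covariance formulation are assumptions; the paper's exact loss may differ):

```python
import numpy as np

def group_decorrelation_loss(g1, g2):
    # g1: (batch, d1) and g2: (batch, d2) activations of two embedding
    # groups; d1 and d2 may differ. Penalize the squared entries of the
    # cross-covariance matrix between the two groups over the batch.
    g1c = g1 - g1.mean(axis=0)
    g2c = g2 - g2.mean(axis=0)
    cov = g1c.T @ g2c / g1.shape[0]   # (d1, d2) cross-covariance
    return float(np.sum(cov ** 2))
```

Note this is applied only across group pairs, matching the slide: entries within the same group are left free to correlate.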
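Slides 14-16 describe an adversarial regressor that maximizes cross-group similarity, while a gradient-reversal operation flips the gradients reaching the learners so they minimize it. A minimal sketch of the reversal itself, written as an explicit forward/backward pair rather than an autograd op (function names and the scale factor `lam` are assumptions):

```python
import numpy as np

def grad_reverse_forward(x):
    # Identity in the forward pass: the adversarial head sees the
    # activations unchanged.
    return x

def grad_reverse_backward(grad_out, lam=1.0):
    # Backward pass: negate (and optionally scale) the incoming gradient,
    # so while the regressor is trained to maximize cross-group similarity,
    # the embedding learners receive the opposite signal and are pushed
    # to decorrelate their groups.
    return -lam * grad_out
```

In an autograd framework this pair would be packaged as one custom op; the point here is only that forward is identity and backward is negation.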
