I'm using Caffe, a deep neural network library, to generate image features for image-based retrieval. The particular network I'm using generates a 4096-dimensional feature vector.
I'm using lshash to generate hash buckets for the features. When I do a brute-force comparison of every available feature, sorting images by Euclidean distance, I find that the features represent image similarity well. When I use lshash, however, I find that similar features rarely land in the same bucket.
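For context, the bucketing lshash performs is random-hyperplane LSH: each feature is reduced to a short bit string (one bit per hyperplane), and vectors sharing a bit string share a bucket. A minimal NumPy sketch of that idea (the dimensions, `hash_size`, and vectors here are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hash_size = 4096, 16  # illustrative values

# One random hyperplane per hash bit; nearby vectors tend to fall on
# the same side of most hyperplanes, so they tend to share a bucket.
planes = rng.standard_normal((hash_size, dim))

def lsh_bucket(vec):
    # One bit per hyperplane: 1 if the vector projects to the positive side.
    bits = (planes @ vec) > 0
    return "".join("1" if b else "0" for b in bits)

base = rng.standard_normal(dim)
near = base + 0.001 * rng.standard_normal(dim)  # tiny perturbation
far = rng.standard_normal(dim)                  # unrelated vector

print(lsh_bucket(base))
print(lsh_bucket(near))  # usually identical or nearly so
print(lsh_bucket(far))   # usually differs in many bits
```

With very high-dimensional inputs, angles between random vectors concentrate, so more hash bits (or multiple tables) are needed before buckets discriminate well; this is one reason raw 4096-d features can bucket poorly.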
Are the source features too large to use LSH? Are there other ways to reduce the dimensions of the image features before attempting to hash them?
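One common pre-hashing reduction (not mentioned in the question, just a standard option) is PCA: project the features onto their top principal components before handing them to LSH. A sketch with plain NumPy, using made-up sizes (200 features, 4096 dims reduced to 128):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, k = 200, 4096, 128  # hypothetical: 200 feature vectors -> 128 dims

X = rng.standard_normal((n, dim)).astype(np.float32)  # stand-in features

# Centre the data, then take the top-k right singular vectors
# as a projection matrix (classic SVD-based PCA).
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Vt[:k].T            # (dim, k) projection matrix

X_reduced = Xc @ proj      # (n, k) lower-dimensional features
print(X_reduced.shape)     # (200, 128)
```

The reduced vectors can then be indexed with lshash; since PCA is a linear orthogonal projection, Euclidean distances among the retained components are preserved exactly.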
If you are looking for intelligent dimensionality reduction, you can add an "InnerProduct" layer on top of your net with a lower output dimension.
To train this layer without altering the rest of the weights, you can set the lr_mult values of all layers (apart from the new one) to 0 and train (aka "finetuning") only the top dim-reduction layer.
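A sketch of what that could look like in the train prototxt; the layer names (`fc7`, `fc_reduce`) and `num_output: 128` are assumptions, and every pretrained layer you freeze gets `lr_mult: 0` in its own `param` blocks:

```
# Existing pretrained layers: freeze by zeroing their learning-rate
# multipliers (one param block for weights, one for biases).
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param { lr_mult: 0 }   # weights frozen
  param { lr_mult: 0 }   # biases frozen
  inner_product_param { num_output: 4096 }
}

# New dimensionality-reduction layer: the only one that trains.
layer {
  name: "fc_reduce"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc_reduce"
  param { lr_mult: 1 }   # weights trainable
  param { lr_mult: 2 }   # biases trainable
  inner_product_param {
    num_output: 128      # assumed target dimension
    weight_filler { type: "xavier" }
  }
}
```

Because `fc_reduce` is not in the pretrained caffemodel, `caffe train --weights` will leave it randomly initialized while copying all matching frozen layers, which is exactly the finetuning setup described above.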