Saturday 11:00 AM–11:45 AM in Speakeasy

Searchable datasets in Python: images across domains, experiments, algorithms and learning

Dani Ushizima, Flavio Araujo, Romuere Silva

Audience level:
Novice

Description

pyCBIR is a new Python tool for content-based image retrieval (CBIR), capable of searching for relevant items in large databases given previously unseen samples. While much work in CBIR has targeted ads and recommendation systems, pyCBIR enables general-purpose investigation across image domains. In addition, pyCBIR offers ten distance metrics and six feature extraction methods, including a Convolutional Neural Network.

Abstract

Introduction

Image capture has become a ubiquitous activity in our daily lives, yet mechanisms to organize and retrieve images based on their content are available only to a few people or for very specific problems. With the significant improvement in image processing speeds and the availability of large storage systems, methods to query and retrieve images are fundamental both to simple human activities, like cataloguing, and to complex research, such as synthesizing materials. Content-Based Image Retrieval (CBIR) systems use computer vision techniques to describe images in terms of their properties and to search for similar samples using an image itself as the query, instead of keywords. The system therefore works independently of annotations, which can be time-consuming or impossible to obtain in some scenarios, e.g. high-throughput imaging instruments.

While much work in CBIR has targeted ads and recommendation systems, pyCBIR enables general-purpose investigation across image domains and experiments. It also provides several distance metrics and feature extraction techniques, including a Convolutional Neural Network (CNN).

Proposed Methodology

We propose pyCBIR, a CBIR tool written in the Python programming language. The tool comprises six feature extraction methods and ten distance metrics (see Figure 1). A search takes a single image (or a set of images) as the query; pyCBIR then retrieves and ranks the most similar images according to the user-selected parameters, as sketched below.
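As a rough illustration of this query-by-example loop (a minimal sketch with hypothetical helper names and scikit-image's HOG descriptor, not pyCBIR's actual API), retrieval reduces to computing one descriptor per image, measuring distances to the query descriptor, and ranking:

    # Minimal query-by-example sketch: descriptor extraction + distance ranking.
    # Assumes RGB images that all share the same dimensions, so descriptors have equal length.
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog

    def extract_features(images):
        """Compute one HOG descriptor per image (any descriptor from Figure 1 could be swapped in)."""
        return np.array([hog(rgb2gray(img)) for img in images])

    def retrieve(query_img, db_images, top_k=10):
        """Rank database images by Euclidean distance to the query descriptor."""
        db_features = extract_features(db_images)
        query_features = extract_features([query_img])[0]
        distances = np.linalg.norm(db_features - query_features, axis=1)
        ranking = np.argsort(distances)[:top_k]  # most similar first
        return ranking, distances[ranking]

In pyCBIR itself, the descriptor and the distance metric are both user-selected parameters rather than fixed choices.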

Figure 1. Flowchart of the proposed methodology: http://vis.lbl.gov/~daniela/2016/pyData/cbir.png

Regarding the feature extraction methods in Figure 2, pyCBIR computes the following sets of attributes: Gray Level Co-occurrence Matrix (GLCM), Histogram of Oriented Gradients (HOG), First Order Texture Features (FOTF), and Local Binary Pattern (LBP). We also implemented two CNN-based schemes for image characterization. The first scheme uses a CNN without the last (classification) layer, retaining the convolution results as features; this is a common approach among new CBIR systems. The second scheme uses the class probabilities as the descriptor, an original contribution of our work, which also achieved competitive results compared to the other feature extraction methods; we call this scheme CNN with probabilities, or CNNp. For example, given a database (DB) with 5 classes, CNNp returns 5 class probabilities for each DB image, and these probability vectors become the DB feature vectors. For each retrieval image (RI), the trained CNNp computes the corresponding probability vector. We can then compute any of the distances listed in Figure 2 between feature vectors and return the most similar images, as sketched below.
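As a rough sketch of the CNNp idea in the 5-class example above (with a hypothetical model.predict_proba standing in for the trained network's inference call; pyCBIR's internals may differ), retrieval amounts to comparing the 5-dimensional probability vectors under a chosen distance:

    import numpy as np

    def cnnp_descriptors(model, images):
        """Return an (n_images, n_classes) matrix of class probabilities.
        model.predict_proba is a placeholder for the trained CNN's inference call."""
        return np.array([model.predict_proba(img) for img in images])

    def rank_by_probability_distance(query_probs, db_probs, metric="euclidean"):
        """Rank DB images by the distance between probability vectors
        (any of the metrics listed in Figure 2 could be plugged in here)."""
        if metric == "euclidean":
            distances = np.linalg.norm(db_probs - query_probs, axis=1)
        elif metric == "cosine":
            num = db_probs @ query_probs
            den = np.linalg.norm(db_probs, axis=1) * np.linalg.norm(query_probs)
            distances = 1.0 - num / den
        else:
            raise ValueError("unsupported metric: " + metric)
        return np.argsort(distances)  # indices of the most similar DB images first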

Figure 2. Graphic interface: http://vis.lbl.gov/~daniela/2016/pyData/pyCBIR.png

Experiments

We carried out several experiments using classical image databases for CBIR problems, such as CIFAR-10, the Describable Textures Dataset (DTD), and MNIST, as well as scientific datasets containing microscopic images. These experiments showed that descriptors like HOG and LBP are very sensitive to the choice of parameters, and that no single parameter setting works well across all databases. The proposed CNN scheme for feature extraction, CNNp, requires only two parameters: the number of epochs and the learning rate. These parameters showed less sensitivity, as illustrated in Figures 3 and 4.

Figure 3. pyCBIR results for public dataset (DTD): http://vis.lbl.gov/~daniela/2016/pyData/textureDTD.png

Figure 4. pyCBIR results for dataset of microscopic images: http://vis.lbl.gov/~daniela/2016/pyData/fiberMicroscopy.png