Merged
96 commits
5d98bb0
Merged commit includes the following changes:
andrefaraujo Jun 13, 2019
3153738
Updating DELF init to adjust to latest changes
andrefaraujo Jun 13, 2019
f53250e
Editing init files for python packages
andrefaraujo Jun 14, 2019
d321b1b
Edit D2R dataset reader to work with py3.
andrefaraujo Jun 14, 2019
3624965
DELF package: fix import ordering
andrefaraujo Jun 14, 2019
608bd22
Merge remote-tracking branch 'upstream/master'
andrefaraujo Feb 7, 2020
d23506d
Adding new requirements to setup.py
andrefaraujo Feb 7, 2020
4f9756d
Adding init file for training dir
andrefaraujo Feb 7, 2020
f054e3f
Merged commit includes the following changes:
andrefaraujo Feb 7, 2020
6026239
Adding init file for training subdirs
andrefaraujo Feb 7, 2020
f95f991
Working version of DELF training
andrefaraujo Feb 8, 2020
6dc7f9a
Internal change.
andrefaraujo Jun 14, 2019
4519efe
Fix variance loading in open-source code.
andrefaraujo Jul 30, 2019
64f45bd
Separate image re-ranking as a standalone library, and add metric wri…
andrefaraujo Jul 31, 2019
620dc62
Tool to read written D2R Revisited datasets metrics file. Test is added.
andrefaraujo Aug 14, 2019
49116a9
Add optional resize factor for feature extraction.
andrefaraujo Aug 20, 2019
bb799f9
Fix NumPy's new version spacing changes.
andrefaraujo Aug 23, 2019
ec3d60b
Make image matching function visible, and add support for RANSAC seed.
andrefaraujo Oct 29, 2019
7d1f100
Avoid matplotlib failure due to missing display backend.
andrefaraujo Dec 27, 2019
59b7fef
Removes tf.contrib dependency.
andrefaraujo Jan 9, 2020
0bc84d4
Fix tf contrib removal for feature_aggregation_extractor.
andrefaraujo Jan 13, 2020
d874999
Merged commit includes the following changes:
andrefaraujo Apr 29, 2020
a56b550
Merge remote-tracking branch 'upstream/master'
andrefaraujo Apr 29, 2020
b50913d
Updating README, dependency versions
andrefaraujo Apr 30, 2020
189432f
Updating training README
andrefaraujo Apr 30, 2020
5b54e56
Fixing init import of export_model
andrefaraujo May 1, 2020
18d5f78
Fixing init import of export_model_utils
andrefaraujo May 1, 2020
05e7804
tkinter in INSTALL_INSTRUCTIONS
andrefaraujo May 1, 2020
ce8ef85
Merged commit includes the following changes:
andrefaraujo May 1, 2020
4879ffa
INSTALL_INSTRUCTIONS mentioning different cloning options
andrefaraujo May 1, 2020
13029dd
Updating required TF version, since 2.1 is not available in pip
andrefaraujo May 6, 2020
51545af
Internal change.
andrefaraujo Apr 30, 2020
c094b98
Fix missing string_input_producer and start_queue_runners in TF2.
andrefaraujo May 1, 2020
3c5b526
Handle RANSAC from skimage's latest versions.
andrefaraujo May 6, 2020
00d99cb
Merging upstream
andrefaraujo May 6, 2020
d3a59d4
DELF 2.1 version: badge and setup.py updated
andrefaraujo May 6, 2020
23b99c9
Add TF version badge in INSTALL_INSTRUCTIONS and paper badges in README
andrefaraujo May 6, 2020
7e2854e
Add paper badges in paper instructions
andrefaraujo May 6, 2020
033a9d4
Add paper badge to landmark detection instructions
andrefaraujo May 6, 2020
a837bce
Merge remote-tracking branch 'upstream/master'
andrefaraujo May 8, 2020
ed43903
Small update to DELF training README
andrefaraujo May 8, 2020
714e077
Merge remote-tracking branch 'upstream/master'
andrefaraujo May 21, 2020
2db1fc2
Merged commit includes the following changes:
andrefaraujo May 21, 2020
8f50740
DELF README update / DELG instructions
andrefaraujo May 21, 2020
423b026
DELF README update
andrefaraujo May 21, 2020
b80a8fd
DELG instructions update
andrefaraujo May 21, 2020
2e99e9b
Merged commit includes the following changes:
andrefaraujo May 21, 2020
5c3de0a
Merged commit includes the following changes:
andrefaraujo May 21, 2020
6e92f04
Markdown updates after adding GLDv2 stuff
andrefaraujo May 21, 2020
e3d87c2
Merging upstream
andrefaraujo May 21, 2020
1ad9f56
Small updates to DELF README
andrefaraujo May 21, 2020
454e027
Clarify that library must be installed before reproducing results
andrefaraujo May 22, 2020
af0fc49
Merging upstream
andrefaraujo May 22, 2020
6028790
Merging upstream
andrefaraujo Jun 15, 2020
064dd55
Merge remote-tracking branch 'upstream/master'
andrefaraujo Jun 30, 2020
5d069db
Merged commit includes the following changes:
andrefaraujo Jun 30, 2020
d80bbaa
Properly merging README
andrefaraujo Jun 30, 2020
78bb90d
Merging upstream
andrefaraujo Jun 30, 2020
2bfa509
small edits to README
andrefaraujo Jul 1, 2020
03203ea
small edits to README
andrefaraujo Jul 1, 2020
13fda17
small edits to README
andrefaraujo Jul 1, 2020
e37da65
global feature exporting in training README
andrefaraujo Jul 1, 2020
788ce1c
Update to DELF README, install instructions
andrefaraujo Jul 1, 2020
07a563d
Centralizing installation instructions
andrefaraujo Jul 1, 2020
6edbb8d
Small readme update
andrefaraujo Jul 1, 2020
1dfbb43
Merging upstream
andrefaraujo Jul 1, 2020
26e0ef0
Fixing commas
andrefaraujo Jul 1, 2020
9515e60
Merge remote-tracking branch 'upstream/master'
andrefaraujo Jul 6, 2020
9d4b909
Mention DELG acceptance into ECCV'20
andrefaraujo Jul 6, 2020
a181e79
Merge remote-tracking branch 'upstream/master'
andrefaraujo Aug 14, 2020
1a1ce1a
Merged commit includes the following changes:
andrefaraujo Aug 14, 2020
8a3a8c5
Adding back matched_images_demo.png
andrefaraujo Aug 14, 2020
994b476
Merge remote-tracking branch 'upstream/master'
andrefaraujo Aug 23, 2020
4fb12b5
Merged commit includes the following changes:
andrefaraujo Aug 18, 2020
19e42d5
Updated DELG instructions after model extraction refactoring
andrefaraujo Aug 24, 2020
6d24570
Updating GLDv2 paper model baseline
andrefaraujo Aug 24, 2020
3a2d764
Merge remote-tracking branch 'upstream/master'
andrefaraujo Sep 2, 2020
7f14e1e
Merged commit includes the following changes:
andrefaraujo Aug 28, 2020
f91c181
Updated training README after recent changes
andrefaraujo Sep 2, 2020
88d20a0
Updated training README to fix small typo
andrefaraujo Sep 2, 2020
8d14533
Merge remote-tracking branch 'upstream/master'
andrefaraujo Sep 4, 2020
e2ea6d8
Merged commit includes the following changes:
andrefaraujo Sep 4, 2020
e0ef6a6
Updated DELG exporting instructions
andrefaraujo Sep 4, 2020
07f1b84
Updated DELG exporting instructions: fix small typo
andrefaraujo Sep 4, 2020
8c0e759
Merge remote-tracking branch 'upstream/master'
andrefaraujo Sep 14, 2020
8da1db5
Adding DELG pre-trained models on GLDv2-clean
andrefaraujo Sep 14, 2020
723b3ab
Merged commit includes the following changes:
andrefaraujo Sep 14, 2020
7a82d81
Merge remote-tracking branch 'upstream/master'
andrefaraujo Sep 15, 2020
7935626
Merge remote-tracking branch 'upstream/master'
andrefaraujo Dec 14, 2020
9dcd445
Merged commit includes the following changes:
andrefaraujo Dec 14, 2020
c3fb284
Merge remote-tracking branch 'upstream/master'
andrefaraujo Feb 23, 2021
f65a869
Merged commit includes the following changes:
andrefaraujo Feb 23, 2021
447a6ff
Merged commit includes the following changes:
andrefaraujo Feb 23, 2021
a2e3e88
Merge to upstream
andrefaraujo Apr 20, 2021
2a1ada8
Merged commit includes the following changes:
andrefaraujo Apr 20, 2021
9d55fe6
Add whiten module import
andrefaraujo Apr 20, 2021
1 change: 1 addition & 0 deletions research/delf/delf/__init__.py
@@ -30,6 +30,7 @@
from delf.python import feature_extractor
from delf.python import feature_io
from delf.python import utils
from delf.python import whiten
from delf.python.examples import detector
from delf.python.examples import extractor
from delf.python import detect_to_retrieve
@@ -24,7 +24,7 @@
import argparse
import sys

from tensorflow.python.platform import app
from absl import app
from delf.python.datasets.google_landmarks_dataset import dataset_file_io
from delf.python.datasets.google_landmarks_dataset import metrics

@@ -24,7 +24,7 @@
import argparse
import sys

from tensorflow.python.platform import app
from absl import app
from delf.python.datasets.google_landmarks_dataset import dataset_file_io
from delf.python.datasets.google_landmarks_dataset import metrics
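The two hunks above swap `tensorflow.python.platform.app` (a private TF re-export) for the supported public `absl.app` entry point. A minimal self-contained sketch of that pattern, with a hypothetical `--threshold` flag standing in for the scripts' real arguments:

```python
import argparse
import sys

# `absl.app` is the public replacement for `tensorflow.python.platform.app`.
try:
  from absl import app  # requires absl-py
except ImportError:
  app = None  # pattern is still illustrated below even without absl-py

cmd_args = None


def main(argv):
  del argv  # Unused in this sketch.
  return cmd_args.threshold


parser = argparse.ArgumentParser()
parser.add_argument('--threshold', type=float, default=0.5)
cmd_args, unparsed = parser.parse_known_args([])
# A real script would hand control to absl instead of calling main directly:
#   app.run(main=main, argv=[sys.argv[0]] + unparsed)
result = main([sys.argv[0]])
print(result)
```

Because `absl.app.run` parses absl flags itself, the scripts keep `argparse` for their own arguments and forward only the unparsed remainder.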

@@ -32,8 +32,7 @@ class DatasetFileIoTest(tf.test.TestCase):

def testReadRecognitionSolutionWorks(self):
# Define inputs.
file_path = os.path.join(FLAGS.test_tmpdir,
'recognition_solution.csv')
file_path = os.path.join(FLAGS.test_tmpdir, 'recognition_solution.csv')
with tf.io.gfile.GFile(file_path, 'w') as f:
f.write('id,landmarks,Usage\n')
f.write('0123456789abcdef,0 12,Public\n')
@@ -64,8 +63,7 @@ def testReadRecognitionSolutionWorks(self):

def testReadRetrievalSolutionWorks(self):
# Define inputs.
file_path = os.path.join(FLAGS.test_tmpdir,
'retrieval_solution.csv')
file_path = os.path.join(FLAGS.test_tmpdir, 'retrieval_solution.csv')
with tf.io.gfile.GFile(file_path, 'w') as f:
f.write('id,images,Usage\n')
f.write('0123456789abcdef,None,Ignored\n')
@@ -96,8 +94,7 @@ def testReadRetrievalSolutionWorks(self):

def testReadRecognitionPredictionsWorks(self):
# Define inputs.
file_path = os.path.join(FLAGS.test_tmpdir,
'recognition_predictions.csv')
file_path = os.path.join(FLAGS.test_tmpdir, 'recognition_predictions.csv')
with tf.io.gfile.GFile(file_path, 'w') as f:
f.write('id,landmarks\n')
f.write('0123456789abcdef,12 0.1 \n')
@@ -134,8 +131,7 @@ def testReadRecognitionPredictionsWorks(self):

def testReadRetrievalPredictionsWorks(self):
# Define inputs.
file_path = os.path.join(FLAGS.test_tmpdir,
'retrieval_predictions.csv')
file_path = os.path.join(FLAGS.test_tmpdir, 'retrieval_predictions.csv')
with tf.io.gfile.GFile(file_path, 'w') as f:
f.write('id,images\n')
f.write('0123456789abcdef,fedcba9876543250 \n')
73 changes: 37 additions & 36 deletions research/delf/delf/python/datasets/revisited_op/dataset.py
@@ -27,6 +27,8 @@

_GROUND_TRUTH_KEYS = ['easy', 'hard', 'junk']

DATASET_NAMES = ['roxford5k', 'rparis6k']


def ReadDatasetFile(dataset_file_path):
"""Reads dataset file in Revisited Oxford/Paris ".mat" format.
@@ -105,14 +107,14 @@ def ParseEasyMediumHardGroundTruth(ground_truth):
hard_ground_truth = []
for i in range(num_queries):
easy_ground_truth.append(
_ParseGroundTruth([ground_truth[i]['easy']],
[ground_truth[i]['junk'], ground_truth[i]['hard']]))
_ParseGroundTruth([ground_truth[i]['easy']],
[ground_truth[i]['junk'], ground_truth[i]['hard']]))
medium_ground_truth.append(
_ParseGroundTruth([ground_truth[i]['easy'], ground_truth[i]['hard']],
[ground_truth[i]['junk']]))
_ParseGroundTruth([ground_truth[i]['easy'], ground_truth[i]['hard']],
[ground_truth[i]['junk']]))
hard_ground_truth.append(
_ParseGroundTruth([ground_truth[i]['hard']],
[ground_truth[i]['junk'], ground_truth[i]['easy']]))
_ParseGroundTruth([ground_truth[i]['hard']],
[ground_truth[i]['junk'], ground_truth[i]['easy']]))

return easy_ground_truth, medium_ground_truth, hard_ground_truth
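The indentation fix above touches `ParseEasyMediumHardGroundTruth`, which builds the three Revisited Oxford/Paris evaluation protocols by recombining each query's `easy`/`hard`/`junk` labels. An illustrative sketch of that recombination (the label lists are made up, not real dataset annotations):

```python
# Each protocol treats a different subset of labels as positives ('ok') and
# ignores the rest ('junk'), mirroring the _ParseGroundTruth calls above.
def split_protocols(easy, hard, junk):
  return {
      'easy': {'ok': easy, 'junk': junk + hard},
      'medium': {'ok': easy + hard, 'junk': junk},
      'hard': {'ok': hard, 'junk': junk + easy},
  }

protocols = split_protocols(easy=[0, 1], hard=[2], junk=[3])
print(protocols['medium'])  # {'ok': [0, 1, 2], 'junk': [3]}
```

Note that "junk" images are excluded from ranking rather than counted as negatives, which is why the easy protocol also junks the hard labels and vice versa.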

@@ -216,13 +218,13 @@ def ComputePRAtRanks(positive_ranks, desired_pr_ranks):
positive_ranks_one_indexed = positive_ranks + 1
for i, desired_pr_rank in enumerate(desired_pr_ranks):
recalls[i] = np.sum(
positive_ranks_one_indexed <= desired_pr_rank) / num_expected_positives
positive_ranks_one_indexed <= desired_pr_rank) / num_expected_positives

# If `desired_pr_rank` is larger than last positive's rank, only compute
# precision with respect to last positive's position.
precision_rank = min(max(positive_ranks_one_indexed), desired_pr_rank)
precisions[i] = np.sum(
positive_ranks_one_indexed <= precision_rank) / precision_rank
positive_ranks_one_indexed <= precision_rank) / precision_rank

return precisions, recalls
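The hunk above only re-indents `ComputePRAtRanks`, but its logic is worth a worked example: recall@k counts positives ranked within k, while precision@k is computed at most at the last positive's rank. A self-contained sketch under that reading (function name and inputs are illustrative):

```python
import numpy as np


def pr_at_ranks(positive_ranks, desired_pr_ranks, num_expected_positives):
  """Mirrors the PR@k logic: ranks are 0-indexed, desired ranks 1-indexed."""
  precisions = np.zeros(len(desired_pr_ranks))
  recalls = np.zeros(len(desired_pr_ranks))
  one_indexed = np.asarray(positive_ranks) + 1
  for i, k in enumerate(desired_pr_ranks):
    recalls[i] = np.sum(one_indexed <= k) / num_expected_positives
    # Past the last positive, precision is frozen at that position.
    precision_rank = min(np.max(one_indexed), k)
    precisions[i] = np.sum(one_indexed <= precision_rank) / precision_rank
  return precisions, recalls


# Positives retrieved at 0-indexed ranks 0 and 4; 2 positives expected.
precisions, recalls = pr_at_ranks([0, 4], [1, 5], num_expected_positives=2)
print(recalls)     # recall@1 = 0.5, recall@5 = 1.0
print(precisions)  # precision@1 = 1.0, precision@5 = 0.4
```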

@@ -272,8 +274,8 @@ def ComputeMetrics(sorted_index_ids, ground_truth, desired_pr_ranks):

if sorted_desired_pr_ranks[-1] > num_index_images:
raise ValueError(
'Requested PR ranks up to %d, however there are only %d images' %
(sorted_desired_pr_ranks[-1], num_index_images))
'Requested PR ranks up to %d, however there are only %d images' %
(sorted_desired_pr_ranks[-1], num_index_images))

# Instantiate all outputs, then loop over each query and gather metrics.
mean_average_precision = 0.0
@@ -295,7 +297,7 @@ def ComputeMetrics(sorted_index_ids, ground_truth, desired_pr_ranks):
continue

positive_ranks = np.arange(num_index_images)[np.in1d(
sorted_index_ids[i], ok_index_images)]
sorted_index_ids[i], ok_index_images)]
junk_ranks = np.arange(num_index_images)[np.in1d(sorted_index_ids[i],
junk_index_images)]

@@ -335,9 +337,9 @@ def SaveMetricsFile(mean_average_precision, mean_precisions, mean_recalls,
with tf.io.gfile.GFile(output_path, 'w') as f:
for k in sorted(mean_average_precision.keys()):
f.write('{}\n mAP={}\n mP@k{} {}\n mR@k{} {}\n'.format(
k, np.around(mean_average_precision[k] * 100, decimals=2),
np.array(pr_ranks), np.around(mean_precisions[k] * 100, decimals=2),
np.array(pr_ranks), np.around(mean_recalls[k] * 100, decimals=2)))
k, np.around(mean_average_precision[k] * 100, decimals=2),
np.array(pr_ranks), np.around(mean_precisions[k] * 100, decimals=2),
np.array(pr_ranks), np.around(mean_recalls[k] * 100, decimals=2)))
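The format string in `SaveMetricsFile` above fixes the on-disk layout that `ReadMetricsFile` later parses: one record per protocol, with a protocol name line followed by `mAP`, `mP@k`, and `mR@k` lines, scores stored as percentages. A sketch of one such record with made-up numbers:

```python
import numpy as np

# One metrics-file record in the SaveMetricsFile layout (values are invented).
pr_ranks = [1, 5, 10]
record = '{}\n mAP={}\n mP@k{} {}\n mR@k{} {}\n'.format(
    'hard',
    np.around(0.421 * 100, decimals=2),
    np.array(pr_ranks), np.around(np.array([0.75, 0.6, 0.5]) * 100, decimals=2),
    np.array(pr_ranks), np.around(np.array([0.2, 0.4, 0.55]) * 100, decimals=2))
print(record)
```

The 4-line-per-protocol shape is what `ReadMetricsFile` relies on when it rejects files whose stripped line count is not a multiple of 4.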


def _ParseSpaceSeparatedStringsInBrackets(line, prefixes, ind):
Expand Down Expand Up @@ -378,8 +380,8 @@ def _ParsePrRanks(line):
ValueError: If input line is malformed.
"""
return [
int(pr_rank) for pr_rank in _ParseSpaceSeparatedStringsInBrackets(
line, [' mP@k['], 0) if pr_rank
int(pr_rank) for pr_rank in _ParseSpaceSeparatedStringsInBrackets(
line, [' mP@k['], 0) if pr_rank
]


Expand All @@ -397,8 +399,8 @@ def _ParsePrScores(line, num_pr_ranks):
ValueError: If input line is malformed.
"""
pr_scores = [
float(pr_score) for pr_score in _ParseSpaceSeparatedStringsInBrackets(
line, (' mP@k[', ' mR@k['), 1) if pr_score
float(pr_score) for pr_score in _ParseSpaceSeparatedStringsInBrackets(
line, (' mP@k[', ' mR@k['), 1) if pr_score
]

if len(pr_scores) != num_pr_ranks:
@@ -430,8 +432,8 @@ def ReadMetricsFile(metrics_path):

if len(file_contents_stripped) % 4:
raise ValueError(
'Malformed input %s: number of lines must be a multiple of 4, '
'but it is %d' % (metrics_path, len(file_contents_stripped)))
'Malformed input %s: number of lines must be a multiple of 4, '
'but it is %d' % (metrics_path, len(file_contents_stripped)))

mean_average_precision = {}
pr_ranks = []
@@ -442,13 +444,13 @@
protocol = file_contents_stripped[i]
if protocol in protocols:
raise ValueError(
'Malformed input %s: protocol %s is found a second time' %
(metrics_path, protocol))
'Malformed input %s: protocol %s is found a second time' %
(metrics_path, protocol))
protocols.add(protocol)

# Parse mAP.
mean_average_precision[protocol] = float(
file_contents_stripped[i + 1].split('=')[1]) / 100.0
file_contents_stripped[i + 1].split('=')[1]) / 100.0

# Parse (or check consistency of) pr_ranks.
parsed_pr_ranks = _ParsePrRanks(file_contents_stripped[i + 2])
@@ -461,18 +463,18 @@

# Parse mean precisions.
mean_precisions[protocol] = np.array(
_ParsePrScores(file_contents_stripped[i + 2], len(pr_ranks)),
dtype=float) / 100.0
_ParsePrScores(file_contents_stripped[i + 2], len(pr_ranks)),
dtype=float) / 100.0

# Parse mean recalls.
mean_recalls[protocol] = np.array(
_ParsePrScores(file_contents_stripped[i + 3], len(pr_ranks)),
dtype=float) / 100.0
_ParsePrScores(file_contents_stripped[i + 3], len(pr_ranks)),
dtype=float) / 100.0

return mean_average_precision, pr_ranks, mean_precisions, mean_recalls


def create_config_for_test_dataset(dataset, dir_main):
def CreateConfigForTestDataset(dataset, dir_main):
"""Creates the configuration dictionary for the test dataset.

Args:
@@ -482,8 +484,8 @@
Returns:
cfg: Dataset configuration in a form of dictionary. The configuration
includes:
`gnd_fname` - path to the ground truth file for teh dataset,
`ext` and `qext` - image extentions for the images in the test dataset
`gnd_fname` - path to the ground truth file for the dataset,
`ext` and `qext` - image extensions for the images in the test dataset
and the query images,
`dir_data` - path to the folder containing ground truth files,
`dir_images` - path to the folder containing images,
@@ -496,16 +498,15 @@
Raises:
ValueError: If an unknown dataset name is provided as an argument.
"""
DATASETS = ['roxford5k', 'rparis6k']
dataset = dataset.lower()

def _config_imname(cfg, i):
def _ConfigImname(cfg, i):
return os.path.join(cfg['dir_images'], cfg['imlist'][i] + cfg['ext'])

def _config_qimname(cfg, i):
def _ConfigQimname(cfg, i):
return os.path.join(cfg['dir_images'], cfg['qimlist'][i] + cfg['qext'])

if dataset not in DATASETS:
if dataset not in DATASET_NAMES:
raise ValueError('Unknown dataset: {}!'.format(dataset))

# Loading imlist, qimlist, and gnd in configuration as a dictionary.
@@ -526,8 +527,8 @@ def _config_qimname(cfg, i):
cfg['n'] = len(cfg['imlist'])
cfg['nq'] = len(cfg['qimlist'])

cfg['im_fname'] = _config_imname
cfg['qim_fname'] = _config_qimname
cfg['im_fname'] = _ConfigImname
cfg['qim_fname'] = _ConfigQimname

cfg['dataset'] = dataset

@@ -24,7 +24,7 @@
import numpy as np
import tensorflow as tf

from delf.python.detect_to_retrieve import dataset
from delf.python.datasets.revisited_op import dataset

FLAGS = flags.FLAGS

4 changes: 2 additions & 2 deletions research/delf/delf/python/datasets/utils.py
@@ -19,7 +19,7 @@
from PIL import Image

import tensorflow as tf
from delf.python import utils as image_loading_utils
from delf import utils as image_loading_utils


def pil_imagenet_loader(path, imsize, bounding_box=None, preprocess=True):
Expand All @@ -32,7 +32,7 @@ def pil_imagenet_loader(path, imsize, bounding_box=None, preprocess=True):
preprocess: Bool, whether to preprocess the images in respect to the
ImageNet dataset.

Returns:
Returns:
image: `Tensor`, image in ImageNet suitable format.
"""
img = image_loading_utils.RgbLoader(path)
11 changes: 6 additions & 5 deletions research/delf/delf/python/datasets/utils_test.py
@@ -43,8 +43,8 @@ def testDefaultLoader(self):

max_img_size = 1024
# Load the saved dummy image.
img = image_loading_utils.default_loader(filename, imsize=max_img_size,
preprocess=False)
img = image_loading_utils.default_loader(
filename, imsize=max_img_size, preprocess=False)

# Make sure the values are the same before and after loading.
self.assertAllEqual(np.array(img_out), img)
@@ -63,9 +63,10 @@ def testDefaultLoaderWithBoundingBox(self):
# Load the saved dummy image.
expected_size = 400
img = image_loading_utils.default_loader(
filename, imsize=max_img_size,
bounding_box=[120, 120, 120 + expected_size, 120 + expected_size],
preprocess=False)
filename,
imsize=max_img_size,
bounding_box=[120, 120, 120 + expected_size, 120 + expected_size],
preprocess=False)

# Check that the final shape is as expected.
self.assertAllEqual(tf.shape(img), [expected_size, expected_size, 3])
2 changes: 1 addition & 1 deletion research/delf/delf/python/delg/extract_features.py
@@ -41,7 +41,7 @@
from delf import datum_io
from delf import feature_io
from delf import utils
from delf.python.detect_to_retrieve import dataset
from delf.python.datasets.revisited_op import dataset
from delf import extractor

FLAGS = flags.FLAGS
2 changes: 1 addition & 1 deletion research/delf/delf/python/delg/perform_retrieval.py
@@ -28,7 +28,7 @@
import tensorflow as tf

from delf import datum_io
from delf.python.detect_to_retrieve import dataset
from delf.python.datasets.revisited_op import dataset
from delf.python.detect_to_retrieve import image_reranking

FLAGS = flags.FLAGS
@@ -34,12 +34,12 @@
import sys
import time

from absl import app
import numpy as np
import tensorflow as tf

from tensorflow.python.platform import app
from delf import feature_io
from delf.python.detect_to_retrieve import dataset
from delf.python.datasets.revisited_op import dataset

cmd_args = None

@@ -25,9 +25,9 @@
import argparse
import sys

from tensorflow.python.platform import app
from absl import app
from delf.python.datasets.revisited_op import dataset
from delf.python.detect_to_retrieve import aggregation_extraction
from delf.python.detect_to_retrieve import dataset

cmd_args = None

@@ -31,9 +31,9 @@
import os
import sys

from tensorflow.python.platform import app
from absl import app
from delf.python.datasets.revisited_op import dataset
from delf.python.detect_to_retrieve import boxes_and_features_extraction
from delf.python.detect_to_retrieve import dataset

cmd_args = None
