Supervised Contrastive Learning and Feature Fusion for Improved Kinship Verification #80
ChatGPT summary: The paper "Supervised Contrastive Learning and Feature Fusion for Improved Kinship Verification" by Nazim Bendib proposes a novel approach to facial kinship verification (FKV) that combines supervised contrastive learning with feature fusion. The model is trained to maximize the similarity between embeddings of related individuals and minimize it between unrelated individuals.
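For context, the supervised contrastive objective the summary refers to can be sketched as follows. This is a minimal NumPy version of the SupCon loss (Khosla et al., 2020), not the paper's actual training code; the function name and temperature value are illustrative:

```python
import numpy as np

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss sketch: pull same-family embeddings
    together, push different-family embeddings apart."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                         # scaled cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)     # an anchor never contrasts with itself
    # row-wise log-softmax over all other samples
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # negative mean log-probability of each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

In training, `labels` would be the family identifiers of the sampled pairs, so images of related individuals form the positives for each anchor.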
They do not cite #26.
QA by ChatGPT

Method: Batch Sampling
Sure, here's a Python implementation of the batch sampler algorithm described in the paper:

```python
import random
from collections import defaultdict


class KinshipBatchSampler:
    def __init__(self, relationships, batch_size):
        self.relationships = relationships
        self.batch_size = batch_size
        self.image_counters = defaultdict(int)
        self._shuffle_relationships()

    def _shuffle_relationships(self):
        random.shuffle(self.relationships)

    def _get_image_with_min_count(self, person_images):
        # Pick the image of this person that has been used least often so far.
        return min(person_images, key=lambda img: self.image_counters[img])

    def _replace_duplicates(self, sub_batch):
        # Ensure every pair in the batch comes from a distinct family by
        # redrawing any pair whose family already appears earlier in the batch.
        seen_families = set()
        for i in range(len(sub_batch)):
            if sub_batch[i][2] in seen_families:  # third element is the family id
                replacement = random.choice(self.relationships)
                while replacement[2] in seen_families:
                    replacement = random.choice(self.relationships)
                sub_batch[i] = replacement
            seen_families.add(sub_batch[i][2])
        return sub_batch

    def __iter__(self):
        for i in range(0, len(self.relationships), self.batch_size):
            sub_batch = self.relationships[i:i + self.batch_size]
            if len(sub_batch) < self.batch_size:
                continue  # skip the final, incomplete batch
            sub_batch = self._replace_duplicates(sub_batch)
            batch = []
            for per1, per2, fam in sub_batch:
                img1 = self._get_image_with_min_count(per1)
                img2 = self._get_image_with_min_count(per2)
                self.image_counters[img1] += 1
                self.image_counters[img2] += 1
                batch.append((img1, img2))
            yield batch


# Example usage:
# relationships = [
#     (['img1_1', 'img1_2'], ['img2_1', 'img2_2'], 'fam1'),
#     (['img3_1', 'img3_2'], ['img4_1', 'img4_2'], 'fam2'),
#     ...
# ]
# batch_size = 32
# sampler = KinshipBatchSampler(relationships, batch_size)
# for batch in sampler:
#     print(batch)
```
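As a quick sanity check, here is a runnable toy sketch of the two core mechanisms the sampler relies on (least-used-image selection and one-pair-per-family batches). The data format follows the `(images_of_person_a, images_of_person_b, family_id)` tuples assumed above; all names are hypothetical:

```python
import random
from collections import defaultdict

# Hypothetical toy data: 8 families, each contributing one kin pair with
# two images per person.
relationships = [
    ([f"f{f}_p1_img{i}" for i in range(2)],
     [f"f{f}_p2_img{i}" for i in range(2)],
     f"fam{f}")
    for f in range(8)
]

counters = defaultdict(int)

def pick_least_used(images):
    # Select this person's least-used image so image coverage stays balanced.
    img = min(images, key=lambda im: counters[im])
    counters[img] += 1
    return img

batch_size = 4
random.shuffle(relationships)
batch, seen_families = [], set()
for per1, per2, fam in relationships:
    if fam in seen_families:
        continue  # enforce one pair per family per batch
    seen_families.add(fam)
    batch.append((pick_least_used(per1), pick_least_used(per2)))
    if len(batch) == batch_size:
        break

print(len(batch))  # prints 4
```

Because each batch draws from distinct families, every negative pairing formed within the batch is guaranteed to be non-kin, which is what the contrastive objective requires.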
This implementation ensures balanced and representative training batches while adhering to kinship constraints.

Comparison: the algorithm described in the paper and this implementation are very similar in structure and logic. The implementation adds some details, such as shuffling the relationships up front and explicit method definitions, which align with the steps in the provided algorithm. Both approaches ensure balanced and representative training batches while adhering to kinship constraints.
I did not find any associated code; however, it seems simple to reproduce.