Hi,
I am currently using your library for JSON validation and find it very useful. However, I have come across a use case that does not seem to be covered by the current feature set: validating the uniqueness of certain keys across multiple JSON objects.
For instance, suppose I am validating a list of JSON objects, each of which has an id field, and I want to ensure that the id is unique across all objects.
A simplified example is given below:
[
  {"id": "1", "name": "Andrew"},
  {"id": "2", "name": "Bob"},
  {"id": "1", "name": "Charlie"},
  {"id": "3", "name": "Dave"},
  {"id": "4", "name": "Eve"}
]
In this scenario, I would like the validation to fail because the id "1" appears twice.
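For context, the closest existing keyword, uniqueItems, does not cover this case: it compares whole array items, so two objects that share an id but differ in any other field still validate. A minimal demonstration:

```python
import jsonschema

# "uniqueItems" deduplicates entire objects, not individual keys, so
# items that differ in any field pass even when their "id" values collide.
schema = {"type": "array", "uniqueItems": True}
data = [
    {"id": "1", "name": "Andrew"},
    {"id": "1", "name": "Charlie"},  # same id, different name
]

# No exception is raised: the two objects are not identical.
jsonschema.validate(data, schema)
```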
Currently, I am using a workaround to achieve this, which involves post-processing after validation. Here is the code I'm using to check for duplicate IDs:
def check_unique_ids(data, key):
    # Collect every value of the given key, in order of appearance.
    ids = [item[key] for item in data]
    # Values that appear more than once.
    duplicates = set(value for value in ids if ids.count(value) > 1)
    # Map each duplicate value to the positions where it occurs.
    positions = {
        duplicate: [i for i, x in enumerate(ids) if x == duplicate]
        for duplicate in duplicates
    }
    return positions

# Check for unique IDs
duplicates = check_unique_ids(data, 'id')
if duplicates:
    print("Duplicate IDs found at the following positions:")
    for duplicate, positions in duplicates.items():
        print(f'ID "{duplicate}" found at positions: {positions}')
Although this workaround is functional, integrating this feature into the jsonschema library itself would make the code cleaner and more efficient, and would allow validation to fail at the appropriate step rather than requiring additional post-processing.
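In the meantime, the behavior can be prototyped with the library's existing extension mechanism. The sketch below registers a hypothetical "uniqueKeys" keyword (the name and semantics are my own, not part of any draft) via jsonschema.validators.extend, so duplicate ids are reported as ordinary validation errors:

```python
import jsonschema
from jsonschema import validators

def unique_keys(validator, key, instance, schema):
    """Hypothetical keyword: fail when `key` repeats across array items."""
    if not validator.is_type(instance, "array"):
        return
    seen = {}  # value -> position of first occurrence
    for position, item in enumerate(instance):
        value = item.get(key)
        if value in seen:
            yield jsonschema.exceptions.ValidationError(
                f'{key} "{value}" at position {position} '
                f'duplicates position {seen[value]}'
            )
        else:
            seen[value] = position

# Extend a standard draft validator with the custom keyword.
UniqueKeysValidator = validators.extend(
    jsonschema.Draft7Validator, {"uniqueKeys": unique_keys}
)

schema = {"type": "array", "uniqueKeys": "id"}
data = [
    {"id": "1", "name": "Andrew"},
    {"id": "2", "name": "Bob"},
    {"id": "1", "name": "Charlie"},
]
errors = list(UniqueKeysValidator(schema).iter_errors(data))
for error in errors:
    print(error.message)
```

This keeps the check inside the validation pass, though a built-in keyword would of course spare every user from writing it themselves.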