I'd like to be able to easily and persistently store EvalOutput results to e.g. local disk or NoSQL databases like DynamoDB... And ideally also load them back into fmeval/Python objects.
There are several good reasons why I'd prefer to avoid just using pickle... and JSON seems like a natural fit for this kind of data, but we can't simply json.dumps() an EvalOutput object today.
It would be useful if we offered a clear mechanism to save the evaluation summary/scores to JSON, and ideally load back from JSON as well.
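For now, a workaround might be dataclasses.asdict(), assuming EvalOutput is a plain dataclass. The stand-in classes and field names below are illustrative only, not fmeval's actual schema:

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List

# Hypothetical stand-ins loosely modelled on fmeval's EvalScore/EvalOutput;
# field names here are illustrative, not the library's actual schema.
@dataclass
class EvalScore:
    name: str
    value: float

@dataclass
class EvalOutput:
    eval_name: str
    dataset_name: str
    dataset_scores: List[EvalScore] = field(default_factory=list)

output = EvalOutput(
    eval_name="factual_knowledge",
    dataset_name="my_dataset",
    dataset_scores=[EvalScore(name="factual_knowledge", value=0.72)],
)

# asdict() recurses into nested dataclasses, so the result is JSON-serializable.
serialized = json.dumps(asdict(output))

# Loading back requires re-inflating nested objects by hand.
raw = json.loads(serialized)
restored = EvalOutput(
    eval_name=raw["eval_name"],
    dataset_name=raw["dataset_name"],
    dataset_scores=[EvalScore(**s) for s in raw["dataset_scores"]],
)
```

This round-trips, but it is fragile: any rename of a field in the library breaks the hand-written loading code, which is why a supported to/from-JSON mechanism would be nicer.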
Hi, if you set the save parameter of evaluate to True, we will save the EvalOutput objects to disk in the form of a .jsonl file. See the save_dataset function.
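For reference, JSON-Lines is just one JSON object per line, so the saved records can be read back with the standard library alone. A minimal sketch (the record fields below are illustrative, not fmeval's actual output schema):

```python
import json
import os
import tempfile

# Example per-record rows, as a .jsonl file would contain them
# (field names are illustrative only, not fmeval's actual schema).
records = [
    {"model_input": "What is 2+2?", "model_output": "4", "score": 1.0},
    {"model_input": "Capital of France?", "model_output": "Paris", "score": 1.0},
]

path = os.path.join(tempfile.mkdtemp(), "eval_output.jsonl")

# Writing: one json.dumps() per line.
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Reading back: one json.loads() per line.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
```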
Thanks for responding, @danielezhu, and sorry to be slow replying.
As far as I can see, setting save stores the example-level data to disk in JSON-Lines, right? With this request, I'm trying to serialize the summary-level metrics in EvalOutput.
The returned EvalOutput is derived from aggregated scores.
When using the library from Python, it'd be helpful if it was easier to dump the returned summary objects to JSON - independently of whether the example-level backup has already been saved to disk.
Ah, yes. I misunderstood your original question. This will certainly be a nice feature to have. We'll add this feature request to our roadmap; thanks for bringing this up.