When using bulk_update_with_history to update large batches of records, the function performs redundant computations for each object in the batch. This overhead becomes significant when processing thousands of records.
The main issue is that values that are identical for every object in a batch (such as the history timestamp, model metadata, and the list of tracked fields) are recalculated for every single object, even though they do not change within the batch. For models with around 20 tracked fields, execution is roughly 20% slower than for simpler models.
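A minimal sketch of the pattern, independent of the actual django-simple-history internals (the function names and row layout here are hypothetical, chosen only to illustrate hoisting batch-invariant work out of the per-object loop):

```python
from datetime import datetime, timezone


def build_history_rows_naive(objs, tracked_fields):
    """Recomputes batch-invariant values inside the loop (the reported problem)."""
    rows = []
    for obj in objs:
        now = datetime.now(timezone.utc)   # same value for the whole batch
        fields = list(tracked_fields)      # identical copy built per object
        row = {f: getattr(obj, f) for f in fields}
        row["history_date"] = now
        rows.append(row)
    return rows


def build_history_rows_hoisted(objs, tracked_fields):
    """Computes batch-invariant values once, before iterating the objects."""
    now = datetime.now(timezone.utc)       # hoisted: computed once per batch
    fields = list(tracked_fields)          # hoisted: one copy for all objects
    rows = []
    for obj in objs:
        row = {f: getattr(obj, f) for f in fields}
        row["history_date"] = now
        rows.append(row)
    return rows
```

In the hoisted version every history row also shares a single timestamp, which matches the expectation that one bulk update is one logical event; in the naive version timestamps can drift across a large batch.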