- Error Injection: Profiles the BERT model (data types, layers, parameters) and configures the bit error rate (BER) and injection targets. Generates an error map identifying which bits to flip, then injects hardware errors by modifying parameters accordingly.
- Error Mitigation: Profiles layer-specific value bounds and applies parameter clipping based on these bounds to reduce the impact of errors, yielding a more robust model.
- Minimal Dependency: Fully PyTorch-based with no extra dependencies (e.g., bitstring, struct), avoiding complex environment setups or broken packages.
- GPU Acceleration: Leverages native PyTorch tensor APIs (e.g., torch.Tensor.view()) for bit-level error injection, enabling up to 100× speedup without C backend changes or custom CUDA kernels.
- Flexible Mitigation Interface: Includes a default parameter clipping strategy and supports user-defined mitigation methods via a simple interface.
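To illustrate the two core ideas above, here is a minimal sketch of bit-level error injection via dtype-reinterpreting views and of bound-based parameter clipping. The helper names `flip_bits` and `clip_params` are hypothetical, not the actual hugging-error API; the real framework profiles the model and builds an error map rather than flipping bits inline.

```python
import torch

def flip_bits(tensor: torch.Tensor, ber: float, seed: int = 0) -> torch.Tensor:
    """Flip each bit of a float32 tensor independently with probability `ber`.

    Hypothetical helper: reinterprets the float32 storage as int32 with
    Tensor.view(dtype), builds a random XOR mask, and flips bits in bulk
    using native tensor ops (no Python-level bit twiddling per element).
    """
    g = torch.Generator(device=tensor.device).manual_seed(seed)
    bits = tensor.view(torch.int32)  # reinterpret the raw bits, no copy
    mask = torch.zeros_like(bits)
    for b in range(32):  # sample a flip decision for each bit position
        flips = torch.rand(bits.shape, generator=g, device=tensor.device) < ber
        mask |= flips.to(torch.int32) << b
    return (bits ^ mask).view(torch.float32)

def clip_params(tensor: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
    """Hypothetical mitigation: clamp parameters to profiled value bounds."""
    return tensor.clamp(lo, hi)
```

Because injection is a single XOR against a mask tensor, the same code runs unchanged on GPU tensors, which is where the speedup over per-element bit manipulation comes from.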
- You can find detailed instructions for using hugging-error in ./run.ipynb.
- If you find our error injection framework useful, please consider citing our paper: Understanding the Robustness of BERT Models Against Hardware Errors: An Experimental Study (in submission).
