
# Repository for: Learning to Predict Structural Vibrations

The preprint is available from arXiv.

In mechanical structures like airplanes, cars, and houses, noise is generated and transmitted through vibrations. To take measures that reduce this noise, vibrations need to be simulated with expensive numerical computations. Surrogate deep learning models present a promising alternative to classical numerical simulations, as they can be evaluated orders of magnitude faster while trading off accuracy. To quantify such trade-offs systematically and foster the development of methods, we present a benchmark on the task of predicting the vibration of harmonically excited plates. The benchmark features a total of 12,000 plate geometries with varying forms of beadings, materials, and sizes, together with associated numerical solutions. To address the benchmark task, we propose a new network architecture, named Frequency-Query Operator, which is trained to map plate geometries to their vibration pattern given a specific excitation frequency. Applying principles from operator learning and implicit models for shape encoding, our approach effectively addresses the prediction of the highly variable frequency response functions occurring in dynamic systems. To quantify prediction quality, we introduce a set of evaluation metrics and evaluate the method on our vibrating-plates benchmark. Our method outperforms DeepONets, Fourier Neural Operators, and more traditional neural network architectures.
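To make the frequency-query idea concrete, here is a minimal conceptual sketch in PyTorch. It is not the authors' FQO-UNet (the actual models are defined under acousticnn/plate/configs/model_cfg); all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Conceptual sketch of the frequency-query idea: a geometry encoder
# produces a latent code, and a head is queried with an excitation
# frequency to predict the vibration field at that frequency.
class FrequencyQueryNet(nn.Module):
    def __init__(self, latent_dim=128, field_dim=64 * 64):
        super().__init__()
        # Encode the plate geometry image (1 channel) into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Decode (latent code, queried frequency) into a flattened field.
        self.head = nn.Sequential(
            nn.Linear(latent_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, field_dim),
        )

    def forward(self, geometry, frequency):
        # geometry: (B, 1, H, W); frequency: (B, 1), the query
        z = self.encoder(geometry)
        return self.head(torch.cat([z, frequency], dim=-1))
```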

## Data

We provide a notebook that enables quick and easy visualization of our dataset. The data is available from our data repository in the HDF5 format. There, we also provide information on the structure of the HDF5 files and how to access the data.
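As a starting point, a downloaded file can be inspected with h5py. This is a minimal sketch: the file path assumes the small example download from the table below, and the internal group/dataset names are not repeated here, so the script simply prints whatever the file contains.

```python
import h5py

# Print every dataset in the file with its shape and dtype.
# The path assumes the small example file was downloaded to data/example;
# the actual file name may differ -- check the download folder.
with h5py.File("data/example/single_example_G5000.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(show)
```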

[Video: fields_geom.mp4]

The video shows how the vibration patterns change with frequency for three example plates. Changes in magnitude are not displayed. To download the data, we recommend using the script acousticnn/utils/download.py. The commands for downloading the available dataset settings are listed below. Please note that the root_folder must already exist (see the example after the table):

| Setting | Download command | Dataset size |
| --- | --- | --- |
| Small example file | `python acousticnn/utils/download.py --dataset_name single_example_G5000 --root_folder data/example` | 2 GB |
| V5000 | `python acousticnn/utils/download.py --dataset_name V5000 --root_folder data/V5000` | 13 GB |
| G5000 | `python acousticnn/utils/download.py --dataset_name G5000 --root_folder data/G5000` | 13 GB |
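For example, to fetch the V5000 dataset, create the root folder first and then run the download script:

```bash
mkdir -p data/V5000
python acousticnn/utils/download.py --dataset_name V5000 --root_folder data/V5000
```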

## Setup

Given an installation of conda, run the following to set up the environment:

```bash
bash setup.sh
```

This repository employs Weights and Biases for logging. To use it, you must have an account and log in:

```bash
wandb login
```

In acousticnn/plate/configs/main_dir.py, change data_dir to the directory where you saved the data and main_dir to the root path of the repository, e.g. /user/xyz/repository. You can also specify the WandB project name you want to log to in this file.
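As a rough sketch, the edited file might look like the following. The names data_dir and main_dir come from the instructions above; the WandB project variable name is an assumption, so check the actual file for the identifier it uses.

```python
# acousticnn/plate/configs/main_dir.py -- illustrative values only
main_dir = "/user/xyz/repository"  # root path of this repository
data_dir = "/user/xyz/data"        # folder containing the downloaded HDF5 files

# WandB project to log to (variable name is an assumption; check the file).
wandb_project = "learning_vibrating_plates"
```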

## Train a model

```bash
python scripts/run.py --model_cfg query_rn18.yaml --config V5000.yaml --dir path/to/logs
```

Change the model_cfg and config args to specify the model and dataset, respectively. --dir specifies the save and log directory within the folder acousticnn/plate/experiments. Please note that the available models are specified in acousticnn/plate/configs/model_cfg. The best model in our experiments, FQO-UNet, is specified as localnet.yaml; see the example below.
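For example, to train the FQO-UNet on the V5000 dataset (the log directory name is arbitrary):

```bash
python scripts/run.py --model_cfg localnet.yaml --config V5000.yaml --dir fqo_unet_v5000
```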

## Evaluate a model

Use notebooks/evaluate.ipynb to generate plots and numerically evaluate already trained models. To generate prediction videos:

```bash
python scripts/generate_videos.py --ckpt path/to/trained_model --model_cfg localnet.yaml --config V5000.yaml --save_path plots/videos
```

Change the model_cfg and config args to specify the model and dataset, respectively, and specify the path to a trained model checkpoint via --ckpt. --save_path specifies where the resulting videos are saved.
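For instance, to generate videos for an FQO-UNet trained on V5000 (the checkpoint path here is hypothetical; substitute the one your training run produced):

```bash
python scripts/generate_videos.py --ckpt acousticnn/plate/experiments/fqo_unet_v5000/checkpoint.pt --model_cfg localnet.yaml --config V5000.yaml --save_path plots/videos
```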

## Example Results

The videos show example predictions for samples from the test set. In the first video for each dataset, changes in magnitude are not displayed, which makes the changes more easily visible. In the second video for each dataset, changes in magnitude are displayed; this obscures many details but makes the resonance frequencies visible.

### V5000 Dataset

[Videos: video_0.1.mp4 and video_0.0.mp4]

### G5000 Dataset

[Videos: video_0.0.mp4 and video_0.0.mp4]

## Acknowledgments

Parts of this code are built upon Point-MAE and PDEBench.
