AMI - Acoustic Mapper and Isolator
On mining sites there are thousands of different equipment types, each with a unique acoustic signature. An experienced engineer or technician can often diagnose a potential fault simply by listening to the equipment. The problem is that the resource industry is transitioning to unmanned operations, which raises the question: how can we continue to use acoustics to monitor the condition of plant equipment? The obvious answer is to use microphones, but it is very difficult to isolate individual sound sources in noisy plant environments.

Our proposed prototype, AMI (Acoustic Mapper and Isolator), is a four-part solution comprising a multi-microphone array, machine learning methods, a cloud-based sound data library and a user interface. The microphone array uses beamforming techniques to identify and selectively amplify individual sound sources. Acoustic features are then extracted from the amplified signal and fed into machine learning algorithms (neural networks and gradient boosting), which classify the signal as 'normal' or 'abnormal'. The extracted features are stored in a cloud-based storage system. The user interface alerts a maintenance technician to any deviation from normal acoustic behaviour; the technician then validates the alert or enters the correct diagnosis, and this feedback is fed back into the machine learning algorithms to improve future performance.

This solution applies not just to mining, but also to oil and gas, manufacturing, supercomputing facilities and much more. If it can be deployed across industry, a robust equipment sound library can be built for the future of condition monitoring.
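The beamforming step can be illustrated with a minimal delay-and-sum sketch. This is one standard beamforming technique and is an assumption on our part, not necessarily the exact method in the prototype; the function name `delay_and_sum`, the linear array geometry, and the sample rate below are all illustrative.

```python
import numpy as np

def delay_and_sum(signals, fs, mic_positions, angle_deg, c=343.0):
    """Steer a linear microphone array toward angle_deg and sum the
    aligned channels (delay-and-sum beamforming). `signals` is an
    array of shape (num_mics, num_samples); `mic_positions` holds
    each mic's position along the array axis in metres."""
    angle = np.deg2rad(angle_deg)
    # Far-field time delay of arrival at each mic for this look angle
    delays = mic_positions * np.sin(angle) / c          # seconds
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # Undo each channel's delay as a phase shift in the frequency domain
        spectrum = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n)
    return out / len(signals)
```

Steering toward the true source direction sums the channels coherently (amplifying that source), while sounds from other directions partially cancel, which is what lets the array "listen" to one machine at a time.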
https://github.com/duxuhao/Hackathon2017Perth/blob/master/Unearthed2017_slides.pdf
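The feature-extraction and 'normal'/'abnormal' decision described above can be sketched as follows. For brevity this uses a simple deviation-from-baseline rule in place of the neural network and gradient boosting models the prototype uses; the feature set (RMS, zero-crossing rate, spectral centroid), the helper names `extract_features` and `is_abnormal`, and the 3-sigma threshold are all illustrative assumptions.

```python
import numpy as np

def extract_features(frame, fs):
    """Compute a few common acoustic features from one audio frame."""
    rms = np.sqrt(np.mean(frame ** 2))                       # overall level
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2       # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # Spectral centroid: magnitude-weighted mean frequency
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid])

def is_abnormal(features, baseline_mean, baseline_std, k=3.0):
    """Flag a frame as 'abnormal' if any feature deviates more than
    k standard deviations from the healthy-equipment baseline."""
    z = np.abs(features - baseline_mean) / (baseline_std + 1e-12)
    return bool(np.any(z > k))
```

In the full system the feature vectors would be logged to the cloud library, and technician-validated labels would retrain the classifiers over time.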