ualsg/Visual-soundscapes

Sensing soundscapes from street view imagery

Introduction

The acoustic environment is an essential component of healthy and sustainable cities. We present a machine-learning framework for portraying large-area, high-resolution urban soundscapes using ubiquitous street view imagery (SVI), without ground measurements. This dataset comprises two parts: (1) SVI visual features and soundscape indicator data; (2) field-measured SVI and noise intensity data.

SVI visual features and soundscape indicators data

We extract a total of 482 visual features from each image using computer vision algorithms and deep learning models. The extracted visual features include object detection (by Faster R-CNN), semantic segmentation (by DeepLabV3P), and scene classification (by ResNet) features, as well as low-level features computed with algorithms from the OpenCV library. Each image was labeled with a total of 15 soundscape indicators, divided into four categories:
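As a rough illustration of the low-level feature idea, the sketch below computes a few simple image statistics (brightness, contrast, edge density). The actual pipeline uses OpenCV; this NumPy-only version and its feature names are our own illustrative choices, not the dataset's definitions.

```python
import numpy as np

def low_level_features(gray):
    """Illustrative low-level features from a grayscale image array
    (values 0-255). A stand-in for the OpenCV-based extraction."""
    gray = gray.astype(float)
    brightness = gray.mean()            # mean pixel intensity
    contrast = gray.std()               # RMS contrast
    # Crude edge density: fraction of pixels with a strong gradient.
    gy, gx = np.gradient(gray)
    edge_density = (np.hypot(gx, gy) > 30).mean()
    return {"brightness": brightness,
            "contrast": contrast,
            "edge_density": edge_density}

# Synthetic stand-in for one street view image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(120, 160))
feats = low_level_features(img)
```

In the released dataset such statistics are concatenated with the detection, segmentation, and classification outputs to form the 482-dimensional feature vector per image.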

  • Noise intensity
  • Sound quality
  • Sound sources: traffic noise, human sounds, natural sounds, mechanical noise, and music noise
  • Perceptual emotion: pleasant, chaotic, vibrant, uneventful, calm, annoying, eventful, and monotonous
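The four categories above can be pictured as one record per labeled image. The field names and rating scale below are hypothetical, chosen only to show how the counts add up to 15 indicators:

```python
# Hypothetical schema for one labeled image; field names and the
# example Likert-style values are ours, not the dataset's.
record = {
    "noise_intensity": 3,
    "sound_quality": 4,
    "sound_sources": {
        "traffic": 1, "human": 0, "natural": 1,
        "mechanical": 0, "music": 0,
    },
    "perceptual_emotion": {
        "pleasant": 4, "chaotic": 2, "vibrant": 3, "uneventful": 2,
        "calm": 3, "annoying": 1, "eventful": 3, "monotonous": 2,
    },
}

# 2 scalar indicators + 5 sound sources + 8 emotions = 15 indicators.
n_indicators = (2
                + len(record["sound_sources"])
                + len(record["perceptual_emotion"]))
```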

We invited a total of 338 people to participate in the labeling survey via Amazon Mechanical Turk and social media.

Field-measured SVI and noise intensity data

The devices used in the collection include a sound level meter (UT353BT) for noise intensity recording and a smartphone for shooting videos and street view imagery. Each investigation point includes:

  • Three-minute video clips
  • 4-10 street view images
  • Three-minute recording of variations in sound intensity
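A three-minute series of instantaneous sound-level readings is commonly summarized by the equivalent continuous sound level, Leq (energy-average the readings, then convert back to decibels). The sketch below shows that standard formula as an illustration; it is not necessarily the exact processing used for this dataset.

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (Leq) of a series of
    instantaneous dB readings: average the sound energy, not the
    dB values, then convert back to decibels."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Hypothetical example: one reading per second over three minutes,
# mostly quiet with a few loud events (e.g. passing traffic).
samples = [55.0] * 170 + [80.0] * 10
overall = leq(samples)
```

Note that the brief loud events dominate the result: Leq here is well above the 55 dB background, which a plain arithmetic mean of the dB values would badly understate.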

Access

The raw SVI data can be downloaded at figshare. The SVI visual features and soundscape indicators data can be found at Label_Features. The field-measured SVI and noise intensity data can be downloaded at figshare.

Paper

A paper about this work was published in Computers, Environment and Urban Systems and is available open access here.

If you use this work in a scientific context, please cite this article.

Zhao T, Liang X, Tu W, Huang Z, Biljecki F (2023): Sensing urban soundscapes from street view imagery. Computers, Environment and Urban Systems, 99: 101915. doi:10.1016/j.compenvurbsys.2022.101915

@article{2023_ceus_soundscapes,
  author = {Zhao, Tianhong and Liang, Xiucheng and Tu, Wei and Huang, Zhengdong and Biljecki, Filip},
  doi = {10.1016/j.compenvurbsys.2022.101915},
  journal = {Computers, Environment and Urban Systems},
  pages = {101915},
  title = {Sensing urban soundscapes from street view imagery},
  volume = {99},
  year = {2023}
}

License

This dataset is released under the CC BY-NC-SA 4.0 license.

Contact

Feel free to contact Tianhong Zhao or Filip Biljecki should you have any questions. For more information, please visit the website of the Urban Analytics Lab, National University of Singapore.

Acknowledgements

We gratefully acknowledge the participants of the survey and the input data. We thank the members of the NUS Urban Analytics Lab for the discussions. The Institutional Review Board of the National University of Singapore has reviewed and approved the ethical aspects of this research (reference code NUS-IRB-2021-906).
