
Searchable Collections Inventory from PastPerfect Export

This repository contains the code to build a searchable interface for museum collections stored in a PastPerfect database. A working model based on collections from the University Museum at New Mexico State University can be found here.

Note: Several key parts of the project data are not included in this repository. There are two reasons for this: 1) some of the data stored in museum databases are sensitive and not appropriate for public display, and 2) the original uncompressed images are large and would clutter the repository. Users wishing to implement this tool will need to supply: 1) a PastPerfect object export file and 2) the corresponding Images directory from PastPerfect.

To use this repository:

  1. Clone the repository
  2. Within the project root, create the following directories (a directory-creation sketch appears after this list):
    • data/
    • data/images_full/
    • data/images_thums/
    • data/tables/
  3. Export the Object List from PastPerfect following these guidelines.
  4. Save the export output as an .xlsx file and place it in the ./data/tables directory created above.
  5. From PastPerfect, copy the Images directory into the ./data directory such that it is ./data/Images.
  6. Within the index.qmd file, navigate to the data-read chunk and find the line that reads df <- read_excel(here::here("data/tables/EXPORTED_FILE_NAME.xlsx")) %>% janitor::clean_names().
  7. In this line of code, insert the name of the export file that was saved to ./data/tables. A date-stamped file name like inventory_2022_10_26.xlsx is suggested, but any name should work (see the data-read sketch after this list).
  8. While developing the site, it is recommended to sample the records. To do this, navigate to the code chunk called df-sample, set eval: true, and choose the sample size in slice_sample(n = XX) (see the df-sample sketch after this list). When you want to render the entire table, set eval: false for the df-sample code chunk.
  9. When you want to observe the output, render the project. Be aware that large collections with many images may take some time: on my machine, with 16 GB of RAM, rendering ~4,200 objects takes about an hour. Prolonged render times are one of the reasons to sample during development.
  10. After rendering, it is a good idea to compress the images in the output _site directory. This reduces the load for users and saves space on the server. For this task, the Caesium Image Compressor is recommended, though it could also be done with magick (see the compression sketch after this list). I found that a JPEG quality setting of 75 works well.
  11. After image compression, the site is ready to send to the server.
  12. Small projects can be posted to Quarto Pub with the terminal command quarto publish quarto-pub --no-render. To re-render at publish time, leave off the --no-render argument.
  13. Larger projects can be published to Netlify with the terminal command quarto publish netlify --no-render. To re-render at publish time, leave off the --no-render argument.
  14. Alternatively, the contents of the _site directory can be pushed to any web server. This is what we've done with univmuseum.nmsu.edu.
  15. Further customization options are documented at Quarto and DT.
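
The sketches below illustrate a few of the steps above. For step 2, directory creation can be scripted; this is a minimal sketch using base R, run from the project root (fs::dir_create() would work equally well):

```r
# Create the data directories listed in step 2 (run from the project root).
dirs <- c(
  "data",
  "data/images_full",
  "data/images_thums",
  "data/tables"
)
for (d in dirs) {
  dir.create(d, recursive = TRUE, showWarnings = FALSE)
}
```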
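
For steps 6 and 7, the edited data-read line might look like the sketch below, assuming the export was saved as inventory_2022_10_26.xlsx (substitute your own file name; the surrounding chunk in index.qmd may differ):

```r
# Sketch of the data-read step, assuming a date-stamped export file name.
library(readxl)
library(dplyr)

df <- read_excel(here::here("data/tables/inventory_2022_10_26.xlsx")) %>%
  janitor::clean_names()
```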
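
For step 8, the df-sample chunk might look roughly like this during development, with eval: true and a sample size of 100 used here only as an example (the actual chunk in index.qmd may be organized differently):

```r
#| label: df-sample
#| eval: true

# Sample the records while developing; set eval: false above to render
# the entire table.
df <- df %>%
  slice_sample(n = 100)
```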
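
For step 10, if you prefer magick over Caesium, a compression pass over the rendered images might look like the sketch below, assuming the rendered copies live under _site and a JPEG quality of 75:

```r
# Recompress rendered JPGs in place at quality 75 using magick.
library(magick)

jpgs <- list.files("_site", pattern = "\\.jpe?g$", recursive = TRUE, full.names = TRUE)
for (f in jpgs) {
  img <- image_read(f)
  image_write(img, path = f, format = "jpeg", quality = 75)
}
```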
