
Commit

Update vignette.
meganwinton committed Jun 6, 2018
1 parent 8f45ead commit 5841177
Showing 2 changed files with 11 additions and 12 deletions.
19 changes: 9 additions & 10 deletions src/stan_files/COA_Tag_Integrated.stan
@@ -14,7 +14,7 @@ data {
real testX[ntest]; // test tag locations east-west
real testY[ntest]; // test tag locations north-south
}

// Declare parameters
parameters {
// fixed effects
@@ -34,17 +34,17 @@ transformed parameters {
real sigma; // Standard deviation of the distance-decay function - assume constant
real d[nind,nrec,ntime]; // Array to store distances
real td[ntest,nrec]; // Matrix of test tag distances

// Compute derived quantities
sigma = sqrt(1/(2*alpha1)); // Derived from coefficient specifying distance-related decay in detection prob.
// Test tag distance
for(s in 1:ntest){ // For each test tag
for(j in 1:nrec){ // And each receiver
// Calculate Euclidean distance from each test tag to each receiver
td[s,j] = sqrt(pow(testX[s]-recX[j],2) + pow(testY[s]-recY[j],2));
}
}

// COA distance
for(t in 1:ntime){ // For each time step
for(j in 1:nrec){ // And each receiver
@@ -54,29 +54,28 @@ transformed parameters {
// Detection probability
//p0[t,j] = exp( alpha0 + alpha2[t,j] )/( 1+exp( alpha0 + alpha2[t,j] ) );
p0[t,j] = exp( alpha0[t,j] )/( 1+exp( alpha0[t,j] ) );

}
}
}
}

}

// Model specification
model {
// priors
for (t in 1:ntime) alpha0[t] ~ cauchy(0,2.5); // prior on every element; indexing a single element [ntime,nrec] would place a prior on only the last entry
alpha1~cauchy(0,2.5);

// likelihood
for (t in 1:ntime){ // For each time step
for (j in 1:nrec){ // And each receiver
for (s in 1:ntest){ // And each test tag
// Data from test tag - distance for each known
test[s,j,t] ~ binomial(ntrans, p0[t,j]*exp(-alpha1*td[s,j]*td[s,j]));
}
}
for (i in 1:nind){ // And each individual
// Note: observations (y) must be declared as integers; otherwise Stan will throw an error
y[i,j,t] ~ binomial(ntrans, p0[t,j]*exp(-alpha1*d[i,j,t]*d[i,j,t]));
}
}
}
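The detection model in the Stan file above can be sketched outside Stan to build intuition. Below is a minimal, hypothetical Python sketch (the function names are ours, not part of the package), assuming the half-normal distance decay, derived sigma, and Euclidean distance shown in the file:

```python
import math

# Hypothetical sketch (not package code): the Stan file models the expected
# detection probability as a half-normal decay with distance from a receiver.
def detection_prob(p0, alpha1, d_km):
    # p = p0 * exp(-alpha1 * d^2), as in the binomial likelihood above
    return p0 * math.exp(-alpha1 * d_km * d_km)

def decay_sigma(alpha1):
    # sigma = sqrt(1 / (2 * alpha1)), the derived SD of the decay curve
    return math.sqrt(1.0 / (2.0 * alpha1))

def euclid_km(x1, y1, x2, y2):
    # Euclidean distance, matching td[s,j] in the transformed parameters block
    return math.hypot(x1 - x2, y1 - y2)
```

At distance zero the expected detection probability reduces to p0; larger values of alpha1 make detection probability fall off more quickly with distance.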
4 changes: 2 additions & 2 deletions vignettes/Estimate_COA_vignette.Rmd
@@ -11,14 +11,14 @@ vignette: >

## Introduction

- This vignette will walk you through the analyses presented by Winton, Kneebone, Zemeckis, and Fay (in prep), who describe the use of spatial point process models to estimate individual centers of activity from passive acoustic telemetry data. The vignette progresses from applying the simplest case, which assumes that detection probabilities/receiver detection ranges remain constant over time, to application of a test-tag integrated model, which incorporates detection data from one or more stationary test transmitters to estimate time-varying detection ranges. The models are fitted in a Bayesian framework using the Stan software (Carpenter et al. 2016); code was modified from that provided in Royle et al. (2014) for fitting spatial point process models to data from camera traps. We prefer the Bayesian approach for COA estimation due to its treatment of uncertainty, but realize the longer computational time required may be prohibitive for some applications. In the near future, we will update to include an option to fit in a maximum likelihood framework, which will reduce run times. We'd also like to note that the models described can support varying degrees of complexity - not all applications will require (or have the data to support) the most complex version of the model.
+ This vignette will walk you through the analyses presented by M. Winton, J. Kneebone, D. Zemeckis, and G. Fay (in prep), who describe the use of spatial point process models to estimate individual centers of activity from passive acoustic telemetry data. The vignette progresses from applying the simplest case, which assumes that detection probabilities/receiver detection ranges remain constant over time, to application of a test-tag integrated model, which incorporates detection data from one or more stationary test transmitters to estimate time-varying detection ranges. The models are fitted in a Bayesian framework using the Stan software (Carpenter et al. 2016); code was modified from that provided in Royle et al. (2014) for fitting spatial point process models to data from camera traps. We prefer the Bayesian approach for COA estimation due to its treatment of uncertainty, but realize the longer computational time required may be prohibitive for some applications. In the near future, we will update to include an option to fit in a maximum likelihood framework, which will reduce run times. We'd also like to note that the models described can support varying degrees of complexity - not all applications will require (or have the data to support) the most complex version of the model. The simpler the model, the shorter the run-time.

We realize that many users will have unique situations and may need to modify the base code to suit their purposes. Users can copy the `.stan` files contained in the `src/stan_files` folder to their local machine to do so. We will be happy to accommodate user requests (as our schedules allow). Our hope is to make this a collaborative package that will evolve based on the needs of the acoustic telemetry community and continue to improve over time. We have tried to make the instructions outlined in this vignette user-friendly since we are a group of applied biologists with varying degrees of statistical experience. If some of the statistical notation outlined here or in the paper remains unclear, feel free to contact us with questions/for clarification. Most of it is actually very intuitive, and we would like to make sure that comes through in the documentation. This is a new package, so if you find bugs, places where code efficiency could be improved, or instances where the documentation could be made more user-friendly, please let us know!


## Data preparation

- The package includes two data sets: 1) 153 hours of detection data from a black sea bass (note that we have assigned time as the number of cumulative hours since the tag was deployed rather than providing the raw time stamp), and 2) hourly detections from a stationary test transmitter over the same time period. Two files containing the receiver coordinates and the location of the test tag are also included. We have provided the data in this format to illustrate data preparation specific to fitting these types of models; however, you will need to aggregate your date/time-stamps to the time period of interest. Receiver and test tag coordinates provided are in the Universal Transverse Mercator (zone 18) projection, which have previously been mean-centered and scaled into kilometers. *This scaling step is highly recommended to reduce run-times. In other words, do it to your dataset prior to starting the code chunks here.*
+ The package includes two data sets: 1) 153 hours of detection data from a black sea bass (note that we have assigned time as the number of cumulative hours since the tag was deployed rather than providing the raw date/time stamp), and 2) hourly detections from a stationary test transmitter over the same time period. Two files containing the receiver coordinates and the location of the test tag are also included. We have provided the data in this format to illustrate data preparation specific to fitting these types of models; however, you will need to aggregate your date/time stamps to the time period of interest. Receiver and test tag coordinates provided are in the Universal Transverse Mercator (zone 18) projection, which have previously been mean-centered and scaled into kilometers. *This scaling step is highly recommended to reduce run-times. In other words, do it to your dataset prior to starting the code chunks here.*
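As a hypothetical illustration of the preparation steps described above (the function names and inputs are ours, not the package's), mean-centering UTM metres, rescaling to kilometres, and binning timestamps into hours since deployment might look like:

```python
# Hypothetical helpers (not part of the package), sketching the recommended
# data preparation: mean-center UTM coordinates given in metres and rescale
# them to kilometres, and aggregate timestamps to the time period of interest.
def center_and_scale_km(easting_m, northing_m):
    mean_x = sum(easting_m) / len(easting_m)
    mean_y = sum(northing_m) / len(northing_m)
    x_km = [(e - mean_x) / 1000.0 for e in easting_m]
    y_km = [(n - mean_y) / 1000.0 for n in northing_m]
    return x_km, y_km

def hourly_bin(elapsed_seconds):
    # Assign a detection to a 1-hour bin counted from tag deployment
    return int(elapsed_seconds // 3600)
```

Centering and scaling put the coordinates on the same footing as the package's example data, which helps the sampler and shortens run-times.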
To access the provided data, run:

```{r, echo = T, message = F, warnings = FALSE}
```
