
Manuscript text #5

Open
James-Thorson opened this issue Oct 25, 2016 · 78 comments

@James-Thorson
Collaborator

I just added a few more paragraphs to the introduction (Rnw file), but still can't compile to the PDF (see other issue for next bug).

This intro is basically a standard five-paragraph-style intro giving:

  1. context for fisheries management
  2. a definition of stock assessment
  3. an explanation of the rate of assessment
  4. a justification for why studying rate-of-assessment would be useful
  5. an outline of our paper's goals.

I'm happy to heavily modify it, but I think it's a decent place to start.

@James-Thorson James-Thorson added this to the Manuscript draft milestone Oct 25, 2016
@James-Thorson
Collaborator Author

@Philipp-Neubauer I see that you added a commit "draft dataset and results". Do you want me to start adding some text for the results section, or how could I be helpful?

@Philipp-Neubauer
Owner

Yes; results and figures should be updated now with the newest data, and
with nearly all stocks in Mike's list included. Overall, the updated
grooming and such added 100+ stocks to the analysis.

There are still a few stocks that are dropped - these are stocks where
catches have in general been really low (e.g., some rockfish). I have a
rule to exclude species that did not have at least 10 t of catch in some
year; that rule was mainly chosen to exclude marginal species that only
showed up here and there in the landings.

I've added some methods text but it's really rough and incomplete. Will aim
to fix that sometime this week; I'm at a bycatch workshop though so can
probably only have a few hours here and there. I also haven't added the
projection plot we discussed - same there, will add that as I find time
this week.

Happy for you to add some writing around the results and perhaps put some
bullets about things to discuss. Let me know if you have any queries re
results. The Weibull_model_output.rda file has the original data
attached as year.table. You could use that if you need numbers from
the data...

Also, you could start a bibliography by exporting citations as bibtex and
putting them in a .bib file that can live in the github repo. I can set up
a makefile to make sure the citations get processed.


Phil

@James-Thorson
Collaborator Author

Cool, I'm starting to look it over. First response:

I get an error in the "results.Rnw" code block starting at line 219, which contains a ggplot call with an argument panel.spacing that throws an error:

Error in (function (el, elname)  : 
  "panel.spacing" is not a valid theme element name.

I've updated ggplot2 to version 2.1.0, and am using R 3.3.1. It runs if I comment out line 229, which I have done. Just FYI.

@Philipp-Neubauer
Owner

I might have the dev version from hadley/ggplot2 installed. You can put
panel.margin instead (that's the old name for panel.spacing in released
ggplot2, I think).


Phil

@James-Thorson
Collaborator Author

James-Thorson commented Oct 31, 2016

OK.

Next response: I've spent maybe 30 minutes trying to address a question I posed in a previous email. Basically, Fig. 2 doesn't make sense to me because I would think that "landed stocks" should be the set of stocks that were ever previously landed. Given this definition, the number of landed stocks could only increase, but the green line (USNE) decreases around 1960, prior to any stocks being assessed.

So I tried exploring the code for Fig. 2 (lines 152-186). However, I basically can't read the code at all because I don't use dplyr or ggplot, don't know how to work with a tibble, and I might be misunderstanding the column headers for full.tab. Also, my attempt to use full.tab to replicate the left panel of Fig. 2 didn't seem to work out.

So I have two questions:

  1. How would you like us to proceed? If I'm going to work directly with the code, I might need some changes in Results.Rnw to eliminate use of dplyr at least. Would that be easier, or would you prefer for us to direct questions to you for coding? (And of course, I'm sorry for my ignorance! Some day I imagine I'll need to learn dplyr.)
  2. For Fig. 2, could you define "landed stock" in the caption? Do you agree that "landed stock" should be defined as the cumulative set of stocks landed in that year or any previous year (because it's this set of stocks that might potentially have an assessment)?

@Philipp-Neubauer
Owner

Yes to question 2; I can fix the code to do that.

For question 1: It would be a mission to eliminate dplyr at this stage
given its prevalence in pretty much all data transformation operations in
the document. I think the more efficient option would be for me to make
changes as they arise.

I'll aim to push the figure 2 change in a few minutes...

Phil

@Philipp-Neubauer
Owner

OK, Fig. 2 should now correspond to the cumulative number of species landed,
with the dotted line being the cumulative number of assessments and the solid
lines being cumulative landed minus cumulative assessed per year.

Phil

@James-Thorson
Collaborator Author

OK, thanks Phil! That makes more sense to me, and also lines up more closely with the proportions I was getting when I tried to replicate the plot (presumably I missed some small restrictions on which stocks to include -- I was getting the same rank order and qualitative picture).

Another request:

What about expanding Fig. 8 to include the posterior distribution for each Class and Order? I think an interesting result is which taxa (e.g., Elasmobranchs, Clupeids) have significantly higher or lower assessment rates.

@Philipp-Neubauer
Owner

Yes, totally. Just pushed that.

Phil

@Philipp-Neubauer
Owner

Did class only, though; I could do the same as in Figure 4 for order instead
of just having class. That might be more interesting.

Phil

@James-Thorson
Collaborator Author

Hmm. I think we'll want to make statements about:

  1. Class differences from "average fish" (both mean and 2-sided Bayesian p-value)
  2. Order differences from its class (mean and p-value)
  3. Order differences from "average fish"

I think we'll want to communicate these through some combination of both numeric values (to reference in the text), tables, and figures. But I don't have an immediate opinion about the best combination of figures/tables/in-text. Any thoughts?

jim

@mcmelnychuk
Collaborator

One way to do it could be to list 1 & 3 above in tables, and 2 visually.
If we group the Orders in the Fig. 8 panel by Class (similar to Fig. 4,
like Phil suggested, rotated 90 degrees), then we could overlay short
vertical dashed lines for each Class, maybe color-coded. The location of
these dashed lines could be the posterior means of the Class, and the length
of these dashed lines could span only the Orders within the Class. That
would lose out on error bars for the Classes, though they would still be
present for all the Orders. That would visually satisfy #2 above.

Mike


@Philipp-Neubauer
Owner

Yes - agreed. Actually, if we can manage a figure that includes posterior
means and intervals (say as dashed and dotted lines on either side of the
dashed line), then we could pretty much display 1 & 2 in a single figure.
Else it's probably a matter of deciding which is more important to
visualise. Agreed that 3 should probably be a table.

Pretty busy getting stuff ready for the turtle workshop, but will give the
figure a go ASAP (as well as the projection figure Jim was talking about).
We're starting to have a good number of figures, and might want to chuck
some in an appendix (e.g., the model fit...).

Phil


@Philipp-Neubauer
Owner

Hi y'all;

have added the plot discussed above, as well as a draft of the projection
plot. I've also added an appendix table with all effects and their estimates
for the assessment rate, time-to-assessment, etc.

I've done some writing for the methods; I need to go through it to see that
it all makes sense and to figure out what still needs to be written.

Let me know what you guys think about the new figures...happy to iterate on
them.

Phil


@James-Thorson
Collaborator Author

Phil,

I think it all looks really cool! Definitely plenty to write about here, and I think the projection plot is a good "headline" point with which to end the results. Plus the price, landings, and rockfish-plus-dusky-sharks being faster assessed makes sense, and those are worthwhile mid-results-section. And I got it compiling after installing the development version of ggplot2 via devtools::install_github("hadley/ggplot2") (rather than CRAN) and installing appendix.sty from the MiKTeX package manager.

One more request though: are you willing to convert the projection plot to a "finite-sample" projection? Obviously in 2016 we know how many were assessed (well, it's approximate if I understand the model right, because the censoring year might be earlier than 2016). This could be done by removing stocks with an assessment from the set, setting the other stocks as currently unassessed in their censored year, tracking their probability of having a prior assessment during the forecast period, and then recombining the model-based subset with the withheld subset (where the latter have a 100% probability of previous assessment).

Does this sound plausible and reasonable? The point is that the forecast intervals in the finite-sample version will be smaller, particularly in the earlier years, so it'll be easier to say something specific about the different regions.

@Philipp-Neubauer
Owner

Yes, definitely, will give that a go on my plane-ride home over the next
few hours...

Phil

@Philipp-Neubauer
Owner

Hey Jim;

spent some time on this and realised that your description sounds a lot
like what I actually did... the uncertainty seems large because of the
scale, I guess (and the start in 2016 is later than the last year for most
stocks, so we might need to start earlier). I've re-sized the y-axis on
the plot to make that clear - happy to change the start date, too.

I thought there might be another way of doing the projections that would
lead to smaller prediction intervals, but I'm not sure now.

In my code, line 471 calculates the probability of assessment for each stock
from 2016 onwards for each MCMC sample; then, below that, the quantiles are
taken. The key part here is:

sapply(seq(lmin,lmin+34),function(t) 1-exp(-l$MCMC*t^tau$MCMC)),

where 1-exp(-lambda*t^tau) is the probability of assessment up to time t, i.e.,
that the stock will have been assessed by time t.

Then, the predicted proportion assessed is p_assessed +
(1-p_assessed)*mean(P_r,s), where the last term is the mean over the
assessment probabilities for all stocks s in region r.
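
The arithmetic described above can be sketched as follows (in Python rather than the project's R, with made-up lambda and tau values; the real values are per-stock MCMC samples):

```python
import math

def p_assessed_by(t, lam, tau):
    # Weibull CDF in the JAGS-style parametrisation used above:
    # probability the stock has been assessed by time t.
    return 1.0 - math.exp(-lam * t ** tau)

# Hypothetical parameter values, for illustration only.
lam, tau = 0.01, 1.9

# Per-stock assessment probabilities over a 35-year horizon.
p_s = [p_assessed_by(t, lam, tau) for t in range(1, 36)]

# Combine with the proportion already assessed in a (hypothetical) region:
# predicted = p_assessed + (1 - p_assessed) * mean over stocks of P_{r,s}.
p_assessed = 0.4
p_proj = [p_assessed + (1.0 - p_assessed) * p for p in p_s]
```

Because unassessed stocks can only gain assessments, p_proj rises monotonically from the observed proportion toward 1.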

Hope this makes sense - is this what you described? Definitely open to
other ways of approaching this if you have something else in mind.

Phil


@James-Thorson
Collaborator Author

James-Thorson commented Nov 6, 2016

Phil,

As always, sorry that I can't read the code in this project. A few questions popped up from skimming the model text and struggling through the code:

  1. Fig. 1 references a Weibull shape parameter p (with slope 1.93) but Eq. 1 includes a function Weibull(tau,lambda). Wikipedia lists Weibull(lambda,k), with k defined as the shape parameter. Is acceleration rate = p = k = tau?
  2. do linear predictors affect lambda or tau? It seems like lambda makes more sense and acceleration rate = tau is assumed constant across stocks.
  3. In code line 471 it appears that MCMC$l is lambda and MCMC$tau is tau, so this interpretation seems internally consistent. But still, maybe it's easiest to work through whether this projection is what I was thinking (or otherwise optimal) by writing up the process of calculating it in plain English in the text?

Anyway, I would have thought this forecast would have to be done for each stock individually, i.e., to condition on the biological and economic characteristics of that stock, plus its first year of exploitation and its censoring year. Is that what the 467-473 function is doing? To me, it looks more like 467-473 is just conditioning on the unconditional posterior predictive for lambda and tau, without conditioning on the characteristics of each stock...?

@Philipp-Neubauer
Owner

Jim -

my apologies for the lack of consistency - I picked up on it in a few
places as I was trying to tidy up a bit after myself, but evidently missed
a few.

So for the Weibull, the JAGS parametrisation is not the same as the
Wikipedia one. But you are right in that p = k = tau; we should change p to
tau where it appears in the MS. The lack of consistency comes from my own
confusion - not remembering what the parameters were called the last time I
worked on the project... so I just put in a placeholder while writing and
forgot to fix it.

The linear predictor is for the scale parameter lambda, hopefully that's
consistent throughout.
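
To make the correspondence concrete, here is a quick numerical check (Python, hypothetical parameter values): the JAGS form 1 - exp(-lambda*t^tau) equals the standard Wikipedia form 1 - exp(-(t/scale)^k) when k = tau and scale = lambda^(-1/tau).

```python
import math

def cdf_jags(t, lam, tau):
    # JAGS-style Weibull CDF: F(t) = 1 - exp(-lambda * t^tau)
    return 1.0 - math.exp(-lam * t ** tau)

def cdf_standard(t, k, scale):
    # Standard (Wikipedia) Weibull CDF: F(t) = 1 - exp(-(t/scale)^k)
    return 1.0 - math.exp(-((t / scale) ** k))

lam, tau = 0.02, 1.93        # hypothetical values
scale = lam ** (-1.0 / tau)  # conversion between the two forms

for t in (1.0, 5.0, 20.0):
    assert abs(cdf_jags(t, lam, tau) - cdf_standard(t, tau, scale)) < 1e-12
```

So the shape parameter is the same in both forms; only the scale differs by the transformation above.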

For the code, I use a few patterns throughout, so perhaps this summary will
make it easier to read:

  • The pipe operator %>% takes the previous statement and "inserts" the
    result into the next statement, so x %>% mean() is the same as mean(x).
    This leads to a linear series of operations from A -> B, where each
    previous transformation or summary is the input to the next.
  • dplyr: "summarise" computes some summary (mean, var, etc.) on a
    (grouped) vector (so you'll usually see a group_by() before it), similar
    to something you might do with tapply. "mutate" does the same, but
    returns a vector of the same length as the input, so it's just a
    transformation by some function f().
  • purrr: "map" applies a function to a list, like lapply. So in lines
    467-473, the table is split (the split statement) into a list of data
    frames of MCMC output (one for each stock), and the projection is done
    for each stock - so that's consistent with what you describe.
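
For orientation only (this is not the project's code), the same three patterns have rough analogues in pandas; a hypothetical Python sketch:

```python
import pandas as pd

df = pd.DataFrame({"region": ["A", "A", "B"], "catch": [1.0, 3.0, 5.0]})

# group_by() + summarise(): one value per group.
means = df.groupby("region")["catch"].mean()

# group_by() + mutate(): a transformed column the same length as the input.
df["centered"] = df["catch"] - df.groupby("region")["catch"].transform("mean")

# split() + purrr::map(): apply a function to each group's data frame.
per_group = {name: g["catch"].sum() for name, g in df.groupby("region")}
```

The summarise-style result collapses each group to one value, while the mutate-style column keeps the original row count.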

I'll try to write the projection method up in plain English in the MS to
clarify. Am I right to assume that you mean the projection in your point 3,
rather than the whole project?

Hope this helps, sorry again for the confusion!
Phil


@James-Thorson
Collaborator Author

Cool, cool. So I think we're on the same page. And yes, point 3 was just about the projection; I think the rest of the model description is complete enough that I can understand it and start chipping in clarifications where it seems appropriate.

jim

@mcmelnychuk
Collaborator

A quick question that will surely reveal my ignorance - if I don't use
LaTeX, would I be better off looking at the results.pdf outputs
rather than trying to run snippets of code from results.Rnw extracts?
I've changed RStudio Global Options > Sweave to weave using knitr, but
after that I don't see how running the Rnw files in RStudio will get me
to seeing the outputs, etc.

A few unrelated questions/comments about some figures in the most recent
results.pdf (which are looking good):

  • A possible third panel for Figure 2 could be the proportion of total
    (cross-species) landings in each region that come from assessed
    stocks. The current % of assessed stocks is only ~20-40% (Figure 2b),
    but stocks with greater catch are more likely assessed, so the %assessed
    on the basis of total catch may be much higher. Total (cross-species)
    landed value could be an alternative to total landings, and would
    probably have an even greater %assessed.
  • Figure 3 - is the "bathy" group ok to leave in as a random effect
    level given there are few observations with that level?
  • Figure 4 - there are two A's (Actinopterygii, Cephalopoda), two B's
    (Bivalvia, Cephalaspidomorphi), two E's (Echinoidea, Branchiopoda), and
    two M's (Malacostraca, Holocephali) in the legend.
  • Table 1 caption: "ratio" instead of "ration"
  • Figure 6 x-axis label: "population" misspelled
  • Figures 2 and 7: the yellow lines are a little hard to see, at least
    on my screen. Possibly red or orange instead? Also, in Figure 2 the
    green and blue are possibly a bit too close together to easily distinguish.
  • Fig. 9 shows 3 of the 12 classes that were shown in Fig. 4. Do the
    other classes not have multiple orders within them, thus they're not
    shown in Fig. 9?

Mike

@James-Thorson
Copy link
Collaborator Author

@mcmelnychuk I'm reading through the definition of assessments, and see that it has no criterion that would exclude stock-reduction analyses e.g., DCAC or DBSRA, CMSY or Catch-MSY etc. However, I don't think we included these. Is it fair to expand the definition to clarify that stock assessments needed to be fitted to biological data to estimate population scale (e.g., an index of abundance or compositional data that allows changes in biomass to be inferred)?

@mcmelnychuk
Copy link
Collaborator

hi Jim,

Yes, that's an improvement to the definition, and is consistent with
what we did.

To double-check I took a quick look at the comments, found a few
references to stock reduction analyses, and looked into those. They were
occasionally used in combination with other methods for the first
assessment, but after a quick check at some archived assessments, I
think our dataset as it stands is at least consistent.

Mike


@Philipp-Neubauer
Copy link
Owner

@mcmelnychuk

For the .Rnw; you can either click "compile pdf" to compile the pdf yourself from the most recent changes, or you can run the R chunks one by one (e.g., by going to the chunk you want to see and clicking "Run all chunks above" (ctrl-alt-p for me) in the run menu on the top right of your editor panel) to produce the plots in your R graphics device. If you want to tweak things, this is usually the better way since you don't have to wait for the whole doc to compile every time you make a change. Once you're happy with your edit, you can re-compile...

I like the idea of the "assessed proportion of catch" panel for figure two. Will try and get that in tomorrow.

is the "bathy" group ok to leave in as a random effect level given there are few observations with that level?

That shouldn't cause trouble as long as the overall variance for habitat is well informed, I'd say. Perhaps a column of n for each effect level (for all random effects) in the appendix table would be good, though.

Fig. 9 shows 3 of the 12 classes that were shown in Fig. 4. Do the
other classes not have multiple orders within them, thus they're not
shown in Fig. 9?

Yep.

I'll deal with typos and colors tomorrow, thanks for pointing those out. And agreed that the colors are still sub-optimal. Will try to find a better color scheme to go with.

And yes, agreed that @James-Thorson's amendment of the assessment definition seems like the right one to go with.

@Philipp-Neubauer
Copy link
Owner

Pushed a third panel for figure 2 in 68c70fc; quite impressive how, in terms of catch, Alaska has almost 100% coverage!

@James-Thorson
Copy link
Collaborator Author

Phil,

If you're willing to do another finicky change, I think Fig. 8 would look
better as a 4-panel figure with one region per panel (this could be done
without color to save money, and also the current version is hard to read).

Also Phil, could you:

  1. add a new section on the title page with our prioritized list of target
    journals? (I forget what we'd said)
  2. add a section for the abstract, which we could start drafting as a way
    to decide what is the "most important points"?

Jim


@Philipp-Neubauer
Copy link
Owner

Have added the above; as well as a very rough abstract. Just a listing of what I thought stood out so far...but I may have missed a fair bit as I haven't really taken a step back to look at it in detail yet...

@Philipp-Neubauer
Copy link
Owner

In theory, one could vary the tau parameter to have independent assessment
trends in different periods, BUT the way the model is set up, it has no
specific reference to actual time; i.e., right now, all the model knows
is the time from first (recorded) landings to first assessment. We could
possibly put in an effect for 'un-assessed post 1996' to see if those stocks
have a higher probability of being assessed if they were still un-assessed
at the time of the fisheries act. I think.

On Mon, Nov 21, 2016 at 11:28 AM, Michael Melnychuk <
notifications@github.com> wrote:

Sorry, I forgot about this suggestion in your second paragraph when
adding to the Discussion yesterday. I agree that those ideas would be
good to add. We could mention in the discussion that perceived
abundance/status may be a factor affecting when an assessment is first
conducted, but that (obviously) abundance is not actually known before
that assessment is done.

I can't remember what we decided, but was there any possibility (or
advantage) to allowing model parameters to vary pre- and post-1996?
Would that allow us to indirectly get at such questions of whether price
matters more earlier on or later on in their exploitation history?

Mike

On 2016-11-17 4:20 PM, Philipp Neubauer wrote:

Hi there;

have just read through the discussion - lots of good points, Jim. I've
also added the citations/bib into the manuscript. To make the citations
work, in Rstudio you'll need to go to the "Build" menu -> Configure build
tools -> choose "Makefile" and select the project directory. From there
on, when you want to build after adding writing etc, do "build all"
(Shift-Ctrl-B). Outside of Rstudio, open a command window, cd to the
first assessment directory and type 'make'....hope this works.

I agree that we need a bit more about Rockfish and Groundsharks. I think
the interesting angle here is conservation vs economics: we could add
something more general: we can only capture the conservation status
driver in the taxonomic component, since it would be hard to define some
kind of surrogate for that for stocks without an assessment. This points
to a problem in prioritizing assessments: the conservation status only
really factors in once we have some evidence that things are probably
going badly for a stock. Thus, valuable stocks are potentially well
managed early on in their exploitation history, whereas small stocks
probably only get that level of attention when there is an indication
that things are heading for disaster. This could have unforeseen
ecological consequences if the importance of such species is high
relative to the economic value from fishing (e.g., bycatch of benthic
inverts in trawl fisheries).

Also, other bycatch species are probably assessed/managed quantitatively,
but won't figure in our DB (Turtles, Mammals) since they would probably
have a risk- rather than a stock assessment....

Happy to add something about this if you guys feel it makes sense...

Phil

On Thu, Nov 17, 2016 at 6:28 PM, Michael Melnychuk
<notifications@github.com

wrote:

no problem, I can look into those in the next couple days.

Mike

On 2016-11-16 8:45 PM, Jim Thorson wrote:

OK, makes sense. I've gone ahead and added a sentence and references
explaining Scorpaenids. I don't have any special knowledge of
groundsharks or flatfishes, so those might require a bit more
sleuthing if anyone is willing to take the lead?


Phil

@Philipp-Neubauer
Copy link
Owner

To add to this:

I tried the 'un-assessed post 1996' effect, and it comes out negative. I'm
not sure if that alone is worth including, as it doesn't say much about
price effects in different decades, etc. Just FYI: right now, the mean
price and max landings that go into the model are taken from before the
assessment year for assessed stocks.
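So we describe that grooming step the same way in the methods, here's a minimal sketch of it (in Python with made-up numbers; the real pipeline is R): mean ex-vessel price and maximum landings are computed over only the years before the first assessment, or over the whole series for un-assessed stocks:

```python
def pre_assessment_covariates(years, price, landings, assess_year=None):
    """Mean ex-vessel price and maximum landings, restricted to years
    before the first assessment; if assess_year is None (un-assessed
    stock), the whole series is used. Assumes at least one qualifying
    year exists."""
    keep = [i for i, y in enumerate(years)
            if assess_year is None or y < assess_year]
    mean_price = sum(price[i] for i in keep) / len(keep)
    max_landings = max(landings[i] for i in keep)
    return mean_price, max_landings

# hypothetical stock first assessed in 1996: only 1990-1994 count
years = [1990, 1992, 1994, 1996, 1998]
price = [1.0, 1.2, 1.4, 2.0, 2.2]
landings = [100, 250, 180, 90, 60]
print(pre_assessment_covariates(years, price, landings, assess_year=1996))
```

If that matches what the grooming code does, the figure label "Mean ex-vessel price" (pre-assessment) would be the accurate description.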

Phil

@Philipp-Neubauer
Copy link
Owner

Also, Mike - do you have the citations you added in your ref manager? If
so, could you export them as bibtex and copy-paste them into the
FirstAssessment.bib? I can add the refs in, but that would save me having
to get them all individually...

My apologies for not having managed to get involved much with the
discussion - I've been battling all those grooming rules and little
glitches, and lately even the models didn't spit out anything coherent
anymore (got that back to normal, at least).

Phil

@mcmelnychuk
Copy link
Collaborator

Sounds good, and seems we have plenty going on already.

We might have to revise a sentence in the methods; I think I incorrectly
mentioned somewhere that the dependency on price was evaluated annually
rather than it being the mean of the pre-assessment portion. In the
figures, we could label price as "Mean ex-vessel price", which would put
it on par with "Maximum landings".

Mike


@mcmelnychuk
Copy link
Collaborator

pasted.

No apologies at all needed - thanks instead to you for doing all of the
hard work!

Mike


@Philipp-Neubauer
Owner

Have done some editing in the methods to more accurately reflect what we're doing with time-to-assessment and covariates. I've also updated the results to reflect the updated model outputs. Did a little bit of editing in the discussion, mainly in the propensity score paragraph - tried to clarify a couple of sentences, hope it worked...

3 possible points to add:

  1. In the updated model, the interaction of price and landings is
    negative. So it seems it's more a case of price OR landings rather than
    price AND landings. We should probably edit the discussion to reflect this,
    right?
  2. Max length now comes through as significant - a generalisation of the
    charismatic mega-fauna effect beyond conservation to fisheries management?
    Would make for an amusing paragraph, perhaps ;) But more seriously, it
    seems like an argument against the fishing-down-the-food-chain hypothesis
    (at least for US waters) - if large fish are preferentially assessed, they
    are probably also managed better, preventing fishing-down patterns from
    occurring in US waters...
  3. With some more thought, it seems as though the addition of a post-1996
    effect could be good to support our statements about the projections: right
    now we don't really show any evidence that the rate is actually lower - and
    the model suggests increasing rates. So we could either estimate a separate
    tau parameter for all stocks that were un-assessed as of 1996 OR just add
    the effect. Then we can point to that effect as confirmation that rates of
    new assessment are lower more recently.
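To make the first point concrete, here is a tiny numeric sketch of why a negative price-by-landings interaction reads as "price OR landings" (the coefficients below are hypothetical, chosen only for illustration - they are not the fitted values from our model):

```python
# Hypothetical standardized coefficients -- illustrative only, NOT the
# estimates from the time-to-first-assessment model.
b_price, b_landings, b_interaction = 0.8, 0.6, -0.5

def linear_predictor(price_z: float, landings_z: float) -> float:
    """Combined contribution of price, landings, and their interaction."""
    return (b_price * price_z
            + b_landings * landings_z
            + b_interaction * price_z * landings_z)

# Either covariate alone raises the predictor by its full main effect:
print(linear_predictor(1, 0))  # 0.8
print(linear_predictor(0, 1))  # 0.6
# Jointly, the negative interaction dampens the combined effect, so high
# price AND high landings add little beyond either covariate alone:
print(linear_predictor(1, 1))
```

With these made-up values, the joint effect (0.9) is well below the sum of the main effects (1.4) - high price on top of high landings barely changes the predicted assessment rate.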

On Thu, Nov 24, 2016 at 6:48 AM, Michael Melnychuk wrote:

pasted.

No apologies at all needed - thanks instead to you for doing all of the hard work!

Mike

On 2016-11-23 1:31 AM, Philipp Neubauer wrote:

Also, Mike - do you have the citations you added in your ref manager? If so, could you export them as bibtex and copy-paste them into the FirstAssessment.bib? I can add the refs in, but that would save me having to get them all individually...

My apologies for not having managed to get involved much with the discussion - been battling all those grooming rules and little glitches; and lately even the models didn't spit out anything coherent anymore (got that back to normal, at least.)

On Wed, Nov 23, 2016 at 10:25 PM, Philipp Neubauer <neubauer.phil@gmail.com> wrote:

To add to this:

I tried the 'un-assessed post 1996' effect, and it comes out negative. I'm not sure if that alone is worth including, as it doesn't say much about price effects in different decades, etc. Just FYI, right now the mean price and max landings that go into the model are taken before the assessment year for assessed stocks.

On Mon, Nov 21, 2016 at 11:35 AM, Philipp Neubauer <neubauer.phil@gmail.com> wrote:

In theory, one could vary the tau parameter to have independent assessment trends in different periods, BUT the way the model is set up, it has no specific reference to the actual time; i.e., right now, all the model knows is time from first (recorded) landings to first assessment. We could possibly put in an effect for 'un-assessed post 1996' to see if those stocks have a higher probability of being assessed if they were still un-assessed at the time of the fisheries act. I think.

On Mon, Nov 21, 2016 at 11:28 AM, Michael Melnychuk wrote:

Sorry, I forgot about this suggestion in your second paragraph when adding to the Discussion yesterday. I agree that those ideas would be good to add. We could mention in the discussion that perceived abundance/status may be a factor affecting when an assessment is first conducted, but that (obviously) abundance is not actually known before that assessment is done.

I can't remember what we decided, but was there any possibility (or advantage) to allowing model parameters to vary pre- and post-1996? Would that allow us to indirectly get at such questions of whether price matters more earlier on or later on in their exploitation history?

Mike

On 2016-11-17 4:20 PM, Philipp Neubauer wrote:

Hi there;

have just read through the discussion - lots of good points, Jim. I've also added the citations/bib into the manuscript. To make the citations work, in Rstudio you'll need to go to the "Build" menu -> Configure build tools -> choose "Makefile" and select the project directory. From there on, when you want to build after adding writing etc, do "build all" (Shift-Ctrl-B). Outside of Rstudio, open a command window, cd to the first assessment directory and type 'make'....hope this works.


Phil


@James-Thorson
Collaborator Author

James-Thorson commented Nov 23, 2016 via email

@Philipp-Neubauer
Owner

Philipp-Neubauer commented Nov 24, 2016 via email

@Philipp-Neubauer
Owner

As expected, running a model with differing rates gives a lower rate post-1996. Since we lose interpretability of coefficients (the rate enters into the calculation for the acceleration factor), I wouldn't suggest that we use this model for anything other than to discuss post-1996 and future assessment rates. This backs up the model that includes a post-1996 effect, which is negative.

Ultimately, what this means is that our interpretation wasn't quite right; since we control for price, landings, etc., the interpretation is that rates are lower at a given covariate combination. So either there is something about post-1996 stocks that the model doesn't capture, or rates at which new stocks are assessed have declined despite the fisheries act (e.g., because there is a fixed budget that allows for a limited set to be assessed, such that beyond that set, it becomes increasingly unlikely to have new assessments).
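As a sketch of what "a lower rate post-1996" means mechanically (all parameter values below are hypothetical and chosen only to show the direction of the effect - the actual model and estimates live in the manuscript code): in a Weibull time-to-event formulation, a lower assessment rate corresponds to a larger scale parameter, which lowers the hazard of receiving a first assessment at every time since first recorded landings:

```python
import math

def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Weibull hazard h(t) = (k/s) * (t/s)**(k - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Hypothetical values, illustrative only:
shape = 1.2        # >1: hazard of first assessment grows with time unassessed
scale_pre = 20.0   # characteristic years to first assessment, pre-1996 stocks
scale_post = 30.0  # larger scale = lower assessment rate for post-1996 stocks

for t in (5, 15, 30):
    h_pre = weibull_hazard(t, shape, scale_pre)
    h_post = weibull_hazard(t, shape, scale_post)
    print(f"t={t:>2} yr: pre-1996 hazard {h_pre:.4f}, post-1996 hazard {h_post:.4f}")
```

At each horizon the post-1996 hazard is uniformly lower; whether that reflects a genuine slowdown or an uncaptured covariate is exactly the ambiguity discussed here.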

@James-Thorson
Collaborator Author

Phil and Mike,

I've lost track a bit of where things stand. Should we make a timeline to push writing over the finish line? Or are we still waiting on some updated QAQC or analysis?

Please tell me how I can help!

@mcmelnychuk
Collaborator

mcmelnychuk commented Nov 28, 2016 via email

@Philipp-Neubauer
Owner

Philipp-Neubauer commented Nov 28, 2016 via email

@mcmelnychuk
Collaborator

mcmelnychuk commented Nov 28, 2016 via email

@Philipp-Neubauer
Owner

Philipp-Neubauer commented Nov 28, 2016 via email

@mcmelnychuk
Collaborator

mcmelnychuk commented Nov 28, 2016 via email

@Philipp-Neubauer
Owner

Given the thorough QAQC and checking of assessments by @mcmelnychuk and Nicole, are we ready to draw a line in the sand and finalise the MS? I'm happy to help resolve remaining issues (can fill in some ?s in the text etc)...

I will add some tasks to the issues list, hopefully we can tick them off reasonably quickly...though the first task should probably be to QAQC the final dataset.csv for errors and omissions.

Phil

@James-Thorson
Collaborator Author

yeah, let's call this the final one! Definitely done a good and thorough QAQC from my perspective, given our use of both SIS and careful quality checks.

@James-Thorson
Collaborator Author

Sorry, I think I mistook which issue this was... Are we ready to return to writing, or is there more updating to do for the analysis given this final dataset?

@James-Thorson James-Thorson reopened this Dec 8, 2016
@mcmelnychuk
Collaborator

mcmelnychuk commented Dec 8, 2016 via email

@mcmelnychuk
Collaborator

@Philipp-Neubauer
Owner

Philipp-Neubauer commented Dec 8, 2016 via email

@mcmelnychuk
Collaborator

Thank you for double-checking that, and sorry - you're right. The old FishBase extract I used clearly had some errors in it. I've removed the "freshwater" designation for those species from the "exclude" column in "SpeciesCrossReference.csv".

Mike

@mcmelnychuk
Collaborator

mcmelnychuk commented Dec 8, 2016 via email

@James-Thorson
Collaborator Author

James-Thorson commented Dec 8, 2016 via email

@Philipp-Neubauer
Owner

Philipp-Neubauer commented Dec 8, 2016 via email

@mcmelnychuk
Collaborator

mcmelnychuk commented Dec 8, 2016 via email

@mcmelnychuk
Collaborator

Happy new year to you both.
Just checking in on where things are. I haven't pulled this out for nearly a month - is it a good time for me to take a look at now, or is it in the middle of some edits or internal review? I updated the acknowledgments just now, but that's all I've done.

@James-Thorson
Collaborator Author

James-Thorson commented Jan 3, 2017 via email
