Manuscript text #5
Comments
@Philipp-Neubauer I see that you added a commit "draft dataset and results". Do you want me to start adding some text for the results section, or how could I be helpful? |
yes; results and figures should be updated now with the newest data, and... There are still a few stocks that are dropped - these are stocks where I've added some methods text but it's really rough and incomplete. Will aim... Happy for you to add some writing around the results and perhaps put some... Also, you could start a bibliography by exporting citations as bibtex and...
Phil |
Cool, I'm starting to look it over. First response: I get an error in "results.Rnw", code-block starting line 219, which contains: Error in (function (el, elname) : "panel.spacing" is not a valid theme element name. I've updated ggplot2 to version 2.1.0, and am using R 3.3.1. It runs if I comment out line 229, which I have done. Just FYI |
I might have the dev version from hadley/ggplot2 installed. You can put...
Phil |
OK. Next response: I've spent maybe 30 minutes trying to address a question I posed in a previous email. Basically, Fig. 2 doesn't make sense to me because I would think that "landed stocks" should be the set of stocks that were ever previously landed. Given this definition, the number of landed stocks could only increase, but the green line USNE decreases around 1960, prior to any stocks being assessed. So I tried exploring the code for Fig. 2 (lines 152-186). However, I basically can't read the code at all because I don't ever use dplyr. So I have two questions:
|
Yes to question 2; I can fix the code to do that. For question 1: It would be a mission to eliminate dplyr at this stage... I'll aim to push the figure 2 change in a few minutes...
Phil |
Ok, Fig. 2 should now correspond to cumulative number of species landed; ...
Phil |
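The "cumulative number of species landed" fix above can be illustrated with a toy sketch (Python, hypothetical data, not the project's dplyr code) showing why the corrected series can never decrease:

```python
# Sketch of the "cumulative number of stocks ever landed" series discussed
# above: once a stock has appeared in the landings, it stays counted, so the
# line can only increase. The toy data below is illustrative, not project data.

def cumulative_landed(landings_by_year):
    """landings_by_year: list of (year, set_of_stocks_landed_that_year),
    sorted by year. Returns [(year, cumulative_count), ...]."""
    seen = set()
    out = []
    for year, stocks in landings_by_year:
        seen |= stocks          # union: stocks ever landed up to this year
        out.append((year, len(seen)))
    return out

series = cumulative_landed([
    (1950, {"cod", "haddock"}),
    (1951, {"cod"}),            # no new stocks: count stays flat, never drops
    (1952, {"pollock"}),
])
print(series)  # [(1950, 2), (1951, 2), (1952, 3)]
```

Under this definition the dip that Jim flagged around 1960 cannot occur, which is what made the original USNE line look wrong.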
OK, thanks Phil! Makes more sense to me, and also lines up closer to the proportions I was getting when I tried to replicate the plot (presumably I missed some small restrictions on which stocks to include -- I was getting the same rank order and qualitative picture). Another request: What about expanding Fig. 8 to include the posterior distribution for each Class and Order? I think an interesting result is which taxa (e.g., Elasmobranchs, Clupeids) have significantly higher or lower assessment rates. |
Yes, totally. Just pushed that.
Phil |
Did class only though, could do the same as in figure 4 for order instead.
Phil |
Hmm. I think we'll want to make statements about:
I think we'll want to communicate these through some combination of both numeric values (to reference in the text), tables, and figures. But I don't have an immediate opinion about the best combination of figures/tables/in-text. Any thoughts? jim |
One way to do it could be to list 1 & 3 below in tables, and 2 visually.
Mike
|
Yes - agreed. Actually, if we can manage a figure that includes posterior... Pretty busy getting stuff ready for the turtle workshop, but will give the...
Phil |
Hi y'all; have added the plot discussed above, as well as a draft of the projection... I've done some writing for methods, need to go through to see that it all... Let me know what you guys think about the new figures... happy to iterate on...
Phil |
Phil, I think it all looks really cool! Definitely plenty to write about here, and I think the projection plot is a good "highline" point with which to end the results. Plus the price, landings, and rockfish-plus-dusky-sharks as being faster assessed makes sense, and are worthwhile mid-results section. And I got it compiling after installing the development version of ggplot2 via... One more request though: Are you willing to convert the projection plot to a "finite-sample" projection? Obviously in 2016 we know how many were assessed (well, it's approximate if I understand the model right, because the censoring year might be earlier than 2016). This could be done by removing stocks with an assessment from the set, setting the other stocks as currently unassessed in their censored-year, tracking their probability of having prior assessment during the forecast period, and then recombining the model-based subset with the withheld subset (where the latter have a 100% probability of previous assessment). Does this sound plausible and reasonable? The point is that the forecast intervals in the finite-sample version will be smaller, particularly in the earlier years, so it'll be easier to say something specific about the different regions. |
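Jim's finite-sample idea above can be sketched as follows (a Python sketch under my reading of the proposal; the actual analysis is in R, and the function name and parameter values here are invented for illustration):

```python
import math

# Sketch of the "finite-sample" projection proposed above: stocks already
# assessed by the censoring year are fixed at probability 1, and only the
# currently-unassessed stocks get the model-based Weibull probability
# 1 - exp(-lambda * t^tau). Parameter values below are made up.

def p_assessed_by(t, lam, tau):
    """Weibull probability that a stock is assessed within t years."""
    return 1.0 - math.exp(-lam * t ** tau)

def finite_sample_proportion(n_assessed, unassessed, t):
    """unassessed: list of (lam, tau) per still-unassessed stock.
    Returns the expected proportion assessed t years into the forecast."""
    n_total = n_assessed + len(unassessed)
    expected = n_assessed + sum(p_assessed_by(t, lam, tau)
                                for lam, tau in unassessed)
    return expected / n_total

# 8 of 10 stocks already assessed; 2 unassessed with illustrative parameters
print(finite_sample_proportion(8, [(0.05, 1.2), (0.02, 1.2)], t=10))
```

Because 8 of the 10 stocks contribute zero variance, the uncertainty of the combined proportion comes only from the two unassessed stocks, which is exactly why the finite-sample intervals shrink in the early forecast years.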
Yes, definitely, will give that a go on my plane-ride home over the next...
Phil |
Hey Jim; spent some time on this and realised that your description actually sounds... I thought there might be another way of doing the projections that would... In my code, line 471 calculates the probability of assessment for each stock: sapply(seq(lmin,lmin+34),function(t) 1-exp(-l$MCMC*t^tau$MCMC)), where 1-exp(-mu*t^tau) is the probability of assessment up to t, i.e., that... Then, the predicted proportion assessed is p_assessed + ... Hope this makes sense - is this what you described? Definitely open to...
Phil |
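The sapply() call quoted above can be mirrored in Python to show the shape of the computation, one assessment-probability curve per MCMC draw of (lambda, tau). The draw values below are illustrative, not the fitted model's:

```python
import math

# Rough translation of the sapply() call quoted above: for each MCMC draw of
# (lambda, tau), evaluate the Weibull assessment probability
# 1 - exp(-lambda * t^tau) over a 35-year horizon, giving one curve per draw.

def assessment_prob(t, lam, tau):
    return 1.0 - math.exp(-lam * t ** tau)

mcmc_draws = [(0.03, 1.1), (0.05, 1.0), (0.04, 1.3)]  # illustrative draws
lmin = 1  # first forecast-year offset

# rows = draws, columns = years lmin..lmin+34 (the sapply result, transposed)
curves = [[assessment_prob(t, lam, tau) for t in range(lmin, lmin + 35)]
          for lam, tau in mcmc_draws]

# pointwise posterior mean across draws, per year; quantiles across the same
# axis would give the forecast intervals
mean_curve = [sum(col) / len(col) for col in zip(*curves)]
print(round(mean_curve[0], 4), round(mean_curve[-1], 4))
```

Summarising across the draw axis (mean, quantiles) per year is what produces the projection band in the figure.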
Phil, As always, sorry that I can't read the code in this project. A few questions pop up from skimming the model text, and struggling through the code:
Anyway, I would have thought this forecast would have to be done for each stock individually, i.e., to condition on the biological and economic characteristics of that stock, plus its first year of exploitation and its censoring year. Is that what the 467-473 function is doing? To me, it looks more like 467-473 is just conditioning on the unconditional posterior predictive for lambda and tau, without conditioning on the characteristics of each stock...? |
Jim - my apologies for the lack of consistency - I picked up on it in a few... So for the Weibull, the JAGS parametrisation is not the same as the... The linear predictor is for the scale parameter lambda, hopefully that's... For the code, I use a few patterns throughout, so perhaps this summary will...
I'll try to write the projection method up in plain English in the MS. Hope this helps, sorry again for the confusion!
Phil |
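A minimal numeric check of the parametrisation point above (my understanding of JAGS's dweib(v, lambda) versus the shape/scale form used by, e.g., R's dweibull; worth double-checking against the actual model file):

```python
import math

# The JAGS/standard Weibull mismatch mentioned above, as I understand it:
# JAGS dweib(v, lambda) has survival S(t) = exp(-lambda * t^v), while the
# shape/scale form has S(t) = exp(-(t / scale)^shape). The two agree when
# shape = v and scale = lambda^(-1/v).

def cdf_jags(t, v, lam):
    return 1.0 - math.exp(-lam * t ** v)

def cdf_standard(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

v, lam = 1.5, 0.02
scale = lam ** (-1.0 / v)   # conversion between the two parametrisations

for t in (1.0, 5.0, 20.0):
    assert abs(cdf_jags(t, v, lam) - cdf_standard(t, v, scale)) < 1e-12
print("parametrisations agree, scale =", round(scale, 3))
```

This is also why a linear predictor on lambda acts on the scale of the time distribution rather than directly on a mean time-to-assessment.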
Coolcool. So I think we're on the same page. And yes, point 3 was just for the project, I think the rest of the model description is there enough that I can understand and start chipping in clarifications where it seems appropriate. jim |
A quick question that will surely reveal my ignorance - if I don't use... A few unrelated questions/comments about some figures in the most recent...
Mike |
@mcmelnychuk I'm reading through the definition of assessments, and see that it has no criterion that would exclude stock-reduction analyses (e.g., DCAC, DBSRA, CMSY/Catch-MSY, etc.). However, I don't think we included these. Is it fair to expand the definition to clarify that stock assessments needed to be fitted to biological data to estimate population scale (e.g., an index of abundance or compositional data that allows changes in biomass to be inferred)? |
hi Jim, Yes, that's an improvement to the definition, and is consistent with... To double-check I took a quick look at the comments, found a few...
Mike
|
For the .Rnw; you can either click "compile pdf" to compile the pdf yourself from the most recent changes, or you can run the R chunks one by one (e.g., by going to the chunk you want to see, and clicking "Run all chunks above" (ctrl-alt-p for me) in the run menu on the top right of your editor panel) to produce the plots in your R graphics device. If you want to tweak things, this is usually the better way since you don't have to wait for the whole doc to compile every time you make a change. Once you're happy with your edit, you can re-compile... I like the idea of the "assessed proportion of catch" panel for figure two. Will try and get that in tomorrow.
That shouldn't cause trouble as long as the over-all variance for habitat is well informed, I'd say. Perhaps a column for n for each effect level (for all random effects) listed in the appendix table would be good though.
Yep. I'll deal with typos and colors tomorrow, thanks for pointing those out. And agreed that the colors are still sub-optimal. Will try to find a better color scheme to go with. And yes, agreed that @James-Thorson's amendment of the assessment definition seems like the right one to go with. |
Pushed a third panel for figure 2 in 68c70fc; quite impressive how in terms of catch, Alaska has almost 100% coverage! |
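The catch-weighted panel described above amounts to, per year, the share of landings coming from stocks already assessed by that year. A toy Python sketch (illustrative numbers, not project data):

```python
# Sketch of the "assessed proportion of catch" panel: for each year, the
# share of total landings coming from stocks that had been assessed by then.
# A few large assessed stocks can dominate, which is how a region can reach
# near-100% coverage by catch while many small stocks remain unassessed.

def assessed_catch_share(landings, first_assessment_year):
    """landings: {year: {stock: catch}}; first_assessment_year: {stock: year}.
    Returns {year: proportion of catch from already-assessed stocks}."""
    share = {}
    for year, by_stock in sorted(landings.items()):
        total = sum(by_stock.values())
        assessed = sum(c for s, c in by_stock.items()
                       if first_assessment_year.get(s, 10**9) <= year)
        share[year] = assessed / total if total else 0.0
    return share

share = assessed_catch_share(
    {1990: {"pollock": 900, "sculpin": 100},
     2000: {"pollock": 950, "sculpin": 50}},
    {"pollock": 1985},  # sculpin never assessed
)
print(share)  # {1990: 0.9, 2000: 0.95}
```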
Phil, If you're willing to do another finicky change, I think Fig. 8 would look... Also Phil, could you: ...
Jim
|
Have added the above; as well as a very rough abstract. Just a listing of what I thought stood out so far...but I may have missed a fair bit as I haven't really taken a step back to look at it in detail yet... |
In theory, one could vary the tau parameter to have independent assessment trends in different periods BUT the way the model is set up, it has no specific reference to the actual time; i.e., right now, all the model knows is time from first (recorded) landings to first assessment. We could possibly put in an effect for 'un-assessed post 1996' to see if those stocks have a higher probability of being assessed if they were still un-assessed at the time of the fisheries act. I think.
Phil |
To add to this: I tried the 'un-assessed post 1996' effect, and it comes out negative. I'm not sure if that alone is worth including, as it doesn't say much about price effects in different decades, etc. Just FYI right now, mean price and max landings that go into the model are taken before the assessment year for assessed stocks.
Phil |
Also, Mike - do you have the citations you added in your ref manager? If so, could you export them as bibtex and copy-paste them into the FirstAssessment.bib? I can add the refs in, but that would save me having to get them all individually...
My apologies for not having managed to get involved much with the discussion - been battling all those grooming rules and little glitches; and lately even the models didn't spit out anything coherent anymore (got that back to normal, at least.)
Phil |
Sounds good, and seems we have plenty going on already. We might have to revise a sentence in the methods, I think I incorrectly...
Mike
|
...pasted. No apologies at all needed - thanks instead to you for doing all of the hard work!
Mike
|
Have done some editing in the methods to more accurately reflect what we're doing with time-to-assessment, and covariates. I've also updated the results to reflect updated model outputs. Did a little bit of editing in the discussion, mainly in the propensity score paragraph - tried to clarify a couple of sentences, hope it worked...
3 possible points to add:
1. In the updated model, the interaction of price and landings is negative. So it seems it's more a case of price OR landings rather than price AND landings. We should probably edit the discussion to reflect this - right?
2. Max length now comes through as significant - a generalisation of the charismatic mega-fauna effect beyond conservation to fisheries management? Would make for an amusing paragraph, perhaps ;) But more seriously, it seems like an argument against the fishing-down-the-food-chain hypothesis (at least for US waters) - if large fish are preferentially assessed, they are probably also managed better, preventing fishing-down patterns from occurring in US waters...
3. With some more thought, it seems as though the addition of a post-1996 effect could be good to support our statements about the projections: right now we don't really show any evidence that the rate is actually lower - and the model suggests increasing rates. So we could either estimate a separate tau parameter for all stocks that were un-assessed as of 1996 OR just add the effect. Then we can point to that effect as confirmation that rates of new assessment are lower more recently.
Phil |
Phil,
I'm not sure how a regression model like my understanding of ours could use a variable derived from the response as a predictor - it seems like that would violate the statistical assumption that the predictor is fixed exogenously. But if you think it's possible and you have time, then I agree it's worth exploring.
Another way would be to take the finite-population prediction of assessment proportion during retrospective and forecast periods (historical might have a variance of zero if it's fully observed; I forget if species use different censoring years). Then taking the 1st difference and plotting that, it would show by eye if the assessment rate has sped up or slowed down.
On 2016-11-16 8:45 PM, Jim Thorson wrote:
> OK, makes sense. I've gone ahead and added a sentence and references explaining Scorpaenids. I don't have any special knowledge of groundsharks or flatfishes, so those might require a bit more sleuthing if anyone is willing to take the lead?

On Thu, Nov 17, 2016 at 6:28 PM, Michael Melnychuk wrote:
> no problem, I can look into those in the next couple days.

On 2016-11-17 4:20 PM, Philipp Neubauer wrote:
> Hi there; have just read through the discussion - lots of good points, Jim. I've also added the citations/bib into the manuscript. To make the citations work, in Rstudio you'll need to go to the "Build" menu -> Configure build tools -> choose "Makefile" and select the project directory. From there on, when you want to build after adding writing etc, do "build all" (Shift-Ctrl-B). Outside of Rstudio, open a command window, cd to the first assessment directory and type 'make'... hope this works.
>
> I agree that we need a bit more about Rockfish and Groundsharks. I think the interesting angle here is conservation vs economics. We could add something more general: we can only capture the conservation-status driver in the taxonomic component, since it would be hard to define some kind of surrogate for that for stocks without an assessment. This points to a problem in prioritizing assessments: conservation status only really factors in once we have some evidence that things are probably going badly for a stock. Thus, valuable stocks are potentially well managed early on in their exploitation history, whereas small stocks probably only get that level of attention when there is an indication that things are heading for disaster. This could have unforeseen ecological consequences if the importance of such species is high relative to the economic value from fishing (e.g., bycatch of benthic inverts in trawl fisheries).
>
> Also, other bycatch species are probably assessed/managed quantitatively, but won't figure in our DB (turtles, mammals) since they would probably have a risk- rather than a stock assessment...
>
> Happy to add something about this if you guys feel it makes sense...

On Mon, Nov 21, 2016 at 11:28 AM, Michael Melnychuk wrote:
> Sorry, I forgot about this suggestion in your second paragraph when adding to the Discussion yesterday. I agree that those ideas would be good to add. We could mention in the discussion that perceived abundance/status may be a factor affecting when an assessment is first conducted, but that (obviously) abundance is not actually known before that assessment is done.
>
> I can't remember what we decided, but was there any possibility (or advantage) to allowing model parameters to vary pre- and post-1996? Would that allow us to indirectly get at such questions of whether price matters more earlier on or later on in their exploitation history?
|
Jim - I don't think that this is a problem here - we use time to assessment, not any aspect of absolute time (i.e., Julian date), as the response. A short time-to-assessment can mean 1960-1970 or 2001-2011: the response is the same, but the decade is different. In many medical studies, the interest is in effects of policy on public health, and they do just this; either have some temporal aspect as a predictor OR model tau using an auto-regressive formulation or separate categories. Ultimately, they tend to have a LOT more data, and I think there are limits to how far we can push our model with 600-odd data points. But in principle, there shouldn't be an issue...
Phil |
As expected, running a model with differing rates gives a lower rate post-1996. Since we lose interpretability of the coefficients (the rate enters into the calculation for the acceleration factor), I wouldn't suggest using this model for anything other than discussing post-1996 and future assessment rates. This backs up the model that includes a post-1996 effect, which is negative. Ultimately, this means our interpretation wasn't quite right: since we control for price, landings, etc., the interpretation is that rates are lower at a given covariate combination. So either there is something about post-1996 stocks that the model doesn't capture, or the rates at which new stocks are assessed have declined despite the fisheries act (e.g., because a fixed budget allows only a limited set of stocks to be assessed, so that beyond that set it becomes increasingly unlikely to see new assessments). |
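The comparison Phil describes was run in R with a survival model; purely as a sketch of the underlying piecewise-rate idea (made-up stocks and years, and a hypothetical `rate()` helper — not the manuscript's actual analysis), the pre/post-1996 rates can be thought of as maximum-likelihood constant hazards within each period:

```python
# Illustrative only: synthetic time-to-assessment data, NOT the real dataset.
# For a constant hazard, the MLE of the rate within a period is
# (first assessments in the period) / (total stock-years of exposure).

stocks = [  # hypothetical: year landings start, year first assessed (None = never)
    {"start": 1960, "assessed": 1985},
    {"start": 1960, "assessed": 2004},
    {"start": 1975, "assessed": None},   # censored at END
    {"start": 1980, "assessed": 1999},
    {"start": 1960, "assessed": 1992},
]
END, CUT = 2015, 1996  # censoring year and policy cutoff

def rate(lo, hi):
    """Events per stock-year of exposure within the half-open period [lo, hi)."""
    events, exposure = 0, 0.0
    for s in stocks:
        stop = s["assessed"] if s["assessed"] is not None else END
        overlap = min(stop, hi) - max(s["start"], lo)  # exposure inside the period
        if overlap > 0:
            exposure += overlap
        if s["assessed"] is not None and lo <= s["assessed"] < hi:
            events += 1
    return events / exposure

pre_rate, post_rate = rate(1960, CUT), rate(CUT, END)
```

This also makes the cutoff caveat concrete: shifting the start year (1960 here) changes the exposure contributed by early stocks and hence the estimated pre-period rate.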
Phil and Mike, I've lost track a bit of where things stand. Should we make a timeline to push writing over the finish line? Or are we still waiting on some updated QAQC or analysis? Please tell me how I can help! |
hi Jim and Phil,
Our assistant Nicole has found several US stock assessments that were
not originally on our list. She is still going through the list of
species:state landing entities and searching for an assessment for each
entity for which we don't already have an assessment linked. She was
delayed in this for a couple weeks having to work on a different project
for Ray, but she's been on this full time for a week or so and will
likely finish by tomorrow.
Because she's found a few others, the results will change, but probably
only slightly. I'll update the input files as soon as she is done and
post them. Until then, we may want to hold off on adding too much more
taxon-specific detail to the results & discussion, in case the taxonomic
effects change somewhat.
Mike
|
Sounds good, Mike. I'll re-run the grooming and model once I see the files
updated in the dropbox.
Any comment on the post-1996 effect? Is this interesting and do we want to
keep this in? See my last comment in this issue...
|
I took a quick look at the species:state landings database to see if the
rate of when landings time series were first recorded may have something
to do with the post-1996 effect observed. I predicted that with the act
in 1996, there would be a big increase in the number of stocks with
recorded landings (which are a precursor to whether they may be assessed
some time after that). What I found was actually the opposite: there was
a buildup in the number of species:state entities with landings data
first being recorded from about 1970-1995, but then a steep decline in
the number of new entities with landings first recorded after 1996.
Year of first recorded landings data and median landings of the first
recorded year for species:state entities:
year range    count   median (t)
<=1950         1052   40.9
1951-1955       234    0.85
1956-1960       120    1.3
1961-1965       113    4.7
1966-1970        66    4.2
1971-1975       123    3.5
1976-1980       267    2.8
1981-1985       325    2.3
1986-1990       368    1.45
1991-1995       375    0.9
1996-2000       200    0.6
2001-2005       210    1.05
2006-2010       170    1.1
2011-2015        68    0.55
This may reflect saturation of species available to be recorded. The
requirements for building up comprehensive records of landings data may
have been in place well before the act in 1996, so that post-1996 there
just weren't that many species left for which to record landings for the
first time. I think all this means is that the drop in assessment rate
post-1996 cannot be explained by a big increase in the stocks
available-but-not-yet-assessed.
Mike
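A summary like the table above (count and median first-year landings per 5-year bin) is straightforward to compute from the landings database; a stdlib sketch, with made-up records purely for illustration:

```python
from collections import defaultdict
from statistics import median

# Hypothetical (first_recorded_year, landings_in_first_year_t) pairs,
# standing in for the real species:state landings database.
records = [(1948, 52.0), (1953, 0.7), (1953, 1.0), (1997, 0.4), (1998, 0.8), (2003, 1.1)]

def bin_label(year):
    """5-year bins matching the table: <=1950, 1951-1955, 1956-1960, ..."""
    if year <= 1950:
        return "<=1950"
    lo = 1951 + 5 * ((year - 1951) // 5)
    return f"{lo}-{lo + 4}"

groups = defaultdict(list)
for year, landings in records:
    groups[bin_label(year)].append(landings)

# bin label -> (count of entities, median landings in first recorded year)
summary = {b: (len(v), median(v)) for b, v in groups.items()}
```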
|
Cool - the effect I coded was "assessment year > 1996", so basically,
stocks could have been first recorded pre- or post-1996, but their "event"
is post-1996. So essentially this boils down to a comparison of
time-to-assessment of stocks assessed pre- and post-1996. The big caveat
here is that I took 1960 as the first year, because there were no
assessments prior to that. So the effect could well be an artefact of the
cutoff, since that limits the time to assessment more for stocks assessed
pre-1996 than for stocks assessed post-1996... so perhaps too flawed to
consider?
|
It might not make that big of a difference, but a different cutoff could
be 1950 instead of 1960, aligning with the availability of landings
data. That would give an extra 10 years for time-to-assessment for the
pre-1996 stocks. (But this might not make sense; you have your head
wrapped around the implementation much better than I do.)
Mike
|
Given the thorough QAQC and checking of assessments by @mcmelnychuk and Nicole, are we ready to draw a line in the sand and finalise the MS? I'm happy to help resolve remaining issues (can fill in some ?s in the text etc)... I will add some tasks to the issues list, hopefully we can tick them off reasonably quickly...though the first task should probably be to QAQC the final dataset.csv for errors and omissions. Phil |
yeah, let's call this the final one! Definitely done a good and thorough QAQC from my perspective, given our use of both SIS and careful quality checks. |
sorry, I think I mistook which issue this was... Are we ready to return to writing, or is there more updating to do for the analysis given this final dataset? |
A couple comparisons today:
1) I've gone through the dataset.csv and dataset_missed.csv files and
compared them to the initial dataset. All species, assessment years, and
habitats match up. However, there are two stocks (among the just-added
ones) that should have their region manually assigned as USEC-NE instead
of USEC-SE. It's the North Carolina issue again. They are:
USEC smooth dogfish shark
USNE midAtl red drum
2) I took an older FishBase dataset and linked by species to the
"migration" variable, for the non-assessed stocks. Three stocks (2
species) came out that I didn't previously identify as "freshwater".
I've just added these to the "exclude" column in
"SpeciesCrossReference.csv" in the Dropbox folder. They are:
USEC-NE SNAPPER, DOG Lutjanus jocu
USEC-SE PINFISH Lagodon rhomboides
USEC-SE SNAPPER, DOG Lutjanus jocu
In this latter comparison, there were still ~200 stocks that didn't have
a corresponding FishBase entry (mostly invertebrates, or else
alternative Latin names). There is a chance that some of these are
freshwater species. Do either of you have a SeaLifeBase dataset on hand,
and could link some kind of marine/brackish/freshwater designation to
Latin names, to check if there are other freshwater species still on the
list (attached)?
Mike
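The FishBase cross-check above is essentially a left join of the stock list against a habitat lookup keyed on Latin name; a minimal sketch, where every name and the `habitat` table are hypothetical stand-ins (not the real FishBase extract):

```python
# Hypothetical FishBase-style habitat lookup, keyed by Latin name.
habitat = {
    "Lutjanus jocu": "marine",
    "Lagodon rhomboides": "marine",
    "Ictalurus punctatus": "freshwater",
}

# Hypothetical non-assessed stocks: (region, common name, Latin name).
stocks = [
    ("USEC-NE", "SNAPPER, DOG", "Lutjanus jocu"),
    ("USEC-SE", "PINFISH", "Lagodon rhomboides"),
    ("USSE", "CATFISH, CHANNEL", "Ictalurus punctatus"),
    ("USWC", "SHRIMP, UNSPECIFIED", "Pandalus sp."),  # no lookup match
]

# Candidates for the "exclude" column, and stocks with no FishBase match
# (which would still need a manual or SeaLifeBase check).
flagged = [s for s in stocks if habitat.get(s[2]) == "freshwater"]
unmatched = [s for s in stocks if s[2] not in habitat]
```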
|
Hi Mike - I can fix the region issue;
For the freshwater question: both of those species are marine fish with no
diadromous behaviour as far as I can find. Dog snapper use estuaries as
nurseries, so they are occasionally found in freshwater tributaries, but not
because of migratory behaviour...
Jim; yes I think we can start writing again - I fixed a few missing values
in the text already, would be good to add in some missing references and
final edits to have a full V1 manuscript.
Phil
|
Thank you for double-checking that, and sorry - you're right. The old FishBase extract I used clearly had some errors in it. I've removed the "freshwater" designation for those species from the "exclude" column in "SpeciesCrossReference.csv". Mike |
Phil and Mike,
I've just committed some edits, and added some references in-text and
citations to the "FirstAssessment.bib" file. It's not really compiling
right on my machine, and the warning isn't useful for debugging, so I can't
really confirm whether the references are working or not.
At this point, I think it's probably important for a single person to
edit extensively to get the tone right (currently, it reads poorly,
perhaps because there were "too many cooks in the kitchen" for proper editing).
Phil -- would you be willing to do a last sweep for edits? I would offer,
but I can't get RStudio to compile it right, so I probably can't get it
very polished.
And if you're willing, just tell me when you think it's in a good place and
I'll start my internal review -- I'm planning to ask Rick Methot
given his ongoing interest in the topic.
cheers,
jim
|
Great, thanks Jim.
yes, happy to go through for a final, thorough edit. I'll try and do so
early on Mon morning, so you should have it back on Mon your time.
cheers
|
Sounds good, thanks, guys. I won't be able to give this a careful look
until after the 18th - I'm swamped getting ready for a NCEAS workshop
next week.
I will try to put together an acknowledgments paragraph before then
(especially since Rick Methot will be in it!)
Mike
|
happy new year to you both. Just checking in on where things are. I haven't pulled this out for nearly a month - is it a good time for me to take a look at it now, or is it in the middle of some edits or internal review? I updated the acknowledgments just now, but that's all I've done. |
It's on Rick's desk for internal review -- I'll email him now to check up.
|
I just added a few more paragraphs to the introduction (Rnw file), but still can't compile to the PDF (see other issue for next bug).
This intro is basically a standard 5-paragraph style intro giving:
I'm happy to heavily modify it, but I think it's a decent place to start.