
MVP #5

Closed
sballesteros opened this issue Sep 17, 2019 · 8 comments

@sballesteros
Contributor

sballesteros commented Sep 17, 2019

Update: a first round of mockups is available in the designs directory (be sure to download the PDF to view all 3 screens, as the GitHub preview doesn't always render the first one).

Note: in what follows, all of the authentication / user account / profile creation & display is handled by PREreview.

In terms of release strategy, the main idea would be to release this work under a rapid.prereview.org subdomain behind an experimental/beta flag to get user feedback ASAP, iterate on that feedback, and then integrate the good parts into PREreview.

Main page

The goal of the main page is to establish Rapid PREreview as the place where the outbreak science community comes to request, get and provide rapid feedback on their preprints.

The page is centered around:

  • a searchable list of preprints for which users have provided or expressed a desire to get feedback through Rapid PREreviews
  • clear calls to action to:
    • create a new Rapid PREreview
    • get feedback on existing preprint content (request for Rapid PREreviews)
    • install the web extension to do the same thing even more easily (see section further down)

List

To start as simple as possible, the first implementation of the list will sort items by the date of the most recently created Rapid PREreview.

Later we could refine that with a score better suited to provide visibility to preprints with a high demand for feedback. Tentative definition:

score = (v+r) / (t+1)^g

where,
v = number of votes of an item
r = number of Rapid PREreviews of an item
t = time since request submission or first Rapid PREreview (unit (hours, days, weeks) to be determined)
g = tuning factor (to be determined)
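
For concreteness, here is a minimal sketch of that tentative score in TypeScript. The variable names mirror the definition above; the time unit and the gravity value are placeholders, since both are still to be determined.

```ts
// Tentative ranking score: score = (v + r) / (t + 1)^g
// v = votes (requests), r = Rapid PREreviews, t = time since the request or
// first review (unit TBD), g = tuning ("gravity") factor (TBD).
function score(votes: number, reviews: number, elapsed: number, gravity = 1.8): number {
  return (votes + reviews) / Math.pow(elapsed + 1, gravity);
}

// Example: 3 requests and 2 reviews, 12 time units since the first activity.
console.log(score(3, 2, 12)); // ≈ 0.05 with the placeholder gravity of 1.8
```

The (t + 1)^g denominator simply decays older items, so tuning g trades freshness against accumulated demand.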

Displayed data / controls for each item (a rough sketch of this shape follows the list):

  • preprint title
  • preprint server
  • preprint DOI
  • number of Rapid PREreviews (and/or reviewer name (or anonymous alias) with link to their PREreview profile)
  • number of upvotes (request for Rapid PREreviews)
  • visualization of the aggregated data collected from the structured reviews
  • date of first & last review made
  • call to action to add a Rapid PREreview
  • call to action to request a Rapid PREreview or express the desire to see more reviews (upvote)
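
As a rough illustration only, the per-item payload could look like the TypeScript shape below; the field names are placeholders and not the actual data model in src/db.

```ts
// Illustrative shape of one list item; names are placeholders, not the real schema.
interface PreprintListItem {
  title: string;             // preprint title
  server: string;            // preprint server (e.g. bioRxiv)
  doi: string;               // preprint DOI
  nReviews: number;          // number of Rapid PREreviews
  reviewers: string[];       // reviewer names or anonymous aliases (linking to PREreview profiles)
  nRequests: number;         // number of upvotes (requests for Rapid PREreviews)
  firstReviewDate?: string;  // ISO date of the first review, if any
  lastReviewDate?: string;   // ISO date of the last review, if any
}
```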

Search options (an illustrative query shape follows the list):

  • Facets (filter by):
    • preprints with requests for Rapid PREreviews (votes)
    • preprints with Rapid PREreviews
    • Preprint server
    • Dates of last posted review (last week, last month, etc.)
    • Structured data collected by the Rapid PREreview creation form (see Rapid PREreview form #6 for the list of facets)
  • Full text search:
    • Author name / username / anonymous alias of the Rapid PREreview creator
    • Rapid PREreview textual content (if any)
    • preprint title
  • Other indexes
    • preprint DOI
    • preprint URL
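
Purely as an illustration of how these options could combine into one query (the names below are invented for this sketch, not the actual search API):

```ts
// Illustrative search input combining facets, full-text search, and exact-match
// indexes. All field names are placeholders. The structured-data facets from #6
// would be added on top of this.
interface SearchParams {
  // Facets (filter by)
  hasRequests?: boolean;                     // preprints with requests for Rapid PREreviews (votes)
  hasReviews?: boolean;                      // preprints with Rapid PREreviews
  server?: string;                           // preprint server
  reviewedSince?: 'week' | 'month' | 'year'; // date of last posted review
  // Full-text search (reviewer name / alias, review text, preprint title)
  q?: string;
  // Other indexes (exact match)
  doi?: string;
  url?: string;
}
```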

When users perform a search, the end of the list will reiterate the calls to action to start a new Rapid PREreview or request one, in case the desired content wasn't found.

Users opting to view details about an item are taken to the Rapid PREreview display page (see section further down).

Note: this list (and the associated search indexes) is only generated from the data collected during the Rapid PREreview creation process, to minimize complexity and avoid having to maintain a sync engine / index of all the different preprint servers' content. However, PREreview already provides and maintains an index of all the different preprint servers, so we could merge with PREreview at a later stage. In the same way, if the maintenance of that index proves difficult, PREreview could adopt the simpler approach described here.

Create new Rapid PREreview call to action

Users clicking on the call to action are asked to provide a DOI (or URL) of the preprint to review. At that point we try to get as much metadata as we can from that information (see #9) and transition the user to the Rapid PREreview creation and display page (see section further down). As a fallback, if we can't get metadata about the preprint, the user will be asked to enter the DOI and title manually.

(If the user is not logged in, they are sent through the PREreview login workflow and redirected back to the page on login success.)
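
A minimal sketch of that "DOI in, metadata out, manual entry as fallback" flow, assuming CrossRef as one possible metadata source (the actual resolution strategy is the subject of #9):

```ts
// Sketch only: resolve a DOI to basic preprint metadata, falling back to
// manual entry when nothing can be resolved. CrossRef is an assumption here;
// #9 tracks the real metadata resolution work.
async function fetchPreprintMetadata(doi: string): Promise<{ doi: string; title: string } | null> {
  try {
    const res = await fetch(`https://api.crossref.org/works/${encodeURIComponent(doi)}`);
    if (!res.ok) return null; // unknown DOI → ask the user for DOI and title manually
    const body: any = await res.json();
    return { doi: body.message.DOI, title: body.message.title[0] };
  } catch {
    return null; // network error → same manual fallback
  }
}
```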

Request for Rapid PREreviews call to action

Logged in users can:

  • add entries by providing a DOI (or URL) using the same workflow as described above
  • upvote existing entries (see score definition above)

Install web extension call to action

(see section further down for description)

Rapid PREreview creation and display page

This page is designed to work as a "fallback" in cases where users haven't installed, cannot install, or don't want to install the web extension (see the dedicated section below). It is, however, fully functional (and styled).

The rendering logic depends on the Content Security Policy (CSP) set by the preprint server hosting the content to review.
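
A hedged sketch of one way that check could work, assuming the relevant question is whether the preprint page can be embedded (e.g. in an iframe) next to the review UI; this would have to run server-side, since a browser cannot read another origin's response headers:

```ts
// Sketch (server-side): decide whether a preprint page can be framed next to
// the review UI. The embedding assumption and the conservative handling of
// frame-ancestors are both simplifications, not the project's actual logic.
async function canEmbedPreprint(preprintUrl: string): Promise<boolean> {
  const res = await fetch(preprintUrl, { method: 'HEAD' });
  const xfo = (res.headers.get('x-frame-options') || '').toLowerCase();
  const csp = (res.headers.get('content-security-policy') || '').toLowerCase();
  if (xfo.includes('deny') || xfo.includes('sameorigin')) return false;
  if (csp.includes('frame-ancestors')) return false; // treat any directive as blocking, conservatively
  return true;
}
```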

The list of existing Rapid PREreviews will be complemented with a summary view (an interactive data visualization) allowing the user to quickly visualize aggregated results.

In both cases, a (permanently dismissible) call to action to install the Rapid PREreview browser extension is displayed.

When the user is not logged in (or is logged in but has already posted a Rapid PREreview), the UI to create new Rapid PREreviews is not displayed (a call to log in is added when the user is not logged in).

Note: this page is relatively similar to the one provided by PREreview, with the main difference that the UI is designed to also work in an extension context where we do not have control over the host page. It will be relatively easy to backport the structured review functionality to PREreview (allowing users to select the Rapid PREreview template from PREreview) or to modify the PREreview page to use a similar overlay UI and leverage the web extension described below.

Web extension

Note: development of the web extension will only start after the other parts of the MVP have been completed.

The goal of the web extension is to bring the features of Rapid PREreview to where and when users need them (thereby reducing the effort needed to write or request Rapid PREreviews). The extension is designed so that when users visit a preprint, they are informed (in a non-intrusive way) that they can write a Rapid PREreview immediately (and without a context switch) or ask for the preprint to be reviewed (see upvotes and scores above).

Popup icon

The popup icon is part of the browser chrome and lives next to the address bar, so it is visible at all times.

The icon is "highlighted" (different color / icon, etc.) when (a sketch of the badge wiring follows this list):

  • user is on a preprint content page that has been reviewed by members of the Rapid PREreview community (in this case the icon is complemented by a badge indicating the number of Rapid PREreviews)
  • user is on a preprint content page that could be reviewed with a Rapid PREreview
  • user is on a preprint content page that is the object of a "request for Rapid PREreview" (in this case the icon is complemented by a badge indicating the number of requests (upvotes))
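
A hedged sketch of how those states could be wired to the browser badge, using the Manifest V2 browserAction API (current at the time); the colors and the shape of the state object are assumptions:

```ts
declare const chrome: any; // provided by the browser; install @types/chrome for real typings

// Sketch: show a count badge and a highlight color on the popup icon for the
// active tab. The state object is hypothetical; it would come from matching
// the tab URL against known preprints, reviews, and requests.
interface TabState {
  isPreprint: boolean; // the page looks like preprint content
  nReviews: number;    // Rapid PREreviews for this preprint
  nRequests: number;   // requests (upvotes) for this preprint
}

function updateBadge(tabId: number, state: TabState): void {
  const count = state.nReviews + state.nRequests;
  chrome.browserAction.setBadgeText({ tabId, text: count > 0 ? String(count) : '' });
  chrome.browserAction.setBadgeBackgroundColor({
    tabId,
    color: state.isPreprint ? '#ff3333' : '#777777', // placeholder colors
  });
}
```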

Popup UI

Clicking on the popup icon opens a popup window/menu (located right below the popup icon) containing (when relevant):

  • An invitation for the user to start writing a Rapid PREreview for the currently visited page by injecting new overlay UI into the page (see the content script section below)
  • An invitation for the user to request a Rapid PREreview for the currently visited page
  • The list of preprints for which users have requested Rapid PREreviews sorted by score (see definition above)
  • A feedback section to improve the quality of the extension and allow users to easily:
    • report URLs where the popup icon should have activated but didn't
    • report URLs where the popup icon should not have activated but did

Content script

Features injected by the extension into the webpage currently visited by the user.

The content script will share its code with the Rapid PREreview creation and display page (see the section above) and inject 2 UI elements (working as overlays) into the visited page (a sketch of the injection follows):

Rapid PREreview editor

Rapid PREreview creation form displayed in a shell

Existing Rapid PREreviews panel

Panel (or shell) containing a list of existing Rapid PREreviews for the visited page content
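
A hedged sketch of the injection side of this; the element id and the Shadow DOM isolation are assumptions, not the project's actual implementation, but they illustrate how an overlay can be added to a host page we don't control:

```ts
// Sketch of a content script adding an overlay shell to the visited page.
// Shadow DOM keeps the overlay's styles isolated from the host page's CSS.
function injectShell(): ShadowRoot {
  const host = document.createElement('div');
  host.id = 'rapid-prereview-shell'; // placeholder id
  host.style.cssText = 'position: fixed; bottom: 0; right: 0; z-index: 2147483647;';
  document.body.appendChild(host);
  return host.attachShadow({ mode: 'open' });
}

// Both overlays (the editor and the existing-reviews panel) would mount inside this root.
const root = injectShell();
root.innerHTML = '<div>Rapid PREreview editor / existing reviews panel mount here</div>';
```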

Note: the web extension could easily be generalized to PREreview if proven useful (and we will track usage / adoption data).

See also:
#1, #3, #4, #6



@dasaderi
Member

dasaderi commented Oct 2, 2019

Great MVP summary, @sballesteros. Thank you!

A couple of points:

  • Domain name: the domain agreed upon by the PREreview and Outbreak Science teams is outbreaksci.prereview.org. This would be a subdomain of the already owned prereview.org.

  • Displayed data / controls for each item: I don't think we want to incorporate upvotes for reviews at this point, but would only like to have a way for preprint authors to solicit feedback. It would be helpful if that solicitation appeared in the search with a visible badge, and if users could select for them as you select for issues with a specific tag on GitHub. The workflow could be that if you click on the badge it will show you the results of the search that have the solicitation badge. Open to other ideas for this of course. @majohansson thoughts?

  • User's interaction with the rapid PREreviews: with PREreview v2 we have an integration with Plaudit.pub that allows users with ORCID iDs to applaud a review (only once; if you click again the applause is removed). Do we want to have that also for Rapid? I think it's a nice way to give quick feedback to reviewers. Another thing we have is a comment box for other users to comment on the review, for example the authors. Do we want that? I would perhaps lean against this last one as we are trying to get concise content. @majohansson any thoughts about that?

@majohansson
Collaborator

  • Regarding controls for items, I agree that requesting a review (as an author) or recommending something for review (as a reviewer) are valuable actions to include at this point. They are also different things, so being able to differentiate those would be nice. Even just in searchable metadata if not in the landing display. I also feel like upvoting reviews can wait.

  • Plaudit integration: I am not sure we need that right now. It seems to me another way to do the above (i.e. upvote). Although I do like this option better than having reviews of the reviews. I think reviews of the reviews is too meta for rapid reviews. @dasaderi makes good points about feedback, so maybe if plaudit integration is easy that would be a good simple option.

  • General: Thinking about how all these components (here and in other threads) will be pulled together. I like that there are lots of components that are potentially derived from reviews yet are still very easy to do. I think that for the landing page to deal with those elegantly, we will have to be careful about prioritizing. The priority components should be large, general, and obvious. Meanwhile, I do think we can show some of the second- or third-tier components, possibly with badges or a small set of colors indicating certain facets or something like that. Those could be so small that they would not be intrusive. Though they would almost certainly lack intuitive meaning, they could provide a very useful screening tool for more experienced users.

@sballesteros
Contributor Author

Just to clarify: so far we were considering the upvotes to be the same thing as a request for reviews (upvoting === requesting a review).

@majohansson
Collaborator

Got it. Apologies for added confusion. Thinking through... how do you turn that down? If there are 10 review requests and then you get 5 reviews that are in agreement, do you turn it off? What's the threshold? In an outbreak that might have different meaning than in a preparedness phase. "Requests/recommendations for review ever" and "Requests/recommendations for review in the last 30 days"? Just thinking on this a little.

@sballesteros
Contributor Author

We were thinking of not turning it off but just adding up the reviews and requests and treating the sum as a proxy for a sort of "activity" / "hotness" of a preprint. I just completed the "boring" part of the data model (see https://github.com/PREreview/rapid-prereview/tree/master/src/db and the associated tests in the test/ directory) and will work on the search & score tomorrow. I will be sure to ping you on Slack when I reach decision points with respect to the score.

@majohansson
Collaborator

Right, that makes more sense now too!

@sballesteros
Contributor Author

Otherwise Plaudit.pub integration should be super easy, so just open an issue if you want it and we can get to it once we have a working prototype.

@sballesteros
Contributor Author

🎉
