Before marketing the museum display, we need to make sure that it is reasonably "sandboxed": there should be no way to navigate to a page or popup that contains an external link. Perhaps this could be checked in an automated manner, using some sort of web crawler? This includes some of the internal OneZoom website pages.
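The automated check could be as simple as fetching each internal page and scanning it for links that leave the display's own host. A minimal sketch of the scanning step (the function name and the "flag `mailto:` too" rule are my assumptions, not an existing implementation):

```javascript
// Hypothetical helper for a sandbox-audit crawler: return every href in
// an HTML string that would navigate off the allowed host. A crawler
// would fetch each internal page and fail the audit if this returns
// anything. Regex-based extraction is a simplification; a real check
// should parse the DOM and also look at BASE, AREA, form actions, etc.
function findExternalLinks(html, allowedHost) {
  const hrefs = [...html.matchAll(/href\s*=\s*["']([^"']+)["']/gi)]
    .map(m => m[1]);
  return hrefs.filter(href => {
    if (href.startsWith('#')) return false;       // in-page anchor: fine
    if (href.startsWith('mailto:')) return true;  // leaves the display
    try {
      // Relative URLs resolve against the allowed host, so they pass.
      const url = new URL(href, `http://${allowedHost}`);
      return url.host !== allowedHost;
    } catch (e) {
      return true; // unparseable href: treat as suspect
    }
  });
}
```

Anything this returns for any reachable page would mean the sandboxing has a hole.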
At the moment this is done by using a GET request with `nolinks=1` set, which:
(a) redefines the A handler in internal web2py pages, where appropriate (e.g. in about.load, sponsor_node_price.load, etc.)
(b) flags the Wikipedia-loading code (which fetches Wikipedia pages via their REST API) to remove tags that link out to external websites, while keeping internal links within a page (see the function `sanitise_links()` in treeLayout.html: these include BASE, A, and tags with href=xxx, such as AREA - question: are any more needed?)
(c) disables the right-click context menu, so that we can't e.g. visit the source of an image by right-clicking.
(d) Others??? List here.
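To make point (b) concrete, here is a rough sketch of the kind of per-tag decision `sanitise_links()` has to make. The function shape and rules here are illustrative (not copied from treeLayout.html): drop BASE entirely, and strip the href from A, AREA, or anything else that links off the page, while keeping in-page fragment links:

```javascript
// Illustrative per-tag sanitisation rule, NOT the actual sanitise_links()
// implementation. Given a tag name and its attributes, return:
//   null                  -> drop the tag entirely (e.g. BASE)
//   { tag, attrs }        -> keep the tag, possibly with href removed
// Fragment links ("#...") stay within the page, so they are kept.
function sanitiseTag(tagName, attrs) {
  const tag = tagName.toUpperCase();
  if (tag === 'BASE') return null;                 // rewrites all URLs: drop
  if ('href' in attrs && !attrs.href.startsWith('#')) {
    const { href, ...kept } = attrs;               // drop only the href
    return { tag, attrs: kept };
  }
  return { tag, attrs };                           // unchanged
}
```

The open question in (b) then becomes: which other attributes besides href (e.g. `formaction`, `xlink:href` in SVG) need the same treatment?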
In addition, we should:
(a) describe the pros / cons to potential users of the museum display. E.g. all the species etc. Wikipedia pages are available in multiple languages, and we have gone some way to mitigating abuse by ensuring that users can't just display any web page on the screen. However, we can't guarantee the appropriateness of the Wikipedia pages (NB: we should make a museum display with WP turned off - this is trivial), and we can't stop someone editing a WP page to include inappropriate material and then navigating to that page on the display (this seems unlikely, though). Trivial hacks of this sort could be partially mitigated by looking at the revisions of a page and not using revisions from within e.g. the past hour, but this might be overkill for us.
(b) document the methods used somewhere, so they can be checked over. E.g. in order to avoid using iframes, the Wikipedia linkouts have to do fancy stuff like injecting the WP CSS into the existing page, which might be considered rather fragile. Another example is using HTML definitions like `href="mailto:mail@onezoom.org" onclick="if (typeof sponsor_page_link == 'function') {return sponsor_page_link(this);}"` so that we fall back to a mailto link by default, but use the function sponsor_page_link if it is available - this function can be redefined to pop up a context box pointing out that mailto links are not valid on the museum display (is this a sensible hack?).
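The revision-age mitigation mentioned in (a) could look something like the following. This is only a sketch of the selection logic (parameter names and the revision-object shape are assumptions; timestamps are assumed to be ISO 8601 strings as Wikipedia's APIs report them):

```javascript
// Hypothetical "ignore very recent revisions" selector: given a list of
// revisions (each with an ISO 8601 `timestamp`), pick the newest one that
// is at least `minAgeMs` old at time `now`, so a just-vandalised revision
// is never the one shown on the display.
function pickStableRevision(revisions, now, minAgeMs) {
  const oldEnough = revisions
    .filter(r => now - Date.parse(r.timestamp) >= minAgeMs)
    .sort((a, b) => Date.parse(b.timestamp) - Date.parse(a.timestamp));
  return oldEnough.length ? oldEnough[0] : null; // null: nothing old enough
}
```

Whether the extra API round-trip per page view is worth it is exactly the "might be overkill" question above.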
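For the mailto fallback in (b), the museum-display override might be as simple as this (a sketch only: the real `sponsor_page_link` hook exists per the onclick above, but this body, and the idea of recording a message on the element, are mine):

```javascript
// Sketch of a museum-display override for the sponsor_page_link() hook
// referenced by the onclick handler. Returning false from an onclick
// handler cancels the default action (here, the mailto: navigation),
// and an in-page message can be shown instead.
function sponsor_page_link(anchor) {
  // The real display would pop up a context box; for illustration we
  // just record the message on the element that was clicked.
  anchor.lastMessage = 'Email links are disabled on this display.';
  return false; // cancel the default mailto: action
}
```

If the function is left undefined (the normal website case), `typeof sponsor_page_link == 'function'` is false and the plain mailto link works as usual, which is what makes the pattern a safe fallback.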