making links clickable #3
Conversation
@@ -112,6 +112,27 @@ RvH.common.Util = {
    },

    /**
     * If there is no regex in the link, make it clickable
     */
    parseAsLink: function(text, host) {
I think this could be simplified using regular expressions; something like this could work:
return text
    .replace(/((Disallow|Allow|Sitemap):\s+)(\/[^\s\*]+)(\n|$)/ig, '$1<a href="' + host + '$3">$3</a>$4')
    .replace(/(\s|^)(((https?:\/\/)?[\w-]+(\.[\w-]+)+\.?(:\d+)?(\/\S*)?))/gi, '$1<a href="$2">$2</a>');
This will also match Sitemap directives and any other full URLs.
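For reference, here is a minimal sketch of what parseAsLink could look like with that suggestion applied (the method name and host parameter come from the diff above; the sample host and input below are made up for illustration):

parseAsLink: function(text, host) {
    return text
        // Turn relative paths in Allow/Disallow/Sitemap directives into links on the current host
        .replace(/((Disallow|Allow|Sitemap):\s+)(\/[^\s\*]+)(\n|$)/ig, '$1<a href="' + host + '$3">$3</a>$4')
        // Turn any remaining full urls (with or without a protocol) into links
        .replace(/(\s|^)(((https?:\/\/)?[\w-]+(\.[\w-]+)+\.?(:\d+)?(\/\S*)?))/gi, '$1<a href="$2">$2</a>');
},

For example, with host set to 'https://example.com', the line 'Disallow: /admin/' would become 'Disallow: <a href="https://example.com/admin/">/admin/</a>'.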
I agree, that's a really elegant solution, thanks for coming up with that. Quick testing shows that it works great.
    //Regex to match only the protocol and host of a url
    tablink = tablink.match(/^[\w-]+:\/{2,}\[?([\w\.:-]+)\]?(?::[0-9]*)?/)[0];
    if (robots !== false) {
        var htmlEntities = RvH.common.Util.htmlEntities(robots);
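For context, that match keeps only the scheme and host of the tab's url, which is what the relative directive paths get prefixed with. A quick illustrative check (the url here is made up):

var url = 'https://example.com:8080/some/page?q=1';
var tablink = url.match(/^[\w-]+:\/{2,}\[?([\w\.:-]+)\]?(?::[0-9]*)?/)[0];
// tablink === 'https://example.com:8080'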
There's a lot of repetition going on in these files to get the host, convert HTML entities, and add links. This could all be moved into a common util function and used for both the robots and humans text display, e.g.
RvH.common.Util.parseText(robots, 'robots.txt', function(text) {
    $('#robots').html(text);
});
...
// common/rvh/Util.js
parseText: function(data, file, callback) {
    if (data === false) {
        // File could not be fetched: render a localized "not found" alert
        callback('<div class="alert alert-danger">' + chrome.i18n.getMessage("fileNotFound", [file]) + '</div>');
    }
    else {
        chrome.tabs.query({
            'active': true,
            'windowId': chrome.windows.WINDOW_ID_CURRENT
        }, function(tabs) {
            // Keep only the protocol and host of the current tab's url
            var tablink = tabs[0].url.match(/^[\w-]+:\/{2,}\[?([\w\.:-]+)\]?(?::[0-9]*)?/)[0];
            // `this` is not the Util object inside this callback, so reference it directly
            var parsedData = RvH.common.Util.addLinks( // <-- rename the parseAsLink method
                RvH.common.Util.htmlEntities(data),
                tablink
            );
            callback('<pre>' + parsedData + '</pre>');
        });
    }
}
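With that helper in place, both call sites would collapse to the same pattern (a sketch based on the snippets in this thread; data.robots and data.humans are assumed to hold the fetched file contents, or false when a file is missing):

// popup script (sketch): both files go through the shared helper
RvH.common.Util.parseText(data.robots, 'robots.txt', function(text) {
    $('#robots').html(text);
});
RvH.common.Util.parseText(data.humans, 'humans.txt', function(text) {
    $('#humans').html(text);
});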
Hey, thanks, I like the idea! I also added another comment to reduce the amount of repetition when rendering the contents of each text file.
Didn't even occur to me to do it for humans.txt, forgot those had links as well.
        $('#humans').html('<p>' + chrome.i18n.getMessage("fileNotFound", ["humans.txt"]) + '</p>');
    }
    parseText(data.humans, 'humans.txt', function(data) {
        $('#robots').html(data);
Should be #humans
It wouldn't be Monday if something didn't slip through the cracks, thanks for catching that.
There were issues with links like
This will make it so that all of the applicable robots.txt links are clickable. This aids in quick navigation to any of these links. There is a little issue with whitespace, where some sites cause an extra newline, but it does not affect usability at all. GitHub messed up some of the formatting (indentation, etc...), but there's not much we can do about that.
Thanks!
Jake Reynolds (https://jakereynolds.co)