feat(ecau): rich copy-paste of webpage images and links #546
We now also accept input from copied web pages and parse URLs from anchors and images on the copied portion.
/deploy-preview for beta testing because the extraction may not be ideal yet and the easiest way to find areas for improvement is by having people use the new version 🙂
feat(ecau): extract input from copied web page portions (#546)
This PR changes 1 built userscript(s):
Codecov Report
@@ Coverage Diff @@
## main #546 +/- ##
==========================================
- Coverage 99.17% 98.48% -0.70%
==========================================
Files 57 58 +1
Lines 1341 1385 +44
Branches 212 220 +8
==========================================
+ Hits 1330 1364 +34
- Misses 8 13 +5
- Partials 3 8 +5
I've tested this feature a bit and it's a great thing to have (as a substitute for a generic provider). There are two things which don't bother me personally but might confuse the casual user:
- The pasted HTML fragment may contain irrelevant weblinks, which then of course lead to error messages (unsupported provider etc.). Personally I would have mitigated that by only extracting URLs from `<img>` tags, but your demo video shows a use case where the extraction from `<a>` tags makes perfect sense. Maybe anchors could be an optional feature, or we could ignore them unless there are no `<img>` tags present in the HTML fragment?
- Every image is handled as a separate input, so the final status message (in case of success) will always be "Successfully added 1 image(s)", no matter how many images were actually extracted.
force-pushed from 697abd4 to c51b5af
It's entirely possible for someone to copy-and-paste a whole chunk of a document that includes both images and anchors. We probably don't want to extract from the anchors if there are images. So we'll first attempt to extract images, and fall back on anchors if there are no images. If there are neither anchors nor images, we'll fall back on generic parsing of plain text.
With the new URL parsing, it'd be possible that no URLs were parsed at all. Let's provide some feedback for that.
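The fallback order described in these commits could be sketched roughly as follows. This is a minimal illustration, not the userscript's actual code; the `CopiedFragment` shape and the helper names (`extractUrls`, `parsePlainText`) are hypothetical, invented here to show the priority: images first, then anchors, then generic plain-text parsing, with an empty result signalling the caller to show the new "no URLs parsed" feedback.

```typescript
// Hypothetical shape of a parsed clipboard fragment (not the real API):
// URLs already pulled out of the <img> and <a> tags, plus the text/plain
// clipboard flavor as a last resort.
interface CopiedFragment {
  imageSrcs: string[];   // src attributes of <img> tags in the fragment
  anchorHrefs: string[]; // href attributes of <a> tags in the fragment
  plainText: string;     // text/plain clipboard fallback
}

// Hypothetical generic parser: whitespace-separated http(s) URLs.
function parsePlainText(text: string): string[] {
  return text.split(/\s+/).filter((token) => /^https?:\/\//.test(token));
}

// Priority: images win over anchors; anchors win over plain text.
// An empty return value is the "no URLs were parsed at all" case,
// for which the UI should show some feedback instead of silently doing nothing.
function extractUrls(fragment: CopiedFragment): string[] {
  if (fragment.imageSrcs.length > 0) return fragment.imageSrcs;
  if (fragment.anchorHrefs.length > 0) return fragment.anchorHrefs;
  return parsePlainText(fragment.plainText);
}
```

This matches the reviewer's suggestion above: anchors are ignored whenever the fragment contains any `<img>` tags, so irrelevant weblinks in a mixed selection don't produce "unsupported provider" errors.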
force-pushed from c51b5af to 58353ea
feat(ecau): extract input from copied web page portions (#546)
force-pushed from 8a4eaec to 9d7a132
/deploy-preview it's been over half a year, I think it's about time to release this
feat(ecau): extract input from copied web page portions (#546)
🚀 Released 1 new userscript version(s):
feat(ecau): rich copy-paste of webpage images and links (#546)
We now also accept input from copied web pages and parse URLs from anchors and images on the copied portion, which should make it easier to copy images from providers which are currently not supported (and might never be, depending on their popularity).
Some demos, because a picture is worth a thousand words:
1.mp4
2.mp4
The eBay demo is a bit odd because the selection isn't highlighted, but the copied portion is just the two sidebar images, not the full-page selection that appeared earlier in the clip.
This probably needs a forum post after release because the feature notification won't properly explain what this is.