Displaying machine-generated content #4192
Nice work. Both can work, and I like the short and concise tab names.
This is interesting! In my mind I was thinking of having the tags all in the same group, but having them potentially be distinguished by color/shape/some other visual marker (along with whatever markers we would need to indicate to screen readers that the content is different). The separate tabs are a really interesting idea because they give us a way to provide explicit explanations that the idea I had in mind wouldn't be able to. I think your suggestion is great! We also have an

On the tag view note - I don't think we have a way to distinguish the type of tag on the tags collection page. As in, when you visit a tag in the collections view, it matches both creator- and machine-generated tags with no way of distinguishing which one caused the result to show up. It might be possible to add this, but it would require some changes to Elasticsearch (which I'm proposing we don't do at this moment in #4189). We will be able to know which results have that machine-generated tag once we have the results, though, so on the tags page we could visually distinguish them (or use the "Source" and "Generated" tabs for that page as well!) to separate the results.
This looks really nice! Does putting the generated tags behind a tab and user interaction deprioritize them too much? Our expectation in adding these tags at all is that they will be useful to users and have an impact on search relevancy (or at least that the pipeline set up in #431 to add these tags will enable the generation of even more machine tags in the future, which will improve search relevancy). I think this approach may cause too strong of a separation.

One other thought: we don't really know that the existing tags are "human generated", do we? Some sources might have their own processes for machine-generated tags. I am also a bit concerned that we are giving the existing tags too much "authority" by claiming they are human-derived. It might be better to have a general system for categorizing tags ("source tags", "generated tags", etc.).
Could be true for some sources (Flickr), but please keep in mind that GLAM institutions have highly trained and expert individuals describing works, where tags are intentional and professionally produced output. "Source tags" is 100% the way to go to prevent a misrepresentation.

If there are sources whose tags are less useful (Flickr), then we should change our search to reflect our scepticism of their accuracy. It's a different issue, and it must be source-aware. We know our collected metadata is neither uniform in quality nor in its intended audience. But we do not currently handle metadata in a meaningfully source-aware way. I'd hoped the Rekognition tags would be the first time we could incorporate that, but if they aren't integrated into search (I still need to read the IP), then we're really not moving in that direction anyway.
Thanks for the feedback. Great points were raised. Undoubtedly, splitting the content into two sections through tabs deprioritizes the one placed in the inactive tab. In that vein, I want to return to the question of how many tags will be displayed, since the amount certainly conditions the layout design. For a large number of tags, we decided to hide a portion behind the "show all tags" action (I don't recall the final label, but the expand/collapse action was already implemented). Therefore, in cases where source tags exceed the visible limit, the generated ones would not be visible. Here is the mockup made for that scenario.

I iterated on other ideas with designer contributors, and the versions below were among the options. From left to right, you can see how the two sections split up.
On the other hand, I forgot to include another question in the list:
Ooo, thank you for sharing the iterations! I have to say my personal preference is option 3, since it creates a clear distinction between the sources of the tags while also offering the option to provide additional context in each header. I'm not the biggest fan of using emojis (per option 1) since I think it's possible we might have emojis in tags already. Option 2 feels less ideal because of how the tags wrap when there is more than one row's worth; the fact that the generated tags appear under the source pill in that case is a little confusing.

Would it make sense to leverage color at all for the distinction? Perhaps source tags could be the magenta and generated ones could be that grey? We also have accuracy & provider information for each tag; would that information be best shared as a tooltip on hover?
Not at this time, and it isn't part of the plan for incorporating Rekognition label data.
I understand the concern with whitespace in option 3, but I also think that option is ideal. It gives a clear title to each tag section, which is also a nice accessibility improvement, and ensures that all tags are visible. It also works well in situations where we do not have any generated tags and will not show the section at all.
I don't think we should use colour to indicate something like this. It's more abstract and unnecessarily easy to confuse compared with the options Francisco has shared. Additionally, colours have cultural/social meanings that are impossible to avoid and would add unnecessary implications to tags. What is the benefit of colours compared to the separated sections? The separated sections are clear and leave no room for misunderstanding, and the explanatory content for each section is immediately accessible at the location of the tags, rather than dislocated or dispersed.
I also prefer options 3 and 4, particularly 3. I prefer the very clear labels to emojis or colors. +1 for "source" vs "generated" terminology. I like the mockup for the
It seems most agree with option 3 ✨ To @stacimc's point:
I think we can go with the same limit and component interaction for expanding/collapsing set for the current source tags.
In addition to this, the colors used would need to meet the 3:1 contrast ratio, since the difference relies on color. To reach that, we would need to either change the gray to be darker or use a darker pink that risks looking too similar to the primary button.
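As a side note on checking that threshold: WCAG 2.1 defines contrast ratio in terms of relative luminance, which can be computed directly. A minimal sketch of that check (the hex values below are illustrative placeholders, not the actual Openverse palette):

```python
# Sketch of the WCAG 2.1 contrast-ratio check mentioned above.
# Hex colors here are illustrative, not Openverse's real palette.

def relative_luminance(hex_color: str) -> float:
    """WCAG 2.1 relative luminance of an sRGB color like '#767676'."""
    def linearize(c8: int) -> float:
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    h = hex_color.lstrip("#")
    r, g, b = (linearize(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a: str, color_b: str) -> float:
    """(L1 + 0.05) / (L2 + 0.05), where L1 is the lighter luminance."""
    lum = sorted((relative_luminance(color_a), relative_luminance(color_b)))
    return (lum[1] + 0.05) / (lum[0] + 0.05)

# A gray must reach >= 3:1 against its background to distinguish
# non-text UI elements by color alone.
print(contrast_ratio("#767676", "#ffffff"))
```

A gray around `#767676` clears 3:1 (and even 4.5:1) on white, which is the kind of darkening the comment above describes.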
I updated the mockups page with the designs and added some front-end notes.
How important is it to distinguish between the source and machine-generated tags in the tag view, @fcoveram? Do you think it would confuse users if they click on a machine-generated tag and it opens a tag view with items that were given that tag both by a machine-generated process and by the source? I think it might be slightly confusing for someone who pays close attention to the difference between the two, but for most users it would not be important.

From the implementation point of view, making a distinction between the machine-generated tag views and the source tag views would require a lot of work on the API side (we would need to adjust the tag query to also include the tag's provider). Also, the frontend URL would probably need to encode whether the tag is machine-generated or from a source. @AetherUnbound, what do you think?
I don't think we need to make the distinction. I added that part in case we had to, but I agree it could confuse users.
I also think it's alright to have the machine-generated tags link to a global tag view (with both creator/machine tags mixed)! Fewer changes required right now to support that, and the API change is already made!
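Incidentally, if each result's tags carry their provider, the Source/Generated grouping can happen entirely client-side after results are fetched, with no per-tag distinction needed in the search query. A minimal sketch (the `provider` key and the set of machine-tag providers are assumptions for illustration, not the actual API shape):

```python
# Sketch: split one result's tags into "Source" and "Generated" groups
# on the client. The "provider" key and the provider set below are
# hypothetical, not the real Openverse API schema.
MACHINE_TAG_PROVIDERS = {"rekognition"}  # hypothetical registry of machine taggers

def partition_tags(tags: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (source_tags, generated_tags), preserving the original order."""
    source, generated = [], []
    for tag in tags:
        target = generated if tag.get("provider") in MACHINE_TAG_PROVIDERS else source
        target.append(tag)
    return source, generated
```

Note that a tag with an unknown or missing provider falls back into the Source group here; a real UI would need an explicit policy for that case.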
To add colour to that decision beyond the expediency of it requiring quite a bit more work: we do not distinguish between results that appear in a search solely due to a machine-generated tag and ones that appear due to some other matching field. I think at the search level, that information is noise. It would be the same with the collections pages. It's important to distinguish at the single-result level because we want to communicate why something appeared in a particular query (despite #2594). That isn't the case at the search level, certainly not right now (disregarding whether it would be in the future).
Design for #4039, as part of "Incorporate Rekognition data into the catalog" project (#431).
Description
To display machine-generated tags on the media details page, the idea I find most convincing is separating the media info section into two tabs: Source and Generated.
Here is a prototype of how it could work.
computer.generated.content.flow.compressed.mp4
Placing the content in a separate tab would help us with the following:
While testing the ideas, I had two doubts that bear on the design decision:
Mockups
Figma: Machine-generated tags