Tag based content discovery and crowdsourcing #6214
Some comments regarding tag-based organisation and why we need it in the first place: why tag-centric organisation?
What even is tag-centric? Tag-centric organisation ties together people, content, and metadata.
The vision for Tribler:
This issue re-does work already conducted 14 years ago, which failed back then. Our 'recent' tag work is only 10 years old: a high-performance implementation of tag-based discovery, https://dl.acm.org/doi/abs/10.1145/2063576.2063852
Draft wireframes for tags (updated 10.09.21). Source: https://drew2a.notion.site/Tags-c1567365a1c94ce78271257f0aa19b06
Solid progress.
Trying out something new here. When it comes to interaction design, it is common to work with user stories that help you think about the user. Even after writing the simple user stories below, I feel that they are very helpful in getting rid of the developer bias that we most likely all have. Feel free to improve or give feedback on the following user stories.

Personas

I can think of two personas:
Epic

Within this issue, we focus on persona 2, since the goals of persona 1 are addressed by different components. Tribler currently has a subpar user experience when it comes to finding and recommending content. As discussed before, we want to see if tags are able to improve this situation. As an overarching goal, Tribler should be extended with the following functionality:
User stories

For a minimal version of tags, I see the following three user stories:
Note: each user story should be clear, feasible, and testable; also see this article.
After some discussions and mock-ups, here's a GUI preview of the resolution of user story 1. Note that I use the "GUI test mode" for prototyping, so the tags/titles do not make sense yet. I'm hovering over the 'edit' button in the first row. Clicking on the pencil will bring up a dialog where a user can suggest/remove tags (we reached majority consensus on using a dialog), but that dialog is not ready yet. The color scheme, margins, paddings, and sizes have not been finalized yet.
We made a few design decisions:
Something to think about: do we want to build a community of "taggers" right after the launch of 7.11? Or let our users know via TorrentFreak after a few more months of iterations and improvements?
Building a community would be great, but would probably require more work beyond a minimal version (e.g., making the contributions of a particular user visible). So let's first iterate on and improve the current system.
Design decisions behind the DB:

```python
class TorrentTagOp(db.Entity):
    id = orm.PrimaryKey(int, auto=True)
    torrent_tag = orm.Required(lambda: TorrentTag)
    peer = orm.Required(lambda: Peer)
    operation = orm.Required(int)
    time = orm.Required(int)
    signature = orm.Required(bytes)
    updated_at = orm.Required(datetime.datetime, default=datetime.datetime.utcnow)
    orm.composite_key(torrent_tag, peer)


class TorrentTag(db.Entity):
    ...
    added_count = orm.Required(int, default=0)
    removed_count = orm.Required(int, default=0)
    local_operation = orm.Optional(int)
```

cc: @kozlovsky
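To illustrate what the composite key on `(torrent_tag, peer)` implies, here is a minimal sketch in plain Python (not Pony ORM): each peer stores at most one operation per tag, and a newer operation from the same peer replaces the older one. The function name, dict layout, and operation codes below are illustrative assumptions, not Tribler's actual code.

```python
# Illustrative sketch of the (torrent_tag, peer) composite-key semantics:
# one stored operation per peer per tag; newer clock values replace older ones.
TAG_ADD, TAG_REMOVE = 1, 2  # assumed operation codes

def apply_operation(ops, torrent_tag, peer, operation, clock):
    """Keep only the newest operation per (torrent_tag, peer)."""
    key = (torrent_tag, peer)
    current = ops.get(key)
    if current is None or clock > current["clock"]:
        ops[key] = {"operation": operation, "clock": clock}
    return ops

ops = {}
apply_operation(ops, ("infohash1", "linux"), "peer_a", TAG_ADD, clock=1)
apply_operation(ops, ("infohash1", "linux"), "peer_a", TAG_REMOVE, clock=2)  # replaces the add
apply_operation(ops, ("infohash1", "linux"), "peer_a", TAG_ADD, clock=1)     # stale, ignored
```

The same peer thus cannot inflate a tag's count by submitting the same operation repeatedly; only its latest signed operation counts.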
@kozlovsky, wouldn't using a separate DB make it impossible to do complex queries involving both metadata store data and tags data?
@ichorid I think that with a separate tag database, the development of an initial version of the tag-based system will be easier. Regarding queries: with our current approach to FTS search, there should be no difference between a single database and two separate databases. If necessary, we can combine the databases later, or even just attach the tag database to the metadata store DB.
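For illustration, SQLite's `ATTACH DATABASE` makes cross-database joins straightforward, which is why keeping tags separate need not block complex queries. The table and column names below are hypothetical, not the actual Tribler schema.

```python
import sqlite3

# Hypothetical schema for illustration; not the actual Tribler tables.
conn = sqlite3.connect(":memory:")  # stands in for the metadata store DB
conn.execute("CREATE TABLE metadata (infohash TEXT, title TEXT)")
conn.execute("INSERT INTO metadata VALUES ('abc', 'Ubuntu 20.04 ISO')")

conn.execute("ATTACH DATABASE ':memory:' AS tags")  # stands in for the separate tag DB file
conn.execute("CREATE TABLE tags.torrent_tag (infohash TEXT, tag TEXT)")
conn.execute("INSERT INTO tags.torrent_tag VALUES ('abc', 'linux')")

# A single query can now join across both databases.
rows = conn.execute(
    "SELECT m.title, t.tag FROM metadata m JOIN tags.torrent_tag t USING (infohash)"
).fetchall()
# rows == [('Ubuntu 20.04 ISO', 'linux')]
```

In production the attached database would be a file path rather than `':memory:'`.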
Tag Reinforcement

Not sure if the suggestion below is applicable or suitable for the first version, but it is open for discussion.

Problem: to address the most trivial poisoning attacks, we decided that a particular tag will only be displayed when two identities have suggested it (thresholding). However, the chance that two users independently come up with the same tags for the same content is rather low. Even with a threshold of 2, I predict that much content will remain visibly untagged.

Potential solution: we can help the user by showing tags that have been suggested by other users but don't have enough support yet (i.e., haven't reached the threshold). This indication (e.g., "Suggestions: X, Y, Z" or "Suggested by others: A, B, C") should be part of the dialog where a user can add/remove tags, for example below the input field. To prevent visual clutter, we should limit the number of suggestions shown.
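A small sketch of the thresholding-plus-suggestions split described above. The function name, the threshold default, and the suggestion cap are assumptions for illustration.

```python
from collections import Counter

def split_tags(tag_counts, threshold=2, max_suggestions=3):
    """Split tags into visible (>= threshold supporters) and capped suggestions."""
    visible = sorted(t for t, c in tag_counts.items() if c >= threshold)
    below = [t for t, c in tag_counts.items() if c < threshold]
    # Show the best-supported pending tags first, capped to avoid visual clutter.
    suggestions = sorted(below, key=lambda t: -tag_counts[t])[:max_suggestions]
    return visible, suggestions

counts = Counter({"linux": 3, "ubuntu": 2, "iso": 1, "os": 1})
visible, suggestions = split_tags(counts)
# visible == ['linux', 'ubuntu']; 'iso' and 'os' appear only as suggestions
```

The dialog would render `visible` in the main tag list and `suggestions` below the input field.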
If you need inspiration, there is some academic work that looks at the reinforcement of user-generated tags in the Steam Tags system (e.g., http://dx.doi.org/10.1145/3377290.3377300).
I started to implement remote search by tags. The processing of remote queries is implemented in a way that does not tolerate any extensions (it is not backward compatible with extensions): see lines 184 to 192 in d8cf392.

This happens because of the implementation shown in lines 25 to 41 in d8cf392.

Therefore I need to find another approach for implementing remote search.
After the discussion with @ichorid, I decided to continue working on remote search by tags as an extension of the existing mechanism. The reason to continue: search by tags will not be backward compatible with previous versions of Tribler, but this doesn't affect users and it is safe to implement.

The remote search has been implemented and merged in #6708.
The first iteration of the tags mechanism has been running successfully for over two weeks now. I wrote a few scripts to analyse my local tags database (source code can be found here) and to see if there are any interesting results.

In total, I analysed 1476 tag operations, created by 52 unique users. Of these, 1460 are "add" operations and 16 are "delete" operations. A total of 500 torrents have been tagged. I think user engagement with the tag mechanism is somewhat low, even though we made tags a prominent part of the GUI. Maybe the fact that there are no tags visible yet is a main reason? It also hints that it would be helpful to look into more automated approaches to generate missing tags, since currently an absolutely minimal fraction of our content space has been tagged. A total of 34 tags are currently visible in the GUI, meaning that they have been suggested by at least two unique users. This is merely 2.36% of all added tags.

First, I looked at the frequency of each tag; see the log-log plot below. Note that this plot hints at a power-law distribution of tag frequencies. There are a few popular tags which seem to be generic. Most of the tags, however, are unique and only used once or twice. These results are in line with prior work on social tagging systems.

The log-log plot below shows how actively users have been tagging content. For each user, we show the number of tags they have created. Again, observe the power-law distribution. We have one user that is quite active and added over 1000 tags. Most users have only created a few tags.

Finally, let's have a look at the distribution of content being tagged. The attached log-log plot shows, for each content item, how many tags it has received. There is one piece of content that is subject to 20 tag operations. Most content has received four tags.
It is important to keep in mind that popular content is more likely to be tagged and, as such, there might be a correlation between torrent popularity and the number of tags a torrent has received. While we still have a relatively small dataset, it is interesting to look at and interpret the results above.
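The frequency counts behind such a log-log analysis can be reproduced in a few lines. The sample data below is made up; it only stands in for the crawled `(operation, tag)` records.

```python
from collections import Counter

# Made-up sample of (operation, tag) pairs standing in for the crawled dataset.
operations = ([("add", "linux")] * 5 + [("add", "ubuntu")] * 3 +
              [("add", "iso"), ("add", "movie"), ("remove", "linux")])

# Tag frequency: how often each tag was added.
tag_freq = Counter(tag for op, tag in operations if op == "add")

# Frequency-of-frequencies: how many tags were used exactly k times.
# Plotting this on log-log axes reveals (or refutes) a power law.
freq_of_freq = Counter(tag_freq.values())
```

The same two `Counter` passes apply unchanged to the per-user and per-content distributions discussed above.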
Proposed Tags Roadmap for Q1/Q2 2022

Two weeks ago, we successfully deployed the first iteration of our tags mechanism. In this first version, users can add and remove tags to and from individual torrents. To address spam, we have implemented a threshold: a tag is only shown when it is suggested by two distinct users. Today, @drew2a and I discussed the next steps and upcoming improvements. The main focus points are spam and attack resilience, and automated approaches to generate accurate tags. These two focus points are not mutually exclusive, but we have devised a roadmap for the coming months that allows us to implement the proposed changes incrementally while avoiding major, breaking changes (probably).

Step 1: automated generation of tags

This is based on very basic and generic heuristics. It will help to populate the user interface with existing tags, and hopefully increase user engagement. The PR that implements this feature has been filed and is almost ready for review. Release target: v7.12 (Feb. 1st).

Step 2.1: binary voting on tags

I'm currently working on this mechanism; research notes are private. We introduce a simple, binary voting mechanism on tags. Users can vote on the accuracy or inaccuracy of tags and share these votes with other users. When estimating the trustworthiness of a tag, a user takes the opinions of other users into consideration. Users whose tags are consistently downvoted have their tags suppressed in the user interface, whereas users that create accurate tags have their tags promoted. This mechanism should be resilient against Sybils, collusion, and whitewashing attacks.

Step 2.2: agent-based metadata enrichment

We open up our ecosystem to contributions of "metadata enrichment agents". Each user can launch their own agent to enrich metadata. Users can vote on the generated tags of each agent. This is a key step towards content organisation in Web3. Release target: v7.13 (April 1st).

Step 3: introducing rules to automatically annotate content

We introduce lightweight rules to generate tags. The first iteration of this mechanism will introduce basic regex-based rules. A rule is a short description that is capable of quickly generating tags on content that matches the regex. Agents can craft rules and share them with users in the network. These rules are also assigned a reputation, and malicious rules should be quickly suppressed by the voting mechanism. This will build upon the results and findings of step 2.1. Release target: v7.14 (June 1st).

The combined effect of these components will hopefully be a thriving, decentralized, and self-organising ecosystem of tags, secure against various forms of manipulation. In parallel, we can also start working on search and tag navigation, but there are no concrete plans for these features as of yet.
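As a deliberately simplified sketch of reputation-weighted binary voting: each vote is weighted by the voter's estimated reputation, so consistently downvoted contributors lose influence. This is not MeritRank or the actual planned mechanism; the weights, default reputation, and function name are all assumptions.

```python
def tag_score(votes, reputation, default_rep=0.5):
    """votes: list of (user, +1 or -1); reputation: user -> weight in [0, 1].

    Unknown users get a neutral default weight. Tags whose score falls below
    some display threshold would be suppressed in the UI.
    """
    return sum(v * reputation.get(u, default_rep) for u, v in votes)

votes = [("alice", +1), ("bob", +1), ("spammer", -1)]
reputation = {"alice": 1.0, "bob": 0.8, "spammer": 0.1}
score = tag_score(votes, reputation)
# score == 1.0 + 0.8 - 0.1 == 1.7
```

A Sybil-resilient scheme would derive `reputation` from the trust graph rather than a static table; this sketch only shows the aggregation step.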
Semantic overlay related work; useful for 'keyword search' on tags.
Dataset for tagging: https://github.com/MTG/mtg-jamendo-dataset |
The current version of tags has been running successfully for a few months now, and we have seen several tags created by different users. As the next step, we want to use these tags and our existing infrastructure to improve the search experience. Concretely, our first goal is to identify and bundle torrents that describe similar content.

Our upcoming improvements are also a key step towards readying our infrastructure to build and maintain a global knowledge graph. This knowledge graph can act as a fundamental primitive for upcoming science in the domains of content search, content navigation, and, eventually, content recommendation.
Library science knowledge: related work on the manifestation-versus-item abstraction plus tagging.
"Justin Bieber is gay" scientific problem - tag spamMeritRank is needed to fix this spam issue in the Tribler future. Fans and fame of artists also attracts Internet trolls. We have in the past cofounded the Musicbrainz music metadata library. This crowdsourcing library has a unique dataset of votes on tags with explicit spam. See the
Bieber has a profile page. Next step in our semantic search roadmap is modelling the split between concept and materialisation. The knowledge graph should contains both types of entries. See the 1994 early scientific beginnings of solution: gossip, signals, and reputation. Simple central reputation system of central profiles Publication venue: https://www.frontiersin.org/research-topics/19868/human-centered-ai-crowd-computing or |
Dev meeting brainstorm outcome: Martijn has (or had) a crawler running with tag crowdsourcing. @drew2a: check its status, and perhaps do a 1-day dataset analysis with live "remove tag" within the Tribler 7.13 release?
To describe the current state, let's start with the database. The full schema is available in the sources; see tribler/src/tribler/core/components/key/key_component.py, lines 24 to 26 in 26b0be8.
The central definitions live in tribler/src/tribler/core/components/database/db/layers/knowledge_data_access_layer.py (lines 60 to 65 in 76de562), with the supporting types in the same file (lines 32 to 57 in 26b0be8).
Statement examples:

```python
SimpleStatement(subject_type=ResourceType.TORRENT, subject='infohash1', predicate=ResourceType.TAG, object='tag1')
SimpleStatement(subject_type=ResourceType.TORRENT, subject='infohash2', predicate=ResourceType.TAG, object='tag2')
SimpleStatement(subject_type=ResourceType.TORRENT, subject='infohash3', predicate=ResourceType.CONTENT_ITEM, object='content item')
```

Due to the inherent lack of trust in peers, we cannot simply replace an existing statement with a newly received one. Instead, we store all operations. There are two operations available for peers: see tribler/src/tribler/core/components/database/db/layers/knowledge_data_access_layer.py, lines 26 to 29 in 26b0be8.
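To make the examples above self-contained, a minimal stand-in for `SimpleStatement` and `ResourceType` could look like this. The enum values here are placeholders; the real definitions live in knowledge_data_access_layer.py.

```python
from dataclasses import dataclass
from enum import IntEnum

class ResourceType(IntEnum):
    # Values are placeholder assumptions; the real enum is defined in Tribler's sources.
    TORRENT = 1
    TAG = 2
    CONTENT_ITEM = 3

@dataclass(frozen=True)
class SimpleStatement:
    """A subject-predicate-object triple, the basic knowledge-graph unit."""
    subject_type: ResourceType
    subject: str
    predicate: ResourceType
    object: str

stmt = SimpleStatement(subject_type=ResourceType.TORRENT, subject='infohash1',
                       predicate=ResourceType.TAG, object='tag1')
```

A frozen dataclass keeps statements hashable, so they can be deduplicated in sets while operations on them accumulate separately.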
All operations are recorded in the database, allowing the final score of a specific statement to be calculated from the cumulative actions taken by all peers. This approach enables a comprehensive assessment of each statement's overall support within the network. Currently, a simplistic approach is employed, which merely sums all the 'add' operations (+1) and subtracts the 'remove' operations (-1) across all peers. This method is intended to be replaced by a more sophisticated mechanism; see tribler/src/tribler/core/components/database/db/layers/knowledge_data_access_layer.py, lines 98 to 100 in 26b0be8.
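The naive scoring described above boils down to a single sum. This is a sketch with illustrative names and assumed operation codes, not the actual implementation.

```python
ADD, REMOVE = 1, 2  # assumed operation codes

def statement_score(ops):
    """ops: iterable of (peer, operation); +1 per 'add', -1 per 'remove'."""
    return sum(1 if op == ADD else -1 for _, op in ops)

ops = [("peer_a", ADD), ("peer_b", ADD), ("peer_c", REMOVE)]
# statement_score(ops) == 1, so with a display threshold of 2 this statement stays hidden
```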
ER diagram

```mermaid
erDiagram
    Peer {
        int id PK "auto=True"
        bytes public_key "unique=True"
        datetime added_at "Optional, default=utcnow()"
    }
    Statement {
        int id PK "auto=True"
        int subject_id FK
        int object_id FK
        int added_count "default=0"
        int removed_count "default=0"
        int local_operation "Optional"
    }
    Resource {
        int id PK "auto=True"
        string name
        int type "ResourceType enum"
    }
    StatementOp {
        int id PK "auto=True"
        int statement_id FK
        int peer_id FK
        int operation
        int clock
        bytes signature
        datetime updated_at "default=utcnow()"
        bool auto_generated "default=False"
    }
    Misc {
        string name PK
        string value "Optional"
    }
    Statement }|--|| Resource : "subject_id"
    Statement }|--|| Resource : "object_id"
    StatementOp }|--|| Statement : "statement_id"
    StatementOp }|--|| Peer : "peer_id"
```
The next chapter is dedicated to the community itself.

Community

The algorithm of the community's operation:
See the payload definition in tribler/src/tribler/core/components/knowledge/community/knowledge_payload.py (lines 8 to 18 in 44e2235) and the community logic in tribler/src/tribler/core/components/knowledge/community/knowledge_community.py (lines 126 to 128 and lines 119 to 124 in 44e2235).
Autogenerated Knowledge

In addition to the user-added knowledge statements, there are also auto-generated statements. The KnowledgeRulesProcessor was developed for the automatic generation of knowledge; it analyzes the records in the database and generates knowledge based on predefined regex patterns found in them. For example, here is a definition of auto-generated tags:

This is a definition of the Ubuntu, Debian, and Linux Mint content items.

Auto-generation of knowledge occurs through two mechanisms:

Auto-generated knowledge does not participate in gossip among the network.
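In the spirit of the KnowledgeRulesProcessor, a regex-based tag generator might look like the sketch below. The patterns, rule format, and function are invented for illustration and are not the actual Tribler rules.

```python
import re

# Hypothetical rules: each maps a regex to a fixed tag, or (with None) tags
# the matched text itself. Not the actual Tribler rule definitions.
RULES = [
    (re.compile(r"\bubuntu[-_ ]?\d{2}\.\d{2}", re.IGNORECASE), "ubuntu"),
    (re.compile(r"\b(1080p|720p|2160p)\b", re.IGNORECASE), None),
]

def generate_tags(title):
    """Return the set of tags suggested by the rules for a torrent title."""
    tags = set()
    for pattern, fixed_tag in RULES:
        match = pattern.search(title)
        if match:
            tags.add(fixed_tag if fixed_tag else match.group(0).lower())
    return tags

# generate_tags("Ubuntu-20.04-desktop-amd64.iso") == {"ubuntu"}
# generate_tags("Some.Movie.1080p.x264") == {"1080p"}
```

Because such rules are cheap to evaluate, they can be re-run over the whole local database whenever the rule set changes.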
The third paragraph is dedicated to the UI.

UI

Three changes have been made to the UI:

Also, a feature for searching by tags was added, but this feature hasn't been introduced to users yet.
After a thorough discussion, we came to the following architecture for the Tags system: