
[Data Buckets] Implement scoped data buckets #3498

Merged
9 commits merged into master from akkadius/scoped-data-buckets on Jul 16, 2023

Conversation

Akkadius
Member

What

This PR implements scoped data buckets.

This means that for common primitives such as character, NPC, or bot that use data buckets, we represent them as their own columns within the data model, with a unique constraint applied across them.

For example, Quest API methods that use mob:SetData(name, value) will automatically scope the key to the entity under the hood, without the operator needing to add nested keying themselves.

This PR is also a precursor to other work, such as distributed data bucket caching and other features.

This PR still needs to be tested in its entirety.

Goal(s)

  • Allow easier manipulation of data_buckets as a mechanism for storing character progress and flags, making it simple to copy a character, migrate a character's flags, and select which flags are unique to a specific player
  • Maintain the simplicity of using data buckets
  • Allow EQEmu Server developers and operators alike to maintain a simple way to manage flagging on a per-entity basis
  • Allow the creation of a simplified flagging interface for characters, so flags can be a common language across servers and Quest APIs

Current

Right now, data buckets are arbitrary k/v pairs, which is what you expect from a k/v store, and this mostly works without issue: you can store arbitrary values and key them however you wish.

describe data_buckets;
+---------+---------------------+------+-----+---------+----------------+
| Field   | Type                | Null | Key | Default | Extra          |
+---------+---------------------+------+-----+---------+----------------+
| id      | bigint(11) unsigned | NO   | PRI | NULL    | auto_increment |
| key     | varchar(100)        | YES  | MUL | NULL    |                |
| value   | text                | YES  |     | NULL    |                |
| expires | int(11) unsigned    | YES  |     | 0       |                |
+---------+---------------------+------+-----+---------+----------------+
4 rows in set (0.07 sec)

Challenge

This breaks down when you want to easily query which flags or buckets are tied to a specific entity. Unless all servers use a consistent keying convention, it is difficult to understand what is tied to a character.

In the source we prepend identifiers, but unless you go through the quest abstraction (Mob::SetBucket), your values may or may not be tied to that entity. Some servers use their own prefixes, or sometimes even suffixes, to make their keys unique, which works against keeping the API simple.

We need a way to scope this that keeps pairs arbitrary but can tie them to at least a character.

std::string Mob::GetBucketKey() {  
	if (IsClient()) {  
		return fmt::format("character-{}", CastToClient()->CharacterID());  
	} else if (IsNPC()) {  
		return fmt::format("npc-{}", GetNPCTypeID());  
	} else if (IsBot()) {  
		return fmt::format("bot-{}", CastToBot()->GetBotID());  
	}  
	return std::string();  
}

Solution: Scoped Columns

We can add columns that scope the data to the player, but this will change how we query it.

Instead of the key alone being the unique component, uniqueness will need to extend to each scoped entity, in this case character_id, npc_id, and bot_id.
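
As a rough sketch (the key name and IDs here are made up for illustration), a legacy prefixed row and its scoped equivalent using the columns introduced below would look like this:

-- Legacy: the scope is baked into the key string
INSERT INTO `data_buckets` (`key`, `value`, `expires`)
VALUES ('character-123-speaks_to_guard', '1', 0);

-- Scoped: the key stays clean and the entity lives in its own column
INSERT INTO `data_buckets` (`key`, `value`, `expires`, `character_id`, `npc_id`, `bot_id`)
VALUES ('speaks_to_guard', '1', 0, 123, 0, 0);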

Schema

If we were to implement the equivalent character_id, npc_id, and bot_id scoped entries, the schema would resemble the following:

ALTER TABLE `data_buckets` 
ADD COLUMN `character_id` bigint(11) NOT NULL DEFAULT 0 AFTER `expires`,
ADD COLUMN `npc_id` bigint(11) NOT NULL DEFAULT 0 AFTER `character_id`,
ADD COLUMN `bot_id` bigint(11) NOT NULL DEFAULT 0 AFTER `npc_id`,
DROP INDEX `key_index`,
ADD UNIQUE INDEX `keys`(`key`,`character_id`,`npc_id`,`bot_id`);

We drop the old key_index, which only enforced uniqueness on the key column; we need to extend this uniqueness to the other scope types.
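
With the composite unique index in place, Set operations can rely on it to upsert within a scope. A minimal sketch of what such a write could look like (not necessarily the exact query the server generates; the key and IDs are hypothetical):

-- Insert or update the bucket for character 123; the unique index on
-- (`key`, `character_id`, `npc_id`, `bot_id`) decides whether this is a new row or an update
INSERT INTO `data_buckets` (`key`, `value`, `expires`, `character_id`, `npc_id`, `bot_id`)
VALUES ('speaks_to_guard', '1', 0, 123, 0, 0)
ON DUPLICATE KEY UPDATE `value` = VALUES(`value`), `expires` = VALUES(`expires`);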

Testing Schema

If we were to test this schema, it would end up looking something like this:

[Screenshot: data_buckets rows with one global entry and entries scoped to a character_id and an npc_id]

We're using something as a key: we have one globally unique entry where character_id and npc_id are 0, while we also have entries that are scoped to a specific character_id and to an npc_id separately.

The moment we try to add a 4th entry that duplicates a record previously added for the same npc_id, we get:

[Screenshot: duplicate-key error when inserting a second entry for the same npc_id]

A similar test applies when we add the bot_id field: we can add a key that is unique to that bot_id without issue while it coexists with the other flags.

[Screenshot: bot_id-scoped entry coexisting with the other scoped entries]
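
The same behavior can be reproduced directly in SQL. The key name and IDs below are only examples:

-- A global entry and two scoped entries coexist without conflict
INSERT INTO `data_buckets` (`key`, `value`, `character_id`, `npc_id`, `bot_id`) VALUES ('some-flag', '1', 0, 0, 0);
INSERT INTO `data_buckets` (`key`, `value`, `character_id`, `npc_id`, `bot_id`) VALUES ('some-flag', '1', 123, 0, 0);
INSERT INTO `data_buckets` (`key`, `value`, `character_id`, `npc_id`, `bot_id`) VALUES ('some-flag', '1', 0, 456, 0);

-- A second entry for the same key and npc_id violates the unique index
-- and fails with a duplicate-entry error (MySQL error 1062)
INSERT INTO `data_buckets` (`key`, `value`, `character_id`, `npc_id`, `bot_id`) VALUES ('some-flag', '2', 0, 456, 0);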

Migration

We'd need a means to migrate from the existing code-level abstraction of how flags are defined per-character or per-NPC. We could do so with a select that parses the key format shown in the code above. This would be a one-time migration of flags; alongside it, code changes would also have to be made in the current abstractions so the Get and Set operations of the Quest API select and write to the appropriate scope column.

Conversion example query

Below we show character_id being pulled out of the original format we had in the code.

We also pull out bucket_name similarly.

SELECT
	`key`,
	SUBSTRING_INDEX(SUBSTRING_INDEX( `key`, '-', 2 ), '-', -1) as character_id,
	SUBSTR(SUBSTRING_INDEX(`key`, SUBSTRING_INDEX( `key`, '-', 2 ), -1), 2) as bucket_name
FROM
	data_buckets 
WHERE
	`key` LIKE 'character-%';
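
For a concrete (made-up) key such as character-123-my-flag, the expressions decompose as follows:

-- SUBSTRING_INDEX(`key`, '-', 2)                        => 'character-123'
-- SUBSTRING_INDEX('character-123', '-', -1)             => '123'     (character_id)
-- everything after 'character-123', minus the leading '-' => 'my-flag' (bucket_name)
SELECT
	SUBSTRING_INDEX(SUBSTRING_INDEX('character-123-my-flag', '-', 2), '-', -1) AS character_id,
	SUBSTR(SUBSTRING_INDEX('character-123-my-flag', SUBSTRING_INDEX('character-123-my-flag', '-', 2), -1), 2) AS bucket_name;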

[Screenshot: conversion query results showing extracted character_id and bucket_name]

We want to see the character prefix and the ID removed from the bucket name itself and moved onto the new scoped column.

[Screenshot: rows after conversion, with the prefix stripped from the key and character_id populated]

Final Conversion

UPDATE data_buckets SET character_id = SUBSTRING_INDEX(SUBSTRING_INDEX( `key`, '-', 2 ), '-', -1), `key` = SUBSTR(SUBSTRING_INDEX(`key`, SUBSTRING_INDEX( `key`, '-', 2 ), -1), 2) WHERE `key` LIKE 'character-%';
UPDATE data_buckets SET npc_id = SUBSTRING_INDEX(SUBSTRING_INDEX( `key`, '-', 2 ), '-', -1), `key` = SUBSTR(SUBSTRING_INDEX(`key`, SUBSTRING_INDEX( `key`, '-', 2 ), -1), 2) WHERE `key` LIKE 'npc-%';
UPDATE data_buckets SET bot_id = SUBSTRING_INDEX(SUBSTRING_INDEX( `key`, '-', 2 ), '-', -1), `key` = SUBSTR(SUBSTRING_INDEX(`key`, SUBSTRING_INDEX( `key`, '-', 2 ), -1), 2) WHERE `key` LIKE 'bot-%';
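
As a quick sanity check after running the conversion (this is just one way an operator might verify it, not part of the PR), any rows still matching the old prefixes would be keys the convention did not cover:

SELECT COUNT(*) AS unconverted
FROM `data_buckets`
WHERE `key` LIKE 'character-%' OR `key` LIKE 'npc-%' OR `key` LIKE 'bot-%';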

@Kinglykrab
Contributor

Testing Results

Perl

sub EVENT_SAY {
	if ($text=~/#a/i) {
		$client->SetBucket("Test", 1, "1D");
	} elsif ($text=~/#b/i) {
		quest::message(315, $client->GetBucketRemaining("Test") . " " . $client->GetBucketExpires("Test"));
	}
}

sub EVENT_SAY {
	if ($text=~/#a/i) {
		$client->SetBucket("Test", 1, "D1");
	} elsif ($text=~/#b/i) {
		my $data = $client->GetBucket("Test");
		quest::message(315, $data ne "" ? $data : 0);
	} elsif ($text=~/#c/i) {
		$client->DeleteBucket("Test");
	}
}

Images

[Screenshots: in-game output from the Perl tests]

Lua

function event_say(e)
	if e.message:find("#a") then
		e.self:CastToMob():SetBucket("TestTwo", "1", "3D")
	elseif e.message:find("#b") then
		eq.message(315, e.self:CastToMob():GetBucketRemaining("TestTwo") .. " " .. e.self:CastToMob():GetBucketExpires("TestTwo"))
	end
end

Image

[Screenshot: in-game output from the Lua test]

@Kinglykrab merged commit 3f3bbe9 into master on Jul 16, 2023
2 checks passed
@Kinglykrab deleted the akkadius/scoped-data-buckets branch on July 16, 2023 at 18:52