
Item is not getting updated #39

Closed
bazuker opened this issue Jul 2, 2017 · 5 comments
bazuker commented Jul 2, 2017

var (
	ErrMediaNotFound = errors.New("Media file not found")
	ErrInvalidId = errors.New("Invalid ID")
	cache *bigcache.BigCache
	db *database.Instance
)

type MediaReference struct {
	URL 	string  `json:"url"`
	Views 	int		`json:"views"`
}

func DecodeHex(s string) (bson.ObjectId, error) {
	d, err := hex.DecodeString(s)
	if err != nil || len(d) != 12 {
		return "", ErrInvalidId
	}
	return bson.ObjectId(d), nil
}

func GetMediaURL(key string) (string, error) {
	// First check in the BigCache
	if entry, err := cache.Get(key); err == nil {
		ref := MediaReference{}
		err = ref.UnmarshalJSON(entry)
		if err != nil {
			return "", err
		}
		ref.Views++
		data, err := ref.MarshalJSON()
		if err != nil {
			return "", err
		}

		fmt.Println("count", string(data))

		err = cache.Set(key, data)
		if err != nil {
			return "", err
		}

		return GS_URL + ref.URL, err
	}
	// Then perform the search in the database
	obj, err := DecodeHex(key)
	if err != nil {
		return "", err
	}
	media, err := db.FindMediaById(obj)
	if err != nil {
		return "", ErrMediaNotFound
	}
	// Store the URL in the BigCache
	ref := MediaReference{URL: media.Address, Views: 1}
	data, err := ref.MarshalJSON()
	if err != nil {
		return "", err
	}
	cache.Set(key, data)

	return GS_URL + media.Address, nil
}

func onRemoveEntity(key string, entry []byte) {
	ref := MediaReference{}
	err := ref.UnmarshalJSON(entry)
	if err == nil {
		fmt.Println("delete", string(entry))
		go db.AddViewsToPostBasedOnMedia(bson.ObjectIdHex(key), ref.Views)
	}
}

func InitBigCache(development bool, database *database.Instance) error {
	config := bigcache.Config {
		// number of shards (must be a power of 2)
		Shards: 1024,
		// time after which entry can be evicted
		LifeWindow: 10 * time.Second,
		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,
		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,
		// prints information about additional memory allocation
		Verbose: development,
		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,
		// callback fired when the oldest entry is removed because of its
		// expiration time or no space left for the new entry. Default value is nil which
		// means no callback and it prevents from unwrapping the oldest entry.
		OnRemove: onRemoveEntity,
	}

	db = database

	var err error
	cache, err = bigcache.NewBigCache(config)
	if err != nil {
		return err
	}

	return nil
}

The code above counts views for cached items.
A view is counted each time a record is accessed, and by the time a record is evicted, the accumulated view count is saved to the database.
The problem is that when the "OnRemove" callback fires and the stored JSON is unmarshalled, the view counter is wrong. It looks like the record is either not being updated, or a different cached item is being returned.

The log trace

count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":2}
count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":3}
count {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":4}
delete {"url":"5918e11a3cc6b2e22682218b/1497228828418322900.jpg","views":1} <--- ???

janisz commented Jul 2, 2017

Thanks for reporting. I'll take a look into this. The code you posted contains some additional variables and logic. Can you prepare a minimal, complete working example that shows exactly what is wrong? Best if it can be expressed as a unit test.

bazuker commented Jul 3, 2017

@janisz sure. Here is code that fully illustrates the problem. While testing, have a look at the "Views" counter when the "onRemove" function is called.

package main

import (
	"fmt"
	"errors"
	"encoding/json"
	"github.com/allegro/bigcache"
	"time"
)

type someData struct {
	Views int
	URL string
}

var cache *bigcache.BigCache

// Every time you call this function it increases the total amount of views for a record
func getDataURL(key string) (string, error) {
	
	if entry, err := cache.Get(key); err == nil {
		ref := someData{}
		err = json.Unmarshal(entry, &ref)
		if err != nil {
			return "", err
		}

		ref.Views++
		data, err := json.Marshal(ref)
		if err != nil {
			return "", err
		}

		fmt.Println("get", key, string(data))
		
		// Update the record
		err = cache.Set(key, data)
		if err != nil {
			return "", err
		}

		return ref.URL, err
	}

	return "", errors.New("data not found")
}

func setData(key string, views int, url string) error {
	data, err := json.Marshal(someData{views, url})
	if err != nil {
		return err
	}

	fmt.Println("set", key, string(data))

	return cache.Set(key, data)
}

func onRemove(key string, entry []byte) {
	fmt.Println("delete", key, string(entry))
}

func main() {
	config := bigcache.Config {
		// number of shards (must be a power of 2)
		Shards: 1024,
		// time after which entry can be evicted
		LifeWindow: 2 * time.Second,
		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,
		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,
		// prints information about additional memory allocation
		Verbose: true,
		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,
		// callback fired when the oldest entry is removed because of its
		// expiration time or no space left for the new entry. Default value is nil which
		// means no callback and it prevents from unwrapping the oldest entry.
		OnRemove: onRemove,
	}

	var err error
	cache, err = bigcache.NewBigCache(config)
	if err != nil {
		panic(err)
	}

	fmt.Println("Set the data")

	setData("key1", 1, "example.com")
	setData("key2", 1, "github.com")
	setData("key3", 1, "google.com")

	fmt.Println()

	getDataURL("key1") // 2
	getDataURL("key1") // 3
	getDataURL("key1") // 4

	fmt.Println("\nSleep")

	time.Sleep(time.Second * 5)

	fmt.Println()

	getDataURL("key1") // 5
	getDataURL("key1") // 6
	getDataURL("key1") // 7

	fmt.Println("\nSleep")

	time.Sleep(time.Second * 5)

	getDataURL("key1") // 8
	getDataURL("key1") // 9
	getDataURL("key1") // 10
}
~ go run main.go 
Set the data
set key1 {"Views":1,"URL":"example.com"}
set key2 {"Views":1,"URL":"github.com"}
set key3 {"Views":1,"URL":"google.com"}

get key1 {"Views":2,"URL":"example.com"}
get key1 {"Views":3,"URL":"example.com"}
get key1 {"Views":4,"URL":"example.com"}

Sleep

get key1 {"Views":5,"URL":"example.com"}
delete key1 {"Views":1,"URL":"example.com"}
get key1 {"Views":6,"URL":"example.com"}
delete key1 {"Views":2,"URL":"example.com"}
get key1 {"Views":7,"URL":"example.com"}
delete key1 {"Views":3,"URL":"example.com"}

Sleep
get key1 {"Views":8,"URL":"example.com"}
delete key1 {"Views":4,"URL":"example.com"}
get key1 {"Views":9,"URL":"example.com"}
delete key1 {"Views":5,"URL":"example.com"}
get key1 {"Views":10,"URL":"example.com"}
delete key1 {"Views":6,"URL":"example.com"}


janisz commented Jul 4, 2017

@oxygenD I checked the output of your example. Can you describe the problem one more time? It looks like the proper keys are deleted in this example, and the old value is replaced with the new one.

In your original code you might have run into a situation where a different key was removed to make space for a newly inserted key.


bazuker commented Jul 5, 2017

@janisz as you can see from the example, I am increasing the number of views under the key "key1" until the counter reaches 4,

get key1 {"Views":2,"URL":"example.com"}
get key1 {"Views":3,"URL":"example.com"}
get key1 {"Views":4,"URL":"example.com"}

then I put the program to sleep for 5 seconds so that the record under "key1" can be evicted.

Sleep

After that, I add one more view to the counter under "key1", and right after that the "onRemove" callback triggers, deleting the record under "key1" because its lifetime is over.

get key1 {"Views":5,"URL":"example.com"}
delete key1 {"Views":1,"URL":"example.com"}

Now the question: why does it delete a record under "key1" whose views counter equals 1 ("Views":1) when I had already raised that counter to 5?


janisz commented Jul 6, 2017

Memory used by bigcache is append-only; it is a circular buffer, so there is no way to overwrite a given part of memory. When you try to update a specific key, the old value persists until it needs to be deleted to free space for new entries, and when that happens the OnRemove callback is fired.
