Add block time to block table #5
@HarleyAppleChoi @kwunyeung When taking a look at this issue, I've found that we cannot implement this inside Juno. The reason is that in order to compute the block time, you have to have:

1. the timestamp of the current block;
2. the timestamp of the block immediately before it.
These two requirements collide with how Juno currently works. Currently, you can run Juno in three different modes:
While (2) does not have any problems, (1) and (3) do. In particular, listening for new blocks might cause some problems:
This conflict would cause extremely incorrect block time calculations.

### Solution
When thinking about how to solve this problem, I figured out that it might be a lot easier and faster to perform this kind of computation client side. The idea is the following:
The time complexity of this solution is also very low. To better show my solution, I've created a repository: AvgBlockTimeCalculator. Inside it you can find and test the code by yourself. Please let me know what you guys think about this.
@RiccardoM I guess calculating each block time would be fine being done on the client side, but not the average block time. Considering the current block height of the chain, the subscription from the repo only takes the responded data into account when calculating the average block time; it will not include the block times all the way from the genesis block. Saving each block time in the database is good, as this data won't change after a single processing. I'm thinking if the following solution is possible.
There are two types of average block time to calculate. To calculate the average from genesis, every block's time has to be taken into account, not only those received by the subscription.

The per-block times, once stored, can then be aggregated incrementally.
@kwunyeung Let me answer your points one by one.
I honestly don't think this is a problem at all. Hasura has shown that it can scale pretty well; see their "See Hasura scale to 1 million active GraphQL subscriptions" article. They had 1 million clients while updating 1 million rows per second, and at that workload Postgres was at only 28% load. I don't think we will ever reach these numbers, so I think we can safely assume we will not have any problem with this.
Actually, that subscription gets all the blocks ever created, starting from the first up to the latest. So the block time is the most accurate possible: it takes all the blocks, gets their times and computes the average block time over all of them.
The problem with this approach is that we might have to run this process a lot of times before getting all the block times, since it's recursive (to compute the time of a block you need the block before it). Let's take an example. Let's assume that we now start BDJuno against a chain that already has 1,000,000 blocks. Let's also assume that parsing all the blocks and getting to the latest height takes BDJuno 1 hour per 10,000 blocks, and that we update the block time once per hour.
This will result in a block time that is never real-time, since it's only computed once every X amount of time.
Ok, then we don't have to store the block time of every block; just calculate it on the client side. The average block time can be computed by the simple math I mentioned and saved in the table when a new block is processed. This data is treated as the latest status of the chain. The aggregated data would help us create more statistical analysis. In the current version, I have …
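The "simple math" above can be sketched as a running average that is updated once per processed block, with no need to re-read old blocks. The `ChainStatus` type and its field names below are hypothetical, not the actual BDJuno schema:

```go
package main

import "fmt"

// ChainStatus keeps the latest aggregated data for the chain.
// Names are illustrative, not the actual BDJuno schema.
type ChainStatus struct {
	Height       int64   // number of intervals processed so far
	AvgBlockTime float64 // seconds, averaged from genesis
}

// OnNewBlock folds the latest block interval into the running average:
// newAvg = (oldAvg*(n-1) + latestInterval) / n.
func (s *ChainStatus) OnNewBlock(interval float64) {
	s.Height++
	n := float64(s.Height)
	s.AvgBlockTime = (s.AvgBlockTime*(n-1) + interval) / n
}

func main() {
	s := &ChainStatus{}
	for _, iv := range []float64{5, 6, 7} {
		s.OnNewBlock(iv)
	}
	fmt.Println(s.AvgBlockTime) // 6
}
```

Because only the previous aggregate and the latest interval are needed, each update is constant-time regardless of chain height.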
@kwunyeung I think that can be done inside BDJuno then, am I correct? If so, I will take care of creating an issue to describe how to implement this properly.
@RiccardoM yes, it can be done inside BDJuno. Thanks!
Currently, when storing a new block, we save the block timestamp only. However, some applications might want to get the block time as well. For this reason, we need to add a column computing the time as follows:
References forbole/callisto#39