
feat: Load-time under 1 sec #271

Closed
5 of 6 tasks
SgtPooki opened this issue Jan 17, 2023 · 9 comments
Assignees
Labels
effort/weeks enhancement New feature or request P0 status/needs-clarification This issue requires further information prior to becoming ready for work

Comments

@SgtPooki
Contributor

SgtPooki commented Jan 17, 2023

There is a goal of having load-times under 1 second, this issue is for tracking the proposal, agreement, and implementation of the work targeting that goal.

Current problems

  1. Async loading currently blocks use of the app. We load all data asynchronously from the client side, but we block rendering of the page content until all data is obtained (we keep globalLoading=true the whole time).
  2. We are not caching the data a user views, so repeat visits make the same calls to GitHub.
  3. We require a backend to store our GitHub tokens (see Loading starmaps entirely on the clientside #192) but have no access to storage on the host with Vercel.
  4. The Vercel cache is invalidated between deployments. See https://vercel.com/docs/concepts/edge-network/caching#cache-invalidation

Goal

Setting expectations

The goal is to get load times, essentially the user "experience", under 1 second. As @whizzzkid has pointed out previously, loading all data from GitHub in under 1 second is not likely, especially for multi-team and org-wide roadmaps. So we must do the best we can for our users and reach a usable state as soon as possible. "The goal" below defines what counts as a usable state for the sake of this task.

The goal

For a usable state, all of the items below must be true within 1 second of the page request.

  1. The initial response from the server occurs (already true)
  2. Rendering of the detailed/simple/list views happens (already somewhat true when disabling the globalLoading blocker, but see the frontend proposal)
  3. Any visible links, inputs, etc. are all actionable
    • If something is not ready, don't display it, but the app layout and any loaded issues should be fully visible and usable within 1 sec
  4. The user is made aware of the number of pending issues that are being loaded
  5. Requesting a root roadmap URL I have loaded before (or anything included in that root node) should result in fully rendering the application and all children

Proposal

This proposal is broken up into two sections, one for the frontend and one for the backend.

Frontend

Backend

Additional options we should discuss

  1. Polling github for starmap compatible issues and storing in ipfs. Utilize ipfs as our cache
  2. Migrate away from vercel and use our own host

Related/sub items

@SgtPooki SgtPooki added enhancement New feature or request P0 status/needs-clarification This issue requires further information prior to becoming ready for work effort/days The work in this issue, which should be status/ready, should take less than a week to complete effort/weeks labels Jan 17, 2023
@whizzzkid whizzzkid self-assigned this Jan 25, 2023
@whizzzkid
Collaborator

@SgtPooki the draft looks great, some additional pointers and context:

Maybe we should define performance targets by the number of issues; a flat 1-second load time is a bit vague. Say a starmap with Max(100 issues | (5 levels * 5 nodes/level)) <= 1000ms. Even this sounds unrealistic. If we have a sample problematic issue (say one that takes 5 seconds to load) we can target an 80% improvement budget and bring that time down as much as we can.

Display a full count of pending issues in the UI so users know the progress percentage (we already have the counts available as pendingChildren.length)

We can have a progress bar of some sort that shows on top of the page or on an overlay to make it obvious what's happening.

#254 - additional caching on the client side to prevent overloading Vercel, though it could confuse cache validity.

For this I would like to explore the staleWhileRevalidate caching strategy as explained here. This will build a model locally, which we can then update every few hours in the background and also when the page loads.
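The stale-while-revalidate idea above can be sketched roughly as follows. This is a minimal illustration, assuming an in-memory Map and a generic fetcher; swrFetch, CacheEntry, and MAX_AGE_MS are hypothetical names, not Starmaps APIs. The cached value is served immediately, and a background refresh is kicked off when the entry is older than the revalidation window.

```typescript
// Hypothetical stale-while-revalidate cache sketch (not the actual Starmaps code).
type CacheEntry<T> = { value: T; fetchedAt: number };

const cache = new Map<string, CacheEntry<unknown>>();
const MAX_AGE_MS = 1000 * 60 * 60; // entries older than 1 hour are revalidated

async function swrFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const entry = cache.get(key) as CacheEntry<T> | undefined;
  if (entry) {
    if (Date.now() - entry.fetchedAt > MAX_AGE_MS) {
      // Stale: refresh in the background, but still serve the old value now.
      fetcher().then((value) =>
        cache.set(key, { value, fetchedAt: Date.now() })
      );
    }
    return entry.value;
  }
  // Cache miss: the very first request has to wait for the network.
  const value = await fetcher();
  cache.set(key, { value, fetchedAt: Date.now() });
  return value;
}
```

This keeps repeat page loads fast at the cost of occasionally showing data up to one revalidation window old.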

> Polling github for starmap compatible issues and storing in ipfs. Utilize ipfs as our cache

This would be hard to implement; how would caching starmap content map to IPFS? I think this would need to be thought through.

> Migrate away from vercel and use our own host

Yes, I think we've discussed this in the past. We can have a simple in-memory cache so the model is cached per node and loads very fast. We can force-refetch if need be or update every few hours to make the process smooth. Should that be the first thing to explore?

@SgtPooki
Contributor Author

Thanks for the feedback @whizzzkid

> Maybe we should define performance targets by the number of issues; a flat 1-second load time is a bit vague. Say a starmap with Max(100 issues | (5 levels * 5 nodes/level)) <= 1000ms. Even this sounds unrealistic. If we have a sample problematic issue (say one that takes 5 seconds to load) we can target an 80% improvement budget and bring that time down as much as we can.

I love the idea of having a baseline we can target. I think the bacalhau roadmap is a great one to target since there are hundreds of issues.

> Display a full count of pending issues in the UI so users know the progress percentage (we already have the counts available as pendingChildren.length)

> We can have a progress bar of some sort that shows on top of the page or on an overlay to make it obvious what's happening.

I think a progress bar would be a great solution, but it could easily look odd and take a lot of development time for cases where it fills too fast. I am a fan of simple counts. I think we should stick to simply displaying the total number of issues being requested for the given URL along with the count of fetched issues. Those numbers will adjust as appropriate, and users will have an accurate expectation without having to watch a fluctuating progress bar (7/8 -> 7/15 will look odd).
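The count display described above could be as simple as a pure helper. This is a hypothetical sketch; issueProgressLabel is an illustrative name, and the inputs assume the app knows the total requested issues and the pending count (e.g. pendingChildren.length):

```typescript
// Illustrative helper for the "fetched / total" count display
// (hypothetical name, not an existing Starmaps function).
function issueProgressLabel(totalRequested: number, pendingCount: number): string {
  const fetched = totalRequested - pendingCount;
  return `${fetched} / ${totalRequested} issues loaded`;
}
```

Because both numbers are shown, the label stays honest even when the discovered total grows mid-load, which is exactly the case where a percentage bar would jump backwards.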

> #254 - additional caching on the client side to prevent overloading Vercel, though it could confuse cache validity.

> For this I would like to explore the staleWhileRevalidate caching strategy as explained here. This will build a model locally, which we can then update every few hours in the background and also when the page loads.

Absolutely down with that.

> Polling github for starmap compatible issues and storing in ipfs. Utilize ipfs as our cache

> This would be hard to implement; how would caching starmap content map to IPFS? I think this would need to be thought through.

Yeah, it would definitely take some time to nail down a proper solution here, but this would help reduce the cost of maintaining infrastructure if we go with our own hosts. For now, I think this idea can be punted.

> Migrate away from vercel and use our own host

> Yes, I think we've discussed this in the past. We can have a simple in-memory cache so the model is cached per node and loads very fast. We can force-refetch if need be or update every few hours to make the process smooth. Should that be the first thing to explore?

We should probably not start here; there's a lot of optimization to be done elsewhere, and Vercel gives us a lot of devops and devex improvements that I don't want us handling right now.


feel free to adjust the issue description as you see fit so that the plan is accurate

@BigLep

BigLep commented Feb 17, 2023

@SgtPooki : are the checklist items above in frontend/backend the remaining items for closing this out?

@whizzzkid whizzzkid removed the effort/days The work in this issue, which should be status/ready, should take less than a week to complete label Feb 21, 2023
@SgtPooki
Contributor Author

SgtPooki commented Feb 22, 2023

@BigLep I don't think the checklist above is accurate. I will update it.

@whizzzkid I think deferring the work to remove the globalLoadingState blocker is reasonable, which only leaves a single checklist item. Do we have tracking items for those?

I think both frontend tasks can be deferred, and one backend task can be deferred, but I do want to look into using Vercel's server-side function cache. Have you done any of that work yet? If not, I can take that on immediately after the kubo-rpc-client work.

@SgtPooki
Contributor Author

Moved out the remaining front-end tasks to #334

@SgtPooki
Contributor Author

Moved out the backend item to #186 and cleaned up the remaining item into #335

@whizzzkid
Collaborator

@SgtPooki can this be closed?

@SgtPooki
Contributor Author

> @SgtPooki can this be closed?

I'm working on Vercel caching by adding a Cache-Control header, and then we'll close this out.

Should have a PR out for that shortly. I almost had it finished last night but ran out of time.
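A minimal sketch of the Cache-Control approach, assuming a Next.js-style response object running on Vercel; setEdgeCacheHeaders and the specific TTL values here are illustrative, not what the actual PR uses. s-maxage controls how long Vercel's edge cache keeps the response, and stale-while-revalidate lets the edge keep serving the stale copy while it refetches in the background:

```typescript
// Hypothetical sketch: set a Cache-Control header so the Vercel edge cache
// can serve API responses with stale-while-revalidate semantics.
// The minimal interface below mirrors Node's ServerResponse.setHeader.
interface HeaderWriter {
  setHeader(name: string, value: string): void;
}

function setEdgeCacheHeaders(res: HeaderWriter): void {
  // s-maxage=3600: edge caches the response for 1 hour.
  // stale-while-revalidate=86400: for up to a day after that, the edge may
  // serve the stale copy while revalidating in the background.
  res.setHeader('Cache-Control', 'public, s-maxage=3600, stale-while-revalidate=86400');
}
```

In a Next.js API route this would be called on the real `res` before sending the GitHub data, so repeat requests for the same roadmap are served from the edge instead of re-hitting GitHub.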

@SgtPooki
Contributor Author

Closing this as #337 was merged. We can revert if it causes issues, and I will inform #engres-milestone-tool so they let us know if they experience any stale data or other potential caching issues.

3 participants