CLI Beta Launch #40

Closed
CMCDragonkai opened this issue Oct 19, 2023 · 88 comments

Labels: development (Standard development), epic (Big issue with multiple subissues)

@CMCDragonkai (Member) commented Oct 19, 2023:

Specification

This epic focuses on the CLI beta launch targeting November 10 2023.

This follows from the 6th testnet deployment (MatrixAI/Polykey#551) and focuses on any UX issues that should be resolved before we launch, plus documentation, metrics for tracking how the launch goes, and content we need to write to prepare for it.

We should also get our demo video fully polished.

Among other things, we need to:

  1. Add an audit or logs domain to PK at first to track information about the network; we will need this as part of our metrics.
  2. Create a simple dashboard on testnet.polykey.com and mainnet.polykey.com so we can see what's going on and show everybody how the network is building up.

We are currently working through this list of issues.
#40 (comment)

Additional context

Tasks

  1. Spec out the experience for the launch
  2. Schedule content releases
  3. Plan out all UX-related issues
@CMCDragonkai (Member, Author):

MatrixAI/Polykey#600 was merged incorrectly; MatrixAI/Polykey#601 is being done by @addievo to fix up the additional options privateKey, privateKeyPath, and recoveryCode. They would also need to be exposed on PolykeyAgent.start. See the review comments in MatrixAI/Polykey#600.

@tegefaulkes (Contributor):

I finished up and merged MatrixAI/Polykey#601. Having a look over the CLI now.

@CMCDragonkai (Member, Author):

The PolykeyClient is going to be create/destroy only; I think it's pretty clear that can work.

@CMCDragonkai (Member, Author):

The change to the GitHubProvider to use GraphQL for getting connected identities is now causing occasional failures in IdentitiesManager.test.ts. I've seen it on CI and also locally.

 FAIL  tests/identities/IdentitiesManager.test.ts (10.559 s)
  IdentitiesManager
    ✓ IdentitiesManager readiness (154 ms)
    ✓ get, set and unset tokens (with seed=1486144507) (1350 ms)
    ✓ start and stop preserves state (with seed=1486144507) (966 ms)
    ✓ fresh start deletes all state (with seed=1486144507) (922 ms)
    ✓ register and unregister providers (103 ms)
    ✓ using TestProvider (with seed=1486144507) (3075 ms)
    ✕ handleClaimIdentity (with seed=1486144507) (2837 ms)

  ● IdentitiesManager › handleClaimIdentity (with seed=1486144507)

    Property failed after 1 tests
    { seed: 1486144507, path: "0:0:0:0:0:0:0:0:0:0:0:0:0:3:0:1:1:0:3:0:0:0:0:1:2:1:1:1:2:1", endOnFailure: true }
    Counterexample: ["",{"accessToken":"          ","refreshToken":"","accessTokenExpiresIn":1698180425,"refreshTokenExpiresIn":0}]
    Shrunk 29 time(s)
    Got ErrorProviderUnauthenticated: Access token expired

      70 |     ) {
      71 |       if (!providerToken.refreshToken) {
    > 72 |         throw new identitiesErrors.ErrorProviderUnauthenticated(
         |               ^
      73 |           'Access token expired',
      74 |         );
      75 |       }

      at TestProvider.checkToken (src/identities/Provider.ts:72:15)
      at TestProvider.checkToken [as publishClaim] (tests/identities/TestProvider.ts:152:16)
      at src/identities/IdentitiesManager.ts:272:39
      at constructor_.addClaim (src/sigchain/Sigchain.ts:449:20)
      at withF (node_modules/@matrixai/resources/src/utils.ts:24:12)
      at constructor_.handleClaimIdentity (src/identities/IdentitiesManager.ts:261:5)
      at numRuns (tests/identities/IdentitiesManager.test.ts:361:7)
      at AsyncProperty.run (node_modules/fast-check/lib/check/property/AsyncProperty.generic.js:49:28)
      Hint: Enable verbose mode in order to have the list of all failing values encountered during the run
      at buildError (node_modules/fast-check/lib/check/runner/utils/RunDetailsFormatter.js:131:15)
      at asyncThrowIfFailed (node_modules/fast-check/lib/check/runner/utils/RunDetailsFormatter.js:148:11)

@amydevs

@amydevs (Member) commented Oct 24, 2023:

> The change to the GitHubProvider to use GraphQL for getting connected identities is now causing occasional failures in IdentitiesManager.test.ts. (Quoted from the comment above, including the full test output.)

This is failing because we are using the falsiness of the values in these conditions rather than matching against == null:

public async checkToken(
    providerToken: ProviderToken,
    identityId?: IdentityId,
  ): Promise<ProviderToken> {
    const now = Math.floor(Date.now() / 1000);
    // this will mean that accessTokenExpiresIn = 0 will be false
    if (
      providerToken.accessTokenExpiresIn &&
      providerToken.accessTokenExpiresIn >= now
    ) {
      // this will mean that refreshToken = '' will throw
      if (!providerToken.refreshToken) {
        throw new identitiesErrors.ErrorProviderUnauthenticated(
          'Access token expired',
        );
      }
      // this will mean that refreshTokenExpiresIn = 0 does not throw
      if (
        providerToken.refreshTokenExpiresIn &&
        providerToken.refreshTokenExpiresIn >= now
      ) {
        throw new identitiesErrors.ErrorProviderUnauthenticated(
          'Refresh token expired',
        );
      }
      return await this.refreshToken(providerToken, identityId);
    }
    return providerToken;
  }

This would mean that a value of 0 for the ExpiresIn fields means that they never expire.

Is this behaviour intended? If so, I'll keep it as it is.
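
For reference, a minimal sketch of how the checks could be made explicit with == null, assuming the intent is that only a missing expiry (null/undefined) means "never expires". The ProviderToken shape and helper names here are illustrative, not the actual Provider.ts code.

// Hypothetical helpers using explicit null checks, so that 0 and '' are
// treated as real values rather than as "absent".
interface ProviderToken {
  accessToken: string;
  refreshToken?: string;
  accessTokenExpiresIn?: number;
  refreshTokenExpiresIn?: number;
}

function isAccessTokenExpired(token: ProviderToken, now: number): boolean {
  // Only an absent expiry means "never expires"; 0 is treated as a real timestamp
  return token.accessTokenExpiresIn != null && token.accessTokenExpiresIn < now;
}

function isRefreshTokenExpired(token: ProviderToken, now: number): boolean {
  return token.refreshTokenExpiresIn != null && token.refreshTokenExpiresIn < now;
}

// Usage sketch:
// const now = Math.floor(Date.now() / 1000);
// if (isAccessTokenExpired(token, now) && !isRefreshTokenExpired(token, now)) {
//   // attempt a refresh; otherwise throw ErrorProviderUnauthenticated
// }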

@CMCDragonkai (Member, Author):

I think you found out that 0 means infinity.

@CMCDragonkai (Member, Author):

We've got about 2 weeks, so let's focus hard on stability and bug fixes.

@CMCDragonkai (Member, Author):

A bunch of different issues have been thrown into this epic. We have to prioritise which are most important and delegate the tasks. Remember we have a tight timeframe here.

@tegefaulkes (Contributor) commented Oct 25, 2023:

Critical issues

  1. Prevent PolykeyAgent crashes from connection failures Polykey#592 - a pretty major problem; it results in nodes randomly crashing.
  2. Testnet nodes generates multiple root certificates - it should only have 1 Polykey#597 - an oddity that doesn't really break things, but should be looked into.
  3. Update handler and caller timeouts to be able to overwrite server and client default timeouts regardless of value. js-rpc#47, GitHub Auth Timeout is too low Polykey#588, Integration of js-rpc timeout and middleware changes to fix Auth Provider timeout issue Polykey#589 - critical from a usability standpoint; otherwise not strictly broken.
  4. CLI polish and fixes for deployment #44 - critical from a usability standpoint.
  5. Timer not cleaned up when cancelled. js-timer#15 - a macro task resource leak.

Needed but functional without

  1. Memory Leak Tracking and Remote Debugging Polykey#598 - a performance problem; something to address sooner rather than later, but things like this take a fair amount of work to track down.
  2. Fix hole punching to work in both directions Polykey#605 - not a super big issue; right now it means stricter NATs can't be punched to. I have an idea for a fix.
  3. feeding associated data in GCM cipher mode js-encryptedfs#5 - Helps cud
  4. Implement timeoutMiddleware and Remove metadata.authorization for src/nodes/agent Domain Polykey#572
  5. Fix verifyServerCertificateChain failing to pass with old nodeId Polykey#593
  6. NAT Busting for keynodes behind NAT layers Polykey#37
  7. Change tests depending on KeyRing change propagation to stop using sleep, and instead use mocks or multi-event barrier Polykey#594, Passphrase Strength in Key Generation Polykey#38
  8. Prototype Pollution of getToken and putToken when identityId is __proto__ Polykey#608
  9. Encryption of vault keys Polykey#22
  10. Accessing Vault after cloning fails Polykey#518 - is this even still a bug? I think we would've seen it during testnet testing.
  11. Review toError and fromError sensitivity filter, agent service sensitive information removal, PK specific error passthrough, and unknown errors Polykey#564 - has been expanded since this issue was created; just needs a look over.
  12. Factoring out the Timeout Middleware from PK to js-rpc js-rpc#19, feat: incorporate timeoutMiddleware to allow for server to utilise client's caller timeout js-rpc#42

Stretch

  1. Peer Store for each Polykey node Polykey#36
  2. Testnet & Mainnet Status Page testnet.polykey.com mainnet.polykey.com Polykey#599
  3. Secret Management - Subcommands to interact with Secrets Polykey#30
  4. Pagination Deployment to Service Handlers and CLI Commands and Domain Collections Polykey#237
  5. The pk secrets env command for meeting Development Environment Usecase #31
  6. Git alternatives - propagating changes Polykey#6
  7. Integration Tests for testnet.polykey.com #71

@CMCDragonkai (Member, Author):

Stretch 8. should be done to make sure timeouts are working.

> Add an audit or logs domain to PK at first to track information about the network; we will need this as part of our metrics.

There's no issue for this, but it's really part of MatrixAI/Polykey#599, and we will need it for launch metrics; we can't launch without some metrics.

@CMCDragonkai (Member, Author) commented Oct 26, 2023:

We need to solve MatrixAI/Polykey#598 to ensure that we don't get hit with a hug of death.

We need to solve our memory leaks - all nodes connect to all testnet nodes. We need randomised round-robin for connecting to the testnet.

We can do this synthetically and load test it.
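
As a rough illustration of the randomised selection idea (this is not Polykey's actual connection code; the SeedNode shape and connectToNode call are placeholders):

// Hypothetical sketch: instead of dialling every seed node, each agent picks a
// small random subset, spreading load across the testnet.
interface SeedNode {
  nodeId: string;
  host: string;
  port: number;
}

function pickRandomSeeds(seeds: Array<SeedNode>, count: number): Array<SeedNode> {
  // Fisher-Yates shuffle on a copy, then take the first `count` entries
  const shuffled = seeds.slice();
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled.slice(0, count);
}

// e.g. connect to at most 3 of the configured seed nodes:
// for (const seed of pickRandomSeeds(seedNodes, 3)) {
//   await connectToNode(seed);
// }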

@CMCDragonkai (Member, Author):

MatrixAI/Polykey#605 - regardless of how strict the NAT is, hole punching should work either way.

@tegefaulkes (Contributor):

I'm going to focus on fixing MatrixAI/js-timer#15 really quickly. It's causing some minor issues across the codebases.

@tegefaulkes (Contributor) commented Oct 26, 2023:

Not sure how to move forward with MatrixAI/js-timer#15. It's an easy fix, assuming my assumptions about how it should work are correct. But js-timer has been updated to ESM, so I can't make a new release we can use, unless I can release it as a 1.x version.

@CMCDragonkai (Member, Author):

Timer is fixed.

@CMCDragonkai (Member, Author) commented Oct 27, 2023:

For the CLI beta launch to scale, because potentially many people may try PK and thus connect to the testnet, we need to avoid connecting to every seed node in the testnet at the same time. However, this means hole punching won't work without a common seed node between source and target. To achieve this we need to revisit the decentralized hole punching problem in MatrixAI/Polykey#365.

At the same time, due to MatrixAI/Polykey#599, we would likely want to switch to using SRV records so that the A and AAAA records can be reserved for the testnet.polykey.com or mainnet.polykey.com dashboards.

Also, for the CLI beta launch to work well, we might want to deploy onto mainnet, or default our network to testnet for now. We could even throw an error when attempting to connect to mainnet, which isn't configured at all at the moment. We should prefer a smoother environment... so switching to just mainnet is fine. This launch will then be the first version of mainnet.

@CMCDragonkai (Member, Author) commented Oct 30, 2023:

Task assignments - for the 2 weeks to launch.

@amydevs

First, fix up the timeouts for RPC, and then apply them to both the agent service and the GitHub auth timeout issue.

@tegefaulkes

@okneigres

@addievo

@CMCDragonkai review stuff!

Stretch:

@amydevs (Member) commented Nov 3, 2023:

Currently, the Polykey agent domain handlers do not use ctx at all, so server-side handlers will never time out. This will need to be done, and tests with timeouts will need to be written for that domain.
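
A minimal sketch of what a ctx-aware handler could look like, assuming the ctx carries an AbortSignal; this is not js-rpc's actual handler API, and doAgentWork is a placeholder for the real handler body.

// Hypothetical ctx-aware handler: the work is raced against ctx.signal so the
// server side can actually time out instead of running forever.
declare function doAgentWork(input: unknown): Promise<unknown>;

async function handleRequest(
  input: unknown,
  ctx: { signal: AbortSignal },
): Promise<unknown> {
  if (ctx.signal.aborted) {
    throw new Error('Handler aborted before starting');
  }
  return await new Promise<unknown>((resolve, reject) => {
    const onAbort = () => reject(new Error('Handler timed out or was aborted'));
    ctx.signal.addEventListener('abort', onAbort, { once: true });
    doAgentWork(input)
      .then(resolve, reject)
      .finally(() => ctx.signal.removeEventListener('abort', onAbort));
  });
}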

@tegefaulkes (Contributor):

We get the following warning when running the CLI:

(node:610526) [DEP0112] DeprecationWarning: Socket.prototype._handle is deprecated
(Use `node --trace-deprecation ...` to show where the warning was created)

@amydevs mentioned it's due to MDNS. I think we need to look into removing the warning. I don't see any issue about this, however, so maybe we should make a new one and look into it.

It seems that MDNS is getting the handle (fd?) of the socket to modify it. I can see us needing direct access to the socket in native code for js-quic as well for a performance upgrade, so a solution here could be applied there as well.

@tegefaulkes (Contributor):

The secrets env command is completed now. I'll be moving on to working on the short short list, or fixing and re-enabling the Windows and macOS CI test jobs.

@amydevs (Member) commented Mar 15, 2024:

PR MatrixAI/Polykey-Docs#56 should be ready now.

@tegefaulkes (Contributor):

I've noticed that the test NodeManager › with peers in network › findNode by signalled connections › handles offline nodes fails randomly and more often than I'd like. We need to look into it.

@tegefaulkes (Contributor):

We're getting close to completing the main stuff. @amydevs is looking into the issues with discovery we found while testing. I'm thinking I should look into making Polykey more tolerant of network failures and changes. This can tie into MatrixAI/Polykey#461 as well.

I've already made some fixes to js-quic to handle network problems but it's far from comprehensive. Polykey itself needs to handle the QUICServer going down for network reasons. It should even support the network being unavailable for extended periods.

This ties in nicely with MatrixAI/Polykey#461, since we'd need to support not having networking running for extended periods of time, but also support dynamically starting or stopping the QUICServer whenever we want, either in response to the user requesting it or having to restart it when the network changes or comes back up. Also, being able to toggle the QUICServer at will can be seen as a kind of stealth mode.

I just did some testing with Polykey-CLI. It seems things are tolerant of this already, so the changes I made before seem more than fine for now. I tested this by starting a node and just turning off wifi. It did the expected thing and timed out after the timeout period. Re-enabling the wifi and pinging a seed node caused it to re-connect to the Polykey network just fine, as expected. The process did not crash at any point while doing this. Previous crashing due to network issues may have been fixed when I fixed a related race condition just recently.

That said, there may be corner cases where the socket could just fail for one reason or another. The question is how tolerant we want to be about this. How does this handle switching between wifi networks or interfaces? If we have two active interfaces and one starts to fail, do we seamlessly switch, or are we just tolerant of that? I suppose all these questions will bear out with more rigorous user testing.

So based on that quick test, I think things as they are are fine and tolerant of network failures, in that the process will not crash in these cases. I don't think there's anything to address right now, but we should keep it in the back of our minds during further testing.

@tegefaulkes (Contributor) commented Mar 26, 2024:

Priorities going forward:

Discovery problems

Right now there are problems with discovery

  1. Bug when processing duplicate and missing cryptolinks
  2. No feedback when doing discovery, whether backgrounded or active.
  3. Currently no periodic re-discovery is happening, so discovery always needs to be manually triggered.
  4. The gestalt graph (GG) doesn't seem to be updated when processing new information during discovery.
  5. We want to seamlessly trigger discovery on any friends/followers on GitHub.
  6. No removal of old or invalid nodes from the gestalt.

Reviewing the repos I found the following old issues that relate to these problems.

Looks like all points are addressed across this. Some double up. Should we create a single PR addressing all of these? Seems like a decent re-work/update of discovery. All of these are on Polykey except for point 2 which is both front-end and back-end.

Performance problems

There are 2 performance issues.

  1. Slow startup time of commands. This is likely to be one of two things: either Commander doing a bunch of runtime processing to construct the commands, or pkg doing a bunch of unpacking to run. CLI commands are slow to start #102. (See the measurement sketch after this list.)
  2. The EFS runs pretty slowly, which results in very slow vault performance. Benchmark the EFS js-encryptedfs#76
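
A quick way to separate the two suspects is to time a trivial invocation; here's a rough measurement sketch, where the binary name polykey, the --help flag as a near-no-op command, and the run count are assumptions rather than verified details.

// Hypothetical measurement: time how long a trivial CLI invocation takes, to
// isolate startup overhead (Commander construction, pkg unpacking) from real work.
import { execFileSync } from 'child_process';
import { performance } from 'perf_hooks';

const runs = 5;
const timings: Array<number> = [];
for (let i = 0; i < runs; i++) {
  const start = performance.now();
  execFileSync('polykey', ['--help'], { stdio: 'ignore' });
  timings.push(performance.now() - start);
}
const average = timings.reduce((a, b) => a + b, 0) / runs;
console.log(`Average startup over ${runs} runs: ${average.toFixed(0)} ms`);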

Lingering issues

We still need to finish up the following active issues

Moving forward

Moving forward, we want to focus on any UI/UX issues and bugs we encounter during user testing.

@tegefaulkes (Contributor):

I'm going to make a new epic for tracking discovery problems.

@CMCDragonkai (Member, Author) commented Apr 19, 2024:

@pablo.padillo this is the most important engineering issue at the moment.

Status update:

  1. Remaining subissues here are not part of CLI beta 30; they can be moved out to be worked on in the backlog.
  2. The main bug discovered during your testing with externals and internals, in relation to having double claims, should be fixed, pending that this is all part of a released PK CLI executable; if it is not, it requires a 0.3.1 release.
  3. If so, then you must re-attempt another demo run, but this time without bothering with removing your old claims. It's fine to have multiple claims to the same identity from multiple nodes.
  I1
 /  \
N1   N2

Even if N1 is destroyed, from the perspective of I1 and N2, the entire gestalt is still all 3 vertices.

Now suppose N3 and I2 exist:

I2
|
N3

And I1...I2 are connected in some way.

Then N3 should be able to discover I2 and I1, and also discover N1 and N2.

Thus N3 should be able to connect to N2 - either via manual discovery or automatic discovery.
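
As a minimal sketch of that reasoning, treating every claim and the I1...I2 social connection as edges of one undirected gestalt graph (the edge set below just mirrors the example diagrams; it is not real discovery code), a breadth-first walk from N3 reaches every vertex:

// Hypothetical gestalt connectivity check: vertices are nodes/identities,
// edges are claims or social links, and discovery is a breadth-first walk.
const links: Record<string, Array<string>> = {
  I1: ['N1', 'N2', 'I2'], // I1 is claimed by N1 and N2, and connected to I2
  N1: ['I1'],
  N2: ['I1'],
  I2: ['I1', 'N3'],
  N3: ['I2'],
};

function discover(start: string): Set<string> {
  const seen = new Set<string>([start]);
  const queue: Array<string> = [start];
  while (queue.length > 0) {
    const vertex = queue.shift()!;
    for (const neighbour of links[vertex] ?? []) {
      if (!seen.has(neighbour)) {
        seen.add(neighbour);
        queue.push(neighbour);
      }
    }
  }
  return seen;
}

// discover('N3') -> Set { 'N3', 'I2', 'I1', 'N1', 'N2' }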

The work that @brian.botha and @amydevs are on right now is all UX-oriented features that help automate or reduce the amount of work in the entire demo process.

This includes:

  1. N3 being able to share with ANY vertex of I1's gestalt, so it can share to I1 or N2 or N1 and achieve the same thing, and so that N1 and N2 can both pull/clone the vault. This reduces the demo process steps. - ENG-132
  2. Background discovery should be automatic between the I2 gestalt and the I1 gestalt, so that N3 should not need to manually discover N2/I1. Background discovery should be interpretable and predictable to the first order - the immediate neighbourhood. - ISSUE?
  3. Being able to have notifications be completely delay-tolerant. - ENG-37, Backgrounding of Notifications Domain Polykey#695

These 3 things may come after version 0.3.1.

@CMCDragonkai (Member, Author):

So upon release of 0.3.1, remove those other remaining subissues, as I consider the beta release to be done. @pablo.padillo should be redoing the demo with the understanding that more than 1 claim is fine to work with. Subsequent releases of PK should address the 3 items posted above.

@tegefaulkes (Contributor):

> @pablo.padillo this is the most important engineering issue at the moment. Status update: ... These 3 things may come after version 0.3.1. (Quoted in full from the comment above.)

https://linear.app/matrix-ai/issue/ENG-298/follow-permission-and-social-links-during-discovery has been created to track point 2 above.

@CMCDragonkai (Member, Author):

@CryptoTotalWar I think upon closing this issue, we should be planning for a hard launch. You should create a new issue for that and associate subissues for a hard launch. I'm not sure if it should go into Polykey, but we could associate issues to the MatrixAI-Graph for any general work.

@CryptoTotalWar:

NOTES: rough draft, WIP. Will clean up.

Critical Issues

  1. Prevent PolykeyAgent crashes from connection failures Polykey#592 - pretty major problem; results in nodes randomly crashing. "General fixes for connection stability" - important from a network stability standpoint. We don't want the program or network to fail on you.

  2. Testnet nodes generates multiple root certificates - it should only have 1 Polykey#597 - an oddity that doesn't really break things, but should be looked into. "Was producing multiple node certificates when it should have only been producing one" - minor bug, don't include.

  3. Update handler and caller timeouts to be able to overwrite server and client default timeouts regardless of value. js-rpc#47, GitHub Auth Timeout is too low Polykey#588, Integration of js-rpc timeout and middleware changes to fix Auth Provider timeout issue Polykey#589 - critical from a usability standpoint; otherwise not strictly broken. "Increasing the Authentication Timeout window for PK Identities Authenticate" - important from a usability standpoint.

  4. CLI polish and fixes for deployment #44 - critical from a usability standpoint. "General PK CLI polish and fixes" - pretty major (I will say this is a feature enhancement from a usability standpoint).

  5. Timer not cleaned up when cancelled. js-timer#15 - macro task resource leak. "Just a fix where the timers are not being cleaned up properly... a resource leak where they were lingering in memory when they should have been cleaned up" - major; resource leaks are always a problem and the memory would pile up and crash the node, so an important fix.

Needed but functional without

  • A major fix where we fixed NAT hole punching to work in both directions; in certain network situations hole punching was only happening in one direction when it should have been symmetric. - Important

MatrixAI/Polykey#605 - not a super big issue; right now it means stricter NATs can't be punched to. I have an idea for a fix. Not strictly a major error, just something we did not want to be broken. Important for reliability of usage in different network situations.

A lot of this was just fixing up bugs.

"Major Feature enhancement

Features WIP

(Contributor):

0.3.1 was released a few days ago. I'm going to resolve this issue now.

@CryptoTotalWar:

Engineering Updates Summary for Blog Post

Critical Fixes

  • Connection Stability: Implemented general fixes to enhance network stability, addressing critical issues where nodes could crash unexpectedly. This fix is crucial for ensuring reliability. Issue #592 - Important

  • Authentication Enhancements: Increased the Authentication Timeout window for PK Identities Authenticate to improve user experience during the login process. PR MatrixAI/Polykey-Docs#47, Issue #588, PR #589 - Important

  • CLI Usability: Enhancements and polish applied to the Polykey CLI to improve usability. Issue MatrixAI/Polykey-Docs#44 - Important

  • Resource Leak Fix: Addressed a significant issue where timers were not being properly cleaned, leading to potential memory buildup and crashes. Issue MatrixAI/Polykey-Docs#15 - Important

Feature Enhancements

  • NAT Hole Punching: Refined our NAT hole punching mechanisms to support more symmetric network configurations, crucial for the reliability of node communications. Issue #605 - Important

  • Node Discovery Overhaul: Decentralized the node discovery process to enhance network connectivity and reduce reliance on seed nodes. PR #618 - Major Feature Enhancement

  • Environment Variable Command: Added the Polykey ENV command for secure injection of secrets into the environment, simulating a more secure method of handling environment variables. Issue MatrixAI/Polykey-Docs#31 - Major Feature Enhancement

  • CLI Output Standardization: Standardized the output formats of the CLI, improving interface consistency across commands. Issue MatrixAI/Polykey-Docs#22 - Important

Features in Progress (WIP)

  • Audit Domain: Developing an audit domain for better monitoring and tracking of network activities. Issue #177 - WIP

  • Discovery Feedback: Enhancing feedback mechanisms for node discovery to allow real-time insights into the discovery process. Issue #162 - WIP

Not Included

  • Minor bugs and enhancements not critical to the user experience were deemed not important for the blog post, ensuring focus on impactful updates.

This summary will guide the blog post update regarding the CLI enhancements and fixes since the beta release. Please review and confirm the inclusion of these points. - @tegefaulkes

@CMCDragonkai (Member, Author):

I think that should go to your blog issue? This issue is closed. @CryptoTotalWar
