
Custom serialisers to improve the footprint of aldica caches #34

Merged: 13 commits merged into master from feature/serialisation-improvements on Jul 6, 2020

Conversation

@AFaust (Collaborator) commented Jul 1, 2020

CHECKLIST

We will not consider a PR until the following items are checked off--thank you!

  • There aren't existing pull requests attempting to address the issue mentioned here
  • Submission developed in a feature branch - not master

CONVINCING DESCRIPTION

This PR adds a basic set of custom binary serialisers to the Repository module of aldica in order to optimise the serialised state of cache keys and values, and in some cases to avoid warnings emitted by Ignite due to Alfresco classes implementing Externalizable or Serializable hook methods. The various serialisers are all configurable in detail, and simplified, high-level configuration properties are provided to toggle all non-trivial optimisations on/off. In the default configuration, the serialisers use a streamlined serial format without class field metadata and with potentially compressed value fields (e.g. writing primitives instead of their nullable wrapper objects), and substitute both well-known and dynamic values with placeholders that can easily be resolved back to the actual value during deserialisation. The dynamic value substitution is by default limited to entities / values that can be expected to always be stored in the immutableEntitySharedCache, a cache that should be fully replicated among all grid members and therefore allows any required value to be looked up extremely quickly.
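To make the "raw serial form" idea concrete, here is a minimal, hypothetical sketch of such a serialiser using Ignite's public BinarySerializer / raw writer API. ExampleValue and its fields are made up purely for illustration; the actual aldica serialisers (e.g. NodePropertiesBinarySerializer) are considerably more involved and configurable.

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryRawReader;
import org.apache.ignite.binary.BinaryRawWriter;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinarySerializer;
import org.apache.ignite.binary.BinaryWriter;

// hypothetical cache value: a nullable database ID plus a name
class ExampleValue
{
    private Long id;
    private String name;

    public Long getId() { return this.id; }
    public void setId(final Long id) { this.id = id; }
    public String getName() { return this.name; }
    public void setName(final String name) { this.name = name; }
}

// writes the value in a raw serial form without per-field metadata; the nullable
// Long is "compressed" to a presence flag plus a primitive long when present
public class ExampleValueBinarySerializer implements BinarySerializer
{
    @Override
    public void writeBinary(final Object obj, final BinaryWriter writer) throws BinaryObjectException
    {
        final ExampleValue value = (ExampleValue) obj;
        final BinaryRawWriter rawWriter = writer.rawWriter();

        final Long id = value.getId();
        rawWriter.writeBoolean(id != null);
        if (id != null)
        {
            rawWriter.writeLong(id.longValue());
        }
        rawWriter.writeString(value.getName());
    }

    @Override
    public void readBinary(final Object obj, final BinaryReader reader) throws BinaryObjectException
    {
        final ExampleValue value = (ExampleValue) obj;
        final BinaryRawReader rawReader = reader.rawReader();

        if (rawReader.readBoolean())
        {
            value.setId(rawReader.readLong());
        }
        value.setName(rawReader.readString());
    }
}

A serialiser like this would typically be registered for its value class via BinaryTypeConfiguration.setSerializer(...) in the grid's BinaryConfiguration; in aldica the behaviour of the serialisers is toggled via the configuration properties mentioned above.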

All custom serialisers come with their own test cases verifying correct functionality and the relative change in memory footprint for instances of the affected classes. Not all serialisers provide an improved memory footprint in all situations, but the very few instances where the footprint actually grows can either be accepted because they affect rarely used value classes (ModuleVersionNumber), or be ignored because the difference is negligible or because we do not intend the specific constellation of configuration properties resulting in the higher footprint to be used in real environments.

The memory benchmark has been enhanced, redone and re-documented. The expansion of the test to also cover content data and properties of type d:noderef and d:qname has partially compensated for the reduced footprint resulting from our custom serialisers, so the overall improvement compared to Alfresco default is still in the 20 - 25% range with regard to reduced memory requirements. Additionally, the benchmark now also takes a very high-level look at the throughput of read operations on already initialised caches, and here we are able to show that aldica caches can generally provide better performance with less memory, despite the added overhead of serialisation / deserialisation.

I still have further ideas for optimising our serialisation, but want to have the current state ready and merged in time for the 1.0.0 release and Tech Talk Live. Any future improvements will be done in new feature branches.

RELATED INFORMATION

N/A

@AFaust AFaust requested a review from andreaskring July 1, 2020 23:38
@AFaust AFaust force-pushed the feature/serialisation-improvements branch from 651ccf2 to 85cbcf9 Compare July 3, 2020 10:59
andreaskring previously approved these changes Jul 3, 2020

@andreaskring (Collaborator) left a comment


Impressive amount of work! It looks good to me. I just have one comment about enabling "useIdsWhenPossible" by default, as most users of the module would probably enable it anyway given the results of the memory benchmark tests:

${moduleId}.core.binary.optimisation.enabled=true
${moduleId}.core.binary.optimisation.useRawSerial=\${${moduleId}.core.binary.optimisation.enabled}
${moduleId}.core.binary.optimisation.useIdsWhenReasonable=\${${moduleId}.core.binary.optimisation.enabled}
${moduleId}.core.binary.optimisation.useIdsWhenPossible=false
@andreaskring (Collaborator) commented:

This could be set to true as the default value (as you suggested earlier), since the memory footprint / throughput numbers are very good in this case.

@AFaust (Collaborator, Author) replied:

In the updated branch, useIdsWhenPossible now also inherits from enabled, meaning it is enabled by default.
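For reference, following the property pattern quoted above, the defaults in the updated branch would then roughly look like this (a sketch, not the verbatim configuration file):

${moduleId}.core.binary.optimisation.enabled=true
${moduleId}.core.binary.optimisation.useRawSerial=\${${moduleId}.core.binary.optimisation.enabled}
${moduleId}.core.binary.optimisation.useIdsWhenReasonable=\${${moduleId}.core.binary.optimisation.enabled}
${moduleId}.core.binary.optimisation.useIdsWhenPossible=\${${moduleId}.core.binary.optimisation.enabled}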

- analysis of unexpected results for aspectsSharedCache
- optional preloadQNames URL parameter for benchmark to verify limited
  impact of TransactionalCache flaw
- enable useIdsWhenPossible by default
- add missing specific cache types for various local/invalidating caches
@AFaust (Collaborator, Author) commented Jul 4, 2020

Note: PR is not yet ready to be merged. I have been running the Repository-tier integration tests and when the second Repository instance tries to join the cache grid, there appears to be a bit of a partition exchange deadlock issue. I am not sure why / for what purpose, but some grid cache message handling is triggering a deserialisation of node properties cache entries, which in turn performs a lookup on the content data cache for reconstituting the value map, and gets stuck there. It may just be an issue with our Ignite thread pool configuration, which was previously set to be very conservative, i.e. everything that was not clear as to how it was used / needed was reduced to a minimal value.
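If the conservative pool sizing were indeed the culprit, the knobs in question would be along the lines of the standard IgniteConfiguration setters below (a sketch with made-up values, not our actual configuration):

import org.apache.ignite.configuration.IgniteConfiguration;

public class ThreadPoolConfigSketch
{
    public static IgniteConfiguration apply(final IgniteConfiguration cfg)
    {
        // striped pool processes cache messages - too few stripes means a single
        // blocking deserialisation can starve message handling for other entries
        cfg.setStripedPoolSize(8);
        // system pool handles internal / utility operations
        cfg.setSystemThreadPoolSize(8);
        // public pool handles compute and general user tasks
        cfg.setPublicThreadPoolSize(8);
        return cfg;
    }
}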

2020-07-04 20:46:52,750 WARN  [org.apache.ignite.internal.util.typedef.G] [grid-timeout-worker-#23%repositoryGrid%] >>> Possible starvation in striped pool.
    Thread name: sys-stripe-5-#6%repositoryGrid%
    Queue: [o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@791ea6e0, o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@398bd866, o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@7a9b6984]
    Deadlock: false
    Completed: 32
Thread [name="sys-stripe-5-#6%repositoryGrid%", id=60, state=WAITING, blockCnt=0, waitCnt=28]
        at java.base@11.0.1/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@11.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:323)
        at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
        at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
        at o.a.i.i.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4867)
        at o.a.i.i.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4826)
        at o.a.i.i.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1484)
        at o.a.i.i.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1090)
        at o.a.i.i.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
        at org.aldica.repo.ignite.cache.SimpleIgniteBackedCache.get(SimpleIgniteBackedCache.java:257)
        at jdk.internal.reflect.GeneratedMethodAccessor148.invoke(Unknown Source)
        at java.base@11.0.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base@11.0.1/java.lang.reflect.Method.invoke(Method.java:566)
        at org.aldica.repo.ignite.cache.SimpleCacheInvoker.invoke(SimpleCacheInvoker.java:49)
        at org.aldica.repo.ignite.cache.CacheFactoryImpl$SimpleLazySwapCacheInvoker.invoke(CacheFactoryImpl.java:721)
        at com.sun.proxy.$Proxy21.get(Unknown Source)
        at org.alfresco.repo.cache.TransactionalCache.getSharedCacheValue(TransactionalCache.java:460)
        at org.alfresco.repo.cache.TransactionalCache.get(TransactionalCache.java:663)
        at org.alfresco.repo.cache.lookup.EntityLookupCache.getByKey(EntityLookupCache.java:312)
        at org.alfresco.repo.domain.contentdata.AbstractContentDataDAOImpl.getContentData(AbstractContentDataDAOImpl.java:189)
        at org.aldica.repo.ignite.binary.NodePropertiesBinarySerializer.readPropertiesRawSerialForm(NodePropertiesBinarySerializer.java:430)
        at org.aldica.repo.ignite.binary.NodePropertiesBinarySerializer.readBinary(NodePropertiesBinarySerializer.java:177)
        at o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:876)
        at o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
        at o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
        at o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
        at o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
        at o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
        at o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:888)
        at o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
        at o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
        at o.a.i.i.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:792)
        at o.a.i.i.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
        at o.a.i.i.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:176)
        at o.a.i.i.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
        at o.a.i.i.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:136)
        at o.a.i.i.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1808)
        at o.a.i.i.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1796)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.completeFuture(GridNearAtomicAbstractUpdateFuture.java:353)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:301)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(GridDhtAtomicCache.java:3324)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$500(GridDhtAtomicCache.java:141)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:292)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:287)
        at o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
        at o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
        at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
        at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
        at o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:109)
        at o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
        at o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
        at o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
        at o.a.i.i.managers.communication.GridIoManager.access$5200(GridIoManager.java:229)
        at o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
        at o.a.i.i.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
        at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120)
        at java.base@11.0.1/java.lang.Thread.run(Thread.java:834)

@AFaust (Collaborator, Author) commented Jul 5, 2020

So the problem - for some reason - does not occur when running the same state of Ignite in an external, Docker-Compose based deployment. Repeated integration tests also consistently show a slightly different stack trace. Looking at these additional details, it appears that the initialisation of Repository-tier web scripts is processing Data Dictionary deployed web scripts, loading the properties of a node therein with ContentDataWithId as one property value, which subsequently leads to the ContentDataDAO being asked to resolve the ID. This is actually a result of the last change to enable ${moduleId}.core.binary.optimisation.useIdsWhenPossible. The problem manifests itself in that somehow all system / stripe thread pool threads end up waiting / blocked on this single operation.

@AFaust (Collaborator, Author) commented Jul 5, 2020

Disabling ${moduleId}.core.binary.optimisation.useIdsWhenPossible results in the integration test succeeding again, though there is no explanation yet as to why the test / startup does not fail when running with useIdsWhenPossible enabled in a separate Docker-Compose setup.

@AFaust (Collaborator, Author) commented Jul 5, 2020

Alright - the problem seems to be fixed now. Please verify as part of the review by running the integration tests - I have added a new profile in the Repository-tier sub-module to run the integration tests but skip the regular unit tests (which now can take quite some time), using mvn clean install -Ddocker.tests.enabled=true -P surefireSuppression.

The main purpose of the fix is to avoid any deserialisation occurring on Ignite threads, since deserialisation with our serialisation improvements can mean access to another cache. And whenever a cache is accessed, Ignite internally uses a Future on a disconnected thread to load the actual value, which may require a remote call in case of a partitioned cache, and may in turn involve another deserialisation. By using withKeepBinary on Ignite caches during retrieval-like operations, we move all deserialisation handling to client code, meaning the original thread of the cache call, freeing up Ignite threads and avoiding lock-ups.
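As a rough illustration of the pattern (not the exact SimpleIgniteBackedCache code), withKeepBinary hands back the opaque BinaryObject, so deserialisation - and any nested cache lookups it triggers - happens on the caller's thread:

import java.io.Serializable;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class KeepBinaryGetSketch
{
    // assumes a complex (non-primitive) value type that Ignite stores in binary form
    public static <V> V get(final IgniteCache<Serializable, V> cache, final Serializable key)
    {
        final IgniteCache<Serializable, BinaryObject> binaryCache = cache.withKeepBinary();
        // the get() executed via Ignite's thread pools only ever deals with the binary value
        final BinaryObject binaryValue = binaryCache.get(key);
        // deserialisation happens here, on the calling thread
        return binaryValue != null ? binaryValue.<V>deserialize() : null;
    }
}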

@andreaskring (Collaborator) commented:
I have tried to run mvn clean install -Ddocker.tests.enabled=true -P surefireSuppression from the project root folder with both Java 8 and Java 11 (the latter in this case), and I experience some issues:

[INFO] DOCKER> [aldica-repository-test:latest] "repository": Start container 4e1d94b97a0d
[INFO] DOCKER> [aldica-repository-test:latest] "repository": Waiting on url http://localhost:8180/alfresco/favicon.ico with method GET for status 200.
[ERROR] DOCKER> [aldica-repository-test:latest] "repository": Timeout after 180468 ms while waiting on url http://localhost:8180/alfresco/favicon.ico
[ERROR] DOCKER> Error occurred during container startup, shutting down...
[INFO] DOCKER> [aldica-repository-test:latest] "repository": Stop and removed container 4e1d94b97a0d after 0 ms
[INFO] DOCKER> [postgres:11.4] "postgres": Stop and removed container d953107a23a8 after 0 ms
[ERROR] DOCKER> I/O Error [[aldica-repository-test:latest] "repository": Timeout after 180468 ms while waiting on url http://localhost:8180/alfresco/favicon.ico]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Alternative/Alfresco Distributed Cache - Parent 1.0.0-SNAPSHOT:
[INFO] 
[INFO] Alternative/Alfresco Distributed Cache - Parent .... SUCCESS [  1.006 s]
[INFO] Alternative/Alfresco Distributed Cache - Common Ignite Library SUCCESS [ 20.861 s]
[INFO] Alternative/Alfresco Distributed Cache - Repository Ignite Module FAILURE [03:20 min]
[INFO] Alternative/Alfresco Distributed Cache - Share Ignite Module SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  03:42 min
[INFO] Finished at: 2020-07-06T09:09:49+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.31.0:start (start-test-containers) on project aldica-repo-ignite: I/O Error: [aldica-repository-test:latest] "repository": Timeout after 180468 ms while waiting on url http://localhost:8180/alfresco/favicon.ico -> [Help 1]

Inspecting the Docker logs for the repo gives:

$ docker logs -f aldica-repository-test-1

06-Jul-2020 07:06:48.653 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
06-Jul-2020 07:06:48.662 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/alfresco]
06-Jul-2020 07:06:49.126 WARNING [localhost-startStop-1] org.apache.juli.ClassLoaderLogManager.readConfiguration Reading /usr/local/tomcat/webapps/alfresco/WEB-INF/classes/logging.properties is not permitted. See "per context logging" in the default catalina.policy file.
06-Jul-2020 07:06:49.129 WARNING [localhost-startStop-1] org.apache.juli.ClassLoaderLogManager.readConfiguration Reading /usr/local/tomcat/webapps/alfresco/WEB-INF/classes/logging.properties is not permitted. See "per context logging" in the default catalina.policy file.
06-Jul-2020 07:06:55.390 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.ignite.internal.util.GridUnsafe$2 (file:/usr/local/tomcat/webapps/alfresco/WEB-INF/lib/ignite-core-2.8.1.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of org.apache.ignite.internal.util.GridUnsafe$2
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
06-Jul-2020 07:07:11.960 INFO [localhost-startStop-1] org.artofsolving.jodconverter.office.ProcessPoolOfficeManager.<init> ProcessManager implementation is LinuxProcessManager
06-Jul-2020 07:07:11.968 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start Using original OpenOffice command: [/opt/libreoffice5.4/program/soffice.bin, -accept=socket,host=127.0.0.1,port=8100;urp;, -env:UserInstallation=file:///usr/local/tomcat/temp/.jodconverter_socket_host-127.0.0.1_port-8100, -headless, -nocrashreport, -nodefault, -nofirststartwizard, -nolockcheck, -nologo, -norestore]
06-Jul-2020 07:07:11.968 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start starting process with acceptString 'socket,host=127.0.0.1,port=8100,tcpNoDelay=1' and profileDir '/usr/local/tomcat/temp/.jodconverter_socket_host-127.0.0.1_port-8100'
06-Jul-2020 07:07:11.972 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start started process; pid = 91
06-Jul-2020 07:07:12.523 WARNING [OfficeProcessThread-0] org.artofsolving.jodconverter.office.ManagedOfficeProcess$6.attempt office process died with exit code 81; restarting it
06-Jul-2020 07:07:12.535 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start Using original OpenOffice command: [/opt/libreoffice5.4/program/soffice.bin, -accept=socket,host=127.0.0.1,port=8100;urp;, -env:UserInstallation=file:///usr/local/tomcat/temp/.jodconverter_socket_host-127.0.0.1_port-8100, -headless, -nocrashreport, -nodefault, -nofirststartwizard, -nolockcheck, -nologo, -norestore]
06-Jul-2020 07:07:12.536 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start starting process with acceptString 'socket,host=127.0.0.1,port=8100,tcpNoDelay=1' and profileDir '/usr/local/tomcat/temp/.jodconverter_socket_host-127.0.0.1_port-8100'
06-Jul-2020 07:07:12.556 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeProcess.start started process; pid = 101
06-Jul-2020 07:07:12.882 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.OfficeConnection.connect connected: 'socket,host=127.0.0.1,port=8100,tcpNoDelay=1'
06-Jul-2020 07:07:12.912 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
06-Jul-2020 07:07:12.916 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [/alfresco] startup failed due to previous errors
06-Jul-2020 07:07:13.071 INFO [localhost-startStop-1] org.artofsolving.jodconverter.office.ProcessPoolOfficeManager.stop stopping
06-Jul-2020 07:07:13.138 INFO [MessageDispatcher] org.artofsolving.jodconverter.office.OfficeConnection$1.disposing disconnected: 'socket,host=127.0.0.1,port=8100,tcpNoDelay=1'
06-Jul-2020 07:07:13.327 INFO [OfficeProcessThread-0] org.artofsolving.jodconverter.office.ManagedOfficeProcess.doEnsureProcessExited process exited with code 0
06-Jul-2020 07:07:13.367 INFO [localhost-startStop-1] org.artofsolving.jodconverter.office.ProcessPoolOfficeManager.stop stopped
06-Jul-2020 07:07:13.743 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [alfresco] appears to have started a thread named [QuartzScheduler_Worker-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.1/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
06-Jul-2020 07:07:13.743 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [alfresco] appears to have started a thread named [QuartzScheduler_Worker-2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.1/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
06-Jul-2020 07:07:13.744 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [alfresco] appears to have started a thread named [Thread-5] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.1/java.lang.Thread.sleep(Native Method)
 org.quartz.core.QuartzScheduler$1.run(QuartzScheduler.java:562)
 java.base@11.0.1/java.lang.Thread.run(Thread.java:834)
06-Jul-2020 07:07:13.744 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [alfresco] appears to have started a thread named [Timer-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.1/java.lang.Object.wait(Native Method)
 java.base@11.0.1/java.lang.Object.wait(Object.java:328)
 java.base@11.0.1/java.util.TimerThread.mainLoop(Timer.java:527)
 java.base@11.0.1/java.util.TimerThread.run(Timer.java:506)
...

@andreaskring (Collaborator) commented:
The problem above was due to some old Docker volumes... everything works now

@AFaust (Collaborator, Author) commented Jul 6, 2020

As per your last feedback and our web meeting this morning, I am merging this PR although you have not formally given your approval. This is to ensure we can continue working on the remaining, outstanding PR to target a release before this week's TTL (Tech Talk Live).

@AFaust AFaust merged commit ac0d0b0 into master Jul 6, 2020
@AFaust AFaust deleted the feature/serialisation-improvements branch July 7, 2020 21:23