
Implement io_per_cpu method for both Asio and Beast consumer #5

Closed
wants to merge 1 commit into from

Conversation

denisbertini (Collaborator)

This PR adds an additional option to the Geneva executables which allows the user to use an io_per_cpu-enabled Asio or Beast consumer.

The boost::asio documentation currently gives no motivation for choosing between the different I/O designs. That is why I did some profiling studies here at GSI and noticed an improvement in the clients' CPU usage with the io_per_cpu method on our cluster. That is the reason why I propose this PR.

See the details of the client CPU efficiency study here.

This PR also reinstates the changes I proposed in a previous PR, related to the gcc 8.x series, which requires linking an extra library: stdc++fs. The current implementation is not working.

@denisbertini (Collaborator, Author)

The details of the client CPU efficiency study can be found here.

namespace net = boost::asio;

class GIoContexts
: private boost::noncopyable
Collaborator:

boost::noncopyable is no longer needed with current C++. You can instead delete the corresponding copy constructor, assignment operator and, where appropriate, the move operations.
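As a minimal sketch of the suggestion (class name borrowed from the file under review, members omitted), the =delete approach could look like this:

```cpp
#include <type_traits>

// Sketch only: a non-copyable class using =delete instead of
// inheriting from boost::noncopyable. Members are omitted.
class GIoContexts {
public:
    GIoContexts() = default;

    // These deleted operations replace boost::noncopyable:
    GIoContexts(const GIoContexts&) = delete;
    GIoContexts& operator=(const GIoContexts&) = delete;

    // Delete the move operations as well if the class must not move:
    GIoContexts(GIoContexts&&) = delete;
    GIoContexts& operator=(GIoContexts&&) = delete;
};

static_assert(!std::is_copy_constructible<GIoContexts>::value,
              "GIoContexts must not be copyable");
```

The compiler then rejects any copy or move at the call site, with no extra base class or Boost dependency.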


for (auto &ioc: m_ioContexts)
{
std::shared_ptr<std::thread> thread(
Collaborator:

Formatting seems funny, at least in CLion. For now we do not have a good clang-format file, so I suggest ignoring this for the moment. We need to discuss tabs vs. spaces.

denisbertini (Collaborator, Author), Mar 8, 2021:

I do not use clang-format for now, only Emacs auto-formatting.


namespace net = boost::asio;

class GIoContexts
Collaborator:

Add a short description of what this class is for, using Doxygen format.

{
public:

GIoContexts(int c_size)
Collaborator:

Suggest adding a small Doxygen description to each member function, describing its purpose and arguments.



private:
int m_size;
Collaborator:

Suggest using std types here, not a plain int. Also, comment what the variable does, using a Doxygen-style comment.
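A hedged sketch of what the suggested Doxygen comments and std types could look like (the names and semantics are assumptions for illustration, not the PR's actual code):

```cpp
#include <cstddef>

/**
 * @brief Maintains a pool of io_context objects, one per worker thread.
 *
 * Sketch of the suggested Doxygen style; the real class would also
 * hold the contexts and threads.
 */
class GIoContexts {
public:
    /**
     * @brief Creates the pool.
     * @param pool_size The number of io_context objects to create
     */
    explicit GIoContexts(std::size_t pool_size) : m_size(pool_size) {}

    /** @brief Returns the configured pool size. */
    std::size_t size() const { return m_size; }

private:
    std::size_t m_size; ///< The number of io_context objects in the pool
};
```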

// Add Just anOther consumer

template<typename processable_type>
class GAsioConsumerPT
Collaborator:

There is essentially no documentation here. What is this class for, what differentiates it from the others, etc.? Also, what is the rationale behind the "P" in the name? The "T" in the Geneva convention stands for "Template".

public:
GAsioConsumerPT() = default;

GAsioConsumerPT(int io_context_pool_size)
rberlich (Collaborator), Feb 10, 2021:

Why a plain int type? Suggest using std types where possible.

namespace po = boost::program_options;

visible.add_options()
("asio_ip", po::value<std::string>(&m_server)->default_value(GCONSUMERDEFAULTSERVER),
Collaborator:

It is not clear to me whether this class is meant as an alternative to the GAsioConsumerT class or is meant to live alongside it. If the latter, then aren't you duplicating command line options here?

@@ -1508,6 +1513,445 @@ class GWebsocketConsumerT
//-------------------------------------------------------------------------
};


// Add just another consumer
Collaborator:

Not really enough as documentation :-)

// Add just another consumer

template<typename processable_type>
class GWebsocketConsumerPT
Collaborator:

What is the rationale behind this? Is this a replacement for the "old" GWebsocketConsumerT, or is it meant to be used alongside it?

/** @brief The default constructor */
GWebsocketConsumerPT() = default;

GWebsocketConsumerPT(int io_context_pool_size)
Collaborator:

Formatting is funny in my editor, probably spaces vs. tabs. We need a suitable clang-format file.

}
//-------------------------------------------------------------------------
// Deleted copy-/move-constructors and assignment operators.
GWebsocketConsumerPT(const GWebsocketConsumerPT<processable_type>&) = delete;
Collaborator:

This, for me, is the proper way to make a class non-copyable. I had seen the use of boost::noncopyable in another file (GIoContexts?). It could still have been "inherited" from my code (then it is my fault :-) ). Nevertheless, please eliminate boost::noncopyable wherever you see it and replace it with =delete constructs.

namespace po = boost::program_options;

visible.add_options()
("beast_ip", po::value<std::string>(&m_server)->default_value(GCONSUMERDEFAULTSERVER),
Collaborator:

This seems to duplicate the command line settings from GWebsocketConsumerT. It is not clear to me whether this class is meant as an alternative or a replacement?

("consumer,c", po::value<std::string>(&m_consumer_name)->default_value("stc"), consumer_help.str().c_str())
("ioc,i", po::value<int>(&m_ioc)->default_value(0),
"io_per_cpu (network based consumers only");

Collaborator:

Funny formatting, at least in my CLion


("consumer,c", po::value<std::string>(&m_consumer_name)->default_value("stc"), consumer_help.str().c_str())
("ioc,i", po::value<int>(&m_ioc)->default_value(0),
"io_per_cpu (network based consumers only");
Collaborator:

I suggest a better description of what this option is for, or of where to find information. Also, a closing ')' is missing in the description string.

@@ -891,6 +893,18 @@ void Go2::parseCommandLine(
);
}

if ( vm.count("ioc") && m_ioc != 0 ){
rberlich (Collaborator), Feb 10, 2021:

This does not look like the right place to register the consumer. The geneva namespace has a GIndividualStandardConsumers.hpp. This may be a funny way to do things, but we should not introduce consumer registration in places where other consumers are not registered. So I suggest following the way of GWebsocketConsumerT etc. My understanding is that you want to replace that class, and that your version works better. If you are sure about that, remove that class, document yours and register the consumer in the same way as all other consumers in Geneva.

denisbertini (Collaborator, Author):

Well, here I did not find an easy way to implement that; maybe you could help?

@rberlich (Collaborator), Mar 7, 2021:

Hi Denis, I have seen no further commits on this branch. I would like to try to unify the use of io_per_cpu in GAsioConsumerT and GWebsocketConsumerT as a single, possibly default-argumented constructor option. Most code in your version seems to be identical to the old version, and the relevant changes seem to be concentrated in GIoContexts, so this should not be too difficult. For this purpose I have now cloned your PR into rb_db_feature_cpueff and will submit it as a new PR to you when ready. I will try to address my comments to your code in my edits.

@denisbertini (Collaborator, Author):

Hi Ruediger,
it would be very nice if you could implement io_per_cpu as an option to the original class.
Thanks!

@rberlich (Collaborator), Mar 7, 2021:

Hi Denis, sched.h seems to be POSIX-specific and hence not readily available in Windows. How would one implement the same functionality there?

@rberlich (Collaborator), Mar 7, 2021:

Hi Denis, there is no license header in GIoContexts.hpp. Can I assume that it stands under the Apache v2 license, as do most other files in Geneva?
I have no idea here; I am not at all familiar with license issues.

@rberlich (Collaborator), Mar 7, 2021:

Yet another question: in GIoContexts.hpp you create multiple io_context objects and then call run() on each of them in its own thread. Following this documentation: https://www.boost.org/doc/libs/develop/doc/html/boost_asio/overview/core/threads.html it is perfectly possible to just call run() on the same io_context object multiple times, once from each thread in a thread group. So I do not understand for what reason multiple io_context objects are created in your case? Edit: I just tried to research this further. Boris Schäling does not have a clear answer on this topic either -- cf. https://dieboostcppbibliotheken.de/boost.asio-skalierbarkeit . Did you do any benchmarks of both options? Calling run() multiple times on the same io_context seems easier to me ...

@denisbertini (Collaborator, Author), Mar 8, 2021:

> Hi Denis, sched.h seems to be POSIX-specific and hence not readily available in Windows. How would one implement the same functionality there?

Yes, it is Linux/Unix-specific.
Unfortunately I have no Windows system on which to find the Windows equivalent of what I used.
Maybe one of you could find out?

@rberlich (Collaborator), Mar 8, 2021:

> Yes, it is Linux/Unix-specific. Unfortunately I have no Windows system on which to find the Windows equivalent of what I used. Maybe one of you could find out?

O.k., I have already implemented a means of detecting Linux from within the application (through a CMake define), and the if ( m_pinned ) { section is very isolated. So it should be possible to build this into Geneva conditionally, depending on the OS. Mind you, we need to maintain compatibility with MSVC and macOS.

Do I understand correctly that the if ( m_pinned ) { block is the essence of your submission? I.e., the rest of the edits mostly covers transferring the number of threads for the io_contexts into the program, right?

@denisbertini (Collaborator, Author), Mar 8, 2021:

> Yet another question: In GIoContexts.hpp you create multiple io_context objects, then call run() on each of them in its own thread. [...] Did you do any benchmarks of both options? Calling multiple runs on the same io_context seems easier to me ...

Agreed.
Furthermore, I also thought that using one io_context instance and multiple threads to process incoming data would be more performant than using a separate io_context instance per thread.
But with this approach we have, in our case, a real scalability problem, and I was investigating whether other techniques exist.
The Boost documentation describes different techniques, but no hints are given to help the user choose one or the other according to the application context.
Using multiple io_context objects is just one of the various techniques used with the Boost.Asio library.
If, for example, you look at the boost::asio HTTP Server 2 example, you'll see an io_context pool where a number of boost::asio::io_context instances are created in the constructor and packed into a std::vector. Each io_context is then started in its own thread. As the example is referred to as the io_service-per-CPU design, it may be for demonstration only.
But I could not find in the Boost documentation which technique to use for scalability, which is our interest here at GSI.
It is unfortunately left to users to try the various techniques and evaluate the pros and cons themselves.
So I tried it in our case, and it seems to perform better for the pure Boost.Asio consumer and quite similar for the Boost.Beast consumer.
In fact, in my benchmarks the best results are obtained when I:

  • use one io_context per logical processor, and
  • only allocate memory from the CPU/NUMA node the thread belongs to (the pinned option).

The drawback of this approach compared to the original Geneva one is that it can lead to a lot of potential cross-talk and cache thrashing between processors; it might even be worse on some systems. That is the reason why the io_per_cpu approach is not a replacement for the original Geneva one.

The scalability problem is not really solved with either technique, though: when using more than a few hundred (~400) clients, the efficiency drops and impacts the overall performance of the Geneva application.
Meanwhile I have implemented an MPI-based Geneva producer/consumer prototype, and I am now able to compare the scalability of the MPI and the Boost.Asio/Boost.Beast approaches.
With MPI I obtain good scalability up to 1000 and more processes, which is very promising.
I will present the first results today in our group meeting.

@rberlich (Collaborator), Mar 8, 2021:

> Hi Denis, there is no license header in GIoContexts.hpp. Can I assume that it stands under the Apache v2 license, as do most other files in Geneva?
> I have no idea here; I am not at all familiar with license issues.

Hi Denis, all merges to the Geneva library have to be put under the Apache v2 license; otherwise we will have a mix of licenses and unclear usage rights. It would be sufficient for you to give me permission to put your contributions under Apache v2. If necessary, consult with Kilian.

@denisbertini (Collaborator, Author):

> Do I understand correctly that the if ( m_pinned ) { block is the essence of your submission? I.e., the rest of the edits mostly covers transferring the number of threads for the io_contexts into the program, right?

Yes.
The code after if ( m_pinned ) is, as you realized, OS-dependent, and what you suggest (OS detection via CMake, activating the corresponding code) is the proper way.
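As a sketch of the conditional build discussed here (simplified to a plain `__linux__` preprocessor check; the real solution would hang this off a CMake define, and the function name is made up), the POSIX-only pinning could be isolated like this:

```cpp
#include <thread>

#if defined(__linux__)
#include <pthread.h>
#include <sched.h>
#endif

// Pins a thread to one logical CPU on Linux; on MSVC/macOS this
// compiles to a no-op that reports failure. Sketch only, assuming a
// free function rather than the PR's inline if ( m_pinned ) block.
inline bool pin_to_cpu(std::thread& t, int cpu) {
#if defined(__linux__)
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(t.native_handle(),
                                  sizeof(cpu_set_t), &set) == 0;
#else
    (void)t;
    (void)cpu;
    return false;  // pinning not implemented on this platform
#endif
}
```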

@denisbertini (Collaborator, Author):

> Hi Denis, all merges to the Geneva library have to be put under the Apache v2 license; otherwise we will have a mix of licenses and unclear usage rights. It would be sufficient for you to give me permission to put your contributions under Apache v2. If necessary, consult with Kilian.

I discussed this with Kilian; he said that we can use the Apache v2 license, so just go ahead with it!

@rberlich (Collaborator), Mar 8, 2021:

> I discussed this with Kilian; he said that we can use the Apache v2 license, so just go ahead with it!

Thanks a lot. I will then add the Apache header, with a reference to the origin in your work.

@rberlich (Collaborator), Mar 8, 2021:

I suggest that we use a c_size of 0 for auto-detection of the number of threads, equal to hardware_concurrency. Otherwise one has to write machine-dependent configuration files. We can use -0 for pinning.
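A small sketch of how such a convention could be resolved (the function name and the reading of the proposal are assumptions; since -0 equals 0 for an int, the sketch treats any negative value as the pinning request):

```cpp
#include <cstddef>
#include <cstdlib>
#include <thread>

struct PoolConfig {
    std::size_t size;  // number of io_context objects / threads
    bool pinned;       // whether threads should be pinned to CPUs
};

// 0 selects std::thread::hardware_concurrency(); a negative value
// additionally requests pinning; otherwise the magnitude is the size.
inline PoolConfig resolve_pool_size(int c_size) {
    const bool pinned = c_size < 0;
    std::size_t n = static_cast<std::size_t>(std::abs(c_size));
    if (n == 0) {
        n = std::thread::hardware_concurrency();
        if (n == 0) n = 1;  // hardware_concurrency() may return 0
    }
    return PoolConfig{n, pinned};
}
```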

@rberlich (Collaborator), Mar 8, 2021:

Another question: you have this code:

    explicit GAsioConsumerPT( int io_context_pool_size )
        : m_io_contexts( std::abs( io_context_pool_size ) )
        , m_acceptor( m_io_contexts.get() )
        , m_socket( m_io_contexts.get() )
    {
        std::cout << "-I- GAsioConsumerPT() created with pool size: " << io_context_pool_size << std::endl;
    }

As there is only one consumer, which holds the GIoContexts object, there is only a single call to m_io_contexts.get(). This means that it is always the same io_context object which is used by the acceptor or socket.

It is not clear to me what the consequence of this is. Wouldn't an individual socket or acceptor be needed for each io_context? But then we would have different ports, right?

BTW -- in your answers you can use "Quote Reply", so you do not need to edit the original post :-)

@denisbertini (Collaborator, Author):

> As there is only one consumer, which holds the GIoContexts object, there is only a single call to m_io_contexts.get(). This means that it is always the same io_context object which is used by the acceptor or socket.
> It is not clear to me what the consequence of this is. Wouldn't an individual socket or acceptor be needed for each io_context? But then we would have different ports, right?

Yes.
The difference comes mainly when the session for the communication is created.


I do not think it is necessary to have an individual acceptor/socket per io_context object; it works fine like this.
I am also not sure about the deeper consequences of such a choice.
If you look at the implementation provided by the HTTP Server 2 example, the same choice is made.

@rberlich (Collaborator), Mar 8, 2021:

Hi Denis, o.k., the HTTP Server 2 example was created for Boost 1.44. Now, in Boost 1.75, the example isn't even available any longer ...

@denisbertini (Collaborator, Author):

> Hi Denis, o.k., the HTTP Server 2 example was created for Boost 1.44. Now, in Boost 1.75, the example isn't even available any longer ...

Indeed, but the technique itself is still valid and working ... I was testing with v1.74. I suppose it should also be OK for 1.75.

@rberlich (Collaborator), Mar 8, 2021:

> Indeed, but the technique itself is still valid and working ... I was testing with v1.74. I suppose it should also be OK for 1.75.

Hi Denis, what I am trying to understand: a socket is essentially a queue for incoming or outgoing data, so many requests targeting one socket instead of many should have a bad effect on performance. And no, I have never dissected a socket, so there are for sure things I do not understand here. Still, it is not clear to me how multiple io_context objects can work more efficiently with one socket than a single io_context does, as there will be a need for synchronization.

@denisbertini (Collaborator, Author):

Hi Ruediger,
interesting question!
My approach here was experimental only, and I did not dig into the precise implementation of Boost.Asio to understand the possible reasons for the performance differences we are experiencing.
What has always puzzled me is why Beast and websockets outperform pure Boost.Asio by so much (in our case), since Beast is built on top of Boost.Asio itself!
With io_per_cpu the performance of asio and websockets is very similar, even when using just 2 listener threads on the server side!
Furthermore, as I said, it could well be that these performance differences are system-dependent.
To be able to answer your question, one could come up with very simple test benchmarks (outside of the Geneva framework) comparing these different approaches.
I can come up with some test benchmarks, if you want, which should help us understand better what is going on.
What do you think?

@rberlich (Collaborator), Mar 9, 2021:

> My approach here was experimental only, and I did not dig into the precise implementation of Boost.Asio to understand the possible reasons for the performance differences we are experiencing. [...] I can come up with some test benchmarks, if you want, which should help us understand better what is going on. What do you think?

That would certainly be helpful -- just a minimal data transfer (a single integer?) with scalable numbers of clients and server threads, with a) one io_context object and multiple run() calls, or b) multiple io_context objects, each with its own run(). It should be possible to recycle some of the ASIO demos for this. O.k., come to think of it, maybe we also want Beast vs. ASIO, and scalable sizes of data packets. It may make a difference if the threads need to do computational work themselves. Maybe you could also start from this: https://github.com/rberlich/Estray . This was a testbed for the Beast integration -- now a bit outdated (2018). It would be great if we could a) update it with ASIO and b) update it with the multiple-io_context options. I think I also need to bring the Beast part up to current standards.

@denisbertini (Collaborator, Author):

Sounds good to me!

@rberlich (Collaborator), Mar 9, 2021:

And we should probably implement the same protocol in Estray as we use in Geneva. This way we could transfer the solution 1:1 once Estray performs nicely. MPI is nice, but not an option for the Cloud context, so I still believe ASIO and Beast might be useful.

@denisbertini (Collaborator, Author):

Agreed!

@rberlich (Collaborator), Mar 9, 2021:

Regarding io_context I am wondering: if the first io_context is assigned the socket, but several io_context objects nevertheless share their work, what happens to other io_context objects instantiated outside of the GIoContexts class? I.e., if other io_context objects are created somewhere in the address space of Geneva (this happens e.g. for the thread pools we use), do they share their work with the consumer, and vice versa? That way heavy traffic would directly influence the processing. The whole inner workings of ASIO are unfortunately still a miracle to me :-(

@denisbertini (Collaborator, Author):

> And we should probably implement the same protocol in Estray as we use in Geneva. This way we could transfer the solution 1:1 once Estray performs nicely. MPI is nice, but not an option for the Cloud context, so I still believe ASIO and Beast might be useful.

Well, even for cloud computing, using MPI is possible!
One can, for example, create one's own MPI cluster on EC2 using the nice automatic setup tool from MIT, StarCluster.
This tool provides OpenMPI and even MPICH as plugins, so the user can compile and run MPI programs in an easy way in the cloud.
How MPI will perform is another question, but it is possible!

@denisbertini (Collaborator, Author):


Can I create a branch and commit on your Git repo Estray?
