
Has anyone ever live traded with ccxt.pro? #190

Closed
ian-wazowski opened this issue Jan 18, 2021 · 25 comments

@ian-wazowski
Contributor

ian-wazowski commented Jan 18, 2021

I'm looking at the ccxt.pro source code, but I'm not 100% sure whether live trading works well with it.

It's also a bit unsettling that some exchanges' data checksum logic is left as TODO.

Does it work well?

@jpmediadev
Contributor

haven't tried

@ian-wazowski
Contributor Author

> haven't tried

OK, thanks for answering.

@cjdsellers
Member

I've been working on the integration using Binance and BitMEX. There are still some things to work out with it. Which exchange are you guys looking to trade on?

@jpmediadev
Contributor

I am still far from real trading ...

@ian-wazowski
Contributor Author

ian-wazowski commented Jan 18, 2021

> Which exchange are you guys looking to trade on?

@cjdsellers

I will be doing live trading on Binance and Binance Futures this week. Then I plan to start trading on FTX (FTX handles sub-accounts very well and also offers a FIX engine; I've heard it's institutional grade).

And I'm reviewing what you're doing on Binance and BitMEX.

Is there anything I can do to help? Maybe do some live trading and give feedback via a PR?

@ian-wazowski
Contributor Author

> I am still far from real trading ...

You will get there soon.

@cjdsellers
Member

@ian-wazowski Some help would be much appreciated.

Right now I'm just reworking some things with handling identifiers. Going forward we'll have to integrate each exchange individually, as the features of this platform are too advanced to be satisfied by the CCXT Pro unified API. That library can still be used though, as one has the option of passing custom arguments, and responses always contain the original API response in the info key. So right now it works well for faster iteration and for getting things going.

So basically the unified API will allow access to live and historical market data, and probably order books, for the 27 exchanges they currently show as implemented. For live trading on the unified API, probably only MARKET and LIMIT orders with delayed fill reporting (not aggregated, only on fully filled) will be possible. Individual APIs will be fully integrated with all order types and features, and I'm starting with Binance (largest spot volume) and BitMEX (largest derivatives volume).
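As a rough sketch of what that unified-API usage looks like (illustrative only, not this project's integration code; the symbol, order values, and the `newClientOrderId` param here are placeholders, assuming ccxt.pro is installed and API keys are configured):

```python
# A minimal sketch of the unified-API pattern described above: exchange-specific
# extras go through `params`, and the raw exchange payload is always under `info`.
import asyncio
import ccxtpro  # ccxt.pro

async def main():
    exchange = ccxtpro.binance({'apiKey': '...', 'secret': '...'})
    try:
        # Unified market data: the same call shape across supported exchanges
        book = await exchange.watch_order_book('BTC/USDT')
        print(book['bids'][0], book['asks'][0])

        # Unified order placement; exchange-specific extras go through `params`
        order = await exchange.create_order(
            'BTC/USDT', 'limit', 'buy', 0.001, 30000.0,
            params={'newClientOrderId': 'my-custom-id'},
        )
        print(order['status'])  # unified field
        print(order['info'])    # the original, raw exchange response
    finally:
        await exchange.close()

asyncio.run(main())
```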

I'll add an explanation for this in the README with a table showing the exchanges where we support advanced features, and the integration status.

Give me a few days to do some refactoring and I'll let you know how you could help.

Have you integrated a FIX engine before? I have experience with FXCM and LMAX FIX 4.4 for FX - they're not so easy to get right, and the tag usage can often be quite different between providers - but it is very fast.

Cheers!

@cjdsellers
Member

cjdsellers commented Jan 18, 2021

Just to be clear, CCXT Pro will still be used for these initial integrations. I just have to do some refactoring to cleanly separate the fully implemented exchanges from the unified capability exchanges.

@cjdsellers
Member

cjdsellers commented Jan 19, 2021

I've reconsidered this. I think it's too unsafe to allow people to trade through the unified API; we wouldn't be able to guarantee the behavior, and there would be a lot of bugs and issues.

So the unified API will probably just be for market data including the order book via CCXTDataClient.

For live execution the aim will be to do one good solid integration at a time. Right now I'll use the CCXT WebSockets under the hood, but this will be swappable.
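For illustration, the "swappable under the hood" idea could look something like this minimal sketch (names here are illustrative only, not the actual CCXTDataClient implementation):

```python
# A sketch of a small data-client seam: the engine depends on an abstract
# interface, and the current concrete client happens to be backed by CCXT Pro.
import abc

class LiveDataClientSketch(abc.ABC):
    @abc.abstractmethod
    async def subscribe_order_book(self, symbol: str) -> None:
        """Start streaming order book updates for the given symbol."""

class CCXTBackedClientSketch(LiveDataClientSketch):
    def __init__(self, exchange) -> None:
        self._exchange = exchange  # a ccxt.pro exchange instance

    async def subscribe_order_book(self, symbol: str) -> None:
        book = await self._exchange.watch_order_book(symbol)
        # hand `book` off to the data engine here
```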

I sorted a lot of issues with Binance today and it seemed to be running well, although more testing is needed before I'd call it stable. Implemented were LIMIT_MAKER (by selecting post_only), LIMIT, and MARKET order types for spot, plus all of the time-in-force options for the applicable order types. Placement and cancellation of all the orders were working well.

Amongst other things I'll get BitMEX going again tomorrow.

@ian-wazowski
Contributor Author

ian-wazowski commented Jan 19, 2021

Thank you for the thoughtful answer. Really.

> Have you integrated a FIX engine before? I have experience with FXCM and LMAX FIX 4.4 for FX - they're not so easy to get right, and the tag usage can often be quite different between providers - but it is very fast.

I have never integrated FIX as a main role before, but I have experience integrating 60+ exchanges and 3 brokerages (covering spot/derivatives/stocks/commodities) for market making and StatArb.

Ok, then I will look for other ways to improve the project until the integration is complete.

@cjdsellers
Member

cjdsellers commented Jan 19, 2021

That's some great experience!

Actually it would be good if you could test out some live trading on Binance, if you wanted to. I've been using a very small account for it (a few hundred USD worth) and using minimum order values.

Or if performance profiling is your thing I've been doing some analysis into the performance of the system, as found in the performance tests. I think there's a bottleneck between a trader calling submit_order in the strategy, and it finally being sent as a REST request. It's taking over 1000 microseconds (μs) and sometimes spikes as high as 4ms (4000μs).

It doesn't have anything to do with object creation, as all of that is super low: 0.4μs to make a Price, 14μs to make an Order, etc. Instantiating a Price is only slightly slower than a built-in decimal.Decimal, so all of the subclassing and domain-driven design isn't hurting performance too much at all.
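If anyone wants to reproduce that kind of object-creation measurement, a quick timeit sketch along these lines works (PriceLike here is just a stand-in value object, not the actual Price class; absolute numbers will differ by machine):

```python
# Micro-benchmark: compare construction cost of a trivial domain-style wrapper
# against the decimal.Decimal baseline it wraps.
import timeit
from decimal import Decimal

class PriceLike:
    __slots__ = ("value",)
    def __init__(self, value: str) -> None:
        self.value = Decimal(value)

n = 100_000
baseline = timeit.timeit(lambda: Decimal("1.00000"), number=n)
domain = timeit.timeit(lambda: PriceLike("1.00000"), number=n)
print(f"Decimal:   {baseline / n * 1e6:.2f} us per call")
print(f"PriceLike: {domain / n * 1e6:.2f} us per call")
```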

Using a Cythonized strategy reduces that by around 400μs, but there must be something else getting in the way as I would expect and target 500μs or much less.

I'm not expecting to find it in calls to cdef methods either; I'm expecting it could be around the event loop / asyncio, bearing in mind we're using uvloop, which uses libuv through Cython under the hood. There's still a lot of performance potential in the platform.

I'll report back my findings.

@jpmediadev
Contributor

jpmediadev commented Jan 19, 2021

> I think there's a bottleneck between a trader calling submit_order in the strategy, and it finally being sent as a REST request. It's taking over 1000 microseconds (μs) and sometimes spikes as high as 4ms (4000μs).

It looks like DNS delay; I came across this back when I was writing high-load services in Perl %)
The solution is to cache DNS lookups:

https://github.com/jayvdb/dns-cache

Python requests uses urllib3, which uses socket.getaddrinfo, which has DNS caching disabled according to this SO thread (given that your test machine runs Linux).

Further optimization can be done at the Linux level (TCP/IP keep-alive timeouts, etc.) and by recompiling the kernel.
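For illustration, the general idea of a process-wide DNS cache can be sketched by wrapping socket.getaddrinfo (this is not the dns-cache package itself, just a minimal version of the technique; a production cache would also need entry expiry):

```python
# Cache DNS lookups by wrapping socket.getaddrinfo, which requests/urllib3
# end up calling for every new connection. Assumes endpoints are long-lived.
import functools
import socket

_original_getaddrinfo = socket.getaddrinfo

@functools.lru_cache(maxsize=256)
def _cached_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    return _original_getaddrinfo(host, port, family, type, proto, flags)

socket.getaddrinfo = _cached_getaddrinfo  # monkey-patch for the whole process

# After this, repeated HTTPS requests to the same exchange host skip the resolver,
# e.g.: import requests; requests.get("https://api.binance.com/api/v3/time")
```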

@jpmediadev
Contributor

I didn't translate that quite correctly... in general, a DNS cache also won't hurt ))

@cjdsellers
Member

cjdsellers commented Jan 19, 2021

It'll be valuable to look into performant networking some more. However, for clarity, the OrderSubmitted event is being generated and sent back to the execution engine just before the coro which sends the REST request is awaited.

This is because there's another task listening for order events, and even though the event loop is based on cooperative multi-tasking, we want to ensure that the OrderAccepted (NEW) event doesn't get generated before the submitted one.
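A minimal asyncio sketch of that ordering guarantee (illustrative names only, not the actual engine code): the SUBMITTED event is put on the event queue before the REST coroutine is awaited, so the listener can never observe ACCEPTED first.

```python
import asyncio

async def send_rest_request(events: asyncio.Queue, order_id: str) -> None:
    # Stand-in for the awaited REST call; the exchange response yields ACCEPTED (NEW)
    await asyncio.sleep(0.01)
    await events.put(("OrderAccepted", order_id))

async def submit_order(events: asyncio.Queue, order_id: str) -> None:
    # Generate and publish the SUBMITTED event first...
    await events.put(("OrderSubmitted", order_id))
    # ...and only then await the coroutine that sends the REST request
    await send_rest_request(events, order_id)

async def event_listener(events: asyncio.Queue) -> None:
    for _ in range(2):
        event, order_id = await events.get()
        print(event, order_id)  # OrderSubmitted always prints before OrderAccepted

async def main() -> None:
    events: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(event_listener(events), submit_order(events, "O-001"))

asyncio.run(main())
```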

@jpmediadev
Contributor

I hear that HFT shops use the LMAX Disruptor:
https://lmax-exchange.github.io/disruptor/

Cython code:
https://github.com/random-python/data_pipe

@cjdsellers
Member

That looks really interesting. I'm aware of the disruptor pattern; it's very elegant.

I'm going to find the exact source of the bottleneck first as it may be something really simple.

Right now I'm back on integrations.

I'm starting to consider writing a networking module for async REST and WebSockets to get off CCXT. It could take a while though, that's all.

@cjdsellers
Member

cjdsellers commented Jan 20, 2021

One thing I did recently was make sure the LiveLogger is running in a separate process. So the system just puts log messages on a multiprocessing.Queue, which that logging process pulls off sequentially to work on.

Before that I had re-implemented queue.Queue in Cython, stripping out all of the unnecessary parts the log queue didn't need and C-typing everything I could - that resulted in some performance gains, so I can do the same thing to the multiprocessing.Queue if I find that's a bottleneck.

So it is possible to do multi-threaded "parallel" programming with Python, although it's faux threading because it's actually a whole separate process. multiprocessing Pipes can also work in this way.

Later I'll be adding some sinks for Logstash -> ELK and Prometheus, so piping things out to those reporting processes will be a good idea.
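A minimal sketch of that arrangement (illustrative names and log format only, not the actual LiveLogger): the trading process only pays the cost of enqueueing a message, and a separate process drains the queue sequentially.

```python
import multiprocessing as mp

def log_worker(queue):
    """Runs in a separate process: drain the queue and emit each record in order."""
    while True:
        record = queue.get()
        if record is None:  # sentinel to shut the worker down
            break
        print(record, flush=True)  # in practice: write to file / Logstash / etc.

if __name__ == "__main__":
    log_queue = mp.Queue()
    worker = mp.Process(target=log_worker, args=(log_queue,), daemon=True)
    worker.start()

    # The hot path only enqueues the message and moves on
    log_queue.put("2021-01-20T00:00:00Z [INF] TRADER-000: state=RUNNING")
    log_queue.put(None)
    worker.join()
```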

@cjdsellers
Member

@ian-wazowski hold off on the live testing for the moment. I'm making some big changes.

I'm removing the redundant OrderWorking event and OrderStatus.WORKING enum, as an order status of NEW is commonly regarded as an "accepted" order anyway, and can implicitly be considered to be working if it's some type of passive order. This corresponds to the FIX integrations I've done before and what I'm seeing from the exchange APIs.

This will be more efficient as it avoids some object creations and message passing, and also simplifies the state space for orders.
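For illustration, the simplified state space could look something like this sketch (status names here are indicative only, not the exact enum in the codebase):

```python
from enum import Enum, auto

class OrderStatusSketch(Enum):
    INITIALIZED = auto()
    SUBMITTED = auto()
    ACCEPTED = auto()          # exchange reports NEW; implicitly "working" if passive
    PARTIALLY_FILLED = auto()
    FILLED = auto()
    CANCELLED = auto()
    REJECTED = auto()
    # no separate WORKING status
```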

Thoughts guys?

@ian-wazowski
Contributor Author

It's easier to turn off DNS lookups altogether to get the most out of your network performance. We don't need DNS because exchange and brokerage endpoints are static (when one has to change, they notify us) and, once set, they seldom change. Doing this can shave off an extra 10~15% of network latency (CentOS 7 64-bit, latest kernel).

@ian-wazowski
Contributor Author

ian-wazowski commented Jan 20, 2021

> @ian-wazowski hold off on the live testing for the moment. I'm making some big changes.
>
> I'm removing the redundant OrderWorking event and OrderStatus.WORKING enum, as an order status of NEW is commonly regarded as an "accepted" order anyway, and can implicitly be considered to be working if it's some type of passive order. This corresponds to the FIX integrations I've done before and what I'm seeing from the exchange APIs.
>
> This will be more efficient as it avoids some object creations and message passing, and also simplifies the state space for orders.
>
> Thoughts guys?

Agreed, it seems an efficient choice from both a micro-optimization point of view and a DDD point of view.

With the OrderWorking event we could express a more detailed (and logically correct) order state, but I don't think it's essential for trading.

@ian-wazowski
Contributor Author

> One thing I did recently was make sure the LiveLogger is running in a separate process. So the system just puts log messages on a multiprocessing.Queue, which that logging process pulls off sequentially to work on.
>
> Before that I had re-implemented queue.Queue in Cython, stripping out all of the unnecessary parts the log queue didn't need and C-typing everything I could - that resulted in some performance gains, so I can do the same thing to the multiprocessing.Queue if I find that's a bottleneck.
>
> So it is possible to do multi-threaded "parallel" programming with Python, although it's faux threading because it's actually a whole separate process. multiprocessing Pipes can also work in this way.
>
> Later I'll be adding some sinks for Logstash -> ELK and Prometheus, so piping things out to those reporting processes will be a good idea.

ELK or fluentd is a good option, but how about this? Vector.

@ian-wazowski
Contributor Author

> I hear that HFT shops use the LMAX Disruptor:
> https://lmax-exchange.github.io/disruptor/
>
> Cython code:
> https://github.com/random-python/data_pipe

Good, thanks!

@jpmediadev
Contributor

> It's easier to turn off DNS lookups altogether to get the most out of your network performance. We don't need DNS because exchange and brokerage endpoints are static (when one has to change, they notify us) and, once set, they seldom change. Doing this can shave off an extra 10~15% of network latency (CentOS 7 64-bit, latest kernel).

That will work if the exchange or broker provides you with such an API.

Above a certain volume, Binance has such an option.

The standard public Binance API won't work by raw IP, because there is a load balancer, and you also need to encrypt traffic via HTTPS.

@cjdsellers
Member

> One thing I did recently was make sure the LiveLogger is running in a separate process. So the system just puts log messages on a multiprocessing.Queue, which that logging process pulls off sequentially to work on.
> Before that I had re-implemented queue.Queue in Cython, stripping out all of the unnecessary parts the log queue didn't need and C-typing everything I could - that resulted in some performance gains, so I can do the same thing to the multiprocessing.Queue if I find that's a bottleneck.
> So it is possible to do multi-threaded "parallel" programming with Python, although it's faux threading because it's actually a whole separate process. multiprocessing Pipes can also work in this way.
> Later I'll be adding some sinks for Logstash -> ELK and Prometheus, so piping things out to those reporting processes will be a good idea.

> ELK or fluentd is a good option, but how about this? Vector.

This looks great. Big plus that it's written in Rust too. I'll have to do some research and have a discussion when that stage comes up.

@ian-wazowski
Contributor Author

OK, closing this issue.
