
Won't check failover pools at every restart #8

Closed
mmalka opened this issue Jul 20, 2014 · 42 comments

Comments

@mmalka

mmalka commented Jul 20, 2014

Hello, I noticed that sometimes, in a RARE case, the proxy will not fall back to the failover pool for a long period of time. It does not happen every time, though.

This issue is also kind of rare.

I couldn't capture the log file when it happened (that kind of thing always happens at the worst time...).

The following log almost shows the issue: the proxy connects to the first pool (dead), then tries the second (dead), then goes back to the first instead of going to the third immediately.

LOG REMOVED
@Stratehm
Owner

In fact, I cannot see any problem in the attached logs. Your miners are connected to the third pool (which is the only active one). The other pools try to reconnect every 10 seconds. In the proxy, all pool connections are always kept alive (even if no miners are directed to the pool), and there is no ordering of reconnect attempts: all pools retry reconnection independently of the others.

But someone else told me about a bug of this kind, so it might be real. A big refactoring of this part of the proxy is coming in the next release (0.5.0), and the bug should be fixed there.

I will leave this issue open until the next release.

@mmalka
Author

mmalka commented Jul 20, 2014

I will try to find a log showing this on my next restarts, but you should know it has happened only 3 times in total, and I only saw it clearly once in the log extract.

Also, so you know, I am always using the latest commit before submitting a bug, which at the time of this issue is: 6b3cc91

@mmalka
Author

mmalka commented Jul 23, 2014

I have a problem close to this one (at runtime instead of at startup, but it should be the same problem).

[2014-07-23 21:26:04] JSON-RPC call failed: [
20,
"No pool available on this proxy.",
null
]

I'm not using the latest version yet though... updating it right now :)

But this log should interest you; a friend running the latest revision of release 5 has it as well.

https://mega.co.nz/#!gRwQFbxL!NZCpPa0bL1-W9oPTzn3j6ubsRmG3HMwLEPIx0bp_Tbk

@mmalka
Author

mmalka commented Jul 23, 2014

[2014-07-23 22:01:43] JSON-RPC call failed: [
20,
"No pool available on this proxy.",
null
]

Same with the 0.5.0 snapshot compiled from the latest sources.

LOG REMOVED

@Stratehm
Owner

Ok. I think I get it!

It's not a bug, it's a feature ;) There is a timeout in the proxy on the job notifications received from the pools. The stratum specification states that mining.notify messages should be sent at regular intervals, but there is no upper bound for this interval. So I set the timeout to 120 seconds by default (which is large enough for most pools).
But wafflepool may send job notifications with a larger delay (since it may mine coins with a long block time, which explains why the bug is random), so the timeout is triggered, disconnecting the pool.

Could you test with the timeout set to 0 to disable it? (--pool-no-notify-timeout 0 on command line or "poolNoNotifyTimeout": 0, in the configuration file)

In the 0.5.0-SNAPSHOT version, the default timeout value is now 240 seconds. But you are still on the old configuration file with the 120-second value.
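For anyone following along, here is a sketch of what the configuration-file form mentioned above might look like (only the relevant fragment; the option name is taken from the comment above, and all other options are omitted):

```json
{
  "poolNoNotifyTimeout": 0
}
```

The equivalent on the command line, again per the comment above, is `--pool-no-notify-timeout 0`; a value of 0 disables the notify timeout entirely.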

@mmalka
Author

mmalka commented Jul 23, 2014

I will set it to 0, OK. (It was set to 240 by the way; I follow the commits, not the releases, and have only compiled so far ;))

I have a bug that is even more related to this thread: "won't check failover anymore".

It didn't check my top-priority pool in the failover list for about an hour, so when I re-activated my rig on betarigs... well, it appeared offline.

https://mega.co.nz/#!QcIFCBia!9Vlam96yN5igBENxdepa3fxXBkVEWy_8IzhazFL5UhI

It looks like it completely stopped checking for failover.

The log starts just before the last check of betarigs.com (21:07).

@Stratehm
Owner

Indeed. That is a bug.

I have already seen this behavior once but have never been able to reproduce it. Unfortunately, the DEBUG log level is not enough to investigate. I need logs of this bug at the TRACE level (this level is really verbose and may hurt proxy performance a bit), but it is the only way I have to investigate.

Thank you for your help.

@mmalka
Author

mmalka commented Jul 23, 2014

I had this bug again after only 20 minutes, so I'm going to enable the TRACE level immediately.
I hope TRACE gives you enough information :(

Fortunately, my server is a beast, so it does not hurt performance.

@mmalka
Author

mmalka commented Jul 23, 2014

It happened again...
Here is the TRACE log :)

http://paste2.org/fVm99s9M

Note: it seems to happen very often since I updated to the latest version :/

@Stratehm
Owner

Thank you. I should have all the needed info. I am beginning the investigation (but will surely continue tomorrow).

@mmalka
Author

mmalka commented Jul 23, 2014

2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer] Task ReconnectTask-Betarigs 7740 X11 cancelled. Do not execute.

Maybe thread 4 died?

That is the last time it added the reconnect task:

2014-07-23 23:28:32,866 TRACE [TimerSchedulerThread]: [Timer]    Next task to execute ReconnectTask-Betarigs 7740 X11: waiting for 9999 ms.
2014-07-23 23:28:32,866 TRACE [TimerExecutorThread-4]: [Timer]    Task added => Waking up the scheduler.

It uses TimerExecutorThread-4; maybe that thread vanished?

It could be caused by the timer trying to execute another task at the exact same time as the last reconnect, which then erased the "AddQueue" task.

2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer]    Looking for next task to execute: [Task [isCancelled=true, expectedExecutionTime=1406150922865, name=ReconnectTask-Betarigs 7740 X11], Task [isCancelled=false, expectedExecutionTime=1406150970909, name=HashrateRecoderTask]]
2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer]    Task to execute now: ReconnectTask-Betarigs 7740 X11.
2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer]    Task ReconnectTask-Betarigs 7740 X11 cancelled. Do not execute.
2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer]    Looking for next task to execute: [Task [isCancelled=false, expectedExecutionTime=1406150970909, name=HashrateRecoderTask]]
2014-07-23 23:28:42,866 TRACE [TimerSchedulerThread]: [Timer]    Next task to execute HashrateRecoderTask: waiting for 48043 ms.
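The trace above can be modeled with a small sketch (hypothetical code, not the actual stratum-proxy Timer): if another thread cancels the reconnect task between its scheduling and its execution, the scheduler drops it without anything ever re-queuing a reconnect, which matches the "cancelled. Do not execute." line.

```java
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical minimal model of the scheduler behavior seen in the trace.
public class TimerRaceSketch {
    static class Task implements Comparable<Task> {
        final String name;
        final long executeAt;
        final AtomicBoolean cancelled = new AtomicBoolean(false);
        final Runnable body;
        Task(String name, long executeAt, Runnable body) {
            this.name = name; this.executeAt = executeAt; this.body = body;
        }
        public int compareTo(Task o) { return Long.compare(executeAt, o.executeAt); }
    }

    public static void main(String[] args) {
        PriorityQueue<Task> queue = new PriorityQueue<>();
        boolean[] reconnected = {false};
        Task reconnect = new Task("ReconnectTask-Betarigs",
                System.currentTimeMillis(), () -> reconnected[0] = true);
        queue.add(reconnect);

        // Another thread cancels the task just before the scheduler polls it:
        reconnect.cancelled.set(true);

        // Scheduler loop: a cancelled task is dropped without being re-queued,
        // so no further reconnect is ever attempted.
        Task next = queue.poll();
        if (next != null && !next.cancelled.get()) {
            next.body.run();
        }
        System.out.println("reconnect ran: " + reconnected[0]);
    }
}
```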

@Stratehm
Owner

That's it :) I am building a new snapshot with a fix attempt.

Stratehm added a commit that referenced this issue Jul 23, 2014
@Stratehm
Owner

Or, if you can compile from the last commit and try it, that will be faster.

@mmalka
Author

mmalka commented Jul 23, 2014

Yes, of course I can; it takes me 2 commands lol

Compiled and working; I will update you if it goes offline on betarigs again.

What really changed? I see you changed the line endings of the file, which blew up the diff; unless the fix is related to line endings? :p

Edit: Ah, I see, you changed a few functions to synchronized.

Does that mean you agree with me about the "dual" task thing?

Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods.

@Stratehm
Owner

The line-ending change is due to committing once with Eclipse and once with git-bash.

I have just added some synchronization between threads. The issue is that two threads were trying to schedule the pool restart concurrently, and the first one was cancelling the second one's task.
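A minimal sketch of the kind of synchronization such a fix adds (hypothetical code, not the actual commit): by making the cancel-and-reschedule sequence atomic, one thread can no longer cancel the task that another thread has just scheduled.

```java
import java.util.concurrent.*;

// Hypothetical sketch: serialize reconnect scheduling so that concurrent
// callers cannot cancel each other's freshly scheduled task.
public class ReconnectScheduler {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    // synchronized: only one thread at a time may replace the pending task,
    // so only the stale task is ever cancelled, never the new one.
    public synchronized void scheduleReconnect(Runnable reconnect, long delayMs) {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = timer.schedule(reconnect, delayMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ReconnectScheduler scheduler = new ReconnectScheduler();
        CountDownLatch ran = new CountDownLatch(1);
        // Two near-simultaneous schedules: the first is superseded, the second runs.
        scheduler.scheduleReconnect(() -> System.out.println("stale task ran"), 50);
        scheduler.scheduleReconnect(ran::countDown, 50);
        System.out.println("reconnect ran: " + ran.await(2, TimeUnit.SECONDS));
        scheduler.timer.shutdownNow();
    }
}
```

The design choice here is to treat "schedule a reconnect" as a replace operation guarded by the object's monitor, which removes the interleaving window entirely.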

@Seagulls

I'm not at home to check at the minute, but I am using the latest commit version.

Someone rented my rig on betarigs about an hour ago and it looks like the proxy has failed to switch over to that pool.

I'll investigate later; thinking about it, it may actually be working properly.

@mmalka
Author

mmalka commented Jul 24, 2014

I must note that on my side it has not disconnected for about 13 hours. :)

@mmalka
Author

mmalka commented Jul 24, 2014

The earlier problem ("stops checking failover") seems fixed to me: more than 18 hours without it.

I have another disconnect issue; it happened at 17:39:37 and 17:39:48 GMT+2 (my server is GMT+0, though).

Miner log (GMT+2):

[2014-07-24 17:38:13] GPU #0: GeForce GTX 670, 2420 khash/s
[2014-07-24 17:38:13] accepted: 4394/4476 (98.17%), 5130 khash/s (yay!!!)
[2014-07-24 17:38:52] GPU #1: GeForce GTX 750 Ti, 2701 khash/s
[2014-07-24 17:38:52] accepted: 4395/4477 (98.17%), 5122 khash/s (yay!!!)
[2014-07-24 17:39:02] GPU #1: GeForce GTX 750 Ti, 2694 khash/s
[2014-07-24 17:39:02] accepted: 4396/4478 (98.17%), 5114 khash/s (yay!!!)
[2014-07-24 17:39:03] Stratum detected new block
[2014-07-24 17:39:03] GPU #1: GeForce GTX 750 Ti, 2704 khash/s
[2014-07-24 17:39:03] GPU #0: GeForce GTX 670, 2468 khash/s
[2014-07-24 17:39:37] stratum_recv_line failed
[2014-07-24 17:39:37] Stratum connection interrupted
[2014-07-24 17:39:38] GPU #1: GeForce GTX 750 Ti, 2696 khash/s
[2014-07-24 17:39:38] Stratum detected new block
[2014-07-24 17:39:38] GPU #0: GeForce GTX 670, 2460 khash/s
[2014-07-24 17:39:48] stratum_recv_line failed
[2014-07-24 17:39:48] Stratum connection interrupted
[2014-07-24 17:39:48] GPU #1: GeForce GTX 750 Ti, 2707 khash/s
[2014-07-24 17:39:48] GPU #0: GeForce GTX 670, 2462 khash/s
[2014-07-24 17:39:48] Stratum detected new block
[2014-07-24 17:39:48] Stratum detected new block
[2014-07-24 17:39:49] GPU #1: GeForce GTX 750 Ti, 1942 khash/s
[2014-07-24 17:39:49] accepted: 4397/4479 (98.17%), 4404 khash/s (yay!!!)

Server log (GMT+0):

removed

@Stratehm
Owner

I do not see the disconnection logs on the proxy side. Your miner and proxy clocks may not be well synchronized, since I can see two accepted shares in the proxy logs with 55 seconds between them, while the maximum time between two shares in the miner log is 47 seconds.

So I think the extracts do not match.

What about the betarigs reconnect problem?

@mmalka
Author

mmalka commented Jul 24, 2014

He told me it finally reconnected automatically. It was probably bad pool info from the renter.

Do you want me to extract a bigger log? There may be a difference of minutes on my server, since I haven't run ntpd in a while.

@mmalka
Author

mmalka commented Jul 24, 2014

Here are 4 minutes between :36 and :40; that should cover any reception lag or whatever.

removed, see next

@mmalka
Author

mmalka commented Jul 24, 2014

I think I gave you the wrong time range; it seems my server is in fact GMT+1.
This is a better log, then:
http://paste2.org/kWcgYAxz

I'm dumb, I thought it was GMT+0.

@mmalka
Author

mmalka commented Jul 24, 2014

Another:

[2014-07-24 20:25:03] GPU #1: GeForce GTX 750 Ti, 2627 khash/s
[2014-07-24 20:25:03] accepted: 5015/5099 (98.35%), 5085 khash/s (yay!!!)
[2014-07-24 20:25:16] stratum_recv_line failed
[2014-07-24 20:25:16] Stratum connection interrupted
[2014-07-24 20:25:16] GPU #0: GeForce GTX 670, 2451 khash/s
[2014-07-24 20:25:17] GPU #1: GeForce GTX 750 Ti, 2695 khash/s
[2014-07-24 20:25:17] Stratum detected new block
[2014-07-24 20:25:27] stratum_recv_line failed
[2014-07-24 20:25:27] Stratum connection interrupted
[2014-07-24 20:25:27] GPU #0: GeForce GTX 670, 2454 khash/s
[2014-07-24 20:25:27] Stratum detected new block
[2014-07-24 20:25:27] GPU #1: GeForce GTX 750 Ti, 2712 khash/s
[2014-07-24 20:25:36] GPU #0: GeForce GTX 670, 2406 khash/s
[2014-07-24 20:25:36] accepted: 5016/5100 (98.35%), 5118 khash/s (yay!!!)

With only a 1-minute log:
http://paste2.org/Kwf8E73h

The miner concerned is always 79.115.158.3.
The other IPs are other miners.

2014-07-24 19:25:19,126 INFO [Pool-Nice Hash X11-Thread]: [MonoCurrentPoolStrategyManager] Close connection /79.115.158.3:13571 since the on-the-fly extranonce change is not supported.
2014-07-24 19:25:19,126 DEBUG [Pool-Nice Hash X11-Thread]: [StratumConnection] Closing connection /79.115.158.3:13571...
2014-07-24 19:25:19,126 INFO [Pool-Nice Hash X11-Thread]: [ProxyManager] Worker connection /79.115.158.3:13571 closed. 0 connections active on pool Nice Hash X11. Cause: Change extranonce not supported.
2014-07-24 19:25:19,126 DEBUG [/79.115.158.3:13571-Thread]: [StratumConnection] Closing connection /79.115.158.3:13571...
2014-07-24 19:25:19,126 DEBUG [Pool-Nice Hash X11-Thread]: [Pool] Stopping pool Nice Hash X11...
2014-07-24 19:25:19,126 DEBUG [Pool-Nice Hash X11-Thread]: [StratumConnection] Closing connection Pool-Nice Hash X11...
2014-07-24 19:25:19,127 INFO [Pool-Nice Hash X11-Thread]: [Pool] Pool Nice Hash X11 stopped.
2014-07-24 19:25:19,127 DEBUG [Pool-Nice Hash X11-Thread]: [Pool] Starting pool Nice Hash X11...
2014-07-24 19:25:19,147 DEBUG [Pool-Nice Hash X11-Thread]: [Timer] Scheduling of task SubscribeTimeoutTask-Nice Hash X11 in 5000 ms.
2014-07-24 19:25:19,148 TRACE [Pool-Nice Hash X11-Thread]: [Timer] Expected execution time of task SubscribeTimeoutTask-Nice Hash X11: 1406222724148.
2014-07-24 19:25:19,148 DEBUG [Pool-Nice Hash X11-Thread]: [StratumConnection] Start reading on connection Pool-Nice Hash X11.
2014-07-24 19:25:19,148 TRACE [Pool-Nice Hash X11-Thread]: [Timer] Task added => Waking up the scheduler.

Hmm, it seems I still get this extranonce error, even though I definitely deactivated it in the proxy config.

@Seagulls

See below comment :)

@mmalka
Author

mmalka commented Jul 24, 2014

Here is Seagulls' log file:

http://paste2.org/L8b6KtO0

The log starts at one of the last accepted shares on betarigs before the proxy disconnects from it.
It then tries to reconnect for a while, then leaves the pool forever.

This is when it disconnected from betarigs for the first time:

2014-07-24 08:34:21,379 ERROR [Pool-Betarigs 7740 X11-Thread]: [Pool] Disconnect of pool Pool [name=Betarigs 7740 X11, host=r7641.g78.rigs.eu.betarigs.com:10490, username=sigals-7641, password=x, readySince=Thu Jul 24 04:02:14 BST 2014, isReady=true, isEnabled=true, isStable=true, priority=0, weight=1].
java.io.IOException: EOF on inputStream.
at strat.mining.stratum.proxy.network.StratumConnection$1.run(StratumConnection.java:156)

Also, note the little bug here: in "Disconnect of pool Pool", the pool name is not appended.

@Seagulls

http://pastie.org/9418643

and this is the last mention of betarigs in the log.

@Stratehm
Owner

OK. So for your miner disconnections, here are the relevant log lines:

2014-07-24 16:39:40,576 WARN [Pool-Nice Hash X11-Thread]: [ProxyManager] Pool Nice Hash X11 is DOWN. Moving connections to another one.
2014-07-24 16:39:40,576 DEBUG [Pool-Nice Hash X11-Thread]: [MonoCurrentPoolStrategyManager] Check all worker connections binding.
2014-07-24 16:39:40,576 DEBUG [Pool-Nice Hash X11-Thread]: [MonoCurrentPoolStrategyManager] Current pool: Waffle Pool X11
2014-07-24 16:39:40,576 INFO [Pool-Nice Hash X11-Thread]: [MonoCurrentPoolStrategyManager] Switching worker connections from pool Nice Hash X11 to pool Waffle Pool X11.
2014-07-24 16:39:40,576 INFO [Pool-Nice Hash X11-Thread]: [MonoCurrentPoolStrategyManager] Close connection /79.115.158.3:11509 since the on-the-fly extranonce change is not supported.
2014-07-24 16:39:40,576 DEBUG [Pool-Nice Hash X11-Thread]: [StratumConnection] Closing connection /79.115.158.3:11509...
2014-07-24 16:39:40,577 INFO [Pool-Nice Hash X11-Thread]: [ProxyManager] Worker connection /79.115.158.3:11509 closed. 1 connections active on pool Nice Hash X11. Cause: Change extranonce not supported.
2014-07-24 16:39:40,577 INFO [Pool-Nice Hash X11-Thread]: [WorkerConnection] Rebind connection /176.31.233.35:36323 from pool Nice Hash X11 to pool Waffle Pool X11 with setExtranonce notification.
.
.
.
2014-07-24 16:39:40,677 INFO [StratumProxyManagerSeverSocketListener]: [ProxyManager] New connection on /178.33.17.85:7740 from /79.115.158.3:12000.

As you can see, Nicehash went down and the proxy switched the current pool to WafflePool. One worker was disconnected because it does not support on-the-fly pool switching (the so-called extranonce change). The pool switch for this miner only becomes effective when the miner reconnects (which it does 100 ms later). That is the disconnection you see in your miner logs.

The second miner supports the extranonce change, so the pool switch is immediately effective without a disconnection.

The extranonce configuration in the proxy only enables/disables the extranonce-change subscription on the pools, not on the miners. If a miner advertises that it supports the extranonce change, then every pool switch will use the extranonce change on that miner. If the miner does not advertise extranonce-change support, the miner is disconnected on pool switching. So all seems to be fine.
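The decision described above can be summed up in a small sketch (hypothetical code; the method and return strings are illustrative, not the proxy's actual API):

```java
// Hypothetical sketch of the per-worker decision on a pool switch.
public class PoolSwitchSketch {
    static String onPoolSwitch(boolean minerAdvertisedExtranonceSupport) {
        if (minerAdvertisedExtranonceSupport) {
            // Seamless switch: push the new extranonce to the miner with a
            // mining.set_extranonce notification, no disconnection needed.
            return "rebind with mining.set_extranonce";
        }
        // No support advertised: drop the connection; the miner reconnects
        // and lands on the new current pool (~100 ms later in the logs above).
        return "disconnect worker";
    }

    public static void main(String[] args) {
        System.out.println(onPoolSwitch(true));
        System.out.println(onPoolSwitch(false));
    }
}
```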

I am still investigating for the other bug (the initial problem) with betarigs.

@Stratehm
Owner

For the Betarigs problem, in the Seagulls logs (http://paste2.org/L8b6KtO0), Betarigs seems to request the disconnection (I think because the rental period is over; can you confirm that?). Then, since Nicehash and Wafflepool are down, when miners connect to the proxy no pool is alive, so their connections are rejected.

So the problem I see here is that Wafflepool is down but does not retry to connect. It is the same bug we previously had with betarigs. We can conclude the bug is still here and can impact any pool configured in the proxy.

Nevertheless, the good news is that I am now able to reproduce the bug regularly on my side, so it will be easier to fix.

@mmalka
Author

mmalka commented Jul 25, 2014

The problem is that his rental did not end, and Nicehash was up all day; I was mining on Nicehash 100% of the time that day.

Nicehash could have been considered down, though, because he set a minimum price and the price on Nicehash was very bad that day. So yes, the fact that Waffle is down is a problem.

But I'm pretty sure his rental did not end until the night, and that's the problem.
Edit: confirmed:
zerodashhash rented on July 24, 2014 08:10 for 24 hours

He was rented the whole time, up to today at 08:10 GMT.

@Stratehm
Owner

Ok. What is really odd is that after the disconnection from betarigs, the proxy tries to reconnect to betarigs, but betarigs closes the connection. The following lines show the first disconnection 18 seconds after the last share submission. This disconnection error (EOF on inputStream) happens when the TCP connection is closed remotely (thus the remote pool closed the connection). That is why I supposed the betarigs rental was over.

2014-07-24 08:34:03,127 INFO [Pool-Betarigs 7740 X11-Thread]: [WorkerConnection] Accepted share (diff: 0.03125) from AMD@/127.0.0.1:50104 on Betarigs 7740 X11. Yeah !!!!
.
.
.
2014-07-24 08:34:21,379 ERROR [Pool-Betarigs 7740 X11-Thread]: [Pool] Disconnect of pool Pool [name=Betarigs 7740 X11, host=r7641.g78.rigs.eu.betarigs.com:10490, username=sigals-7641, password=x, readySince=Thu Jul 24 04:02:14 BST 2014, isReady=true, isEnabled=true, isStable=true, priority=0, weight=1].
java.io.IOException: EOF on inputStream.
at strat.mining.stratum.proxy.network.StratumConnection$1.run(StratumConnection.java:156)

Then the proxy tries to reconnect to betarigs 5 seconds later, but the pool rejects the connection (surely because the rig is no longer rented):

2014-07-24 08:34:26,403 INFO [TimerExecutorThread-90]: [Pool] Trying reconnect of pool ReconnectTask-Betarigs 7740 X11...
2014-07-24 08:34:26,465 ERROR [Pool-Betarigs 7740 X11-Thread]: [Pool] Disconnect of pool Pool [name=Betarigs 7740 X11, host=r7641.g78.rigs.eu.betarigs.com:10490, username=sigals-7641, password=x, readySince=Thu Jul 24 04:02:14 BST 2014, isReady=false, isEnabled=true, isStable=false, priority=0, weight=1].
java.io.IOException: EOF on inputStream.
at strat.mining.stratum.proxy.network.StratumConnection$1.run(StratumConnection.java:156)
2014-07-24 08:34:26,465 WARN [Pool-Betarigs 7740 X11-Thread]: [ProxyManager] Pool Betarigs 7740 X11 is DOWN. Moving connections to another one.

Sorry, but this time I can't tell you it is a proxy problem, since the betarigs connection was closed by betarigs and the connection retry happened. From the proxy's point of view, all is fine (except the Wafflepool bug that I am still investigating, of course).

@mmalka
Author

mmalka commented Jul 25, 2014

There is a possibility that the renter's pool went down and betarigs refused connections until it was back online.

The problem is that the proxy stopped checking betarigs at all: the end of the log shows the last reconnect attempt to betarigs in the whole log, followed by about 6 more hours of doing nothing.

When he came back home from work, he just restarted the proxy and it went back to working for betarigs.

@Stratehm
Owner

Indeed, the "renter's pool" is a valid explanation.

My bad, I misread the logs and did not see the reconnect problem at the end.

I have found where the bug is, but I do not yet understand why it happens (the task is not inserted in the queue even though the logs say the contrary). I have added some logs, but since the bug is random, it can take several hours to reproduce.

@mmalka
Author

mmalka commented Jul 25, 2014

You can create an advanced debug-mode branch if you want, and we can run it so we have a better chance of finding this error. It seems quite recurrent for him.

Maybe it happens only when you get kicked from a pool and not when it just goes offline or whatever (kick => message sent / offline => timeout).

@Stratehm
Owner

Done. You can check out the debug branch and run the proxy with the TRACE log level. Thank you.

@mmalka
Author

mmalka commented Jul 25, 2014

Compiled and uploaded here for Seagulls: https://mega.co.nz/#!oAxSmJQB!0HWkgSwgw-xETDbxJ9nc-nOrVIE6nBPWgEheKpdsmpk

PS: You should use Git Extensions; it lets you auto-convert line endings while committing, so you always get the right diff ^^ It should integrate with Eclipse as well, or you can just right-click in the directory and commit from there; you don't lose much time.

@Seagulls

I've not been able to reproduce the bug yet -- still running the debug version with the TRACE log. Hopefully it will come up soon.

@Stratehm
Owner

I am no longer able to reproduce it either. I have run the proxy for 3 days on the debug branch without the issue. The bug seems to have vanished... It is really strange, since I only added logs on the debug branch.

I will revert the logs and change the implementation of the task queue used in the Timer class.
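One way to make such a task queue safe (a sketch of the general idea only; the actual Timer rewrite may differ) is to build it on java.util.concurrent.DelayQueue, which is thread-safe, so a concurrent add and cancel cannot corrupt the queue or lose a task:

```java
import java.util.concurrent.*;

// Hypothetical sketch of a thread-safe timer queue based on DelayQueue.
public class SafeTimer {
    static class TimedTask implements Delayed {
        final Runnable body;
        final long runAt;
        volatile boolean cancelled;
        TimedTask(Runnable body, long delayMs) {
            this.body = body;
            this.runAt = System.currentTimeMillis() + delayMs;
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(runAt - System.currentTimeMillis(),
                    TimeUnit.MILLISECONDS);
        }
        public int compareTo(Delayed o) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                    o.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    private final DelayQueue<TimedTask> queue = new DelayQueue<>();

    public TimedTask schedule(Runnable body, long delayMs) {
        TimedTask task = new TimedTask(body, delayMs);
        queue.add(task); // thread-safe: cannot erase a concurrently added task
        return task;
    }

    // Scheduler thread: take() blocks until the next task is due; cancelled
    // tasks are skipped without disturbing the rest of the queue.
    public void runLoop() throws InterruptedException {
        while (true) {
            TimedTask task = queue.take();
            if (!task.cancelled) {
                task.body.run();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeTimer timer = new SafeTimer();
        CountDownLatch done = new CountDownLatch(1);
        timer.schedule(done::countDown, 10);
        Thread loop = new Thread(() -> {
            try { timer.runLoop(); } catch (InterruptedException ignored) { }
        });
        loop.setDaemon(true);
        loop.start();
        System.out.println("task ran: " + done.await(2, TimeUnit.SECONDS));
    }
}
```

Cancellation here only sets a flag, and the scheduler drops flagged tasks on its own; no thread ever mutates another thread's pending entry directly.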

@Stratehm
Owner

Modifications done. You can now use the last commit of the debug branch. (ea7f4ed)

@mmalka
Author

mmalka commented Jul 27, 2014

I will run it a bit more with the extended log and then switch to the latest. My rig is still down; waiting for 3 new cards, selling 2...

@Stratehm
Owner

The proxy has now been running for 3 days without issues, with nicehash, wafflepool and betarigs configured. I am merging the debug branch into master and closing this bug.

The official 0.5.0 release is ready.

@mmalka
Author

mmalka commented Aug 3, 2014

Hello, I have a log (130 MB) of about 1 week of running; my rig shows OFFLINE on betarigs.

https://mega.co.nz/#!BZQB2A5Y!-9NtKjZWt2nErowkSbCKWJskV_30IxbjOUiX38h3qLU

Fortunately, it is only 5 MB :)

The version was, I think: 572c273

Edit: I don't even see the start of the file (starting proxy..................)

I think your log-split function erased some data along the way when it created the ".0" file.

Note: I pulled the latest commit 08aa73e and will run it with TRACE to report any issue.

PS: You should find a way to fix the end-of-line problem. It's really hard to follow what happened, and you are going to drastically increase the size of the git folder, because git counts EVERY line change; it's as if you were adding the same file 3 or 4 times in 3 or 4 commits.
Last: You also cannot just look back at your old commits to see what you modified and why. I have a private project with 1900 commits, and from time to time I look back at what I did since the beginning; it teaches me a lot about mistakes I made, etc.

@Stratehm
Owner

Stratehm commented Aug 5, 2014

I am away from home for at least 1 week but I will try to find a solution for the end-of-line problem.
