
Speedscan bug fixes #1900

Merged
merged 43 commits into from Apr 9, 2017

Conversation

@rocketrobot99
Contributor

rocketrobot99 commented Mar 2, 2017

Many bug fixes and improvements to Speedscan

Description

This PR has many bug fixes and changes to the Speedscan scheduler.
While this will work with an existing database, it will identify TTHs more accurately if the DB is wiped.

Motivation and Context

Reviewing the logs showed many warning messages complaining about Pokemon not being where they should be. I investigated these and fixed bugs to make the warnings go away. TTHs should also be correctly identified eventually (this can still take several hours).

Two routines have been changed significantly: get_cell_to_linked_spawn_points and select_in_hex.
The changes are needed because spawnpoints can occur within the overlaps of scan locations:
image

However, when we queue a spawn point, we only need it queued once, so the changes attribute each spawnpoint to one scan location only.
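The one-cell attribution can be sketched like this (a minimal illustration with hypothetical helper and field names, not RocketMap's actual code): each spawnpoint is assigned to the nearest cell center, so a point in an overlap is queued exactly once.

```python
def assign_to_one_cell(spawnpoints, cell_centers):
    # Attribute each spawnpoint to exactly one scan cell (the nearest
    # center by squared degree distance), so overlapping cells never
    # queue the same point twice.
    assignment = {cid: [] for cid in cell_centers}
    for sp in spawnpoints:
        nearest = min(
            cell_centers,
            key=lambda cid: (cell_centers[cid][0] - sp['lat']) ** 2 +
                            (cell_centers[cid][1] - sp['lng']) ** 2)
        assignment[nearest].append(sp['id'])
    return assignment

cells = {'a': (0.0, 0.0), 'b': (0.0, 0.02)}
sps = [{'id': 1, 'lat': 0.0, 'lng': 0.005},   # lies in the overlap
       {'id': 2, 'lat': 0.0, 'lng': 0.019}]
print(assign_to_one_cell(sps, cells))  # {'a': [1], 'b': [2]}
```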

Another significant change was to use the 'current_timestamp_ms' from the GET_MAP_OBJECTS response. A lot of problems were caused by using the current time, rather than the time the scan took place.
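The idea can be sketched as below (the plain dict stands in for the GET_MAP_OBJECTS response object; only the field name current_timestamp_ms comes from the PR, the rest is illustrative):

```python
from datetime import datetime

def scan_time(map_response):
    # Prefer the server's own timestamp for when the scan took place;
    # only fall back to the local clock if the field is missing.
    ts_ms = map_response.get('current_timestamp_ms')
    if ts_ms:
        return datetime.utcfromtimestamp(ts_ms / 1000.0)
    return datetime.utcnow()

print(scan_time({'current_timestamp_ms': 1488499200000}))
# 2017-03-03 00:00:00
```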

Performance improvements were also included, and previously uncaught exceptions are now handled.

How Has This Been Tested?

Testing will be documented per change, relating to an issue. Some tests will run on a clean database, others may have data already fully populated. Some may require data manipulation to force scenarios. All SQL used will be put here:

  • put into mem.txt - This SQL loads the tables into memory and creates history tables for scannedlocation and spawnpoints. Triggers are also created to populate the history tables, in order to help track what happened and when.
  • check_kind.txt - This SQL checks the 'kind' of the existing spawnpoints that have a TTH and whose associated cell is done.
  • running.txt - This SQL displays information about how the map is progressing.

Clearing the database is performed by running the map with -cd, then stopping it:
image

CPU Usage

Here are some CPU graphs created using netdata. They were taken with the server running 2 instances (-st 35 and -st 40), with a total of 18,000 spawnpoints and a total of 480 workers.
This was with the new code:
2 instances 480 workers

This was where I stopped the new code and started using the current code again:
2 instances current

And finally with the current code after all the workers have settled in:
image

As far as I can see, the overall CPU usage may actually be slightly lower with the new code, but there is not much in it.

**Update:** after a commit to fix CPU usage, it now looks like this:
image

I would say it is about 40% lower on my setup than current dev.

Issue 1

Speedscan delay in scanning Spawns is too high:

Current behavior:
Running with an already populated, TTH'ed database using the current code, here is the output from an -ST 30 Map:
image

On average the scan delay is about 100 seconds.

Changes:
Using the actual timestamp from the scan (see issue 5), this is returned from parse_map
image
It is then sent into task_done and used to calculate the actual difference between the start of the scan item and the actual scan time.
image
This ensures that the correct values are recorded and reported. However, another change was to have the workers target the item's 'start' value accurately. Here we have next_item looking for when the worker will arrive at the cell:
image
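In outline (illustrative names, not the exact RocketMap ones), the reported delay becomes a function of the scan's own timestamp rather than the wall clock:

```python
def scan_delay_secs(item_start_ms, scan_timestamp_ms):
    # Delay between when the queue item became due and when the scan
    # actually took place, measured with the response timestamp rather
    # than the wall clock at processing time.
    return (scan_timestamp_ms - item_start_ms) / 1000.0

# an item due at t0 that was actually scanned 50 s later:
print(scan_delay_secs(1488499200000, 1488499250000))  # 50.0
```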

New behavior:
Running with the new code on the same database, now already populated and 67% TTH'ed, here is the output from an -ST 30 map:

On average the scan delay is now about 50 seconds (after the queue has caught up).
image

Issue 2

Speedscan distance calculations take too long:

Current behavior:
This is the log from a typical -ST 50 map at startup when it does the distance calcs:
image
The time taken is 1min 19secs.

Changes made:
The routine link_spawn_points has some additional lines to limit calls to the CPU-intensive 'in_radius' check. It narrows down the spawns that could be in radius by first testing whether they fall within a square box around the cell location.
image
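A minimal sketch of that prefilter (assumed helper names, standalone tuples instead of Spawnpoint rows): a cheap degree-sized bounding box rejects most candidates before the expensive great-circle distance is computed.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def haversine_m(p, q):
    # Exact great-circle distance; this is the CPU-heavy call the
    # bounding box is there to avoid.
    lat1, lng1, lat2, lng2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def in_radius_prefiltered(center, points, radius_m):
    # Cheap square bounding box first, exact distance check second.
    lat_deg = radius_m / 111320.0  # ~metres per degree of latitude
    lng_deg = lat_deg / max(math.cos(math.radians(center[0])), 1e-9)
    hits = []
    for pt in points:
        if (abs(pt[0] - center[0]) > lat_deg or
                abs(pt[1] - center[1]) > lng_deg):
            continue  # outside the box: skip the expensive call
        if haversine_m(center, pt) <= radius_m:
            hits.append(pt)
    return hits

print(in_radius_prefiltered((0.0, 0.0), [(0.0, 0.001), (0.5, 0.5)], 200))
# [(0.0, 0.001)]
```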

New behavior:
This is the log from the same setup, using the new code:
image
The time taken is 12 seconds.

Logs:
issue2_logs.zip

Issue 3

Large -ST hexes have distorted Cells at the map edges:

Current behavior:
This is a pic from a typical -ST 100 map showing the 'squashing' effect on the left and the 'spacing out' effect on the right:
left and right

Changes made:
The routine _generate_locations was re-written and simplified. The cells no longer need to be created in a particular order. The routine works by building each 'ring' of the hex, one at a time, from the center outwards. For each ring, the six vertices are calculated first, then the cells along each edge of the ring are filled in, starting at the East and working clockwise.
image
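The ring-by-ring construction can be sketched in axial hex coordinates (an illustration of the approach only; the actual _generate_locations works in lat/lng):

```python
def hex_ring(ring):
    # One ring of cells in axial (q, r) hex coordinates: start at the
    # eastern vertex and take `ring` steps along each of the six edges,
    # working clockwise.
    if ring == 0:
        return [(0, 0)]
    directions = [(-1, 1), (-1, 0), (0, -1), (1, -1), (1, 0), (0, 1)]
    cells = []
    q, r = ring, 0  # this ring's eastern vertex
    for dq, dr in directions:
        for _ in range(ring):
            cells.append((q, r))
            q, r = q + dq, r + dr
    return cells

def generate_hex(rings):
    # Build the whole hex ring by ring, from the centre outwards, so no
    # particular creation order is needed.
    cells = []
    for ring in range(rings + 1):
        cells.extend(hex_ring(ring))
    return cells

print(len(generate_hex(2)))  # 19 (1 + 6 + 12 cells)
```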

New behavior:
The same locations on the left and right of the map are no longer distorted:
left_after and after_right

Issue 4

RocketMap can stop with an uncaught exception:

Current behavior:
image

Changes:
In search.py, an exception handler was added to catch any problems while copying the queue items.
image
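The shape of the fix looks roughly like this (a sketch with a hypothetical helper; the actual handler in search.py wraps RocketMap's own queue-copy code):

```python
import logging

log = logging.getLogger(__name__)

def snapshot_queue(queue_items):
    # Copying a live queue can race with a concurrent refresh; catch
    # anything that goes wrong, log it, and return an empty snapshot so
    # the worker loop keeps running instead of crashing the map.
    try:
        return list(queue_items)
    except Exception:
        log.exception('Exception while copying queue items; skipping.')
        return []

print(snapshot_queue([1, 2, 3]))  # [1, 2, 3]
print(snapshot_queue(None))       # [] (the TypeError is caught and logged)
```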

New behavior:
image
The exception is caught and the Map continues.

Issue 5

Scannedlocation bands don't match the spawnpoint detection data and don't correspond to when the scan actually took place:

Current behavior:
Looking at a typical scannedlocation record:
image

then comparing it with the spawnpointdetectiondata:
image

In this example the _919 detection data doesn't match with the band4 time of 918 on the cell. This is not a major difference, but unifying the time will help solve a lot of problems.

Change:
In the parse_map routine, the timestamp of the request output is used where available.
image
This is also passed to other routines and used when storing scannedlocation and SPs
image
and
image

New behavior:
image
image

Issue 6

Unable to locate a TTH with a scan, just after the latest seen scan:
If a TTH scan does not produce a TTH, then a scan of the same SP should still produce a pokemon if scanned within 89 secs of the previous scan.

Current behavior:
image
Something is not right here; taking a look at the scannedlocation history shows nothing out of the ordinary:
image

Changes:
See the changes in issues 9, 10 and 5. It is this combination of changes that fixes this issue.

New Behavior:
image
This particular warning should now be eliminated.

Issue 7

Spawnpoints have an incorrect 'kind':

Current behavior:
From the 'running' SQL we can see we have 3695 SPs so far.
image

Running the check_kind SQL compares the stored kind of a spawnpoint with a calculated one. This is only for SPs with a TTH and where the cell scanning is complete. The output shows that 336 SPs have an incorrect kind:
image

Changes:
This includes all changes from issue 5. Plus changes to classify:
image

New behavior:
After running an -ST 60, the check_kind SQL shows no incorrect records:
image

Issue 8

Spawnpoints are not counted correctly:

Current behavior:
For a single -ST 30 instance, running a count of spawnpoints reads:
image
From the logs it is:
image

The map is reporting fewer SPs than expected.

Also, for multiple -ST 30 instances, running a count of spawnpoints reads:
image
However from the logs:
image
4611+919+190+469+2026+4059+1771 = 14045
We have more SPs being reported than actually exist, because of the SPs in the overlaps.

Changes:
I have modified cell_to_linked_spawn_points so that an SP is assigned to only one Cell, and so is only counted, searched for and reported once.
image
Similar logic was applied to create a new Spawnpoint.Select_in_hex_by_cellids:
image

The existing Spawnpoint.Select_in_hex was renamed to Spawnpoint.Select_in_hex_by_location and is still needed on startup, in case the location has changed.

New behavior:
We have 11568 SPs
image
From the logs..
image
3699 + 803 + 140 + 293 + 1781 + 3255 + 1597 = 11568

Issue 9

The queue can refresh between an item from the queue being scanned and when it is sent to task_done:

Current behavior:
Currently, there is no code to detect whether the queue has been refreshed after an item has been scanned but before it is handled in task_done. All that is stored is the index of the item in the queue, which can lead to problems if the queue has changed.

Changes:
Added a means to detect whether we are still dealing with the same item or not. Thanks to @onilton for this suggestion.
image
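One way to picture the check (a hypothetical minimal class, not the PR's actual implementation): tag the queue with a version, hand the version out with each item, and compare it again in task_done.

```python
class SpeedQueue:
    # Minimal sketch of detecting a queue refresh between scanning an
    # item and calling task_done.

    def __init__(self):
        self.version = 0
        self.items = []

    def refresh(self, new_items):
        self.version += 1          # any outstanding items become stale
        self.items = list(new_items)

    def next_item(self):
        return (self.version, self.items[0])

    def task_done(self, token):
        version, item = token
        if version != self.version:
            return False           # queue was refreshed; drop the result
        return True

q = SpeedQueue()
q.refresh(['sp1'])
token = q.next_item()
q.refresh(['sp2'])                  # a refresh happens mid-scan
print(q.task_done(token))           # False: the stale item is detected
```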

New behavior:
Now this problem is at least detected and dealt with:
image

Issue 10

SPs are not being re-classified when a cell is done, because the last scan was a 'bad' one:

Current behavior:
SPs are only finally classified when a cell has completed its scanning, which makes sense. However, if the scan that completed the cell is a bad one, i.e. no pokemon, then it is not sent to classify and stays 'hhhs'. See issue 7.

Changes:
image
This code was removed, as we don't want the processing to return until it is dealt with further down.
image

New Behavior:
See results from issue 7, no incorrect 'kind'

Issue 11

The queue is continuously refreshed if there are no associated SPs within that area:

Current behavior:
Running a map with -st 1 at a location that I know has no pokemon:
image

Changes:
Added a new variable for each hex called 'empty_hive'. When a queue is refreshed, the number of SPs within the hex is counted and 'empty_hive' is set when the count is 0.
image
The variable is now added to the refresh check
image
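The refresh guard then looks something like this (illustrative names and a made-up refresh period, not RocketMap's actual values):

```python
def queue_needs_refresh(now_secs, last_refresh_secs, empty_hive,
                        refresh_period=300):
    # Hives with no spawnpoints at all ('empty_hive') are never
    # refreshed again, instead of being rebuilt every period.
    if empty_hive:
        return False
    return now_secs - last_refresh_secs >= refresh_period

print(queue_needs_refresh(1000, 500, empty_hive=False))  # True
print(queue_needs_refresh(1000, 500, empty_hive=True))   # False
```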

New behavior:
image
From the logs, that queue is never refreshed.

Issue 12

SPs taking forever to find a TTH:

Current behavior:
With the default value for --spawn-delay being 10 seconds, when trying to find a TTH, the start value only moves along 20 seconds.
image

Changes:
Now, if no TTH is found, the start moves along at least 45 seconds, giving a gap of 44 seconds for the next TTH search.
image
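Sketched as a function over seconds-within-the-hour (illustrative code; the 45-second minimum and the 10-second default --spawn-delay come from the PR text):

```python
def next_tth_probe(latest_seen_secs, spawn_delay=10):
    # Advance the next TTH probe by at least 45 s instead of only
    # 2 * spawn_delay (20 s); the despawn still has to fall inside the
    # remaining <90 s window, so nothing is missed, but the search
    # converges in far fewer probes.
    step = max(45, 2 * spawn_delay)
    return (latest_seen_secs + step) % 3600

print(next_tth_probe(100))   # 145
print(next_tth_probe(3590))  # 35 (wraps around the hour)
```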

New behavior:
image
The _ part of the ID shows that the TTH scan is advancing by at least 50 seconds until the TTH is found.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
@jaake77

jaake77 approved these changes Mar 2, 2017

Tested on (13) -st 9 instances after clearing the database and it's working well. It still shows the --Account (account name) returned empty scan for more than 3 scans; possibly ip is banned. Switching accounts...-- message on the section of my map that intentionally doesn't have anything there. Not sure if there is even a possible fix for that though. Good work.

@jaake77

Contributor

jaake77 commented Mar 2, 2017

image

@myntath

Contributor

myntath commented Mar 3, 2017

Speedscan to me seems key to a useful RocketMap so I'm glad to see work to improve it. I plan to have a proper look through and test when I have time. Just wondering if you know if this is likely to fix issues 1897, 1898 and 1900, these just recently popped up? (1898 is mine, and at first glance this may not be fixed). If not perhaps we could have a look at some of these at the same time while there is a few of us considering speedscan.

pogom/models.py
@@ -1787,7 +1795,8 @@ def parse_map(args, map_dict, step_location, db_update_queue, wh_update_queue,
     d_t_secs = date_secs(datetime.utcfromtimestamp(
         (p['last_modified_timestamp_ms'] +
          p['time_till_hidden_ms']) / 1000.0))
-    if sp['latest_seen'] != sp['earliest_unseen']:
+    if (sp['latest_seen'] != sp['earliest_unseen'] or
+            not sp['last_scanned']):
         log.info('TTH found for spawnpoint %s.', sp['id'])
         sighting['tth_secs'] = d_t_secs

@j16sdiz

j16sdiz Mar 3, 2017

Do we need something like elif (sp['latest_seen'] == sp['earliest_unseen'] and d_t_secs != sp['earliest_unseen']): ScannedLocation.reset_bands(scan_loc) to allow a rescan of the TTH?

@rocketrobot99

rocketrobot99 Mar 3, 2017

Contributor

Probably. I have not looked into how we should deal with spawnpoints/locations when the spawns migrate. This PR was really only concerned with fixing bugs and narrowing down the TTH. Plus, it would be pretty tricky to test. It might be better in a separate PR designed to identify whether a spawn migration has occurred. Maybe have certain thresholds, then reset all the bands/TTHs?

@Obihoernchen

Obihoernchen Mar 3, 2017

Contributor

I agree. But let's keep it separate. Already more than enough changes in this PR.


@azotov

azotov Mar 18, 2017

I have found a problem which is observed after a spawnpointdetectiondata table cleanup. A non-null tth_secs value is written to the DB only once, when the TTH is found. If you delete the spawnpointdetectiondata for a point but leave the point in the spawnpoint table and mark its scan location for rescan, then all new records for this point in the spawnpointdetectiondata table will have NULL in the tth_secs field (even when d_t_secs < 90 seconds is observed) and the point will never be reclassified. This problem should not be corrected in this PR (more than enough changes in this PR already), just don't forget about it when making the new PR for point migration support.

@Obihoernchen

Obihoernchen approved these changes Mar 5, 2017 edited

I have tested this really intensively for my ~140km² scan area with @rocketrobot99 for more than two days. We compared the data with previous SpeedScan DB, compared pokemon and spawnpoint counts and so on. It finishes initial scan, finds all the spawnpoints and times are correct as well.
I recommend a clean database (delete all scan* and spawn* tables) though.

@Obihoernchen

Obihoernchen suggested changes Mar 5, 2017 edited

Ok I just found a bug. It will not get spawnpoints in an already scanned area, if you change location. That's because it's using cellid for select_in_hex.

@rocketrobot99

Contributor

rocketrobot99 commented Mar 27, 2017

@voxx @dsorc I have pushed a fix. The problem set an SP's disappear time incorrectly. What would happen is that an SP would be picked up right at the very end of its spawn by the scan of another SP. This is fine, and a simple lookup would normally show that the pokemon for the spawn already exists. However, the lookup was using the current time (now just after the SP had despawned) rather than the time of the scan. Thanks to @voxx for his help; I have been running all day with this fix and not found any more examples.

@voxx

voxx commented Mar 29, 2017

I've been running the latest patch 4fc0fd7 for about 12 hours on three fairly large instances now.

I spent some time working with @rocketrobot99 testing a handful of previously erroneous spawn points and it appears the bogus alarms for 1 hour spawns that were occurring are now alerting at proper times.

While I did still experience a small handful of bogus timers appearing, these particular issues seem to not be related to this same bug, and I think can be considered out of scope for this PR. Likely the infamous "rare" spawn timers that have been alluded to in the past that the existing code base, and this PR don't account for.

I'll report back in another 12 hours or so, but thus far it looks good on my end. If no additional weirdness creeps up this evening you've got thumbs up to merge from me.

^^

Show outdated Hide outdated pogom/models.py
@friscoMad

Contributor

friscoMad commented Mar 30, 2017

I have been using this PR for almost 2 weeks (updating as needed), and when I went back to the dev version today to make a new PR I noticed how fast the startup is with this PR, and that I need just 1 DB worker with the PR versus 3 without it.

I am not sure about CPU usage but mem usage is similar.

@rocketrobot99

Contributor

rocketrobot99 commented Mar 30, 2017

@onilton I understand your point, although I have only heard of cases where this improves the scanning. I just don't want to make any more changes in this PR. If your system is set up so that the queue is minimal then, in theory, all SPs will be scanned as soon as possible. The situation you describe will not happen if you have enough workers, as another worker will pick up the missed SP, and that other worker may even be closer. The current dev logic just goes for the closest, but only considers moving once the spawn has already started. I have been experimenting with some amendments to next_item, which will be included in a follow-up PR. They are a combination to minimise seconds after spawn and distance to travel.

@ObsessedGamer

ObsessedGamer commented Apr 2, 2017

For some reason, when I switched to your most recent commit I get huge DB queues now, upwards of 10,000, and it stops scanning until restart. Switching back to the commit before your most recent one, it went back to working fine.

@Obihoernchen

Tested this for quite some time, and besides all the bugfixes, CPU usage goes down as well. For me this is way better than current dev, and I couldn't find any issues.

@smint86

smint86 approved these changes Apr 6, 2017

Tested this heavily with a completely fresh database in comparison to the latest develop branch. I see significant improvements concerning (correct) TTHs found, average scan delay and queue calculation speed. I now scan 7 instances with -st 30, which wasn't possible without the PR (I covered a slightly smaller area with 19 instances at -st 14). When I tried to scan steps this big in a very spawn-intense area (a major city in Germany), it took ages to finish the initial scan and the scan delay was way higher. Even with the smaller steps, avg scan delay was still ~150 seconds, while it is at ~30 seconds now. Also, the initial scan took only about 12 hours to finish with 99.9% TTH found. I'm still missing 6 of 26.5k spawnpoints after 48 hours, though, but I think that's acceptable. Also, all the timers seem to be correct now, as I haven't had any negative reports from my users.

I don't have the skills to review the code, but I see that it's working way better than before. I hope this helps a little.

@jaake77

jaake77 approved these changes Apr 6, 2017

'Update total stats had an Exception: {}.'.format(
    repr(e)))
traceback.print_exc(file=sys.stdout)
time.sleep(10)

@friscoMad

friscoMad Apr 8, 2017

Contributor

There is no need to sleep here.
Also, the proper fix is to try/except the whole loop instead of just this function, as other exceptions may be raised at other points and the handling should be the same (log and continue), not just for this one.
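That suggestion amounts to something like this (a sketch with hypothetical names, not the actual stats code): wrap each iteration of the loop so that any exception is logged and the loop continues.

```python
import logging

log = logging.getLogger(__name__)

def run_loop(step, iterations):
    # One try/except around the whole iteration body: any exception,
    # from whatever raised it, is logged and the loop continues.
    failures = 0
    for _ in range(iterations):
        try:
            step()
        except Exception:
            failures += 1
            log.exception('Loop step had an exception; continuing.')
    return failures

calls = []

def flaky():
    # Hypothetical step that fails on its second call only.
    calls.append(1)
    if len(calls) == 2:
        raise RuntimeError('boom')

print(run_loop(flaky, 4))  # 1 (one failure out of four iterations)
```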

@rocketrobot99

rocketrobot99 Apr 9, 2017

Contributor

@friscoMad Thanks for taking a look at this, the exception handling was literally just copied from the lines above:
image
If you have a better way of coping with the exception, it would probably need to be applied to both parts. Please send me the changes and I will have a look.


@FrostTheFox FrostTheFox requested a review from sebastienvercammen Apr 9, 2017

@sebastienvercammen

Member

sebastienvercammen commented Apr 9, 2017

I've been getting poked about merging this PR so often, by so many people (most of which have never coded in their life), that I'd like to say congratulations: you know how to annoy a person. And not just random people, but people that have contributed in some way to the project (#pr) and that I consider to be on the team.

I'm used to people poking us constantly for stuff they want us to do, but this time the "request a review" feature on Github has been used for the first time ever by @FrostTheFox to poke me towards this PR.

This makes it very clear to me that y'all don't want an actual review, because that would take more time - as I've explained several times (especially to the people in #pr on Discord). You just want it merged.

So here you are, I'm merging it. No review, reduced code quality.

@sebastienvercammen sebastienvercammen merged commit ef4b9e7 into RocketMap:develop Apr 9, 2017

1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed
@jagauthier

Contributor

jagauthier commented Apr 9, 2017

No one here has any idea what the actual process is for code review outside of the button in GitHub. But this PR is extremely popular, and obviously many people wanted you to review it. So why didn't you? We don't know your timetable for review; you don't provide us with a roadmap or milestones.
I thought your goal was transparency?
"To stay transparent about everything we do (and why we do it),..."
"We believe it’s better to be transparent... "
"The only way to let everyone’s voice/opinion count, is to be transparent about the situation and to let everyone participate fairly."
(all quotes from the most recent devkat blog)

Since you haven't reviewed this, why do you assume it will be reduced code quality? Are you immediately assuming that the code written here is inferior to your own?

Please, open up to us.. be transparent, about this review process, the timetable, who is doing it, the roadmap ahead... heck, even what you're working on right now... Because this is an open source project, and an open source community.. right?

@sebastienvercammen

sebastienvercammen Apr 9, 2017

Member

@jagauthier Except you're oblivious to everything that's been happening in #pr.

I have been reviewing it. I have been replying to questions about where I was and how it was going, I addressed that in my earlier comment.

Member

sebastienvercammen commented Apr 9, 2017

@jagauthier Except you're oblivious to everything that's been happening in #pr.

I have been reviewing it. I have been replying to questions about where I was and how it was going; I addressed that in my earlier comment.

Everyone knew. I just needed more time, because proper review isn't something you force to finish. And I know it's bad code because it's the only thing I've been putting my time towards for the past few weeks. Even rocketrobot knew it was pretty bad and was open to being patient for an actual review - but no one else was.

I'm not even given the time by #pr to finish a review, even when I communicated properly with them, so I can tell you with absolute certainty that we don't have the time to put towards communicating everything to everyone. It doesn't work and it increases the amount of time we need to act like customer support to hold everyone's hands and to make sure we don't "offend" anyone with our decisions, actions, or delays because we're trying to communicate first rather than coding first.

For everything that the community has against commercial organisations (e.g. the negative feedback towards pogodev/pgoapi after BossLand entered the scene to solve the hashing problem), a commercial organisation with strict rules and nearly no transparency is actually what everyone seems to need. And of course that's going to annoy me; it's hypocritical.

As for your "are you assuming it's inferior to your own", that wasn't necessary.

The 6 people who upvoted your comment are all people who don't understand the consequences of some decisions, and the time and effort everything takes. Besides bluemode (you troll), I don't recognize anyone for any effort they've put towards RocketMap. And before I get another reaction of "then tell us, explain it all to us!", I'm sorry but I'm not here to hold your hands and teach you how this part of the world works.

It's open source. The thing you don't seem to get is that the equivalent of the time and effort we've been putting in for the past 9+ months, is for you to start learning from scratch and to try to catch up to where we are in whatever way you can rather than complaining about us. We don't expect you to suddenly catch up, but the effort of trying will move you forward and into a position where you can contribute (small parts) to the code and the project rather than having to place some form of vague idealistic comment about communication.

As for this PR, everyone could comment and participate. But that doesn't mean everyone will suddenly have the required experience to properly evaluate software code, and whether it's one person without the required skills/experience approving the PR or twenty, it doesn't make any of them more valid. Feedback is always welcome, but that doesn't mean it'll be enough to use as a single point of reference for approval or rejection of a PR.

Someone PM'ed me this, and I'm finding myself agreeing more lately: the community has been spoiled.

Perhaps it's time for us to take a step back as RocketMap and act more as an organization rather than your local pub where everyone can enter to improve the work and code quality. To set up formal processes of recruitment, development and communication. But I'm convinced that'll also bring complaints along with it.

Either way, we have things to discuss and we're discussing them. For now, these are just thoughts, and I've always been transparent in any way with the relevant people: #pr knows everything I've been up to, including where I went for the holidays and for how long. I've always included everyone that was necessary in any decision we've made so far, and in the cases where that wasn't always possible (e.g. a ban on Discord), I've always been open to receiving feedback about them and adjusting if necessary (FrostTheFox). So to use my own quotes to turn a lack of time into a way of saying I haven't been transparent enough is kind of insulting.

Contributor

jagauthier commented Apr 9, 2017

"Except you're oblivious to everything that's been happening in #pr."

You're 100% correct. I've submitted PRs and requested access three times (March 3, 27, and 29).

Member

sebastienvercammen commented Apr 9, 2017

Youre 100% correct. I've submitted PRs and requested access (March 3, 27, 29) three times.

So rather than find a solution to a practical problem (not getting a reply when you tried to contact us), you complain that we're not transparent enough, without even once having contacted us about it or trying to understand our situation? (What did you do, who did you talk to, in what timezone were you when sending your messages, was it when I was at Disneyland or snowboarding, what did you try when you noticed I didn't/couldn't reply, ...?)

I'd like to share people's ideas and thoughts with the community, but if it keeps coming back to the fact that people haven't even tried to put in the effort of understanding the other party, then it makes it a bit more difficult for me.

It only reinforces the suggestion of being a managed organization rather than "your local bar where everyone can enter".

Contributor

jagauthier commented Apr 9, 2017

@sebastienvercammen
" (What did you do, who did you talk to, in what timezone were you when sending your messages, was it when I was at Disneyland or snowboarding, what did you try when you noticed I didn't/couldn't reply, ...?)"

It's not my responsibility to send you or @FrostTheFox a message at a time that happens to be perfect for you. I have messaged you both.

"and I've always been transparent in any way with the relevant people: #pr knows everything I've been up to,".

Being transparent behind closed doors (#pr) is exclusivity, not transparency.

I would like access to #pr. I would like to discuss RM and its development with all the fine people who contribute. I would like to have a friendly relationship with you (which is very achievable, because I am certainly willing, and I respect you).

So, this is my 4th formal request for access to #pr. You can search my name for PRs, and you know who I am on Discord. I've tagged both you and Frosty in it.

Thanks! Looking forward to further discussions about RM!

Member

FrostTheFox commented Apr 9, 2017

@jagauthier I'm pretty sure I would remember if you had PM'd me directly while I was online and active. Sometimes I wake up to a hundred messages (not even lying; not all from RM, but from all the servers I'm in), and sometimes I don't really feel like reading every single one of them.

I don't even know your discord name.

Member

sebastienvercammen commented Apr 9, 2017

It's not my responsibility to send you a message at the perfect time for you, or @FrostTheFox .

Yes it is. That's what you get when everyone is a volunteer, no one is paid, and all we have is what we put into it. You cannot choose not to be proactive and then complain we didn't help you out enough.

Either you want to be in #pr and you do the absolute minimum to make it work (which really doesn't take much, you can ask all the other #pr members), or you don't but then don't complain about it.

Being transparent in behind closed doors (#pr) is exclusivity, not transparency.

Transparency to everyone contributes nothing and wastes time we already don't have. Transparency to the relevant people keeps things transparent while taking less time, especially since #pr is a rank that anyone with minimal coding experience can access, which makes it publicly accessible rather than exclusive.

So, this is my 4th format request for access to #pr. You can search my name for PRs, you know who I am on discord. And I've tagged both you and Frosty in it.

The first thing I did was search for your name on Discord, I got zero search results. I have absolutely no idea who you are.

And this is the last comment I'll post about this here. This isn't showing us anything new, and it's not saving us any time.

smartcuc commented Apr 9, 2017

Well, the RM core team should stick together. Please don't get upset or anything like that; stay for this great software. Quality over speed. And when a PR comes up, take your time; in the end, it's your baby that you've made public.

PokemonCatcher referenced this pull request Apr 10, 2017

Merged

Improve memory usage #1960
jchristi commented Apr 13, 2017

Trying to think of takeaways from this and I think the obvious one is to encourage smaller pull requests. The larger the PR gets, the longer a code review will take (and the increase in turnaround time is not linear). This is kind of a microcosm for the argument of agile vs waterfall in software development. While this PR wasn't gargantuan, it wasn't small and it addressed multiple bugs which probably could have been broken up and submitted as separate PRs. The maintainers should encourage and/or enforce smaller PRs but outside contributors would do well to heed this advice if they want to see their code merged in a timely fashion.
