Comparing changes

Choose two branches to see what's changed or to start a new pull request. If you need to, you can also compare across forks.

  • 6 commits
  • 3 files changed
  • 0 commit comments
  • 1 contributor
Commits on Dec 18, 2013
Tom Nicholls TN: Changed process name from 'perl' to 'tagtime' c53846a
Commits on Dec 30, 2013
Tom Nicholls Merge branch 'master' of https://github.com/pmyteh/TagTime 011d268
Tom Nicholls FIXME added for long sets of tags 12a6ae7
@pmyteh ttlogmerge: fix brainless git merge error 05cec57
Commits on Jan 15, 2014
@pmyteh Fix bug in handling of multiple datapoints for a given day on the server.

For TT there should be only one datapoint per day on the beeminder server.
The way things were being handled, duplicates were being deleted from the
data fetched from the server, but the resulting empty entry in the array
was sometimes causing a (different) duplicate to be re-created later in the
script. This bug would normally only show up when two copies of TagTime
are writing to the same graph, for example when it is run on two machines
simultaneously.

This commit changes the deletion method so that no duplicate entries are
created.
0193601
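The bug described above hinges on Perl's `delete $data->[$i]`, which leaves an undef "hole" in the array instead of shrinking it, so later passes can trip over the empty slot. The project is Perl, but the same failure mode and the commit's fix (collect indices, then remove them in reverse so earlier indices stay valid) can be sketched in Python; the function names here are illustrative, not from the repo:

```python
def dedupe_buggy(data):
    """Blank out duplicate days, leaving None holes.

    Mirrors Perl's `delete $data->[$i]`: the list keeps its length and
    a later pass must cope with the hole."""
    seen = set()
    for i, point in enumerate(data):
        if point["day"] in seen:
            data[i] = None  # hole left behind; length unchanged
        else:
            seen.add(point["day"])
    return data

def dedupe_fixed(data):
    """Collect duplicate indices, then remove them in reverse order."""
    seen = set()
    to_delete = []
    for i, point in enumerate(data):
        if point["day"] in seen:
            to_delete.append(i)
        else:
            seen.add(point["day"])
    for i in reversed(to_delete):  # reverse so earlier indices stay valid
        del data[i]
    return data

points = [{"day": "2014-01-15"}, {"day": "2014-01-15"}, {"day": "2014-01-16"}]
buggy = dedupe_buggy([dict(p) for p in points])   # length 3, hole at index 1
fixed = dedupe_fixed([dict(p) for p in points])   # length 2, no holes
```

Deleting in reverse order is the key detail: removing index 1 before index 3 would shift every later element down and invalidate the remaining collected indices.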
Commits on Jan 16, 2014
@pmyteh Merge branch 'beeminder' eb23d62
Showing with 27 additions and 5 deletions.
  1. +21 −5 beeminder.pl
  2. +3 −0  script/ttlogmerge.pl
  3. +3 −0  tagtimed.pl
26 beeminder.pl
@@ -35,7 +35,7 @@
# ph (ping hash) maps "y-m-d" to number of pings on that day.
# sh (string hash) maps "y-m-d" to the beeminder comment string for that day.
# bh (beeminder hash) maps "y-m-d" to the bmndr ID of the datapoint on that day.
-# ph1 and sh1 are based on the current tagtime long and
+# ph1 and sh1 are based on the current tagtime log and
# ph0 and sh0 are based on the cached .bee file or beeminder-fetched data.
my $start = time; # start and end are the earliest and latest times we will
@@ -109,21 +109,26 @@
  # take one pass to delete any duplicates on bmndr; must be one datapt per day
  my $i = 0;
  undef %remember;
+ my @todelete;
  for my $x (@$data) {
    my($y,$m,$d) = dt($x->{"timestamp"});
    my $ts = "$y-$m-$d";
    my $b = $x->{"id"};
    if(defined($remember{$ts})) {
-     print "Beeminder has multiple datapoints for the same day. " ,
-           "Deleting this one:\n";
+     print "Beeminder has multiple datapoints for the same day. " ,
+           "The other id is $remember{$ts}. Deleting this one:\n";
      print Dumper $x;
      beemdelete($usr, $slug, $b);
-     delete $data->[$i];
+     push(@todelete,$i);
    }
-   $remember{$ts} = 1;
+   $remember{$ts} = $b;
    $i++;
  }
+ for my $x (reverse(@todelete)) {
+   splice(@$data,$x,1);
+ }
+
  for my $x (@$data) { # parse the bmndr data into %ph0, %sh0, %bh
    my($y,$m,$d) = dt($x->{"timestamp"});
    my $ts = "$y-$m-$d";
@@ -136,6 +141,8 @@
$ph0{$ts} = 0+$c; # ping count is first thing in the comment
$sh0{$ts} = $c;
$sh0{$ts} =~ s/[^\:]*\:\s+//; # drop the "n pings:" comment prefix
+ # This really shouldn't happen.
+ if(defined($bh{$ts})) { die "Duplicate cached/fetched id datapoints for $y-$m-$d: $bh{$ts}, $b.\n", Dumper $x, "\n"; }
$bh{$ts} = $b;
}
}
@@ -194,6 +201,15 @@
if ($p1 > $p0) { $plus += ($p1-$p0); }
elsif($p1 < $p0) { $minus += ($p0-$p1); }
beemupdate($usr, $slug, $b, $t, ($p1*$ping), splur($p1,"ping").": ".$s1);
+ # If this fails, it may well be because the point being updated was deleted/
+ # replaced on another machine (possibly as the result of a merge) and is no
+ # longer on the server. In which case we should probably fail gracefully
+ # rather than failing with an ERROR (see beemupdate()) and not fixing
+ # the problem, which requires manual cache-deleting intervention.
+ # Restarting the script after deleting the offending cache is one option,
+ # though simply deleting the cache file and waiting for next time is less
+ # intrusive. Deleting the cache files when merging two TT logs would reduce
+ # the scope for this somewhat.
} else {
print "ERROR: can't tell what to do with this datapoint (old/new):\n";
print "$y $m $d ",$p0*$ping," \"$p0 pings: $s0 [bID:$b]\"\n";
3  script/ttlogmerge.pl
@@ -35,6 +35,9 @@ sub parse
{
my $s = $_[0];
my @tokens = split(/\s+/, $s);
+ # XXX FIXME: This may fail where huge numbers of tags are added:
+ # It appears TT shortens the human-readable date string to stay
+ # under 80 characters per line.
for my $i (1..3) { pop(@tokens) } # Discard date string
# print "parse: ", $_[0], @tokens;
return @tokens;
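The `parse` sub above splits a TagTime log line on whitespace and pops the last three tokens, relying on the human-readable date string (e.g. `[2014.01.15 12:04:05 Wed]`) always occupying exactly three tokens; the diff's FIXME notes that TagTime appears to shorten that string to keep lines under 80 characters when there are very many tags, which would break the assumption. An analogous Python sketch (the sample log line is illustrative):

```python
def parse(line):
    """Split a TagTime log line into tokens, dropping the trailing
    human-readable date string.

    XXX FIXME (per the diff): with very many tags, TagTime appears to
    shorten the date string to stay under 80 characters per line, so
    the fixed three-token assumption can fail."""
    tokens = line.split()
    return tokens[:-3]  # discard the 3-token date string

tokens = parse("1389787445 work coding [2014.01.15 12:04:05 Wed]")
# tokens is the ping timestamp followed by the tags
```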
3  tagtimed.pl
@@ -75,6 +75,9 @@ =head1 BUGS
require "${path}util.pl";
+# TN, 2013/12/18: name the process for ps/top etc.
+$0 = "tagtimed";
+
my $lstping = prevping($launchTime);
my $nxtping = nextping($lstping);
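In Perl, assigning to `$0` (as the tagtimed.pl diff does) changes the process name shown by ps/top. Python's standard library has no portable equivalent, but on Linux the same effect can be approximated with the `prctl(PR_SET_NAME, ...)` syscall via ctypes. A Linux-only sketch, not part of the repo:

```python
import ctypes

def set_process_name(name: str) -> None:
    """Rename the current thread/process as seen in /proc/self/comm
    (and by ps/top). Linux-specific; silently limited to 15 bytes."""
    PR_SET_NAME = 15  # from <linux/prctl.h>
    libc = ctypes.CDLL(None, use_errno=True)
    # comm is 16 bytes including the trailing NUL, so truncate to 15
    buf = ctypes.create_string_buffer(name.encode()[:15])
    libc.prctl(PR_SET_NAME, buf, 0, 0, 0)

set_process_name("tagtimed")
with open("/proc/self/comm") as f:
    print(f.read().strip())  # on Linux: tagtimed
```

This renames only the kernel "comm" name; unlike Perl's `$0`, it does not rewrite the argv string shown by `ps` in full-command mode.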

No commit comments for this range
