Index out of bounds in Aggregation Query #4
Are you using the correct datasets and dates?

This branch should reproduce SIGMOD, if you're using the correct data: https://github.com/stanford-futuredata/tasti/tree/sigmod
I have switched to the sigmod branch, but it behaves the same.
My complete process is as follows:

```bash
# here I use the master branch, as the sigmod branch has no tasti.yml
git clone https://github.com/stanford-futuredata/tasti.git
cd tasti
conda env create -f tasti.yml
conda activate tasti3
cd ..

git clone https://github.com/stanford-futuredata/swag-python.git
cd swag-python/
conda install -c conda-forge opencv
pip install -e .
cd ..

git clone https://github.com/stanford-futuredata/blazeit.git
cd blazeit/
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
conda install -c conda-forge pyclipper
pip install -e .
cd ..

git clone https://github.com/stanford-futuredata/supg.git
cd supg/
pip install pandas feather-format
pip install -e .
cd ..

git clone -b sigmod https://github.com/stanford-futuredata/tasti.git tasti_sigmod
cd tasti_sigmod/
pip install -r requirements.txt
pip install -e .
mkdir cache  # will be written to by night_street_offline.py
```
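After the steps above, a quick sanity check may help confirm that each `pip install -e .` actually registered its package. This is only a sketch; the importable package names are my assumption based on the repository names, not something stated in this thread.

```python
# Sanity-check sketch: verify that each editable-installed repo is importable
# before running the experiments. Package names are assumed from repo names.
import importlib.util

def check_installs(names):
    """Return a dict mapping package name -> whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

status = check_installs(["swag", "blazeit", "supg", "tasti"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Any `MISSING` entry points at the install step to re-run before debugging further.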
Try an error tolerance of 0.05
Unfortunately, it doesn't work. I even tried an error tolerance of 0.9, but it still failed.
Yes, I'm pretty sure something is wrong. What are the hashes of the CSV files? This is what I see:
Oh, the MD5 values are different:

```
MD5 (jackson-town-square-2017-12-14.csv) = 8abae92a0ac3b9f6513ca23ab2549430
MD5 (jackson-town-square-2017-12-17.csv) = b51637ba2b45b9eea37f9cfc81b562d8
```

But I re-downloaded from Google Drive, and the MD5 values are still different from yours and the same as mine.
Try downloading them again; I may have uploaded the wrong version.
Thank you very much! Now the MD5 values are correct. I will try again.
Unfortunately, the error still exists. Here are the MD5 values of the datasets downloaded from Google Drive:

```
MD5 (2017-12-14-001.zip) = 11e1f424127a2463d0908fedd86719fd
MD5 (2017-12-17-002.zip) = bea086f82bdcc5f40a91d0ec2fbde4dd
```
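For anyone comparing checksums on these downloads, here is a small sketch for computing MD5 in chunks, so the large video archives don't need to fit in memory. `md5sum` is my own helper name, not part of any of the repos above.

```python
# Sketch: chunked MD5 so multi-GB downloads can be hashed without
# loading the whole file into memory.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of the file at `path`, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path is an example):
# print(md5sum("2017-12-14-001.zip"))
```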
Try the branch
I have tried all branches of blazeit, including
Are you sure you used the correct package versions of all packages in the SIGMOD branch?
I'm using numba==0.51 instead of numba==0.50.1. This is because of the error below:

```
(tasti3)$ pip install -r requirements.txt
...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
datashader 0.13.0 requires numba>=0.51, but you have numba 0.50.1 which is incompatible.
```

I also skipped `conda install pytorch torchvision cudatoolkit=10.1 -c pytorch`, because these packages are already specified in the environment:

```
pytorch=1.12.1=py3.8_cuda10.2_cudnn7.6.5_0
torchaudio=0.12.1=py38_cu102
torchvision=0.13.1=py38_cu102
```
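Since pip's resolver message is only a warning and does not undo an install, it may be worth confirming which versions actually ended up in the environment. A minimal sketch (`installed_version` is my own helper):

```python
# Sketch: report the versions that are actually installed, regardless of
# what requirements.txt asked for.
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg):
    """Return the installed version string for `pkg`, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ["numba", "torch", "torchvision"]:
    print(pkg, installed_version(pkg))
```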
Commit
I have no idea which step goes wrong. The cache directory looks like this:

```
cache
|- embeddings.npy
|- model.pt
|- reps.npy
|- topk_dists.npy
|- topk_reps.npy
```

Thank you very much.
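In case it helps narrow things down, here is a sketch (not part of tasti; `describe_cache` is my own helper) that prints the shape and dtype of each cached `.npy` array. A length mismatch between these arrays and the number of video frames would be one way to spot a bad cache.

```python
# Sketch: inspect cached .npy artifacts without loading them fully into
# memory, to check their lengths against the expected frame counts.
from pathlib import Path

import numpy as np

def describe_cache(cache_dir):
    """Return {filename: (shape, dtype)} for every .npy file in cache_dir."""
    info = {}
    for fp in sorted(Path(cache_dir).glob("*.npy")):
        arr = np.load(fp, mmap_mode="r")  # mmap avoids loading huge arrays
        info[fp.name] = (arr.shape, str(arr.dtype))
    return info

for name, (shape, dtype) in describe_cache("cache").items():
    print(name, shape, dtype)
```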
I have tried the following configurations:

Unfortunately, the same error still exists.
I rechecked the correspondence between the data and the labels, and I see a delay between them.

```python
import cv2
import pandas as pd
from collections import defaultdict
# VideoDataset as used in tasti's night_street example scripts

# read .csv file
len_14 = 973489
len_17 = 973136
df = pd.read_csv('../datasets/jackson-town-square/jackson-town-square-2017-12-14.csv')
df = df[df['object_name'].isin(['car'])]
frame_to_rows = defaultdict(list)
for row in df.itertuples():
    frame_to_rows[row.frame].append(row)
labels = []
for frame_idx in range(len_14):
    labels.append(frame_to_rows[frame_idx])

# prepare video dataset
video = VideoDataset(
    video_fp='../datasets/jackson-town-square/2017-12-14'
)

# visualization: draw label boxes on every 8th frame and save the result
cnt = 0
for i, frame in enumerate(video):
    if i % 8 != 0:
        continue
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    if cnt > 60:
        break
    if labels[i] != []:
        for label in labels[i]:
            print(label)
            print((int(label[4]), int(label[5])), (int(label[6]), int(label[7])))
            frame = cv2.rectangle(frame, (int(label[4]), int(label[5])),
                                  (int(label[6]), int(label[7])), (0, 255, 0), 2)
        cnt += 1
        if len(labels[i]) > 1:
            pass
    frame = cv2.resize(frame, None, fx=0.25, fy=0.25)
    cv2.imwrite(f'annotation/{i}.png', frame)
```
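If the delay turns out to be a constant frame offset, one way to compensate is to rebuild the frame-to-labels mapping with shifted indices. This is only a sketch; `shift_labels` and the offset value are hypothetical and not part of any of the repos.

```python
# Sketch: shift every label's frame index by a fixed `offset` to realign
# labels with the video. Rows shifted outside the video are dropped.
from collections import defaultdict

def shift_labels(frame_to_rows, offset, num_frames):
    """Return a new frame -> rows mapping with frame indices moved back by offset."""
    shifted = defaultdict(list)
    for frame_idx, rows in frame_to_rows.items():
        new_idx = frame_idx - offset
        if 0 <= new_idx < num_frames:
            shifted[new_idx].extend(rows)
    return shifted

# Usage (offset of 5 frames is a made-up example):
# frame_to_rows = shift_labels(frame_to_rows, 5, len_14)
```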
Oops, sorry about the video issue. Thank you for investigating.
Does that mean that the given version of
For me, I corrected the data by subtracting
I followed the Reproducing Experiments instructions and got an index-out-of-bounds error when executing `NightStreetAggregateQuery` and `NightStreetAveragePositionAggregateQuery` in `night_street_offline.py`. The error is raised at `Sampler.sample()` in `blazeit/aggregation/samplers.py`, where the index variable `t` increases without bound. I guess that under normal circumstances EBS would make the sampling stop before reaching the upper bound (`len(Y_true)`), but I don't know why it keeps sampling until the last frame during reproduction.
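To illustrate the suspected failure mode, here is a minimal empirical-Bernstein-style stopping rule. This is a sketch, not BlazeIt's actual `Sampler` implementation; the function and parameter names are made up. The point is that stopping depends on the empirical variance shrinking the bound below the tolerance; if the labels look noisier than expected (for example, because they are misaligned with the frames), `t` can walk all the way to `len(Y)`, and any code that then indexes past the end raises an IndexError.

```python
# Sketch of an empirical-Bernstein stopping rule (Maurer & Pontil, 2009),
# not BlazeIt's real sampler. Stops once the bound eps falls within
# err_tol of the running mean; otherwise consumes the whole budget.
import math
import random

def eb_sample(Y, err_tol, delta=0.05, min_samples=100):
    random.seed(0)  # deterministic for illustration
    rng = max(Y) - min(Y)  # value range, used by the Bernstein bound
    samples = []
    for t in range(1, len(Y) + 1):
        samples.append(Y[random.randrange(len(Y))])
        if t < min_samples:
            continue
        mean = sum(samples) / t
        var = sum((x - mean) ** 2 for x in samples) / t
        log_term = math.log(3 / delta)
        eps = math.sqrt(2 * var * log_term / t) + 3 * rng * log_term / t
        if eps <= err_tol * max(abs(mean), 1e-9):
            return mean, t  # stopped early, as EBS normally should
    return sum(samples) / len(Y), len(Y)  # never stopped: t hit the bound
```

With well-aligned, low-variance data this stops early; with a high effective variance the early-stop condition may never trigger, which matches the behavior described above.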