
BUG: DataFrame isin function randomly selects values (instead of using them all) when on Intel CPU and input is a torch Tensor #41827

Open
SamuelTrew opened this issue Jun 5, 2021 · 1 comment
Labels
Compat pandas objects compatibility with Numpy or Python functions Enhancement isin isin method

Comments

@SamuelTrew

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.


Code Sample, a copy-pastable example

from typing import Tuple

import numpy as np
from pandas import DataFrame
from torch import Tensor
from torchvision import transforms, datasets
import torch

def __loadMNISTData() -> Tuple[DataFrame, DataFrame]:
    trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
  
    # if not exist, download mnist dataset
    trainSet = datasets.MNIST("data", train=True, transform=trans, download=True)
    testSet = datasets.MNIST("data", train=False, transform=trans, download=True)
  
    # Scale pixel intensities to [-1, 1]
    xTrain: Tensor = trainSet.train_data
    xTrain = 2 * (xTrain.float() / 255.0) - 1
    # list of 2D images to 1D pixel intensities
    xTrain = xTrain.flatten(1, 2).numpy()
    yTrain = trainSet.train_labels.numpy()
  
    # Scale pixel intensities to [-1, 1]
    xtest: Tensor = testSet.test_data.clone().detach()
    xtest = 2 * (xtest.float() / 255.0) - 1
    # list of 2D images to 1D pixel intensities
    xTest: np.ndarray = xtest.flatten(1, 2).numpy()
    yTest: np.ndarray = testSet.test_labels.numpy()
  
    trainDataframe = DataFrame(zip(xTrain, yTrain))
    testDataframe = DataFrame(zip(xTest, yTest))
    trainDataframe.columns = testDataframe.columns = ["data", "labels"]

    return trainDataframe, testDataframe

def _filterDataByLabel( 
    labels: Tensor, trainDataframe: DataFrame, testDataframe: DataFrame
) -> Tuple[DataFrame, DataFrame]:
    print("\nEntering _filterDataByLabel()")
    print(f"labels: {labels}")
    print(f"Lengths: {len(trainDataframe)}, {len(testDataframe)}")
    trainDataframe = trainDataframe[trainDataframe["labels"].isin(labels)]
    testDataframe = testDataframe[testDataframe["labels"].isin(labels)]
    print(f"Lengths: {len(trainDataframe)}, {len(testDataframe)}")
    print(f"Unique values: {trainDataframe['labels'].unique()}")
    print("Exiting _filterDataByLabel()\n")
    return trainDataframe, testDataframe


data = __loadMNISTData()
labels = torch.tensor(range(10))
trainDataframe, testDataframe = _filterDataByLabel(labels, *data)

Problem description

For some reason, the labels passed into the isin function are not all used: the unique values remaining after the filter don't match them. E.g. I pass in 0-9 and get back a different random subset each run, even though I set the seeds for Python, NumPy, torch and CUDA.
If I convert labels from a Tensor into a list, the problem no longer happens.

The weirdest part is that this doesn't happen on my AMD Ryzen 5600X, but it does happen on 6th- and 8th-gen Intel i7 laptop CPUs.
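For reference, a minimal self-contained sketch of the expected behavior (no MNIST download needed), using a plain NumPy array in place of the Tensor; this path filters deterministically:

```python
import numpy as np
import pandas as pd

# 20 rows whose labels cycle through 0-9
df = pd.DataFrame({"data": range(20), "labels": np.arange(20) % 10})

labels = np.arange(10)  # stand-in for labels.numpy()
filtered = df[df["labels"].isin(labels)]

# Every row matches, and all 10 unique labels survive the filter
print(len(filtered), sorted(filtered["labels"].unique()))
```

With a Tensor in place of `labels`, the filtered lengths and unique values vary from run to run on the affected Intel CPUs.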

Expected Output

The unique values remaining after the filter should always match the labels passed in, whether they are given as a tensor or a list, and no matter what CPU is used.

Output of pd.show_versions()

These are the details from my laptop containing the 8th-gen i7 CPU:

INSTALLED VERSIONS

commit : 2cb9652
python : 3.8.8.final.0
python-bits : 64
OS : Linux
OS-release : 5.11.0-17-generic
Version : #18-Ubuntu SMP Thu May 6 20:10:11 UTC 2021
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.2.4
numpy : 1.19.4
pytz : 2021.1
dateutil : 2.8.1
pip : 21.0.1
setuptools : 52.0.0.post20210125
Cython : 0.29.23
pytest : 6.2.3
hypothesis : None
sphinx : 4.0.1
blosc : None
feather : None
xlsxwriter : 1.3.8
lxml.etree : 4.6.3
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.3
IPython : 7.22.0
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.9.0
fastparquet : None
gcsfs : None
matplotlib : 3.3.2
numexpr : 2.7.3
odfpy : None
openpyxl : 3.0.7
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.6.0
sqlalchemy : 1.4.15
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 2.0.1
xlwt : 1.3.0
numba : 0.53.1
None

@SamuelTrew SamuelTrew added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Jun 5, 2021
@mzeitlin11
Member

Thanks for reporting this @SamuelTrew! In general, we don't test against pytorch, and no compatibility is guaranteed. We'd always recommend converting to numpy first, since otherwise you might run into strange behavior like here :) Investigations to fix this are welcome, though any solution would ideally patch a logical flaw in the existing code, and not just add torch-specific logic.
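The advice above could be wrapped in a small helper (the name `filter_by_labels` and the use of `np.asarray` are my own sketch, not pandas API):

```python
import numpy as np
import pandas as pd

def filter_by_labels(df: pd.DataFrame, labels) -> pd.DataFrame:
    """Keep rows whose 'labels' column value is in `labels`.

    `labels` may be any array-like, including a CPU torch Tensor:
    np.asarray() converts it to a plain ndarray before isin sees it,
    sidestepping the Tensor code path reported above.
    """
    return df[df["labels"].isin(np.asarray(labels))]
```

`np.asarray` works on CPU tensors because torch implements the `__array__` protocol; calling `labels.numpy()` explicitly would do equally well.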

@mzeitlin11 mzeitlin11 added Compat pandas objects compatibility with Numpy or Python functions Enhancement isin isin method and removed Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Jul 1, 2021