Using a field representing real numbers with the iterator #78

Closed
ashudeep opened this issue Jul 21, 2017 · 11 comments · Fixed by #119

Comments

@ashudeep

I am trying to train a regressor on text data. I use torchtext in all my other tasks, but I have run into a problem with this use case.

I define the field for targets as follows:

TARGETS = data.Field(
    sequential=False, tensor_type=torch.DoubleTensor, batch_first=True)
self.fields = [('targets', TARGETS), ('text', TEXT)]
self.train, self.val, self.test = data.TabularDataset.splits(
    path=self.path,
    train=self.train_suffix,
    validation=self.val_suffix,
    test=self.test_suffix,
    format=formatting,
    fields=self.fields)
TEXT.build_vocab(self.train)

I have a file whose columns are tab-separated (\t).

When I make iterators out of it,

train_iter, val_iter, test_iter = data.Iterator.splits(
    (self.train, self.val, self.test),
    batch_sizes=(self.batch_size, self.test_batch_size,
                 self.test_batch_size),
    sort_key=lambda x: len(x.text),
    shuffle=True)
print(next(iter(train_iter)))

it gives me an error when getting the next batch:

AttributeError: 'Field' object has no attribute 'vocab'

I know this is because I didn't run .build_vocab for the TARGETS field. But why do I need to do that at all? What if I just want raw real numbers and want to compute losses on them?

Any workaround is appreciated. If I am doing something wrong, please let me know too.

@ashudeep
Author

Found the use_vocab argument 😞

@ashudeep
Author

Even after setting use_vocab=False, I get:

RuntimeError: already counted a million dimensions in a given sequence. Most likely your items are also sequences and there's no way to infer how many dimension should the tensor have

It is the same error you get when you try torch.DoubleTensor('1.2'). Is there something I am doing wrong?
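
For context, here is a minimal sketch reproducing the constructor behavior described above. It assumes a 0.2-era PyTorch, where the legacy tensor constructors raise the "million dimensions" RuntimeError on strings; newer versions raise a different exception type, so the sketch catches both:

import torch

# A string looks like a sequence of characters to the tensor constructor,
# which recurses trying to infer dimensions and gives up.
try:
    torch.DoubleTensor('1.2')
except (RuntimeError, TypeError) as e:
    print(e)

# Converting the string to a float first is exactly what torchtext would
# need to do on the user's behalf:
print(torch.DoubleTensor([float('1.2')]))  # 1-element tensor containing 1.2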

@ashudeep reopened this Jul 21, 2017
@nelson-liu
Contributor

nelson-liu commented Jul 21, 2017

Thanks for the issue.

torchtext needs to convert the number's string representation to an int or float somewhere down the line, and it currently doesn't do this. A quick fix is to manually pass a pipeline to the postprocessing argument that converts everything in the TARGETS field to int. With a slightly modified version of your code:

Edit: I just noticed that your example uses doubles; I've changed my code accordingly.

(tab-separated file)

$ cat test.txt
1.1   test string
1.2   test string2
1.3   test string3

The following works on my machine in the meantime while we patch this:

In [1]: import torch

In [2]: from torchtext import data

In [3]: TEXT = data.Field(batch_first=True)

In [4]: TARGETS = data.Field(sequential=False, tensor_type=torch.DoubleTensor, batch_first=True, use_vocab=False, postprocessing=data.Pipeline(lambda x: float(x)))

In [5]: fields = [('targets', TARGETS), ('text', TEXT)]

In [6]: dataset = data.TabularDataset(path="test.txt", format="tsv", fields=fields)

In [7]: TEXT.build_vocab(dataset)

In [8]: train_iter = data.Iterator(dataset, batch_size=1, sort_key=lambda x: len(x.text), shuffle=True)

In [9]: batch = next(iter(train_iter))

In [10]: batch.targets
Out[10]: 
Variable containing:
 1.3000
[torch.cuda.DoubleTensor of size 1 (GPU 0)]

Hope that helps.

@ashudeep
Author

Thanks for the solution @nelson-liu

@nelson-liu
Contributor

Could you leave this open for now? There is a bug behind this that would be nice to track (namely, that we don't actually convert values to numbers when use_vocab=False). Thanks!

@ashudeep
Author

Sure, I agree.

@ashudeep reopened this Jul 21, 2017
@jekbradbury
Contributor

Yeah, I originally imagined that values would be provided as Python numerical types, but that isn't really consistent with the nature of the library, which mostly loads text values. Certainly if it sees strings it should convert them!
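
For context, a sketch of the kind of string-to-number coercion being discussed. It is purely illustrative, not the actual patch (the fix landed later via #119), and the helper name is made up:

# Hypothetical helper, for illustration only -- not torchtext's real code.
# Idea: when a non-sequential field has use_vocab=False but the loaded
# values are still strings, coerce them to numbers before building a tensor.
def coerce_numeric(values):
    """Convert string values like '1.2' to floats; pass numbers through."""
    return [float(v) if isinstance(v, str) else v for v in values]

print(coerce_numeric(['1.1', '1.2', '1.3']))  # [1.1, 1.2, 1.3]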

@ImtiazKhanDS

If both of my fields (target and source) are sequences, I get the same error. Any idea how to resolve this?

@greed2411

The above didn't work for me. If anyone is still wondering: change postprocessing=data.Pipeline(lambda x: float(x)) to preprocessing=lambda x: float(x). That made it work for me (PyTorch 0.4 and torchtext 0.2.3).
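
Putting that change into the earlier working example gives a sketch like the following (same hypothetical test.txt as above; the tensor_type argument matches the 0.2.x-era torchtext API):

import torch
from torchtext import data

TEXT = data.Field(batch_first=True)
# preprocessing runs on the raw string of each example, so the target is
# already a float before numericalize() tries to build a tensor from it
TARGETS = data.Field(sequential=False, tensor_type=torch.DoubleTensor,
                     batch_first=True, use_vocab=False,
                     preprocessing=lambda x: float(x))

fields = [('targets', TARGETS), ('text', TEXT)]
dataset = data.TabularDataset(path="test.txt", format="tsv", fields=fields)
TEXT.build_vocab(dataset)  # only the text field needs a vocabulary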

@giovannipcarvalho

giovannipcarvalho commented Oct 11, 2019

@greed2411 you don't even need the lambda. Field(use_vocab=False, preprocessing=float) is enough.

Edit: it seems to work for RawField but not Field. 😕
Edit 2: ah, I forgot to set sequential=False.

@finiteautomata

data.LabelField(dtype=torch.float, use_vocab=False, preprocessing=float) does the trick, since data.LabelField already sets sequential=False (and also removes the <unk> token).
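
End to end, that variant looks roughly like this (LabelField and the dtype argument exist in newer torchtext releases; the test.txt layout is the same hypothetical one as above):

import torch
from torchtext import data

TEXT = data.Field(batch_first=True)
# LabelField already sets sequential=False and drops the <unk> token, so
# only the dtype, use_vocab and the float conversion remain to be set.
TARGETS = data.LabelField(dtype=torch.float, use_vocab=False,
                          preprocessing=float)

fields = [('targets', TARGETS), ('text', TEXT)]
dataset = data.TabularDataset(path="test.txt", format="tsv", fields=fields)
TEXT.build_vocab(dataset)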
