
cannot recognize some member in pytorch #2067

Closed
zchrissirhcz opened this issue May 8, 2018 · 5 comments
Labels
Bug 🪲 · Hacktoberfest · Help wanted 🙏 (outside help would be appreciated, good for new contributors) · High priority (issue with more than 10 reactions)

Comments

zchrissirhcz commented May 8, 2018

Steps to reproduce

  1. I use Ubuntu 16.04 (64-bit) with Python 2.7.12, and VS Code 1.23 as my text editor.
  2. I use PyTorch in GPU mode, installed with sudo pip install torch torchvision. I have a GPU with CUDA and cuDNN installed; I'm not sure whether they matter here.
  3. I already put the following in ~/.pylintrc:
[MASTER]
extension-pkg-whitelist=numpy,torch

[TYPECHECK]
ignored-modules=numpy,torch
ignored-classes=numpy,torch
generated-members=numpy.*,torch.*
  4. I run this code:
#coding:utf8
from config import opt
import os
import torch as t
import models
from data.dataset import DogCat
from torch.utils.data import DataLoader
from torch.autograd import Variable
from torchnet import meter
from utils.visualize import Visualizer
from tqdm import tqdm

def test(**kwargs):
    opt.parse(kwargs)
    # import ipdb;
    # ipdb.set_trace()
    # configure model
    model = getattr(models, opt.model)().eval()
    if opt.load_model_path:
        model.load(opt.load_model_path)
    if opt.use_gpu: model.cuda()

    # data
    train_data = DogCat(opt.test_data_root,test=True)
    test_dataloader = DataLoader(train_data,batch_size=opt.batch_size,shuffle=False,num_workers=opt.num_workers)
    results = []
    for ii,(data,path) in tqdm(enumerate(test_dataloader)):
        input = t.autograd.Variable(data,volatile = True)
        if opt.use_gpu: input = input.cuda()
        score = model(input)
        probability = t.nn.functional.softmax(score)[:,0].data.tolist()
        # label = score.max(dim = 1)[1].data.tolist()
        
        batch_results = [(path_,probability_) for path_,probability_ in zip(path,probability) ]

        results += batch_results
    write_csv(results,opt.result_file)

    return results

def write_csv(results,file_name):
    import csv
    with open(file_name,'w') as f:
        writer = csv.writer(f)
        writer.writerow(['id','label'])
        writer.writerows(results)
    
def train(**kwargs):
    opt.parse(kwargs)
    vis = Visualizer(opt.env)

    # step1: configure model
    model = getattr(models, opt.model)()
    if opt.load_model_path:
        model.load(opt.load_model_path)
    if opt.use_gpu: model.cuda()

    # step2: data
    train_data = DogCat(opt.train_data_root,train=True)
    val_data = DogCat(opt.train_data_root,train=False)
    train_dataloader = DataLoader(train_data,opt.batch_size,
                        shuffle=True,num_workers=opt.num_workers)
    val_dataloader = DataLoader(val_data,opt.batch_size,
                        shuffle=False,num_workers=opt.num_workers)
    
    # step3: criterion and optimizer
    criterion = t.nn.CrossEntropyLoss()
    lr = opt.lr
    optimizer = t.optim.Adam(model.parameters(),lr = lr,weight_decay = opt.weight_decay)
        
    # step4: meters
    loss_meter = meter.AverageValueMeter()
    confusion_matrix = meter.ConfusionMeter(2)
    previous_loss = 1e100

    # train
    for epoch in range(opt.max_epoch):
        
        loss_meter.reset()
        confusion_matrix.reset()

        for ii,(data,label) in tqdm(enumerate(train_dataloader)):

            # train model 
            inpu = Variable(data)  # 'inpu' means input (avoids shadowing the builtin)
            target = Variable(label)
            if opt.use_gpu:
                inpu = inpu.cuda()
                target = target.cuda()

            optimizer.zero_grad()
            score = model(inpu)
            loss = criterion(score,target)
            loss.backward()
            optimizer.step()
            
            
            # meters update and visualize
            loss_meter.add(loss.data[0])
            confusion_matrix.add(score.data, target.data)

            if ii%opt.print_freq==opt.print_freq-1:
                vis.plot('loss', loss_meter.value()[0])
                
                # enter debug mode
                if os.path.exists(opt.debug_file):
                    import ipdb
                    ipdb.set_trace()


        model.save()

        # validate and visualize
        val_cm,val_accuracy = val(model,val_dataloader)

        vis.plot('val_accuracy',val_accuracy)
        vis.log("epoch:{epoch},lr:{lr},loss:{loss},train_cm:{train_cm},val_cm:{val_cm}".format(
                    epoch = epoch,loss = loss_meter.value()[0],val_cm = str(val_cm.value()),train_cm=str(confusion_matrix.value()),lr=lr))
        
        # update learning rate
        if loss_meter.value()[0] > previous_loss:          
            lr = lr * opt.lr_decay
            # second way to lower the learning rate: does not lose optimizer state such as momentum
            for param_group in optimizer.param_groups:
                param_group['lr'] = lr
        

        previous_loss = loss_meter.value()[0]

def val(model,dataloader):
    """
    Compute the model's accuracy and related metrics on the validation set.
    """
    model.eval()
    confusion_matrix = meter.ConfusionMeter(2)
    for ii, data in tqdm(enumerate(dataloader)):
        input, label = data
        val_input = Variable(input, volatile=True)
        if opt.use_gpu:
            val_input = val_input.cuda()
        score = model(val_input)
        confusion_matrix.add(score.data.squeeze(), label.type(t.LongTensor))

    model.train()
    cm_value = confusion_matrix.value()
    accuracy = 100. * (cm_value[0][0] + cm_value[1][1]) / (cm_value.sum())
    return confusion_matrix, accuracy

def help():
    """
    Print the help message: python file.py help
    """
    
    print("""
    usage : python file.py <function> [--args=value]
    <function> := train | test | help
    example: 
            python {0} train --env='env0701' --lr=0.01
            python {0} test --dataset='path/to/dataset/root/'
            python {0} help
    available args:""".format(__file__))

    from inspect import getsource
    source = (getsource(opt.__class__))
    print(source)

if __name__=='__main__':
    import fire
    fire.Fire()

Current behavior

pylint complains E1101: Instance of 'Variable' has no 'cuda' member, even though the code actually runs correctly.

[screenshot of the E1101 warnings in the editor]

Expected behavior

pylint should not report E1101 for these members.

pylint --version output

Using config file /home/chris/.pylintrc
pylint 1.8.4,
astroid 1.6.3
Python 2.7.12 (default, Dec 4 2017, 14:50:18)
[GCC 5.4.0 20160609]

PCManticore (Contributor)

This is probably a bug that would need an astroid brain tip to understand what pytorch is doing here.
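
For anyone picking this up, here is a minimal sketch of what such a brain plugin could look like, assuming the MANAGER / register_module_extender API used by astroid's bundled brain_* modules at the time; the module path, the stubbed members, and the file name are illustrative only, not the real fix:

# brain_torch_sketch.py -- hypothetical plugin, not part of astroid or pylint
import astroid

def _torch_autograd_transform():
    # Return a fake module whose members astroid can infer, so pylint stops
    # reporting E1101 for attributes it cannot see, e.g. Variable.cuda.
    # The stub below is illustrative, not exhaustive.
    return astroid.parse("""
class Variable(object):
    data = None
    grad = None
    def cuda(self, device=None):
        return self
""")

astroid.register_module_extender(astroid.MANAGER, "torch.autograd", _torch_autograd_transform)

def register(linter):
    # Entry point required when the file is loaded via pylint --load-plugins.
    pass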


agajews commented Jun 29, 2018

Quick fix: it works if you set generated-members=numpy.*,torch.*,Variable
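
For reference, dropping that into the [TYPECHECK] section of the ~/.pylintrc shown above would look roughly like this (only the generated-members line changes):

[TYPECHECK]
ignored-modules=numpy,torch
ignored-classes=numpy,torch
generated-members=numpy.*,torch.*,Variable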


Duan-JM commented Apr 9, 2019

Thanks, I used @agajews's solution and it works for me.


clouds56 commented Apr 15, 2019

@agajews thanks for the tip, this helps mitigate most of the errors, but pylint still outputs

Instance of 'Variable' has no 'data' member

Am I missing something?


Update:
I'm using --generated-members=numpy.*,torch.*,*.data,*.grad to work around this.
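
For reference, a rough sketch of two equivalent ways to apply that workaround (the file name below is only a placeholder):

# on the command line
pylint --generated-members=numpy.*,torch.*,*.data,*.grad your_module.py

# or persistently, in the [TYPECHECK] section of ~/.pylintrc
generated-members=numpy.*,torch.*,*.data,*.grad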

Pierre-Sassoulas added the High priority label May 30, 2021
Pierre-Sassoulas added the Help wanted 🙏 label Jul 1, 2021
Pierre-Sassoulas pinned this issue Aug 16, 2021
Pierre-Sassoulas unpinned this issue Aug 16, 2021
DanielNoord (Collaborator)

@zchrissirhcz Has this been fixed?
