
The negative-sample generation for the test set seems to have a bug #6

Closed
ZikaiGuo opened this issue Jul 16, 2019 · 3 comments

Comments

@ZikaiGuo

Hello, author!

In the generateEvaNegative function in DataModule.py, the negative samples randomly generated for a given user in the test set should avoid both that user's positive samples in the training set and those in the test set. However, the hash_data used in generateEvaNegative only indicates whether the current sample is a positive sample in the test set. As a result, a positive sample from the training set may be drawn as a negative sample for the test set, and the model's actual performance is therefore underestimated. Is this a bug?
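To illustrate, here is a minimal standalone sketch of the sampling loop as it originally stood (the variable names mirror generateEvaNegative; the toy data is hypothetical):

# Toy illustration of the leakage (hypothetical data, not from the repo).
import numpy as np

train_positives = {(0, 5), (0, 7)}   # user 0's positives in the training set
test_positives = {(0, 9)}            # user 0's held-out positive in the test set
hash_data = test_positives           # what generateEvaNegative originally checks
num_items = 10

j = np.random.randint(num_items)
while (0, j) in hash_data:           # only rejects item 9
    j = np.random.randint(num_items)
# j can legitimately come back as 5 or 7: training positives mislabeled as
# test negatives, so the evaluation penalizes items the user actually liked.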

@PeiJieSun
Owner

Yes, thank you for pointing this out. I will set aside some time later to rework the test-set negative sampling and re-verify the experimental results. Thanks.

@PeiJieSun
Owner

PeiJieSun commented Aug 20, 2019

The change below makes generateEvaNegative avoid all positive samples, including those in train, val, and test. The code is as follows:

# Module-level imports needed by these DataModule.py methods:
from collections import defaultdict
import numpy as np

def readTotalData(self):
    # yelp.train.rating, yelp.val.rating and yelp.test.rating have been
    # concatenated into yelp.total.rating
    filename = '/home/sunpeijie/files/task/diffnet/data/yelp/yelp.total.rating'
    f = open(filename)
    total_data = defaultdict(int)
    for line in f:
        arr = line.split('\t')
        # record every (user, item) positive pair across all three splits
        total_data[(int(arr[0]), int(arr[1]))] = 1
    self.total_data = total_data

def generateEvaNegative(self):
    #hash_data = self.hash_data          # old: only test-set positives
    total_data = self.total_data          # new: positives from train, val and test
    total_user_list = self.total_user_list
    num_evaluate = self.conf.num_evaluate
    num_items = self.conf.num_items
    eva_negative_data = defaultdict(list)
    for u in total_user_list:
        for _ in range(num_evaluate):
            j = np.random.randint(num_items)
            #while (u, j) in hash_data:   # old check missed train/val positives
            while (u, j) in total_data:   # resample until j is a true negative
                j = np.random.randint(num_items)
            eva_negative_data[u].append(j)
    self.eva_negative_data = eva_negative_data
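
As a side note, the concatenation mentioned in the comment of readTotalData can be done in a few lines; a minimal sketch, assuming the three split files sit next to yelp.total.rating in the same directory:

# Hypothetical helper: build yelp.total.rating from the three split files.
data_dir = '/home/sunpeijie/files/task/diffnet/data/yelp/'
splits = ['yelp.train.rating', 'yelp.val.rating', 'yelp.test.rating']

with open(data_dir + 'yelp.total.rating', 'w') as out:
    for split in splits:
        with open(data_dir + split) as f:
            for line in f:
                out.write(line)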

The training process with the fix is shown below:

Epoch:1, compute loss cost:4.4887s, train loss:976.1782, val loss:921.9982, test loss:4801.2915
Evaluate cost:4.8390s, hr:0.3344, ndcg:0.2040

Epoch:2, compute loss cost:1.1450s, train loss:903.5286, val loss:916.6892, test loss:4764.4575
Evaluate cost:4.0666s, hr:0.3374, ndcg:0.2060

Epoch:3, compute loss cost:1.3662s, train loss:885.0439, val loss:915.4321, test loss:4751.1250
Evaluate cost:3.9642s, hr:0.3388, ndcg:0.2069

Epoch:4, compute loss cost:1.2752s, train loss:872.8805, val loss:913.7832, test loss:4741.5625
Evaluate cost:4.0750s, hr:0.3388, ndcg:0.2073

Epoch:5, compute loss cost:1.4054s, train loss:862.0972, val loss:913.4442, test loss:4733.7803
Evaluate cost:3.9750s, hr:0.3395, ndcg:0.2079

Epoch:6, compute loss cost:1.2529s, train loss:852.9391, val loss:912.8486, test loss:4730.5098
Evaluate cost:3.9459s, hr:0.3387, ndcg:0.2078

Epoch:7, compute loss cost:1.3311s, train loss:839.2758, val loss:910.7900, test loss:4726.5449
Evaluate cost:3.9413s, hr:0.3405, ndcg:0.2085

Epoch:8, compute loss cost:1.1541s, train loss:832.8566, val loss:910.8615, test loss:4726.2988
Evaluate cost:4.0341s, hr:0.3412, ndcg:0.2086

Epoch:9, compute loss cost:1.2854s, train loss:822.4843, val loss:910.3232, test loss:4720.5449
Evaluate cost:3.9594s, hr:0.3404, ndcg:0.2087

Epoch:10, compute loss cost:1.2378s, train loss:813.4489, val loss:910.8851, test loss:4720.4375
Evaluate cost:4.0082s, hr:0.3396, ndcg:0.2086

Epoch:11, compute loss cost:1.4024s, train loss:804.9924, val loss:910.4186, test loss:4718.1494
Evaluate cost:4.1083s, hr:0.3416, ndcg:0.2092

Epoch:12, compute loss cost:1.2963s, train loss:797.4242, val loss:908.8552, test loss:4715.7480
Evaluate cost:4.0213s, hr:0.3425, ndcg:0.2102

Epoch:13, compute loss cost:1.1518s, train loss:791.3901, val loss:908.9052, test loss:4715.8457
Evaluate cost:4.0870s, hr:0.3410, ndcg:0.2095

Epoch:14, compute loss cost:1.3260s, train loss:782.3792, val loss:908.7347, test loss:4714.4048
Evaluate cost:4.0248s, hr:0.3416, ndcg:0.2093

Epoch:15, compute loss cost:1.3423s, train loss:776.7711, val loss:909.3613, test loss:4718.0088
Evaluate cost:4.0250s, hr:0.3412, ndcg:0.2090

Epoch:16, compute loss cost:1.4019s, train loss:767.5394, val loss:909.2978, test loss:4717.3696
Evaluate cost:3.9777s, hr:0.3411, ndcg:0.2090

Epoch:17, compute loss cost:1.2562s, train loss:763.2227, val loss:909.4788, test loss:4718.4609
Evaluate cost:4.0097s, hr:0.3418, ndcg:0.2097

Epoch:18, compute loss cost:1.3210s, train loss:753.9173, val loss:909.2440, test loss:4715.9062
Evaluate cost:3.9800s, hr:0.3410, ndcg:0.2095

Epoch:19, compute loss cost:1.1679s, train loss:746.9887, val loss:909.7432, test loss:4717.8345
Evaluate cost:3.9110s, hr:0.3420, ndcg:0.2100

Epoch:20, compute loss cost:1.2792s, train loss:740.2877, val loss:910.1975, test loss:4720.4590
Evaluate cost:3.9135s, hr:0.3405, ndcg:0.2093

Epoch:21, compute loss cost:1.2932s, train loss:733.3928, val loss:909.7073, test loss:4718.4399
Evaluate cost:4.3458s, hr:0.3410, ndcg:0.2091

Epoch:22, compute loss cost:1.3098s, train loss:727.5326, val loss:911.1656, test loss:4726.4248
Evaluate cost:3.9070s, hr:0.3400, ndcg:0.2089

Next are the results of the original code after 20 epochs:

Epoch:1, compute loss cost:4.4047s, train loss:977.7568, val loss:911.8739, test loss:4820.8652
Evaluate cost:4.8005s, hr:0.3292, ndcg:0.1991

Epoch:2, compute loss cost:1.1652s, train loss:905.0388, val loss:905.4276, test loss:4780.8911
Evaluate cost:4.0661s, hr:0.3316, ndcg:0.2021

Epoch:3, compute loss cost:1.2145s, train loss:885.4837, val loss:902.9839, test loss:4768.2319
Evaluate cost:3.9950s, hr:0.3346, ndcg:0.2037

Epoch:4, compute loss cost:1.2132s, train loss:872.8642, val loss:901.1090, test loss:4755.9219
Evaluate cost:3.9527s, hr:0.3368, ndcg:0.2047

Epoch:5, compute loss cost:1.3428s, train loss:860.3478, val loss:899.8690, test loss:4749.0400
Evaluate cost:4.0359s, hr:0.3360, ndcg:0.2045

Epoch:6, compute loss cost:1.1439s, train loss:848.9529, val loss:898.0026, test loss:4742.5396
Evaluate cost:4.0447s, hr:0.3373, ndcg:0.2051

Epoch:7, compute loss cost:1.3440s, train loss:843.2498, val loss:898.4583, test loss:4740.3965
Evaluate cost:4.1448s, hr:0.3365, ndcg:0.2046

Epoch:8, compute loss cost:1.2179s, train loss:831.7834, val loss:896.5043, test loss:4734.0811
Evaluate cost:4.0220s, hr:0.3378, ndcg:0.2051

Epoch:9, compute loss cost:1.3796s, train loss:824.1720, val loss:896.7383, test loss:4731.4961
Evaluate cost:4.0968s, hr:0.3375, ndcg:0.2052

Epoch:10, compute loss cost:1.1316s, train loss:815.3879, val loss:896.5787, test loss:4729.8359
Evaluate cost:3.9890s, hr:0.3382, ndcg:0.2050

Epoch:11, compute loss cost:1.2635s, train loss:807.6373, val loss:896.3104, test loss:4727.5586
Evaluate cost:3.9969s, hr:0.3371, ndcg:0.2047

Epoch:12, compute loss cost:1.2897s, train loss:798.9100, val loss:896.3608, test loss:4726.1875
Evaluate cost:3.9736s, hr:0.3364, ndcg:0.2048

Epoch:13, compute loss cost:1.4416s, train loss:791.8956, val loss:896.0686, test loss:4726.3369
Evaluate cost:3.9479s, hr:0.3358, ndcg:0.2038

Epoch:14, compute loss cost:1.4195s, train loss:781.8600, val loss:894.9672, test loss:4724.1260
Evaluate cost:4.0348s, hr:0.3357, ndcg:0.2036

Epoch:15, compute loss cost:1.1752s, train loss:778.1496, val loss:896.0370, test loss:4727.1182
Evaluate cost:4.0657s, hr:0.3372, ndcg:0.2039

Epoch:16, compute loss cost:1.4490s, train loss:770.1089, val loss:895.8448, test loss:4728.9053
Evaluate cost:4.1832s, hr:0.3359, ndcg:0.2035

Epoch:17, compute loss cost:1.2218s, train loss:761.5718, val loss:895.0361, test loss:4728.0410
Evaluate cost:4.0020s, hr:0.3361, ndcg:0.2036

Epoch:18, compute loss cost:1.3718s, train loss:753.7161, val loss:895.0839, test loss:4727.6699
Evaluate cost:4.1473s, hr:0.3383, ndcg:0.2045

Epoch:19, compute loss cost:1.2316s, train loss:748.7736, val loss:895.0060, test loss:4727.8232
Evaluate cost:4.0401s, hr:0.3367, ndcg:0.2040

Epoch:20, compute loss cost:1.3455s, train loss:740.0026, val loss:894.8278, test loss:4726.8804
Evaluate cost:4.0321s, hr:0.3357, ndcg:0.2030

Comparing the two training runs, we can verify that avoiding all positive samples when drawing the test negatives does improve the model's evaluated performance. Thanks for the feedback.
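For reference, the hr and ndcg values in the logs above are the hit ratio and normalized DCG of the held-out item; a minimal sketch of the standard leave-one-out formulation (the textbook definition, not necessarily this repo's exact evaluation code):

import numpy as np

def hr_ndcg_at_k(rank, k=10):
    # rank is the 0-based position of the held-out positive among the
    # ranked candidates (the positive plus the sampled negatives).
    if rank < k:
        return 1.0, 1.0 / np.log2(rank + 2)  # one relevant item, so IDCG = 1
    return 0.0, 0.0

hr, ndcg = hr_ndcg_at_k(rank=2)  # held-out item ranked 3rd
print(hr, ndcg)                  # 1.0 0.5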

@ZikaiGuo
Author

Thank you for your reply!
