
Convert string error. Any comments? #5

Closed
tigerneil opened this issue Apr 11, 2017 · 2 comments
Comments

@tigerneil

Running `python train.py`, I got the following error. It seems to be a scipy problem, but I can't figure it out.

The full traceback:

Using TensorFlow backend.
Loading cora dataset...
Traceback (most recent call last):
  File "train.py", line 22, in <module>
    X, A, y = load_data(dataset=DATASET)
  File "/Users/Tiger/anaconda/envs/rllab3/lib/python3.5/site-packages/kegra-0.0.1-py3.5.egg/kegra/utils.py", line 20, in load_data
    features = sp.csr_matrix(idx_features_labels[:, 1:-2], dtype=np.float32)
  File "/Users/Tiger/anaconda/envs/rllab3/lib/python3.5/site-packages/scipy/sparse/compressed.py", line 79, in __init__
    self._set_self(self.__class__(coo_matrix(arg1, dtype=dtype)))
  File "/Users/Tiger/anaconda/envs/rllab3/lib/python3.5/site-packages/scipy/sparse/coo.py", line 182, in __init__
    self.data = self.data.astype(dtype, copy=False)
ValueError: could not convert string to float: "b'0'"
@tkipf
Owner

tkipf commented Apr 11, 2017

Interesting - I haven't seen this before. It could be an issue related to Python 3.5. Would you mind trying this in Python 2.7? I'll look into it in the meantime.
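The failing value `"b'0'"` in the traceback looks like the *string repr* of a Python 3 bytes object: under Python 3, `np.genfromtxt` can return bytes, and if those get stringified before the `sp.csr_matrix(..., dtype=np.float32)` call in `utils.py`, the float conversion fails. Here's a minimal sketch of the failure and a possible workaround; the single `row` and the explicit `dtype=str` are illustrative assumptions, not the repo's actual loading code.

```python
import io

import numpy as np
import scipy.sparse as sp

# Converting the repr of a bytes object to float fails exactly like the traceback.
try:
    float("b'0'")
except ValueError as err:
    print(err)

# Hypothetical workaround sketch: read a cora .content row with an explicit
# unicode string dtype so every field comes back as clean text, then build
# the sparse feature matrix from the middle columns (id, features..., label).
row = "31336\t0\t1\t0\tNeural_Networks"
idx_features_labels = np.genfromtxt(io.StringIO(row), dtype=str)
features = sp.csr_matrix(
    idx_features_labels[1:-1].astype(np.float32).reshape(1, -1)
)
print(features.toarray())  # the three binary feature columns as floats
```

If this is the cause, forcing a unicode dtype (or decoding the bytes) in `load_data` should make it work under Python 3 as well.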

@tigerneil
Author

tigerneil commented Apr 11, 2017

I just checked: Python 2.7 works, with TensorFlow 1.0 and Keras 1.2.2.

Using TensorFlow backend.
Loading cora dataset...
Dataset has 2708 nodes, 5429 edges, 1432 features.
Using local pooling filters...
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Epoch: 0001 train_loss= 1.9408 train_acc= 0.5500 val_loss= 1.9420 val_acc= 0.4733 time= 0.9435
Epoch: 0002 train_loss= 1.9354 train_acc= 0.5786 val_loss= 1.9379 val_acc= 0.5100 time= 0.1498
Epoch: 0003 train_loss= 1.9291 train_acc= 0.6214 val_loss= 1.9332 val_acc= 0.5600 time= 0.1436
Epoch: 0004 train_loss= 1.9219 train_acc= 0.6714 val_loss= 1.9278 val_acc= 0.5767 time= 0.1404
Epoch: 0005 train_loss= 1.9141 train_acc= 0.6857 val_loss= 1.9221 val_acc= 0.5867 time= 0.1386
Epoch: 0006 train_loss= 1.9058 train_acc= 0.7000 val_loss= 1.9162 val_acc= 0.5867 time= 0.1397
Epoch: 0007 train_loss= 1.8970 train_acc= 0.7071 val_loss= 1.9098 val_acc= 0.5800 time= 0.1421
Epoch: 0008 train_loss= 1.8875 train_acc= 0.7143 val_loss= 1.9030 val_acc= 0.5767 time= 0.1479
Epoch: 0009 train_loss= 1.8774 train_acc= 0.7071 val_loss= 1.8958 val_acc= 0.5633 time= 0.1428
Epoch: 0010 train_loss= 1.8668 train_acc= 0.6786 val_loss= 1.8881 val_acc= 0.5600 time= 0.1438
Epoch: 0011 train_loss= 1.8556 train_acc= 0.6643 val_loss= 1.8801 val_acc= 0.5433 time= 0.1393
Epoch: 0012 train_loss= 1.8439 train_acc= 0.6571 val_loss= 1.8716 val_acc= 0.5267 time= 0.1376
Epoch: 0013 train_loss= 1.8317 train_acc= 0.6357 val_loss= 1.8628 val_acc= 0.5200 time= 0.1468
Epoch: 0014 train_loss= 1.8190 train_acc= 0.6357 val_loss= 1.8536 val_acc= 0.5167 time= 0.1538
Epoch: 0015 train_loss= 1.8059 train_acc= 0.6143 val_loss= 1.8440 val_acc= 0.5000 time= 0.1485
Epoch: 0016 train_loss= 1.7923 train_acc= 0.6143 val_loss= 1.8341 val_acc= 0.4967 time= 0.1380
Epoch: 0017 train_loss= 1.7783 train_acc= 0.6071 val_loss= 1.8239 val_acc= 0.4933 time= 0.1382
Epoch: 0018 train_loss= 1.7639 train_acc= 0.6071 val_loss= 1.8133 val_acc= 0.4933 time= 0.1420
Epoch: 0019 train_loss= 1.7492 train_acc= 0.5929 val_loss= 1.8025 val_acc= 0.4900 time= 0.1437
Epoch: 0020 train_loss= 1.7342 train_acc= 0.5929 val_loss= 1.7916 val_acc= 0.4867 time= 0.1394
Epoch: 0021 train_loss= 1.7189 train_acc= 0.5929 val_loss= 1.7805 val_acc= 0.4867 time= 0.1436
Epoch: 0022 train_loss= 1.7033 train_acc= 0.5929 val_loss= 1.7690 val_acc= 0.4867 time= 0.1489
Epoch: 0023 train_loss= 1.6875 train_acc= 0.5857 val_loss= 1.7575 val_acc= 0.4833 time= 0.1354
Epoch: 0024 train_loss= 1.6715 train_acc= 0.5786 val_loss= 1.7458 val_acc= 0.4800 time= 0.1378
Epoch: 0025 train_loss= 1.6552 train_acc= 0.5714 val_loss= 1.7340 val_acc= 0.4767 time= 0.1405
Epoch: 0026 train_loss= 1.6389 train_acc= 0.5714 val_loss= 1.7220 val_acc= 0.4767 time= 0.1400
Epoch: 0027 train_loss= 1.6224 train_acc= 0.5714 val_loss= 1.7099 val_acc= 0.4800 time= 0.1405
Epoch: 0028 train_loss= 1.6059 train_acc= 0.5786 val_loss= 1.6977 val_acc= 0.4800 time= 0.1407
Epoch: 0029 train_loss= 1.5892 train_acc= 0.5857 val_loss= 1.6854 val_acc= 0.4833 time= 0.1390
Epoch: 0030 train_loss= 1.5725 train_acc= 0.5929 val_loss= 1.6730 val_acc= 0.4900 time= 0.1404
Epoch: 0031 train_loss= 1.5557 train_acc= 0.6071 val_loss= 1.6605 val_acc= 0.4933 time= 0.1392
Epoch: 0032 train_loss= 1.5389 train_acc= 0.6143 val_loss= 1.6480 val_acc= 0.4933 time= 0.1433
Epoch: 0033 train_loss= 1.5222 train_acc= 0.6214 val_loss= 1.6354 val_acc= 0.4933 time= 0.1436
Epoch: 0034 train_loss= 1.5054 train_acc= 0.6214 val_loss= 1.6229 val_acc= 0.4967 time= 0.1358
Epoch: 0035 train_loss= 1.4887 train_acc= 0.6214 val_loss= 1.6104 val_acc= 0.5067 time= 0.1478
Epoch: 0036 train_loss= 1.4721 train_acc= 0.6214 val_loss= 1.5979 val_acc= 0.5133 time= 0.1464
Epoch: 0037 train_loss= 1.4557 train_acc= 0.6286 val_loss= 1.5855 val_acc= 0.5200 time= 0.1377
Epoch: 0038 train_loss= 1.4394 train_acc= 0.6357 val_loss= 1.5731 val_acc= 0.5300 time= 0.1458
Epoch: 0039 train_loss= 1.4233 train_acc= 0.6429 val_loss= 1.5609 val_acc= 0.5400 time= 0.1348
Epoch: 0040 train_loss= 1.4073 train_acc= 0.6571 val_loss= 1.5486 val_acc= 0.5533 time= 0.1324
Epoch: 0041 train_loss= 1.3914 train_acc= 0.6643 val_loss= 1.5364 val_acc= 0.5667 time= 0.1418
Epoch: 0042 train_loss= 1.3757 train_acc= 0.6857 val_loss= 1.5243 val_acc= 0.5800 time= 0.1387
Epoch: 0043 train_loss= 1.3601 train_acc= 0.7000 val_loss= 1.5123 val_acc= 0.5900 time= 0.1473
Epoch: 0044 train_loss= 1.3444 train_acc= 0.7357 val_loss= 1.5003 val_acc= 0.6000 time= 0.1377
Epoch: 0045 train_loss= 1.3290 train_acc= 0.7500 val_loss= 1.4883 val_acc= 0.6033 time= 0.1422
Epoch: 0046 train_loss= 1.3138 train_acc= 0.7571 val_loss= 1.4764 val_acc= 0.6233 time= 0.1388
Epoch: 0047 train_loss= 1.2988 train_acc= 0.7500 val_loss= 1.4648 val_acc= 0.6300 time= 0.1411
Epoch: 0048 train_loss= 1.2839 train_acc= 0.7714 val_loss= 1.4530 val_acc= 0.6367 time= 0.1347
Epoch: 0049 train_loss= 1.2694 train_acc= 0.7857 val_loss= 1.4415 val_acc= 0.6467 time= 0.1407
Epoch: 0050 train_loss= 1.2551 train_acc= 0.7929 val_loss= 1.4303 val_acc= 0.6600 time= 0.1460
Epoch: 0051 train_loss= 1.2411 train_acc= 0.8000 val_loss= 1.4192 val_acc= 0.6700 time= 0.1424
Epoch: 0052 train_loss= 1.2274 train_acc= 0.8214 val_loss= 1.4084 val_acc= 0.6800 time= 0.1388
Epoch: 0053 train_loss= 1.2139 train_acc= 0.8214 val_loss= 1.3977 val_acc= 0.6900 time= 0.1387
Epoch: 0054 train_loss= 1.2007 train_acc= 0.8214 val_loss= 1.3874 val_acc= 0.7033 time= 0.1441
Epoch: 0055 train_loss= 1.1876 train_acc= 0.8214 val_loss= 1.3769 val_acc= 0.7067 time= 0.1390
Epoch: 0056 train_loss= 1.1747 train_acc= 0.8286 val_loss= 1.3667 val_acc= 0.7133 time= 0.1387
Epoch: 0057 train_loss= 1.1619 train_acc= 0.8429 val_loss= 1.3565 val_acc= 0.7233 time= 0.1512
Epoch: 0058 train_loss= 1.1494 train_acc= 0.8429 val_loss= 1.3467 val_acc= 0.7233 time= 0.1443
Epoch: 0059 train_loss= 1.1370 train_acc= 0.8500 val_loss= 1.3370 val_acc= 0.7233 time= 0.1367
Epoch: 0060 train_loss= 1.1247 train_acc= 0.8571 val_loss= 1.3275 val_acc= 0.7267 time= 0.1414
Epoch: 0061 train_loss= 1.1129 train_acc= 0.8643 val_loss= 1.3183 val_acc= 0.7267 time= 0.1509
Epoch: 0062 train_loss= 1.1011 train_acc= 0.8643 val_loss= 1.3091 val_acc= 0.7333 time= 0.1428
Epoch: 0063 train_loss= 1.0894 train_acc= 0.8643 val_loss= 1.2998 val_acc= 0.7367 time= 0.1381
Epoch: 0064 train_loss= 1.0779 train_acc= 0.8714 val_loss= 1.2906 val_acc= 0.7367 time= 0.1487
Epoch: 0065 train_loss= 1.0668 train_acc= 0.8786 val_loss= 1.2816 val_acc= 0.7400 time= 0.1438
Epoch: 0066 train_loss= 1.0558 train_acc= 0.8857 val_loss= 1.2727 val_acc= 0.7433 time= 0.1345
Epoch: 0067 train_loss= 1.0448 train_acc= 0.8857 val_loss= 1.2640 val_acc= 0.7433 time= 0.1384
Epoch: 0068 train_loss= 1.0340 train_acc= 0.8857 val_loss= 1.2554 val_acc= 0.7433 time= 0.1385
Epoch: 0069 train_loss= 1.0235 train_acc= 0.8857 val_loss= 1.2471 val_acc= 0.7467 time= 0.1439
Epoch: 0070 train_loss= 1.0133 train_acc= 0.8857 val_loss= 1.2390 val_acc= 0.7500 time= 0.1400
Epoch: 0071 train_loss= 1.0033 train_acc= 0.8929 val_loss= 1.2313 val_acc= 0.7533 time= 0.1514
Epoch: 0072 train_loss= 0.9935 train_acc= 0.8929 val_loss= 1.2236 val_acc= 0.7500 time= 0.1414
Epoch: 0073 train_loss= 0.9840 train_acc= 0.8929 val_loss= 1.2162 val_acc= 0.7567 time= 0.1415
Epoch: 0074 train_loss= 0.9745 train_acc= 0.8929 val_loss= 1.2087 val_acc= 0.7567 time= 0.1442
Epoch: 0075 train_loss= 0.9652 train_acc= 0.9000 val_loss= 1.2013 val_acc= 0.7633 time= 0.1379
Epoch: 0076 train_loss= 0.9560 train_acc= 0.9000 val_loss= 1.1941 val_acc= 0.7633 time= 0.1416
Epoch: 0077 train_loss= 0.9470 train_acc= 0.9000 val_loss= 1.1869 val_acc= 0.7600 time= 0.1412
Epoch: 0078 train_loss= 0.9381 train_acc= 0.9000 val_loss= 1.1798 val_acc= 0.7633 time= 0.1503
Epoch: 0079 train_loss= 0.9295 train_acc= 0.9000 val_loss= 1.1729 val_acc= 0.7633 time= 0.1399
Epoch: 0080 train_loss= 0.9211 train_acc= 0.9000 val_loss= 1.1661 val_acc= 0.7633 time= 0.1435
Epoch: 0081 train_loss= 0.9128 train_acc= 0.9000 val_loss= 1.1593 val_acc= 0.7667 time= 0.1447
Epoch: 0082 train_loss= 0.9048 train_acc= 0.9000 val_loss= 1.1527 val_acc= 0.7667 time= 0.1462
Epoch: 0083 train_loss= 0.8968 train_acc= 0.9000 val_loss= 1.1461 val_acc= 0.7667 time= 0.1380
Epoch: 0084 train_loss= 0.8889 train_acc= 0.9000 val_loss= 1.1398 val_acc= 0.7700 time= 0.1437
Epoch: 0085 train_loss= 0.8811 train_acc= 0.9000 val_loss= 1.1336 val_acc= 0.7767 time= 0.1533
Epoch: 0086 train_loss= 0.8734 train_acc= 0.9000 val_loss= 1.1273 val_acc= 0.7800 time= 0.1436
Epoch: 0087 train_loss= 0.8657 train_acc= 0.9000 val_loss= 1.1212 val_acc= 0.7800 time= 0.1486
Epoch: 0088 train_loss= 0.8581 train_acc= 0.9000 val_loss= 1.1154 val_acc= 0.7833 time= 0.1407
Epoch: 0089 train_loss= 0.8507 train_acc= 0.9000 val_loss= 1.1095 val_acc= 0.7833 time= 0.1493
Epoch: 0090 train_loss= 0.8432 train_acc= 0.9071 val_loss= 1.1038 val_acc= 0.7833 time= 0.1413
Epoch: 0091 train_loss= 0.8358 train_acc= 0.9071 val_loss= 1.0983 val_acc= 0.7833 time= 0.1462
Epoch: 0092 train_loss= 0.8287 train_acc= 0.9071 val_loss= 1.0929 val_acc= 0.7800 time= 0.1538
Epoch: 0093 train_loss= 0.8217 train_acc= 0.9071 val_loss= 1.0878 val_acc= 0.7800 time= 0.1451
Epoch: 0094 train_loss= 0.8151 train_acc= 0.9071 val_loss= 1.0830 val_acc= 0.7800 time= 0.1487
Epoch: 0095 train_loss= 0.8086 train_acc= 0.9143 val_loss= 1.0782 val_acc= 0.7800 time= 0.1450
Epoch: 0096 train_loss= 0.8021 train_acc= 0.9143 val_loss= 1.0731 val_acc= 0.7833 time= 0.1482
Epoch: 0097 train_loss= 0.7957 train_acc= 0.9143 val_loss= 1.0681 val_acc= 0.7867 time= 0.1432
Epoch: 0098 train_loss= 0.7894 train_acc= 0.9143 val_loss= 1.0630 val_acc= 0.7767 time= 0.1399
Epoch: 0099 train_loss= 0.7829 train_acc= 0.9143 val_loss= 1.0579 val_acc= 0.7767 time= 0.1429
Epoch: 0100 train_loss= 0.7766 train_acc= 0.9143 val_loss= 1.0527 val_acc= 0.7767 time= 0.1422
Epoch: 0101 train_loss= 0.7703 train_acc= 0.9143 val_loss= 1.0477 val_acc= 0.7767 time= 0.1440
Epoch: 0102 train_loss= 0.7640 train_acc= 0.9143 val_loss= 1.0430 val_acc= 0.7767 time= 0.1502
Epoch: 0103 train_loss= 0.7579 train_acc= 0.9143 val_loss= 1.0383 val_acc= 0.7767 time= 0.1459
Epoch: 0104 train_loss= 0.7518 train_acc= 0.9143 val_loss= 1.0337 val_acc= 0.7767 time= 0.1439
Epoch: 0105 train_loss= 0.7459 train_acc= 0.9143 val_loss= 1.0293 val_acc= 0.7767 time= 0.1505
Epoch: 0106 train_loss= 0.7402 train_acc= 0.9143 val_loss= 1.0249 val_acc= 0.7767 time= 0.1469
Epoch: 0107 train_loss= 0.7344 train_acc= 0.9143 val_loss= 1.0203 val_acc= 0.7800 time= 0.1423
Epoch: 0108 train_loss= 0.7288 train_acc= 0.9143 val_loss= 1.0156 val_acc= 0.7833 time= 0.1567
Epoch: 0109 train_loss= 0.7233 train_acc= 0.9286 val_loss= 1.0112 val_acc= 0.7900 time= 0.1416
Epoch: 0110 train_loss= 0.7178 train_acc= 0.9286 val_loss= 1.0069 val_acc= 0.7900 time= 0.1464
Epoch: 0111 train_loss= 0.7123 train_acc= 0.9286 val_loss= 1.0026 val_acc= 0.7900 time= 0.1464
Epoch: 0112 train_loss= 0.7069 train_acc= 0.9286 val_loss= 0.9984 val_acc= 0.7933 time= 0.1484
Epoch: 0113 train_loss= 0.7015 train_acc= 0.9286 val_loss= 0.9941 val_acc= 0.7933 time= 0.1427
Epoch: 0114 train_loss= 0.6961 train_acc= 0.9286 val_loss= 0.9897 val_acc= 0.7933 time= 0.1393
Epoch: 0115 train_loss= 0.6908 train_acc= 0.9286 val_loss= 0.9854 val_acc= 0.7933 time= 0.1440
Epoch: 0116 train_loss= 0.6857 train_acc= 0.9286 val_loss= 0.9811 val_acc= 0.7933 time= 0.1505
Epoch: 0117 train_loss= 0.6806 train_acc= 0.9286 val_loss= 0.9768 val_acc= 0.7900 time= 0.1404
Epoch: 0118 train_loss= 0.6757 train_acc= 0.9286 val_loss= 0.9726 val_acc= 0.7867 time= 0.1445
Epoch: 0119 train_loss= 0.6709 train_acc= 0.9286 val_loss= 0.9686 val_acc= 0.7867 time= 0.1512
Epoch: 0120 train_loss= 0.6664 train_acc= 0.9286 val_loss= 0.9652 val_acc= 0.7867 time= 0.1468
Epoch: 0121 train_loss= 0.6619 train_acc= 0.9357 val_loss= 0.9620 val_acc= 0.7867 time= 0.1462
Epoch: 0122 train_loss= 0.6575 train_acc= 0.9357 val_loss= 0.9590 val_acc= 0.7867 time= 0.1423
Epoch: 0123 train_loss= 0.6530 train_acc= 0.9357 val_loss= 0.9561 val_acc= 0.7867 time= 0.1418
Epoch: 0124 train_loss= 0.6486 train_acc= 0.9357 val_loss= 0.9534 val_acc= 0.7900 time= 0.1365
Epoch: 0125 train_loss= 0.6443 train_acc= 0.9357 val_loss= 0.9509 val_acc= 0.7900 time= 0.1422
Epoch: 0126 train_loss= 0.6401 train_acc= 0.9357 val_loss= 0.9487 val_acc= 0.7967 time= 0.1448
Epoch: 0127 train_loss= 0.6359 train_acc= 0.9357 val_loss= 0.9465 val_acc= 0.7967 time= 0.1384
Epoch: 0128 train_loss= 0.6319 train_acc= 0.9357 val_loss= 0.9442 val_acc= 0.7967 time= 0.1418
Epoch: 0129 train_loss= 0.6281 train_acc= 0.9357 val_loss= 0.9418 val_acc= 0.7933 time= 0.1410
Epoch: 0130 train_loss= 0.6243 train_acc= 0.9357 val_loss= 0.9396 val_acc= 0.7933 time= 0.1453
Epoch: 0131 train_loss= 0.6206 train_acc= 0.9429 val_loss= 0.9374 val_acc= 0.7933 time= 0.1418
Epoch: 0132 train_loss= 0.6168 train_acc= 0.9429 val_loss= 0.9350 val_acc= 0.7933 time= 0.1438
Epoch: 0133 train_loss= 0.6131 train_acc= 0.9500 val_loss= 0.9323 val_acc= 0.7933 time= 0.1490
Epoch: 0134 train_loss= 0.6095 train_acc= 0.9500 val_loss= 0.9294 val_acc= 0.7967 time= 0.1503
Epoch: 0135 train_loss= 0.6058 train_acc= 0.9500 val_loss= 0.9265 val_acc= 0.7967 time= 0.1431
Epoch: 0136 train_loss= 0.6022 train_acc= 0.9500 val_loss= 0.9233 val_acc= 0.7967 time= 0.1386
Epoch: 0137 train_loss= 0.5986 train_acc= 0.9571 val_loss= 0.9202 val_acc= 0.7967 time= 0.1469
Epoch: 0138 train_loss= 0.5950 train_acc= 0.9571 val_loss= 0.9172 val_acc= 0.7967 time= 0.1391
Epoch: 0139 train_loss= 0.5914 train_acc= 0.9571 val_loss= 0.9141 val_acc= 0.7967 time= 0.1411
Epoch: 0140 train_loss= 0.5879 train_acc= 0.9571 val_loss= 0.9111 val_acc= 0.7967 time= 0.1439
Epoch: 0141 train_loss= 0.5843 train_acc= 0.9571 val_loss= 0.9082 val_acc= 0.8000 time= 0.1389
Epoch: 0142 train_loss= 0.5806 train_acc= 0.9571 val_loss= 0.9052 val_acc= 0.8000 time= 0.1467
Epoch: 0143 train_loss= 0.5770 train_acc= 0.9571 val_loss= 0.9021 val_acc= 0.7967 time= 0.1491
Epoch: 0144 train_loss= 0.5733 train_acc= 0.9571 val_loss= 0.8985 val_acc= 0.7967 time= 0.1456
Epoch: 0145 train_loss= 0.5697 train_acc= 0.9643 val_loss= 0.8947 val_acc= 0.7967 time= 0.1430
Epoch: 0146 train_loss= 0.5661 train_acc= 0.9643 val_loss= 0.8914 val_acc= 0.7967 time= 0.1439
Epoch: 0147 train_loss= 0.5626 train_acc= 0.9643 val_loss= 0.8883 val_acc= 0.7967 time= 0.1561
Epoch: 0148 train_loss= 0.5592 train_acc= 0.9643 val_loss= 0.8854 val_acc= 0.7967 time= 0.1422
Epoch: 0149 train_loss= 0.5557 train_acc= 0.9643 val_loss= 0.8825 val_acc= 0.7967 time= 0.1459
Epoch: 0150 train_loss= 0.5522 train_acc= 0.9643 val_loss= 0.8799 val_acc= 0.8000 time= 0.1418
Epoch: 0151 train_loss= 0.5488 train_acc= 0.9643 val_loss= 0.8772 val_acc= 0.8000 time= 0.1412
Epoch: 0152 train_loss= 0.5455 train_acc= 0.9643 val_loss= 0.8747 val_acc= 0.8067 time= 0.1406
Epoch: 0153 train_loss= 0.5423 train_acc= 0.9643 val_loss= 0.8723 val_acc= 0.8067 time= 0.1425
Epoch: 0154 train_loss= 0.5391 train_acc= 0.9643 val_loss= 0.8700 val_acc= 0.8067 time= 0.1539
Epoch: 0155 train_loss= 0.5360 train_acc= 0.9643 val_loss= 0.8675 val_acc= 0.8067 time= 0.1423
Epoch: 0156 train_loss= 0.5330 train_acc= 0.9643 val_loss= 0.8650 val_acc= 0.8067 time= 0.1420
Epoch: 0157 train_loss= 0.5300 train_acc= 0.9643 val_loss= 0.8626 val_acc= 0.8033 time= 0.1407
Epoch: 0158 train_loss= 0.5272 train_acc= 0.9643 val_loss= 0.8602 val_acc= 0.8033 time= 0.1442
Epoch: 0159 train_loss= 0.5243 train_acc= 0.9643 val_loss= 0.8578 val_acc= 0.8000 time= 0.1411
Epoch: 0160 train_loss= 0.5215 train_acc= 0.9643 val_loss= 0.8555 val_acc= 0.8000 time= 0.1412
Epoch: 0161 train_loss= 0.5188 train_acc= 0.9643 val_loss= 0.8531 val_acc= 0.8033 time= 0.1542
Epoch: 0162 train_loss= 0.5161 train_acc= 0.9571 val_loss= 0.8506 val_acc= 0.8033 time= 0.1432
Epoch: 0163 train_loss= 0.5133 train_acc= 0.9571 val_loss= 0.8484 val_acc= 0.8000 time= 0.1370
Epoch: 0164 train_loss= 0.5107 train_acc= 0.9571 val_loss= 0.8464 val_acc= 0.8033 time= 0.1406
Epoch: 0165 train_loss= 0.5081 train_acc= 0.9571 val_loss= 0.8444 val_acc= 0.8033 time= 0.1425
Epoch: 0166 train_loss= 0.5055 train_acc= 0.9571 val_loss= 0.8424 val_acc= 0.8033 time= 0.1440
Epoch: 0167 train_loss= 0.5029 train_acc= 0.9643 val_loss= 0.8405 val_acc= 0.8033 time= 0.1422
Epoch: 0168 train_loss= 0.5004 train_acc= 0.9643 val_loss= 0.8389 val_acc= 0.8033 time= 0.1493
Epoch: 0169 train_loss= 0.4978 train_acc= 0.9643 val_loss= 0.8377 val_acc= 0.8033 time= 0.1457
Epoch: 0170 train_loss= 0.4953 train_acc= 0.9643 val_loss= 0.8365 val_acc= 0.8067 time= 0.1543
Epoch: 0171 train_loss= 0.4930 train_acc= 0.9643 val_loss= 0.8354 val_acc= 0.8100 time= 0.1442
Epoch: 0172 train_loss= 0.4907 train_acc= 0.9643 val_loss= 0.8341 val_acc= 0.8100 time= 0.1488
Epoch: 0173 train_loss= 0.4886 train_acc= 0.9714 val_loss= 0.8331 val_acc= 0.8100 time= 0.1410
Epoch: 0174 train_loss= 0.4865 train_acc= 0.9714 val_loss= 0.8317 val_acc= 0.8100 time= 0.1422
Epoch: 0175 train_loss= 0.4842 train_acc= 0.9714 val_loss= 0.8302 val_acc= 0.8133 time= 0.1536
Epoch: 0176 train_loss= 0.4821 train_acc= 0.9714 val_loss= 0.8287 val_acc= 0.8167 time= 0.1517
Epoch: 0177 train_loss= 0.4799 train_acc= 0.9714 val_loss= 0.8269 val_acc= 0.8200 time= 0.1452
Epoch: 0178 train_loss= 0.4777 train_acc= 0.9714 val_loss= 0.8250 val_acc= 0.8233 time= 0.1411
Epoch: 0179 train_loss= 0.4756 train_acc= 0.9714 val_loss= 0.8232 val_acc= 0.8233 time= 0.1417
Epoch: 0180 train_loss= 0.4734 train_acc= 0.9714 val_loss= 0.8214 val_acc= 0.8233 time= 0.1455
Epoch: 0181 train_loss= 0.4712 train_acc= 0.9714 val_loss= 0.8190 val_acc= 0.8200 time= 0.1392
Epoch: 0182 train_loss= 0.4690 train_acc= 0.9714 val_loss= 0.8168 val_acc= 0.8200 time= 0.1400
Epoch: 0183 train_loss= 0.4669 train_acc= 0.9714 val_loss= 0.8148 val_acc= 0.8200 time= 0.1422
Epoch: 0184 train_loss= 0.4648 train_acc= 0.9714 val_loss= 0.8128 val_acc= 0.8167 time= 0.1375
Epoch: 0185 train_loss= 0.4627 train_acc= 0.9714 val_loss= 0.8109 val_acc= 0.8167 time= 0.1457
Epoch: 0186 train_loss= 0.4607 train_acc= 0.9714 val_loss= 0.8091 val_acc= 0.8133 time= 0.1489
Epoch: 0187 train_loss= 0.4587 train_acc= 0.9714 val_loss= 0.8077 val_acc= 0.8133 time= 0.1442
Epoch: 0188 train_loss= 0.4567 train_acc= 0.9714 val_loss= 0.8066 val_acc= 0.8133 time= 0.1477
Epoch: 0189 train_loss= 0.4548 train_acc= 0.9714 val_loss= 0.8056 val_acc= 0.8133 time= 0.1414
Epoch: 0190 train_loss= 0.4529 train_acc= 0.9714 val_loss= 0.8043 val_acc= 0.8167 time= 0.1460
Epoch: 0191 train_loss= 0.4510 train_acc= 0.9714 val_loss= 0.8031 val_acc= 0.8167 time= 0.1405
Epoch: 0192 train_loss= 0.4492 train_acc= 0.9714 val_loss= 0.8017 val_acc= 0.8233 time= 0.1428
Epoch: 0193 train_loss= 0.4474 train_acc= 0.9714 val_loss= 0.8005 val_acc= 0.8233 time= 0.1414
Epoch: 0194 train_loss= 0.4456 train_acc= 0.9714 val_loss= 0.7992 val_acc= 0.8233 time= 0.1407
Epoch: 0195 train_loss= 0.4438 train_acc= 0.9714 val_loss= 0.7977 val_acc= 0.8233 time= 0.1505
Epoch: 0196 train_loss= 0.4419 train_acc= 0.9714 val_loss= 0.7961 val_acc= 0.8233 time= 0.1431
Epoch: 0197 train_loss= 0.4401 train_acc= 0.9714 val_loss= 0.7944 val_acc= 0.8233 time= 0.1430
Epoch: 0198 train_loss= 0.4381 train_acc= 0.9714 val_loss= 0.7925 val_acc= 0.8233 time= 0.1456
Epoch: 0199 train_loss= 0.4364 train_acc= 0.9714 val_loss= 0.7910 val_acc= 0.8233 time= 0.1493
Epoch: 0200 train_loss= 0.4347 train_acc= 0.9714 val_loss= 0.7893 val_acc= 0.8233 time= 0.1391
Test set results: loss= 0.8438 accuracy= 0.8260
