Experiment Management (実験管理) #6

Open · matsuken92 opened this issue Jun 10, 2019 · 6 comments

@matsuken92 (Owner) commented Jun 10, 2019

train_v003_057.py LB: -
⇒ yiemon feat 1j2j3j + pca_v002 CV : cv score: -
⇒ num_leaves 256, max_depth=-1

train_v003_059.py LB: -1.856
⇒ yiemon feat 1j2j3j HnJ CV : cv score: -1.5654
⇒ num_leaves 256, max_depth=-1
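
(For reference, a minimal sketch of the LightGBM setup the "num_leaves 256, max_depth=-1" notes refer to; apart from those two values and the MAE metric, everything here is a placeholder rather than the scripts' actual config.)

```python
# Sketch of the shared LightGBM setup behind these runs.
# Only num_leaves / max_depth / the MAE metric come from the notes above;
# the remaining values are illustrative placeholders.
import lightgbm as lgb

params = {
    "objective": "regression",   # regression on scalar_coupling_constant
    "metric": "mae",
    "num_leaves": 256,           # from the notes: num_leaves 256
    "max_depth": -1,             # from the notes: max_depth=-1 (no depth limit)
    "learning_rate": 0.1,        # placeholder
    "verbosity": -1,
}

def fit_one_fold(X_tr, y_tr, X_va, y_va):
    """Train a single fold with early stopping on the validation L1, as the logs suggest."""
    model = lgb.LGBMRegressor(n_estimators=8000, **params)
    model.fit(
        X_tr, y_tr,
        eval_set=[(X_tr, y_tr), (X_va, y_va)],
        eval_metric="mae",
        callbacks=[lgb.early_stopping(200), lgb.log_evaluation(500)],
    )
    return model
```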

train_v003_064.py LB: -1.889
⇒ yiemon feat 1j2j3j HnJ, keeping only the high-importance features from train_v003_059
CV : cv score: -1.59941
⇒ num_leaves 256, max_depth=-1
nohup_v003_064.out.txt

train_v003_067.py LB: -1.897
⇒ added circle features to train_v003_064
CV : cv score: -1.6077

train_v003_072.py LB: -1.897
⇒ changed the objective to huber
CV : cv score: -1.63236
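
(The huber run presumably just swaps the LightGBM objective; a sketch, with alpha being a placeholder rather than a value taken from the notes.)

```python
# Sketch of the objective change for the huber run; alpha is a placeholder.
params_huber = {
    "objective": "huber",   # was plain L1/MAE-style regression in the earlier runs
    "alpha": 1.5,           # placeholder huber delta, not a value from the notes
    "num_leaves": 256,
    "max_depth": -1,
    "metric": "mae",
}
```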

train_all_v003_074.py LB: -2.073 (25 seeds)
⇒ trained on all data, based on train_v003_072

train_v003_075.py LB: xxx
⇒ based on train_v003_067, added the following:
seg_H2J_stats_feat2_train.pkl
seg_H3J_stats_feat2_train.pkl
seg_H2J_stats_feat2_test.pkl
seg_H3J_stats_feat2_test.pkl
fc CV mean score: -1.3420, std: 0.0048.
CV : cv score: -1.62411
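
(A sketch of how the added seg_H*J_stats_feat2 pickles would be joined onto the training frame; the join key is an assumption, not something taken from the scripts.)

```python
# Hypothetical sketch: load the added stats-feature pickles and merge them
# into the training frame. The join key ("id") is an assumption.
import pandas as pd

extra_feats = [
    "seg_H2J_stats_feat2_train.pkl",
    "seg_H3J_stats_feat2_train.pkl",
]

def add_stats_feats(train: pd.DataFrame) -> pd.DataFrame:
    for path in extra_feats:
        feat = pd.read_pickle(path)
        train = train.merge(feat, on="id", how="left")  # assumed key
    return train
```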

train_v003_076.py LB: xxx
⇒ from train_v003_075.py, narrowed the added features down to the top-importance ones
fc CV mean score: -1.3426, std: 0.0046; CV : cv score: -1.62323

train_v003_077.py LB: xxx
⇒ from train_v003_076.py, computed fc per type as well

train_v003_078.py LB: -1.964
⇒ added seg_H1J_bond_extension1 to train_v003_076.py
⇒ CV: -1.6617

train_v003_079.py LB: xxx
⇒ from train_v003_078.py, narrowed seg_H1J_bond_extension1 down to the high-importance features
⇒ CV: -1.67293

train_v003_080.py LB: -1.977
⇒ from train_v003_079.py, further narrowed the overall feature set down to the high-importance features
⇒ CV: -1.68498
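
(v003_076 / 079 / 080 all prune by importance; a minimal sketch of that kind of filtering — the top-k cutoff is illustrative, not the value actually used.)

```python
# Sketch of importance-based feature pruning as described for v003_076/079/080.
# The cutoff (top 200) is illustrative, not the value used in the scripts.
import pandas as pd

def select_top_features(model, feature_names, top_k=200):
    imp = pd.Series(model.feature_importances_, index=feature_names)
    return imp.sort_values(ascending=False).head(top_k).index.tolist()

# good_cols = select_top_features(trained_model, X.columns)
# X = X[good_cols]
```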

train_v003_082.py LB: xxx
Further reduced features by importance

train_v003_083.py LB: xxx
Increased n_estimators

train_v003_083.py LB: xxx
'max_bin': 64,

train_v003_085.py LB: xxx
Try using the target (scalar_coupling_constant) as y_fc

train_v003_096.py LB:
⇒ based on train_v003_080.py, check what happens if fc is removed

Older history

@matsuken92 (Owner)

Log of v003_003

nohup.out_v003_003.txt

@matsuken92 (Owner) commented Jun 12, 2019

v003_006 score

STARTING : 2019-06-11 16:59:17
Mem. usage decreased to 1239.42 Mb (71.6% reduction)
Mem. usage decreased to 633.21 Mb (71.9% reduction)
Fold 1 started at Tue Jun 11 17:04:58 2019
[8000]	training's l1: 0.307622	valid_1's l1: 0.520485
Fold 2 started at Tue Jun 11 17:22:06 2019
[8000]	training's l1: 0.30656	valid_1's l1: 0.521003
Fold 3 started at Tue Jun 11 17:38:36 2019
[8000]	training's l1: 0.306582	valid_1's l1: 0.520394
Fold 4 started at Tue Jun 11 17:56:04 2019
[8000]	training's l1: 0.305736	valid_1's l1: 0.519107
Fold 5 started at Tue Jun 11 18:13:32 2019
[8000]	training's l1: 0.305946	valid_1's l1: 0.518649
CV mean score: -0.7760, std: 0.0025.
X['type'].unique(): [0 3 1 4 2 6 5 7]

Training of type 0
Fold 1 started at Tue Jun 11 18:31:23 2019
[15000]	training's l1: 0.0199649	valid_1's l1: 0.906052
Fold 2 started at Tue Jun 11 18:40:04 2019
[15000]	training's l1: 0.0198753	valid_1's l1: 0.913156
Fold 3 started at Tue Jun 11 18:48:49 2019
[15000]	training's l1: 0.0198629	valid_1's l1: 0.910464
Fold 4 started at Tue Jun 11 18:57:25 2019
[15000]	training's l1: 0.0197971	valid_1's l1: 0.91228
Fold 5 started at Tue Jun 11 19:06:04 2019
[15000]	training's l1: 0.0198354	valid_1's l1: 0.913914
CV mean score: -0.0930, std: 0.0031.

Training of type 3
Fold 1 started at Tue Jun 11 19:15:23 2019
[15000]	training's l1: 0.00119511	valid_1's l1: 0.229172
Fold 2 started at Tue Jun 11 19:23:32 2019
[15000]	training's l1: 0.0011875	valid_1's l1: 0.228336
Fold 3 started at Tue Jun 11 19:31:32 2019
[15000]	training's l1: 0.00119345	valid_1's l1: 0.228493
Fold 4 started at Tue Jun 11 19:39:29 2019
[15000]	training's l1: 0.00119267	valid_1's l1: 0.22976
Fold 5 started at Tue Jun 11 19:47:20 2019
[15000]	training's l1: 0.00119209	valid_1's l1: 0.229388
CV mean score: -1.4739, std: 0.0024.

Training of type 1
Fold 1 started at Tue Jun 11 19:56:15 2019
[4918]	training's l1: 0.00119977	valid_1's l1: 0.540004
Fold 2 started at Tue Jun 11 19:57:14 2019
[4502]	training's l1: 0.00130429	valid_1's l1: 0.54387
Fold 3 started at Tue Jun 11 19:58:10 2019
[6681]	training's l1: 0.000973862	valid_1's l1: 0.538346
Fold 4 started at Tue Jun 11 19:59:18 2019
[7275]	training's l1: 0.000925833	valid_1's l1: 0.544128
Fold 5 started at Tue Jun 11 20:00:30 2019
[5490]	training's l1: 0.00108372	valid_1's l1: 0.536281
CV mean score: -0.6152, std: 0.0057.

Training of type 4
Fold 1 started at Tue Jun 11 20:02:03 2019
[6334]	training's l1: 0.00123945	valid_1's l1: 0.238972
Fold 2 started at Tue Jun 11 20:04:03 2019
[12232]	training's l1: 0.000677554	valid_1's l1: 0.242251
Fold 3 started at Tue Jun 11 20:07:06 2019
[8054]	training's l1: 0.000869291	valid_1's l1: 0.244561
Fold 4 started at Tue Jun 11 20:09:30 2019
[12660]	training's l1: 0.00067459	valid_1's l1: 0.244021
Fold 5 started at Tue Jun 11 20:12:41 2019
[6786]	training's l1: 0.0010875	valid_1's l1: 0.24058
CV mean score: -1.4185, std: 0.0086.

Training of type 2
Fold 1 started at Tue Jun 11 20:15:20 2019
[15000]	training's l1: 0.024419	valid_1's l1: 0.376461
Fold 2 started at Tue Jun 11 20:27:02 2019
[15000]	training's l1: 0.0243986	valid_1's l1: 0.376453
Fold 3 started at Tue Jun 11 20:38:28 2019
[15000]	training's l1: 0.0242558	valid_1's l1: 0.376576
Fold 4 started at Tue Jun 11 20:49:43 2019
[15000]	training's l1: 0.0244827	valid_1's l1: 0.372675
Fold 5 started at Tue Jun 11 21:01:27 2019
[15000]	training's l1: 0.0243272	valid_1's l1: 0.376481
CV mean score: -0.9789, std: 0.0041.

Training of type 6
Fold 1 started at Tue Jun 11 21:13:47 2019
[15000]	training's l1: 0.00282867	valid_1's l1: 0.201985
Fold 2 started at Tue Jun 11 21:22:23 2019
[15000]	training's l1: 0.0028309	valid_1's l1: 0.202628
Fold 3 started at Tue Jun 11 21:30:58 2019
[15000]	training's l1: 0.00281874	valid_1's l1: 0.203361
Fold 4 started at Tue Jun 11 21:39:33 2019
[15000]	training's l1: 0.00283694	valid_1's l1: 0.201092
Fold 5 started at Tue Jun 11 21:48:01 2019
[15000]	training's l1: 0.00280774	valid_1's l1: 0.202467
CV mean score: -1.5980, std: 0.0037.

Training of type 5
Fold 1 started at Tue Jun 11 21:57:21 2019
[15000]	training's l1: 0.0399831	valid_1's l1: 0.350548
Fold 2 started at Tue Jun 11 22:11:54 2019
[15000]	training's l1: 0.0398623	valid_1's l1: 0.351907
Fold 3 started at Tue Jun 11 22:26:34 2019
[15000]	training's l1: 0.0399994	valid_1's l1: 0.350007
Fold 4 started at Tue Jun 11 22:40:52 2019
[15000]	training's l1: 0.0400033	valid_1's l1: 0.34955
Fold 5 started at Tue Jun 11 22:54:56 2019
[15000]	training's l1: 0.0400072	valid_1's l1: 0.351625
CV mean score: -1.0477, std: 0.0026.

Training of type 7
Fold 1 started at Tue Jun 11 23:09:40 2019
[7199]	training's l1: 0.00118186	valid_1's l1: 0.168515
Fold 2 started at Tue Jun 11 23:12:22 2019
[13843]	training's l1: 0.000634087	valid_1's l1: 0.166695
Fold 3 started at Tue Jun 11 23:16:44 2019
[10761]	training's l1: 0.000736725	valid_1's l1: 0.171416
Fold 4 started at Tue Jun 11 23:20:27 2019
[13103]	training's l1: 0.000652251	valid_1's l1: 0.167788
Fold 5 started at Tue Jun 11 23:24:39 2019
[11342]	training's l1: 0.000711948	valid_1's l1: 0.169089
CV mean score: -1.7797, std: 0.0093.

oof_log_mae: -1.1256088497006402
finished. : 2019-06-11 23:29:16
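
(The per-type "CV mean score" values and the final oof_log_mae above look like the competition's group mean log MAE; a sketch of that metric for reference, not code lifted from these scripts.)

```python
# Group mean log MAE: for each coupling type, take log(MAE) and average over types.
# This is presumably how scores such as oof_log_mae above are computed.
import numpy as np
import pandas as pd

def group_mean_log_mae(y_true, y_pred, types, floor=1e-9):
    df = pd.DataFrame({"y": y_true, "p": y_pred, "t": types})
    maes = df.groupby("t").apply(lambda g: np.log(max((g.y - g.p).abs().mean(), floor)))
    return maes.mean()
```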

v003_006 importance LB: -1.327

feature type_0 type_1 type_2 type_3 type_4 type_5 type_6 type_7 ave
oof_fc 2.7% 3.0% 3.0% 2.3% 2.9% 3.5% 2.4% 3.6% 2.9%
f006:dist_origin_mean 2.3% 3.4% 1.8% 2.6% 2.6% 2.0% 2.2% 2.8% 2.4%
abs_dist 2.5% 2.9% 2.0% 2.3% 2.3% 1.7% 1.8% 1.9% 2.2%
f006:dist_from_origin_0 2.2% 2.3% 2.1% 2.3% 2.2% 2.0% 2.0% 2.0% 2.1%
molecule_dist_mean 1.9% 2.7% 1.4% 2.2% 1.9% 1.6% 2.0% 2.0% 2.0%
molecule_dist_max 1.9% 2.8% 1.5% 2.2% 2.1% 1.2% 1.9% 2.0% 1.9%
f006:dist_from_origin_1 1.8% 1.9% 1.7% 2.4% 1.8% 1.7% 2.3% 1.7% 1.9%
molecule_atom_1_dist_mean 2.0% 2.4% 1.5% 2.1% 1.8% 1.5% 2.1% 1.7% 1.9%
molecule_atom_index_0_x_1_std 2.0% 2.0% 1.7% 1.8% 1.9% 1.6% 1.7% 1.7% 1.8%
z_0 1.8% 1.8% 1.6% 1.7% 2.1% 1.6% 1.7% 2.0% 1.8%
molecule_atom_1_dist_std_diff 2.4% 3.0% 1.5% 2.5% 1.7% 0.8% 1.2% 1.3% 1.8%
molecule_atom_index_0_z_1_std 2.0% 1.9% 1.6% 1.9% 1.8% 1.7% 1.7% 1.7% 1.8%
z_1 1.7% 1.9% 1.6% 1.9% 1.8% 1.7% 1.7% 1.8% 1.8%
f004:angle_abs 1.8% 2.0% 1.5% 1.6% 2.1% 1.3% 1.7% 1.9% 1.7%
dist_xy 1.8% 1.8% 1.6% 1.8% 1.9% 1.5% 1.6% 1.6% 1.7%
molecule_atom_index_0_y_1_std 1.8% 1.9% 1.5% 1.7% 1.7% 1.5% 1.5% 1.7% 1.7%
dist_xz 2.0% 2.3% 1.6% 1.6% 1.7% 1.4% 1.4% 1.4% 1.7%
molecule_atom_index_0_y_1_mean_div 1.8% 1.8% 1.6% 1.7% 1.7% 1.5% 1.6% 1.5% 1.6%
x_0 1.6% 1.7% 1.5% 1.7% 1.7% 1.5% 1.5% 1.8% 1.6%
dist_yz 1.8% 1.8% 1.5% 1.7% 1.7% 1.4% 1.5% 1.5% 1.6%
x_1 1.5% 1.8% 1.4% 1.8% 1.5% 1.5% 1.5% 1.6% 1.6%
gasteiger_1 1.8% 1.6% 1.8% 0.0% 1.7% 1.9% 1.9% 1.7% 1.5%
f004:angle 1.7% 1.8% 1.4% 1.5% 1.7% 1.3% 1.4% 1.5% 1.5%
gasteiger_0 1.5% 1.0% 1.8% 1.6% 1.1% 2.0% 1.5% 1.1% 1.5%
y_0 1.5% 1.5% 1.4% 1.7% 1.5% 1.4% 1.4% 1.5% 1.5%
molecule_atom_index_0_y_1_mean_diff 1.5% 1.4% 1.4% 1.5% 1.4% 1.2% 1.3% 1.2% 1.4%
molecule_couples 1.4% 1.6% 1.1% 1.3% 1.4% 1.0% 1.2% 1.3% 1.3%
molecule_atom_index_0_dist_max 1.1% 1.0% 1.5% 1.0% 1.2% 1.6% 1.4% 1.5% 1.3%
molecule_type_dist_mean_diff 1.8% 1.5% 1.2% 1.6% 1.5% 0.7% 0.9% 0.9% 1.3%
y_1 1.3% 1.3% 1.2% 1.6% 1.1% 1.3% 1.2% 1.3% 1.3%
molecule_type_0_dist_std_diff 1.0% 1.8% 1.1% 1.9% 1.4% 0.8% 1.1% 0.9% 1.3%
qtpie_0 1.1% 1.3% 1.2% 1.1% 1.3% 1.3% 1.2% 1.2% 1.2%
molecule_type_dist_min 0.6% 0.0% 1.6% 1.2% 0.8% 1.9% 1.9% 1.5% 1.2%
qeq_0 1.1% 1.4% 1.2% 1.2% 1.2% 1.2% 1.1% 1.2% 1.2%
molecule_atom_index_0_y_1_max_diff 1.4% 1.3% 1.2% 1.1% 1.2% 1.0% 1.2% 1.0% 1.2%
eem2015ha_0 1.3% 1.3% 1.0% 1.2% 1.3% 1.0% 1.2% 1.2% 1.2%
eem2015ha_1 1.2% 1.4% 1.1% 1.2% 1.1% 1.1% 1.2% 1.1% 1.2%
eem2015bm_0 1.3% 1.4% 1.0% 1.2% 1.2% 1.0% 1.1% 1.3% 1.2%
molecule_atom_index_0_dist_mean 1.0% 0.7% 1.3% 0.9% 1.1% 1.6% 1.3% 1.4% 1.2%
molecule_type_dist_std_diff 0.7% 0.9% 1.2% 1.5% 1.4% 0.8% 1.2% 1.5% 1.2%
molecule_atom_index_0_dist_std 1.1% 0.8% 1.1% 0.9% 1.1% 1.3% 1.2% 1.5% 1.1%
molecule_type_dist_max 1.1% 0.6% 1.5% 1.4% 0.9% 0.7% 1.4% 1.2% 1.1%
eem2015ba_0 1.2% 1.1% 1.0% 1.1% 1.2% 0.9% 1.1% 1.1% 1.1%
eem2015ba_1 1.2% 1.0% 1.1% 1.1% 0.9% 1.1% 1.1% 0.9% 1.0%
qtpie_1 0.9% 1.0% 1.0% 1.2% 0.9% 1.0% 1.3% 0.9% 1.0%
eem_0 1.2% 1.3% 0.9% 1.0% 1.0% 0.9% 0.9% 1.0% 1.0%
qeq_1 0.9% 1.0% 0.9% 1.2% 0.9% 0.9% 1.3% 0.9% 1.0%
eem2015hm_0 1.2% 1.3% 0.9% 1.0% 1.0% 0.8% 0.9% 1.0% 1.0%
molecule_dist_min 0.9% 0.4% 1.0% 1.5% 0.8% 1.1% 1.3% 0.8% 1.0%
molecule_atom_index_1_dist_std 0.9% 1.1% 0.9% 1.1% 0.8% 1.1% 0.9% 1.0% 1.0%
molecule_atom_index_1_dist_std_diff 0.8% 0.9% 1.0% 1.0% 0.9% 1.0% 1.1% 0.9% 0.9%
eem2015bm_1 0.9% 0.9% 0.8% 1.3% 0.7% 0.8% 1.2% 0.7% 0.9%
eem2015bn_0 1.0% 1.0% 0.8% 0.9% 1.1% 0.7% 0.8% 0.9% 0.9%
molecule_atom_index_1_dist_max 0.8% 0.6% 1.1% 0.6% 0.9% 1.4% 0.8% 0.8% 0.9%
molecule_atom_index_0_dist_std_div 0.6% 0.5% 0.9% 0.7% 0.8% 1.3% 0.9% 1.2% 0.9%
molecule_atom_index_0_dist_std_diff 0.9% 0.7% 1.1% 1.0% 0.8% 1.0% 0.8% 0.7% 0.9%
eem2015hn_0 1.0% 1.1% 0.7% 0.8% 1.0% 0.7% 0.8% 0.8% 0.9%
molecule_atom_index_0_dist_max_div 1.2% 1.2% 1.1% 0.9% 0.9% 0.5% 0.6% 0.3% 0.8%
eem2015hm_1 0.8% 1.0% 0.7% 1.1% 0.7% 0.7% 1.0% 0.7% 0.8%
molecule_atom_index_0_dist_mean_div 1.0% 0.8% 1.0% 0.7% 0.9% 0.9% 0.7% 0.8% 0.8%
molecule_atom_index_1_dist_mean 0.7% 0.5% 1.0% 0.5% 0.8% 1.2% 0.9% 0.9% 0.8%
molecule_atom_index_0_dist_max_diff 0.8% 0.8% 1.1% 0.8% 1.0% 0.7% 0.7% 0.4% 0.8%
molecule_atom_1_dist_min_diff 1.6% 0.7% 0.6% 1.4% 0.6% 0.3% 0.6% 0.5% 0.8%
eem2015bn_1 0.7% 0.9% 0.6% 0.9% 0.7% 0.6% 0.9% 0.7% 0.8%
eem_1 0.7% 0.8% 0.6% 1.1% 0.6% 0.6% 1.0% 0.6% 0.8%
molecule_atom_index_1_dist_std_div 0.5% 0.5% 0.7% 0.5% 0.7% 1.0% 0.8% 1.0% 0.7%
molecule_atom_index_0_dist_mean_diff 0.7% 0.5% 0.9% 0.6% 0.8% 0.8% 0.6% 0.7% 0.7%
molecule_atom_index_0_dist_min 0.5% 0.3% 0.9% 0.6% 0.6% 1.1% 0.7% 0.7% 0.7%
eem2015hn_1 0.6% 0.8% 0.5% 0.9% 0.6% 0.5% 0.8% 0.6% 0.7%
molecule_atom_index_1_dist_min 0.4% 0.2% 0.9% 0.5% 0.6% 1.1% 1.0% 0.7% 0.7%
cos2T 0.0% 0.0% 0.0% 0.0% 0.0% 1.9% 1.4% 1.5% 0.6%
molecule_atom_index_1_dist_max_diff 0.6% 0.5% 0.8% 0.3% 0.8% 0.7% 0.3% 0.5% 0.6%
molecule_atom_index_1_dist_min_diff 1.2% 0.4% 0.7% 0.4% 0.5% 0.6% 0.3% 0.4% 0.6%
molecule_atom_index_1_dist_mean_diff 0.5% 0.4% 0.7% 0.4% 0.6% 0.8% 0.4% 0.7% 0.6%
Torsion 0.0% 0.0% 0.0% 0.0% 0.0% 1.5% 1.3% 1.5% 0.6%
mmff94_1 0.5% 0.3% 1.0% 0.0% 0.5% 1.2% 0.1% 0.7% 0.5%
molecule_atom_index_1_dist_max_div 0.9% 0.6% 0.9% 0.1% 0.7% 0.5% 0.2% 0.3% 0.5%
atom_index_0 0.5% 0.7% 0.5% 0.2% 0.6% 0.5% 0.3% 0.6% 0.5%
molecule_atom_index_1_dist_mean_div 0.7% 0.4% 0.6% 0.1% 0.5% 0.8% 0.3% 0.6% 0.5%
Angle 0.0% 0.0% 1.8% 0.9% 1.3% 0.0% 0.0% 0.0% 0.5%
molecule_type_dist_mean_div 0.1% 0.0% 0.6% 0.2% 0.5% 0.6% 0.8% 0.8% 0.5%
molecule_atom_1_dist_min_div 0.2% 0.0% 0.7% 0.2% 0.5% 0.5% 0.7% 0.5% 0.4%
molecule_atom_index_0_dist_min_div 0.0% 0.0% 0.7% 0.4% 0.5% 0.6% 0.5% 0.6% 0.4%
atom_index_1 0.4% 0.2% 0.4% 0.3% 0.3% 0.5% 0.4% 0.3% 0.4%
molecule_atom_index_0_dist_min_diff 0.0% 0.0% 0.6% 0.8% 0.5% 0.4% 0.4% 0.3% 0.4%
cosT 0.0% 0.0% 0.0% 0.0% 0.0% 1.2% 0.8% 0.6% 0.3%
H 0.4% 0.3% 0.3% 0.3% 0.3% 0.3% 0.3% 0.3% 0.3%
molecule_atom_index_1_dist_min_div 0.0% 0.0% 0.6% 0.0% 0.3% 0.6% 0.2% 0.4% 0.3%
dist 0.0% 0.0% 0.5% 0.3% 0.3% 0.3% 0.4% 0.3% 0.3%
atom_1_couples_count 0.3% 0.1% 0.4% 0.0% 0.2% 0.4% 0.0% 0.3% 0.2%
N 0.2% 0.2% 0.2% 0.2% 0.1% 0.2% 0.2% 0.2% 0.2%
O 0.2% 0.2% 0.2% 0.2% 0.2% 0.2% 0.2% 0.2% 0.2%
atom_0_couples_count 0.3% 0.1% 0.3% 0.1% 0.1% 0.3% 0.2% 0.1% 0.2%
C 0.2% 0.2% 0.1% 0.1% 0.1% 0.1% 0.1% 0.1% 0.1%
mmff94_0 0.1% 0.0% 0.1% 0.0% 0.1% 0.1% 0.0% 0.0% 0.1%
F 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
sp 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
type 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
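
(The importance table above shows each feature's share of importance per type plus the row average; a sketch of how such a table can be assembled from per-type models. models_by_type is a hypothetical container name, not something from the scripts.)

```python
# Sketch: build a per-type importance-share table like the one above.
# `models_by_type` (type id -> fitted LGBMRegressor) is a hypothetical name.
import pandas as pd

def importance_table(models_by_type, feature_names):
    cols = {}
    for t, model in models_by_type.items():
        imp = pd.Series(model.feature_importances_, index=feature_names, dtype=float)
        cols[f"type_{t}"] = imp / imp.sum()          # share of total importance
    table = pd.DataFrame(cols)
    table["ave"] = table.mean(axis=1)                # average share across types
    return (table * 100).round(1).sort_values("ave", ascending=False)
```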

@matsuken92 (Owner) commented Jun 13, 2019

v003_009 LB -1.416

STARTING : 2019-06-13 05:12:15

Fold 1 started at Thu Jun 13 05:20:21 2019
[8000]	training's l1: 0.275524	valid_1's l1: 0.469715
Fold 2 started at Thu Jun 13 05:45:24 2019
[8000]	training's l1: 0.275445	valid_1's l1: 0.47087
Fold 3 started at Thu Jun 13 06:10:52 2019
[8000]	training's l1: 0.27503	valid_1's l1: 0.469924
Fold 4 started at Thu Jun 13 06:34:39 2019
[8000]	training's l1: 0.275247	valid_1's l1: 0.470253
Fold 5 started at Thu Jun 13 07:00:13 2019
CV mean score: -0.8756, std: 0.0017.

Training of type 0
X_t.shape: (709416, 147), X_test_t.shape: (380609, 147)
Fold 1 started at Thu Jun 13 07:25:46 2019
[15000]	training's l1: 0.019366	valid_1's l1: 0.866321
Fold 2 started at Thu Jun 13 07:37:30 2019
[15000]	training's l1: 0.0191301	valid_1's l1: 0.870355
Fold 3 started at Thu Jun 13 07:49:18 2019
[15000]	training's l1: 0.0191912	valid_1's l1: 0.867929
Fold 4 started at Thu Jun 13 08:00:56 2019
[15000]	training's l1: 0.0193442	valid_1's l1: 0.869011
Fold 5 started at Thu Jun 13 08:12:20 2019
CV mean score: -0.1408, std: 0.0017.

Training of type 3
X_t.shape: (378036, 147), X_test_t.shape: (203126, 147)
Fold 1 started at Thu Jun 13 08:24:57 2019
[15000]	training's l1: 0.00115351	valid_1's l1: 0.212802
Fold 2 started at Thu Jun 13 08:34:11 2019
[15000]	training's l1: 0.00114644	valid_1's l1: 0.213157
Fold 3 started at Thu Jun 13 08:43:30 2019
[15000]	training's l1: 0.00114995	valid_1's l1: 0.213095
Fold 4 started at Thu Jun 13 08:52:51 2019
[15000]	training's l1: 0.00115019	valid_1's l1: 0.212825
Fold 5 started at Thu Jun 13 09:02:01 2019
CV mean score: -1.5465, std: 0.0007.

Training of type 1
X_t.shape: (43363, 147), X_test_t.shape: (24195, 147)
Fold 1 started at Thu Jun 13 09:12:25 2019
[6562]	training's l1: 0.000972868	valid_1's l1: 0.513828
Fold 2 started at Thu Jun 13 09:13:43 2019
[5606]	training's l1: 0.00110532	valid_1's l1: 0.520146
Fold 3 started at Thu Jun 13 09:14:51 2019
[7601]	training's l1: 0.000906742	valid_1's l1: 0.518258
Fold 4 started at Thu Jun 13 09:16:10 2019
[6771]	training's l1: 0.000962967	valid_1's l1: 0.517405
Fold 5 started at Thu Jun 13 09:17:28 2019
CV mean score: -0.6598, std: 0.0044.

Training of type 4
X_t.shape: (119253, 147), X_test_t.shape: (64424, 147)
Fold 1 started at Thu Jun 13 09:19:17 2019
[8205]	training's l1: 0.000852979	valid_1's l1: 0.222738
Fold 2 started at Thu Jun 13 09:22:16 2019
[7345]	training's l1: 0.000927949	valid_1's l1: 0.223598
Fold 3 started at Thu Jun 13 09:25:05 2019
[11940]	training's l1: 0.000677343	valid_1's l1: 0.224487
Fold 4 started at Thu Jun 13 09:28:57 2019
[8405]	training's l1: 0.000839033	valid_1's l1: 0.224196
Fold 5 started at Thu Jun 13 09:31:59 2019
CV mean score: -1.4962, std: 0.0033.

Training of type 2
X_t.shape: (1140674, 147), X_test_t.shape: (613138, 147)
Fold 1 started at Thu Jun 13 09:36:14 2019
[15000]	training's l1: 0.0221611	valid_1's l1: 0.335969
Fold 2 started at Thu Jun 13 09:51:03 2019
[15000]	training's l1: 0.0221469	valid_1's l1: 0.336252
Fold 3 started at Thu Jun 13 10:06:04 2019
[15000]	training's l1: 0.0221432	valid_1's l1: 0.33623
Fold 4 started at Thu Jun 13 10:21:02 2019
[15000]	training's l1: 0.0221748	valid_1's l1: 0.333912
Fold 5 started at Thu Jun 13 10:36:10 2019
CV mean score: -1.0918, std: 0.0026.

Training of type 6
X_t.shape: (590611, 147), X_test_t.shape: (317435, 147)
Fold 1 started at Thu Jun 13 10:51:54 2019
[15000]	training's l1: 0.00264541	valid_1's l1: 0.184185
Fold 2 started at Thu Jun 13 11:03:48 2019
[15000]	training's l1: 0.0026492	valid_1's l1: 0.184293
Fold 3 started at Thu Jun 13 11:15:30 2019
[15000]	training's l1: 0.00263448	valid_1's l1: 0.1845
Fold 4 started at Thu Jun 13 11:27:25 2019
[15000]	training's l1: 0.00264616	valid_1's l1: 0.183882
Fold 5 started at Thu Jun 13 11:39:24 2019
CV mean score: -1.6915, std: 0.0011.

Training of type 5
X_t.shape: (1510379, 147), X_test_t.shape: (811999, 147)
Fold 1 started at Thu Jun 13 11:52:21 2019
[15000]	training's l1: 0.0356988	valid_1's l1: 0.313844
Fold 2 started at Thu Jun 13 12:12:15 2019
[15000]	training's l1: 0.0357428	valid_1's l1: 0.314336
Fold 3 started at Thu Jun 13 12:32:27 2019
[15000]	training's l1: 0.0356953	valid_1's l1: 0.313332
Fold 4 started at Thu Jun 13 12:52:32 2019
[15000]	training's l1: 0.0357354	valid_1's l1: 0.31366
Fold 5 started at Thu Jun 13 13:12:25 2019
CV mean score: -1.1589, std: 0.0010.

Training of type 7
X_t.shape: (166415, 147), X_test_t.shape: (90616, 147)
Fold 1 started at Thu Jun 13 13:33:23 2019
[14587]	training's l1: 0.000611207	valid_1's l1: 0.154428
Fold 2 started at Thu Jun 13 13:39:00 2019
[14369]	training's l1: 0.000615051	valid_1's l1: 0.152621
Fold 3 started at Thu Jun 13 13:44:20 2019
[12589]	training's l1: 0.000661184	valid_1's l1: 0.154007
Fold 4 started at Thu Jun 13 13:49:12 2019
[13861]	training's l1: 0.000626634	valid_1's l1: 0.152488
Fold 5 started at Thu Jun 13 13:54:33 2019
[10924]	training's l1: 0.000720345	valid_1's l1: 0.154056
CV mean score: -1.8739, std: 0.0052.

oof_log_mae: -1.2074376400054934

finished. : 2019-06-13 13:59:58

nohup_v003_009.out.txt

feature type_0 type_1 type_2 type_3 type_4 type_5 type_6 type_7 ave
oof_fc 2.6% 2.8% 2.9% 2.2% 2.8% 3.3% 2.4% 3.5% 2.8%
f006:dist_origin_mean 2.3% 3.3% 1.6% 2.5% 2.5% 1.8% 2.1% 2.7% 2.3%
abs_dist 2.4% 2.9% 1.9% 2.3% 2.3% 1.6% 1.7% 1.8% 2.1%
f006:dist_from_origin_0 2.2% 2.3% 2.0% 2.2% 2.2% 1.9% 1.8% 2.0% 2.1%
molecule_dist_mean 1.9% 2.7% 1.3% 2.1% 1.9% 1.5% 1.9% 1.9% 1.9%
molecule_dist_max 1.9% 2.8% 1.4% 2.1% 2.1% 1.1% 1.8% 1.9% 1.9%
molecule_atom_1_dist_mean 2.0% 2.3% 1.4% 2.0% 1.7% 1.5% 2.0% 1.6% 1.8%
f006:dist_from_origin_1 1.7% 1.9% 1.5% 2.4% 1.6% 1.5% 2.2% 1.5% 1.8%
molecule_atom_1_dist_std_diff 2.4% 3.0% 1.4% 2.4% 1.6% 0.7% 1.1% 1.3% 1.7%
molecule_atom_index_0_x_1_std 1.9% 2.0% 1.6% 1.7% 1.7% 1.5% 1.5% 1.6% 1.7%
molecule_atom_index_0_z_1_std 1.9% 1.9% 1.5% 1.8% 1.6% 1.5% 1.5% 1.5% 1.7%
f004:angle_abs 1.7% 2.0% 1.4% 1.5% 2.0% 1.3% 1.6% 1.8% 1.7%
dist_xy 1.8% 1.8% 1.5% 1.8% 1.9% 1.3% 1.5% 1.5% 1.6%
dist_xz 2.0% 2.2% 1.5% 1.5% 1.7% 1.3% 1.4% 1.3% 1.6%
molecule_atom_index_0_y_1_std 1.8% 1.8% 1.4% 1.6% 1.6% 1.4% 1.4% 1.6% 1.6%
molecule_atom_index_0_y_1_mean_div 1.7% 1.7% 1.5% 1.6% 1.6% 1.4% 1.4% 1.3% 1.5%
dist_yz 1.7% 1.8% 1.4% 1.6% 1.6% 1.3% 1.4% 1.4% 1.5%
f004:angle 1.6% 1.8% 1.3% 1.4% 1.5% 1.2% 1.3% 1.4% 1.4%
gasteiger_1 1.7% 1.5% 1.7% 0.0% 1.6% 1.7% 1.7% 1.6% 1.4%
z_1 1.5% 1.7% 1.2% 1.6% 1.4% 1.3% 1.3% 1.4% 1.4%
z_0 1.5% 1.4% 1.3% 1.4% 1.6% 1.2% 1.3% 1.5% 1.4%
gasteiger_0 1.4% 1.0% 1.7% 1.5% 1.0% 1.8% 1.4% 1.0% 1.4%
molecule_atom_index_0_y_1_mean_diff 1.5% 1.4% 1.3% 1.4% 1.3% 1.1% 1.2% 1.1% 1.3%
x_0 1.3% 1.3% 1.1% 1.4% 1.3% 1.1% 1.2% 1.4% 1.3%
molecule_couples 1.4% 1.6% 1.0% 1.3% 1.3% 1.0% 1.2% 1.3% 1.3%
molecule_type_dist_mean_diff 1.8% 1.5% 1.1% 1.6% 1.5% 0.7% 0.9% 0.9% 1.2%
molecule_atom_index_0_dist_max 1.1% 0.9% 1.4% 0.9% 1.2% 1.5% 1.3% 1.4% 1.2%
x_1 1.3% 1.5% 1.1% 1.5% 1.1% 1.1% 1.1% 1.1% 1.2%
molecule_type_0_dist_std_diff 0.9% 1.8% 1.1% 1.9% 1.4% 0.8% 1.1% 0.8% 1.2%
y_0 1.2% 1.1% 1.1% 1.4% 1.2% 1.1% 1.1% 1.2% 1.2%
qtpie_0 1.0% 1.3% 1.1% 1.1% 1.2% 1.2% 1.1% 1.2% 1.2%
qeq_0 1.1% 1.3% 1.1% 1.1% 1.2% 1.1% 1.1% 1.2% 1.1%
eem2015ha_0 1.3% 1.2% 1.0% 1.1% 1.3% 0.9% 1.1% 1.1% 1.1%
molecule_type_dist_min 0.6% 0.0% 1.5% 1.1% 0.8% 1.9% 1.9% 1.4% 1.1%
eem2015bm_0 1.3% 1.3% 0.9% 1.2% 1.1% 0.9% 1.1% 1.3% 1.1%
eem2015ha_1 1.2% 1.3% 1.0% 1.2% 1.0% 1.0% 1.1% 1.1% 1.1%
molecule_atom_index_0_dist_mean 1.0% 0.6% 1.2% 0.8% 1.1% 1.4% 1.2% 1.4% 1.1%
molecule_type_dist_std_diff 0.6% 0.8% 1.1% 1.4% 1.3% 0.8% 1.2% 1.5% 1.1%
z_a1_nb 1.2% 1.1% 1.3% 0.0% 1.3% 1.3% 1.3% 1.3% 1.1%
molecule_atom_index_0_dist_std 1.0% 0.8% 1.0% 0.9% 1.1% 1.2% 1.1% 1.4% 1.1%
molecule_atom_index_0_y_1_max_diff 1.3% 1.3% 1.1% 1.0% 1.1% 0.9% 1.1% 0.8% 1.1%
eem2015ba_0 1.2% 1.1% 0.9% 1.1% 1.2% 0.9% 1.1% 1.1% 1.1%
molecule_type_dist_max 1.1% 0.6% 1.4% 1.4% 0.8% 0.6% 1.3% 1.1% 1.0%
z_a0_nb 0.0% 0.0% 1.1% 1.5% 1.4% 1.3% 1.3% 1.5% 1.0%
y_1 1.1% 1.1% 0.9% 1.3% 0.8% 1.0% 0.9% 0.9% 1.0%
x_a1_nb 1.1% 1.1% 1.2% 0.0% 1.1% 1.2% 1.0% 1.2% 1.0%
eem2015ba_1 1.1% 1.0% 1.0% 1.1% 0.8% 1.0% 1.1% 0.8% 1.0%
eem_0 1.2% 1.3% 0.8% 1.0% 0.9% 0.8% 0.9% 1.0% 1.0%
eem2015hm_0 1.2% 1.2% 0.8% 1.0% 0.9% 0.8% 0.9% 1.0% 1.0%
qtpie_1 0.9% 0.9% 0.9% 1.2% 0.8% 0.9% 1.3% 0.8% 1.0%
qeq_1 0.9% 0.9% 0.9% 1.2% 0.8% 0.9% 1.2% 0.9% 1.0%
molecule_dist_min 0.9% 0.4% 1.0% 1.4% 0.7% 1.0% 1.2% 0.7% 0.9%
molecule_atom_index_1_dist_std 0.9% 1.1% 0.8% 1.1% 0.7% 1.0% 0.9% 0.9% 0.9%
y_a1_nb 1.0% 0.9% 1.0% 0.0% 1.0% 1.2% 0.9% 1.1% 0.9%
molecule_atom_index_1_dist_std_diff 0.8% 0.9% 1.0% 0.9% 0.8% 1.0% 1.0% 0.9% 0.9%
eem2015bn_0 1.0% 1.0% 0.7% 0.8% 1.0% 0.7% 0.8% 0.9% 0.9%
x_a0_nb 0.0% 0.0% 1.0% 1.4% 1.0% 1.1% 1.1% 1.3% 0.9%
eem2015bm_1 0.9% 0.9% 0.7% 1.2% 0.6% 0.8% 1.1% 0.6% 0.9%
molecule_atom_index_0_dist_std_div 0.6% 0.5% 0.9% 0.7% 0.8% 1.2% 0.9% 1.2% 0.8%
eem2015hn_0 1.0% 1.0% 0.7% 0.8% 0.9% 0.6% 0.8% 0.8% 0.8%
molecule_atom_index_0_dist_std_diff 0.9% 0.7% 1.0% 1.0% 0.8% 0.9% 0.7% 0.7% 0.8%
molecule_atom_index_1_dist_max 0.8% 0.6% 1.0% 0.6% 0.8% 1.3% 0.8% 0.7% 0.8%
molecule_atom_index_0_dist_max_div 1.1% 1.1% 1.1% 0.9% 0.9% 0.5% 0.6% 0.3% 0.8%
eem2015hm_1 0.8% 0.9% 0.7% 1.0% 0.6% 0.7% 0.9% 0.6% 0.8%
molecule_atom_index_0_dist_mean_div 1.0% 0.7% 0.9% 0.7% 0.8% 0.8% 0.6% 0.7% 0.8%
molecule_atom_index_0_dist_max_diff 0.8% 0.8% 1.0% 0.8% 0.9% 0.7% 0.7% 0.4% 0.8%
molecule_atom_index_1_dist_mean 0.7% 0.5% 0.9% 0.5% 0.7% 1.2% 0.8% 0.8% 0.8%
molecule_atom_1_dist_min_diff 1.6% 0.7% 0.5% 1.4% 0.6% 0.3% 0.5% 0.4% 0.7%
eem_1 0.7% 0.8% 0.6% 1.0% 0.5% 0.5% 1.0% 0.6% 0.7%
eem2015bn_1 0.7% 0.9% 0.5% 0.9% 0.7% 0.6% 0.8% 0.6% 0.7%
y_a0_nb 0.0% 0.0% 0.8% 1.2% 0.8% 1.0% 0.9% 0.9% 0.7%
molecule_atom_index_1_dist_std_div 0.5% 0.5% 0.7% 0.5% 0.7% 0.9% 0.8% 0.9% 0.7%
molecule_atom_index_0_dist_mean_diff 0.7% 0.5% 0.9% 0.6% 0.8% 0.7% 0.6% 0.7% 0.7%
eem2015hn_1 0.6% 0.7% 0.5% 0.9% 0.6% 0.5% 0.8% 0.5% 0.6%
molecule_atom_index_1_dist_min 0.3% 0.2% 0.8% 0.5% 0.5% 0.9% 1.0% 0.6% 0.6%
molecule_atom_index_0_dist_min 0.3% 0.2% 0.8% 0.5% 0.6% 1.0% 0.7% 0.6% 0.6%
cos2T 0.0% 0.0% 0.0% 0.0% 0.0% 1.8% 1.4% 1.4% 0.6%
molecule_atom_index_1_dist_max_diff 0.6% 0.5% 0.8% 0.3% 0.8% 0.7% 0.3% 0.5% 0.6%
molecule_atom_index_1_dist_min_diff 1.2% 0.5% 0.7% 0.4% 0.4% 0.5% 0.3% 0.3% 0.5%
molecule_atom_index_1_dist_mean_diff 0.5% 0.4% 0.7% 0.4% 0.5% 0.8% 0.4% 0.6% 0.5%
Torsion 0.0% 0.0% 0.0% 0.0% 0.0% 1.5% 1.3% 1.4% 0.5%
molecule_atom_index_1_dist_max_div 0.9% 0.6% 0.8% 0.1% 0.7% 0.5% 0.2% 0.3% 0.5%
molecule_atom_index_1_dist_mean_div 0.6% 0.4% 0.6% 0.1% 0.5% 0.7% 0.3% 0.6% 0.5%
atom_index_0 0.5% 0.8% 0.5% 0.2% 0.5% 0.5% 0.3% 0.5% 0.5%
Angle 0.0% 0.0% 1.6% 0.9% 1.2% 0.0% 0.0% 0.0% 0.5%
mmff94_1 0.5% 0.2% 0.9% 0.0% 0.5% 1.0% 0.1% 0.6% 0.5%
molecule_type_dist_mean_div 0.1% 0.0% 0.6% 0.2% 0.5% 0.6% 0.7% 0.8% 0.4%
molecule_atom_1_dist_min_div 0.2% 0.0% 0.7% 0.2% 0.5% 0.5% 0.7% 0.5% 0.4%
molecule_atom_index_0_dist_min_div 0.0% 0.0% 0.7% 0.4% 0.5% 0.6% 0.5% 0.5% 0.4%
molecule_atom_index_0_dist_min_diff 0.0% 0.0% 0.5% 0.7% 0.5% 0.3% 0.3% 0.3% 0.3%
atom_index_1 0.4% 0.2% 0.4% 0.3% 0.2% 0.5% 0.3% 0.2% 0.3%
dist_to_type_mean 0.5% 0.5% 0.3% 0.3% 0.3% 0.2% 0.2% 0.2% 0.3%
cosT 0.0% 0.0% 0.0% 0.0% 0.0% 1.1% 0.8% 0.5% 0.3%
H 0.4% 0.3% 0.3% 0.3% 0.3% 0.2% 0.3% 0.2% 0.3%
molecule_atom_index_1_dist_min_div 0.0% 0.0% 0.6% 0.0% 0.2% 0.6% 0.2% 0.3% 0.3%
atom_1_couples_count 0.3% 0.1% 0.4% 0.0% 0.2% 0.4% 0.0% 0.2% 0.2%
N 0.2% 0.2% 0.2% 0.2% 0.1% 0.2% 0.2% 0.1% 0.2%
O 0.2% 0.2% 0.2% 0.2% 0.1% 0.2% 0.2% 0.1% 0.2%
dist 0.0% 0.0% 0.3% 0.2% 0.2% 0.2% 0.2% 0.2% 0.2%
atom_0_couples_count 0.2% 0.1% 0.3% 0.1% 0.1% 0.3% 0.2% 0.1% 0.2%
C 0.2% 0.2% 0.1% 0.1% 0.1% 0.1% 0.1% 0.1% 0.1%
a0_nb_nb_c 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.1% 0.0% 0.0%
mmff94_0 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_inring5 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_inring4 0.1% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_inring5 0.1% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_nb_n 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_inring4 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_nb_nb_c 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0%
a1_inring3 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_nb_nb_h 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0%
a1_nb_c 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_nb_h 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_nb_n 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_inring6 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_inring6 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a0_nb_nb_o 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_nb_nb_n 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0%
a1_nb_nb_o 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_o 0.0% 0.0% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_nb_inring5 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0%
a0_nb_inring 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring4 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0%
a1_nb_degree 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_inring7 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_inring3 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
a1_degree 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_inring 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_inring7 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring3 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring6 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_h 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
F 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_hybridization 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_hybridization 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_hybridization 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_inring8 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_inring8 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring7 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_degree 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_inring8 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_na 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
sp 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a1_nb_nb_na 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
type 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
a0_nb_nb_na 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

resultv003_009

@matsuken92 (Owner) commented Jun 24, 2019

holdout v003_025

[500] training's l1: 0.566029 valid_1's l1: 0.589641
[1000] training's l1: 0.478477 valid_1's l1: 0.517248
[1500] training's l1: 0.429449 valid_1's l1: 0.480525
[2000] training's l1: 0.394687 valid_1's l1: 0.456913
[2500] training's l1: 0.368058 valid_1's l1: 0.440431
[3000] training's l1: 0.345617 valid_1's l1: 0.427161
[3500] training's l1: 0.32731 valid_1's l1: 0.417293
[4000] training's l1: 0.311112 valid_1's l1: 0.408887
[4500] training's l1: 0.296723 valid_1's l1: 0.401715
[5000] training's l1: 0.283922 valid_1's l1: 0.395972
[5500] training's l1: 0.272583 valid_1's l1: 0.391227
[6000] training's l1: 0.262024 valid_1's l1: 0.386568
[6500] training's l1: 0.252317 valid_1's l1: 0.382687
[7000] training's l1: 0.243352 valid_1's l1: 0.379186
[7500] training's l1: 0.235024 valid_1's l1: 0.376095
[8000] training's l1: 0.227153 valid_1's l1: 0.373319
Did not meet early stopping. Best iteration is:
[8000] training's l1: 0.227153 valid_1's l1: 0.373319

v003_025 HOLD_OUT score: -1.0758

holdout v003_026

add "sum_radius", "tda_cocycles_shape", "tda_max_radius",
"tda_mean_radius", "tda_min_radius", "tda_num_circle"

[500] training's l1: 0.563407 valid_1's l1: 0.587388
[1000] training's l1: 0.477219 valid_1's l1: 0.516854
[1500] training's l1: 0.427973 valid_1's l1: 0.480503
[2000] training's l1: 0.393202 valid_1's l1: 0.456683
[2500] training's l1: 0.366525 valid_1's l1: 0.440105
[3000] training's l1: 0.344808 valid_1's l1: 0.427501
[3500] training's l1: 0.326128 valid_1's l1: 0.417173
[4000] training's l1: 0.309906 valid_1's l1: 0.408808
[4500] training's l1: 0.295743 valid_1's l1: 0.402032
[5000] training's l1: 0.282954 valid_1's l1: 0.396243
[5500] training's l1: 0.271474 valid_1's l1: 0.391291
[6000] training's l1: 0.260916 valid_1's l1: 0.386974
[6500] training's l1: 0.251339 valid_1's l1: 0.383271
[7000] training's l1: 0.242284 valid_1's l1: 0.379728
[7500] training's l1: 0.233908 valid_1's l1: 0.376693
[8000] training's l1: 0.225893 valid_1's l1: 0.373774
Did not meet early stopping. Best iteration is:
[8000] training's l1: 0.225893 valid_1's l1: 0.373774
v003_026HOLD_OUT score: -1.0781 .

holdout v003_028

[500] training's l1: 0.564218 valid_1's l1: 0.587363
[1000] training's l1: 0.476771 valid_1's l1: 0.515268
[1500] training's l1: 0.427689 valid_1's l1: 0.479324
[2000] training's l1: 0.393346 valid_1's l1: 0.456341
[2500] training's l1: 0.366496 valid_1's l1: 0.439514
[3000] training's l1: 0.344709 valid_1's l1: 0.426899
[3500] training's l1: 0.325971 valid_1's l1: 0.41646
[4000] training's l1: 0.309585 valid_1's l1: 0.40771
[4500] training's l1: 0.295316 valid_1's l1: 0.400746
[5000] training's l1: 0.28251 valid_1's l1: 0.394671
[5500] training's l1: 0.271071 valid_1's l1: 0.389632
[6000] training's l1: 0.260586 valid_1's l1: 0.385247
[6500] training's l1: 0.250897 valid_1's l1: 0.381341
[7000] training's l1: 0.241896 valid_1's l1: 0.37798
[7500] training's l1: 0.233551 valid_1's l1: 0.37485
[8000] training's l1: 0.225663 valid_1's l1: 0.371988
Did not meet early stopping. Best iteration is:
[8000] training's l1: 0.225663 valid_1's l1: 0.371988
v003_028HOLD_OUT score: -1.0816 .

holdout v003_029

[500] training's l1: 0.544537 valid_1's l1: 0.567271
[1000] training's l1: 0.460407 valid_1's l1: 0.49808
[1500] training's l1: 0.411908 valid_1's l1: 0.461823
[2000] training's l1: 0.37895 valid_1's l1: 0.440118
[2500] training's l1: 0.353092 valid_1's l1: 0.424013
[3000] training's l1: 0.331885 valid_1's l1: 0.411772
[3500] training's l1: 0.31368 valid_1's l1: 0.401764
[4000] training's l1: 0.298216 valid_1's l1: 0.3939
[4500] training's l1: 0.284504 valid_1's l1: 0.387557
[5000] training's l1: 0.272024 valid_1's l1: 0.381773
[5500] training's l1: 0.260994 valid_1's l1: 0.37712
[6000] training's l1: 0.250798 valid_1's l1: 0.372839
[6500] training's l1: 0.241187 valid_1's l1: 0.369066
[7000] training's l1: 0.232459 valid_1's l1: 0.365779
[7500] training's l1: 0.224269 valid_1's l1: 0.362654
[8000] training's l1: 0.216631 valid_1's l1: 0.359984
Did not meet early stopping. Best iteration is:
[8000] training's l1: 0.216631 valid_1's l1: 0.359984
v003_029HOLD_OUT score: -1.1120 .

holdout v003_030

[LightGBM] [Warning] num_threads is set=-1, n_jobs=-1 will be ignored. Current value: num_threads=-1
Training until validation scores don't improve for 200 rounds.
[500] training's l1: 0.565497 valid_1's l1: 0.590105
[1000] training's l1: 0.479003 valid_1's l1: 0.518853
[1500] training's l1: 0.429585 valid_1's l1: 0.482372
[2000] training's l1: 0.394412 valid_1's l1: 0.458456
[2500] training's l1: 0.367211 valid_1's l1: 0.44135
[3000] training's l1: 0.345316 valid_1's l1: 0.429041
[3500] training's l1: 0.326603 valid_1's l1: 0.418916
[4000] training's l1: 0.310245 valid_1's l1: 0.410607
[4500] training's l1: 0.295665 valid_1's l1: 0.403374
[5000] training's l1: 0.282766 valid_1's l1: 0.397506
[5500] training's l1: 0.271163 valid_1's l1: 0.392556
[6000] training's l1: 0.26057 valid_1's l1: 0.388205
[6500] training's l1: 0.250659 valid_1's l1: 0.384267
[7000] training's l1: 0.241611 valid_1's l1: 0.380885
[7500] training's l1: 0.233248 valid_1's l1: 0.377946
[8000] training's l1: 0.225284 valid_1's l1: 0.375133
Did not meet early stopping. Best iteration is:
[8000] training's l1: 0.225284 valid_1's l1: 0.375133
v003_030HOLD_OUT score: -1.0682 .
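
(A sketch of the fixed hold-out setup these v003_025–v003_030 comparisons run on; splitting at the molecule level is an assumption, not something stated in the logs above.)

```python
# Sketch of the kind of fixed hold-out used to compare feature sets
# (v003_025 vs v003_026 vs ...). Splitting by molecule is an assumption.
from sklearn.model_selection import GroupShuffleSplit

def holdout_split(X, y, molecule_ids, valid_frac=0.2, seed=71):
    splitter = GroupShuffleSplit(n_splits=1, test_size=valid_frac, random_state=seed)
    tr_idx, va_idx = next(splitter.split(X, y, groups=molecule_ids))
    return X.iloc[tr_idx], X.iloc[va_idx], y.iloc[tr_idx], y.iloc[va_idx]
```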

@matsuken92 (Owner) commented Jun 29, 2019

  • train_v003_003.py : LB -1.056

  • train_v003_004.py
    ⇒ n_fold = 5 → n_fold = 3
    ⇒ 1st layer: n_estimators=8000 → n_estimators=3000
    ⇒ 2nd layer: n_estimators=15000 → n_estimators=5000

  • train_v004_001.py
    ⇒ train data augmentation with train2 : processed/v004

  • train_v004_002.py : LB 0.6, a complete flop…
    train data augmentation with train2
    ⇒ n_fold = 3 → n_fold = 5
    ⇒ 1st layer: n_estimators=3000 → n_estimators=10000
    ⇒ 2nd layer: n_estimators=5000 → n_estimators=20000

train_v003_005.py
⇒ added babel_train, babel_test, and various other features to train_v003_004.py

train_v003_006.py
⇒ added ob_charges.csv to train_v003_005.py

train_v003_007.py
⇒ switched the 2nd-stage model of train_v003_006.py to XGBoost

train_v003_008.py
⇒ added rdkit_train and rdkit_test to train_v003_006.py

train_v003_009.py
⇒ redo of train_v003_008.py, which mistakenly left rdkit_cols out of good_cols

train_v003_010.py
⇒ applied train augmentation (augmented_train_v001.py) to train_v003_009.py
⇒ it leaked unless GroupKFold was used…

train_v003_011.py
⇒ GroupKFold version of train_v003_010.py
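
(The leak above is the usual one with augmented copies of a molecule landing in both train and valid; a sketch of the GroupKFold fix, with molecule_name as the assumed grouping key.)

```python
# Sketch: GroupKFold so that augmented copies of the same molecule never end up
# split across train and validation (the cause of the leak noted above).
from sklearn.model_selection import GroupKFold

def grouped_folds(X, y, molecule_names, n_splits=5):
    gkf = GroupKFold(n_splits=n_splits)
    for tr_idx, va_idx in gkf.split(X, y, groups=molecule_names):
        yield tr_idx, va_idx
```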

train_v003_017.py
⇒ added coulomb_feat to train_v003_009.py
CV : -1.5089, LB -1.757

train_v003_018.py
⇒ adopted bond_calc_feat

train_v003_019.py
⇒ switched train_v003_018.py to GroupKFold
CV: -1.468 LB: -1.728

train_v003_021.py
⇒ all data LB: -1.659

train_v003_022.py
⇒ all data seed ave 7
LB : -1.864
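
(A minimal sketch of the "all data, seed ave 7" idea: train on the full training set with 7 seeds and average the test predictions. Parameter handling here is a placeholder.)

```python
# Sketch of seed averaging on all training data (the "seed ave 7" runs).
# Everything except the idea of averaging over seeds is a placeholder.
import numpy as np
import lightgbm as lgb

def seed_average_predict(X_train, y_train, X_test, base_params, seeds=range(7)):
    preds = []
    for s in seeds:
        model = lgb.LGBMRegressor(random_state=s, **base_params)
        model.fit(X_train, y_train)
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)
```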

train_v003_023.py
⇒ tda_feat
⇒ CV -1.5139, LB: -1.766

train_v003_024
⇒ applied tda_feat to the all-data train_v003_022.py, with seed averaging

train_valid_v003_025.py CV: 1.0758
⇒ train_v003_024 modified so a hold-out score can be computed for feature selection

train_valid_v003_026.py CV: 1.0781
⇒ added tda feat:
"sum_radius", "tda_cocycles_shape", "tda_max_radius",
"tda_mean_radius", "tda_min_radius", "tda_num_circle"

train_valid_v003_027.py
⇒ adversarial validation

train_valid_v003_028.py CV: 1.0816
⇒ train_valid_v003_026.py with the tda feat reduced to min and max only

holdout v003_029 CV: 1.1120
⇒ fixed reduce_mem_usage
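
(The "Mem. usage decreased to …" lines in the logs come from a reduce_mem_usage-style downcasting helper; a generic sketch of such a helper follows — not this repository's exact implementation — with a comment flagging one common pitfall as a guess at what the fix touched.)

```python
# Generic sketch of a reduce_mem_usage-style helper: downcast numeric columns
# to smaller dtypes. Not the exact implementation from this repository.
import numpy as np
import pandas as pd

def reduce_mem_usage(df: pd.DataFrame) -> pd.DataFrame:
    start = df.memory_usage().sum() / 1024 ** 2
    for col in df.select_dtypes(include=[np.number]).columns:
        c_min, c_max = df[col].min(), df[col].max()
        if pd.api.types.is_integer_dtype(df[col]):
            for t in (np.int8, np.int16, np.int32, np.int64):
                if np.iinfo(t).min <= c_min and c_max <= np.iinfo(t).max:
                    df[col] = df[col].astype(t)
                    break
        else:
            # float16 can silently lose precision; keeping float32 is one
            # plausible reading of the "fix" mentioned above (an assumption).
            df[col] = df[col].astype(np.float32)
    end = df.memory_usage().sum() / 1024 ** 2
    print(f"Mem. usage decreased to {end:.2f} Mb ({100 * (start - end) / start:.1f}% reduction)")
    return df
```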

holdout v003_030 CV: 1.0682 (should be a bit better)
⇒ introduced pca_feat

train_v003_031
⇒ introduced PCA and the reduce_mem_usage fix, then ALL TRAIN & seed ave

train_valid_v003_032.py CV: -1.1074
⇒ try dropping columns that permutation importance shows to be ineffective

train_valid_v003_033.py CV: -1.1183
⇒ try dropping ineffective columns via permutation importance, round 2

train_valid_v003_034.py CV: -1.1426
⇒ try dropping ineffective columns via permutation importance, round 3

train_valid_v003_035.py CV: -1.1628
⇒ try dropping ineffective columns via permutation importance, round 4

train_v003_036
⇒ dropped ineffective columns via permutation importance, ALL TRAIN & seed ave
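
(One possible implementation of the permutation-importance pruning used in v003_032–036; the scripts' actual tooling and thresholds aren't stated, so this uses sklearn's permutation_importance with a zero cutoff purely as an illustration.)

```python
# Sketch: drop columns whose permutation importance is not positive.
# The actual tooling/thresholds used by the scripts are not stated here.
from sklearn.inspection import permutation_importance

def useless_columns(model, X_valid, y_valid, n_repeats=3, seed=71):
    result = permutation_importance(
        model, X_valid, y_valid,
        scoring="neg_mean_absolute_error",
        n_repeats=n_repeats, random_state=seed,
    )
    return [c for c, m in zip(X_valid.columns, result.importances_mean) if m <= 0]
```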

train_valid_v003_037
⇒ baseline check with the validation split done per type

train_valid_v003_038
⇒ search over varying depth and num_leaves per type

train_valid_v003_039.py
⇒ tried an MLP ⇒ CV was hopeless… stopped partway through.

train_valid_v003_041.py CV: -1.2896
⇒ checked the effect of per-type num_leaves in lgb
⇒ with num_leaves tuned per type it reaches -1.5149
#6 (comment)

train_valid_v003_042.py Holdout: -1.1342
⇒ experiment without type split

train_valid_v003_043.py CV: -1.5149
⇒ specified num_leaves per type
importance_all_V003_043.xlsx
nohup_v003_043.out.txt

train_v003_044
⇒ compute oof_fc (per type) ⇒ all data (per type) ⇒ takes an extremely long time…

train_valid_v003_045.py CV: -1.3925719
⇒ specified num_leaves per type, w/ yiemon feat
importance_all_v003_045.xlsx
nohup_v003_045.out.txt

train_valid_v003_046.py CV: in progress
⇒ use only the top-importance yiemon features:
atm_dist_min_all_of_nn
2J2nd_AverageBondAngle
2J2nd_SmallestBondAngle
d_ratio_0_all_comb_min
inM_atm0_atm_d_var
d_diff_0_all_comb_min
gasteiger_max_min_ratio

train_valid_v003_048.py
⇒ two-stage baseline

train_valid_v003_049.py
⇒ two-stage: mullkan1, mullkan2, fc → holdout
⇒ accuracy improved for every type except type 0

| type | v003_049 | type0 no mullken |
| --- | --- | --- |
| 0 | -0.4923 | -0.6281023 |
| 1 | -1.2356 | -1.2356 |
| 2 | -1.5185 | -1.5185 |
| 3 | -2.1063 | -2.1063 |
| 4 | -1.9051 | -1.9051 |
| 5 | -1.4799 | -1.4799 |
| 6 | -2.0131 | -2.0131 |
| 7 | -2.1672 | -2.1672 |
| ave | -1.61475 | -1.6317253 |
importance_all.xlsx
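
(v003_048–050 are the two-stage runs: first-stage out-of-fold predictions of fc / mullkan1 / mullkan2 are fed as features into the per-type second stage, with type 0 skipping the mulliken features as noted above. A rough sketch of that flow; the helper name and column names are hypothetical.)

```python
# Sketch of the two-stage flow: out-of-fold predictions of the auxiliary
# targets (fc, mullkan1, mullkan2) become extra features for the second stage.
# `oof_predict` and the column names are hypothetical.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold

def oof_predict(X, y, X_test, params, n_splits=5):
    """First stage: out-of-fold predictions for train, averaged predictions for test."""
    oof = np.zeros(len(X))
    test_pred = np.zeros(len(X_test))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=71)
    for tr_idx, va_idx in kf.split(X):
        model = lgb.LGBMRegressor(**params).fit(X.iloc[tr_idx], y.iloc[tr_idx])
        oof[va_idx] = model.predict(X.iloc[va_idx])
        test_pred += model.predict(X_test) / n_splits
    return oof, test_pred

# Second stage (per type): add the oof columns as features, e.g.
#   X["oof_fc"], X["oof_mullkan1"], X["oof_mullkan2"]
# and drop the mulliken-based columns for type 0, per the table above.
```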

train_v003_050.py
⇒ two-stage: mullkan1, mullkan2, fc → all data (type 0: no mullken)

train_v003_052.py
⇒ yiemon feat 1j2j3j hold out

train_v003_053.py
⇒ hold out without yiemon feat, for comparison

train_v003_054.py
⇒ yiemon feat 1j2j3j hold out
⇒ num_leaves 256, max_depth=-1

train_v003_055.py LB: -1.874
⇒ yiemon feat 1j2j3j CV : cv score: -1.5841730
⇒ num_leaves 256, max_depth=-1

train_v003_056.py LB: -1.813
⇒ without yiemon feat 1j2j3j CV : cv score: -1.5310483
⇒ num_leaves 256, max_depth=-1

train_v003_057.py LB: xxxx
⇒ yiemon feat 1j2j3j CV : cv score: -1.58087
⇒ add mullken oof from train_v003_055.py

train_v004_001.py
⇒ basic features as the baseline for augmentation, CV: -0.523706
⇒ num_leaves 256, max_depth=-1

train_v004_002.py
⇒ z axis augment CV: -0.5349
⇒ num_leaves 256, max_depth=-1

train_v004_003.py
⇒ y axis augment CV: xxx
⇒ num_leaves 256, max_depth=-1
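
(The v004 "z axis augment" / "y axis augment" scripts aren't shown here; one plausible reading is mirroring the structure coordinates along an axis, sketched below purely as an assumption.)

```python
# Hypothetical sketch of axis augmentation: mirror the structure coordinates
# along one axis and append the copies. Whether the actual scripts mirror,
# rotate, or swap axes is not stated here; this is an assumption.
import pandas as pd

def mirror_axis(structures: pd.DataFrame, axis: str = "z") -> pd.DataFrame:
    flipped = structures.copy()
    flipped[axis] = -flipped[axis]
    return pd.concat([structures, flipped], ignore_index=True)
```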

@matsuken92 (Owner) commented Jul 2, 2019

num_leaves check

Screenshot 2019-07-03 2 54 48

result_analysis.xlsx

| type | num_leaves | score |
| --- | --- | --- |
| 0 | 8 | -0.6261 |
| 1 | 8 | -1.1679 |
| 2 | 16 | -1.3788 |
| 3 | 8 | -1.9578 |
| 4 | 8 | -1.6824 |
| 5 | 16 | -1.3118 |
| 6 | 8 | -1.9195 |
| 7 | 16 | -2.0746 |
| ave | | -1.5149 |
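
(A sketch of wiring the per-type num_leaves from this table into per-type training; only the num_leaves mapping comes from the table, the rest is placeholder.)

```python
# Sketch: per-type num_leaves from the table above, applied to per-type models.
# Only the num_leaves mapping is taken from the table; other params are placeholders.
import lightgbm as lgb

NUM_LEAVES_BY_TYPE = {0: 8, 1: 8, 2: 16, 3: 8, 4: 8, 5: 16, 6: 8, 7: 16}

def fit_per_type(X, y, types, base_params):
    models = {}
    for t, leaves in NUM_LEAVES_BY_TYPE.items():
        mask = (types == t)
        params = dict(base_params, num_leaves=leaves)
        models[t] = lgb.LGBMRegressor(**params).fit(X[mask], y[mask])
    return models
```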

v003_042 without type split : Holdout: -1.1342

STARTING : 2019-07-02 18:00:54
X.shape: (4658147, 95), X_test.shape: (2505542, 95)
LGBMRegressor(bagging_seed=72, boosting_type='gbdt', class_weight=None,
       colsample_bytree=1.0, feature_fraction_seed=73,
       importance_type='gain', learning_rate=0.2, max_depth=9,
       metric='mae', min_child_samples=79, min_child_weight=0.001,
       min_split_gain=0.0, n_estimators=128000, n_jobs=-1, num_leaves=8,
       num_threads=-1, objective='regression', random_state=None,
       reg_alpha=0.1, reg_lambda=0.3, seed=71, silent=True, subsample=0.9,
       subsample_for_bin=200000, subsample_freq=1, verbosity=-1)
[LightGBM] [Warning] num_threads is set=-1, n_jobs=-1 will be ignored. Current value: num_threads=-1
Did not meet early stopping. Best iteration is:
[128000]	training's l1: 0.255779	valid_1's l1: 0.34963
v003_042HOLD_OUT score: -1.1342 .
