
Commit

Updated README to reference Python data and streamlined some of the testscript. Currently a problem with the fit_new_function for the logistic FD
bibliolytic committed Feb 6, 2017
1 parent c0467fe commit fe08b08
Showing 5 changed files with 25 additions and 22 deletions.
2 changes: 2 additions & 0 deletions MT_baseclass.m
@@ -95,6 +95,8 @@
 obj.parallel = invarargin(varargin,'parallel');
 if isempty(obj.parallel)
     obj.parallel = 0;
+else
+    fprintf('[MT base] Attempting parallel implementation with %d cores\n',obj.parallel);
 end
 obj.prior = struct();
 obj.prior.lambda = 1;
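The two added lines announce how many workers the base class will try to use when the 'parallel' switch is set. A minimal usage sketch, assuming the subclass constructors forward their name-value arguments to MT_baseclass (the worker count below is only an example value):

% Request 4 parallel workers via the base class's 'parallel' switch.
% Assumption: MT_FD_model forwards unrecognized name-value pairs to
% MT_baseclass, where invarargin(varargin,'parallel') reads the value and
% the constructor prints the '[MT base] Attempting parallel ...' message.
FD_par = MT_FD_model('linear', 'n_its', 5, 'verbose', 0, 'parallel', 4);
FD_par.fit_prior(T_X(1:4), T_y(1:4));   % prior fit as in testscript.m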
Binary file modified MTtestdata.mat
Binary file not shown.
3 changes: 2 additions & 1 deletion README.md
@@ -57,7 +57,8 @@ There is an enormous space of possibilities for how this framework can be extended
 Please feel free (indeed, urged) to let me know through the issues feature whether something is not working and I will be happy to fix it as soon as I can. If preferable, feel free also to send mail to vjayaram@tue.mpg.de

 # Python
-Python version coming shortly...
+For the Python version, please check out our related [page](https://github.com/bibliolytic/pyMTL).
+

 # Citations:

4 changes: 2 additions & 2 deletions lambdaCV.m
@@ -13,7 +13,7 @@
 % Optional Arguments
 %     n: Number of CV loops (default 5)
 %     parallel: Parallel loops (<num cores> | none)
-%     lrange: Vector of lambda values (default exp(-6:10))
+%     lrange: Vector of lambda values (default [exp(-6),exp(-1:0.1:1),exp(6)])
 %     verbose: boolean, verbose (default 0)
 %     bootstrap: boolean, bootstrap to equalize classes (default 1)

@@ -43,7 +43,7 @@

 lrange = invarargin(varargin,'lrange');
 if isempty(lrange)
-    lrange=[exp(-6),exp(-2:0.2:2),exp(6)];
+    lrange=[exp(-6),exp(-1:0.1:1),exp(6)];
 end

 %% Main code
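For orientation, the revised default keeps the two extreme anchor values but samples log-lambda more densely around exp(0). A quick sketch of inspecting the new grid, plus a hedged example of overriding it (the commented-out lambdaCV call assumes the data arguments come first, which this hunk does not show):

% New default grid: exp(-6), then exp(-1) ... exp(1) in steps of 0.1 in log
% space, then exp(6) -- 23 candidate values in total.
lrange = [exp(-6), exp(-1:0.1:1), exp(6)];
fprintf('%d candidates, min %.5f, max %.1f\n', numel(lrange), min(lrange), max(lrange));

% Hedged example of supplying a custom grid via the documented 'lrange'
% option (positional arguments X, y are assumed, not confirmed by this diff):
% lambda = lambdaCV(X, y, 'lrange', exp(-3:0.5:3), 'n', 5);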
38 changes: 19 additions & 19 deletions testscript.m
@@ -23,44 +23,44 @@
     disp('Confirm prior computation switches: ');
     linear_model{i}.printswitches;

-    % Code to fit the prior
+    % Code to fit the prior (training on the first 4)
     disp('Training L2 loss prior...')
-    linear_model{i}.fit_prior(T_X2d, T_y);
+    linear_model{i}.fit_prior(T_X2d(1:4), T_y(1:4));
     disp('Training logistic loss prior...')
-    log_model{i}.fit_prior(T_X2d, T_y);
+    log_model{i}.fit_prior(T_X2d(1:4), T_y(1:4));

-    % Code that computes prior accuracy on the training data
-    pacc_lin = mean(linear_model{i}.prior_predict(X2d_s) == y_s);
-    pacc_log = mean(log_model{i}.prior_predict(X2d_s) == y_s);
-    fprintf('prior accuracies: \n Linear: %.2f\n Logistic: %.2f\n', pacc_lin, pacc_log);
+    % Code that computes prior accuracy on the held-out session data
+    pacc_lin = mean(linear_model{i}.prior_predict(T_X2d{5}) == T_y{5});
+    pacc_log = mean(log_model{i}.prior_predict(T_X2d{5}) == T_y{5});
+    fprintf('Prior accuracies on held-out session: \n Linear: %.2f\n Logistic: %.2f\n', pacc_lin, pacc_log);

     % Code to fit the new task (with cross-validated lambda)
-    fitted_new_linear_task = linear_model{i}.fit_new_task(X2d_s, y_s, 'ml', 1);
-    fitted_new_log_task = log_model{i}.fit_new_task(X2d_s, y_s, 'ml', 1);
+    fitted_new_linear_task = linear_model{i}.fit_new_task(T_X2d{5}, T_y{5}, 'ml', 0);
+    fitted_new_log_task = log_model{i}.fit_new_task(T_X2d{5},T_y{5}, 'ml', 0);

     % Classifying after the new task update
     fprintf('New task *training set* accuracy: \n Linear: %.2f\nLogistic: %.2f\n',...
-        mean(fitted_new_linear_task.predict(X2d_s) == y_s), ...
-        mean(fitted_new_log_task.predict(X2d_s) == y_s));
+        mean(fitted_new_linear_task.predict(T_X2d{5}) == T_y{5}), ...
+        mean(fitted_new_log_task.predict(T_X2d{5}) == T_y{5}));
 end

 %%
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 % How to use the bilinear version of this approach
 %%%%%%%%%%%%%%%%%%%%%%%%%%%

-type = {'linear'};%,'logistic'};
+type = {'linear', 'logistic'};

 for i = 1:length(type)
     disp(['********************FD ', type{i},'**********************']);
-    FD{i} = MT_FD_model(type{i},'n_its',5,'verbose',1);
+    FD{i} = MT_FD_model(type{i},'n_its',5,'verbose',0);
     FD{i}.printswitches;
-    FD{i}.fit_prior(T_X, T_y);
-    acc = mean(FD{i}.prior_predict(X_s) == y_s);
-    fprintf('Prior accuracy: %.2f\n', acc*100);
-    out = FD{i}.fit_new_task(X_s, y_s, 'ml', 1);
-    acc = mean(out.predict(X_s) == y_s);
-    fprintf('New task training accuracy: %.2f\n', acc*100);
+    FD{i}.fit_prior(T_X(1:4), T_y(1:4));
+    acc = mean(FD{i}.prior_predict(T_X{5}) == T_y{5});
+    fprintf('Prior accuracy on held-out data: %.2f\n', acc*100);
+    out = FD{i}.fit_new_task(T_X{5}, T_y{5}, 'ml', 0);
+    acc = mean(out.predict(T_X{5}) == T_y{5});
+    fprintf('New task training accuracy: %.2f\n', acc*100);
 end

 fprintf('Script finished!\n');
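The restructured script now fits each prior on the first four sessions and scores it on the fifth. A hedged leave-one-session-out sketch of the same evaluation pattern, using only the MT_FD_model calls shown above (the loop itself is illustrative and not part of the commit):

% Leave-one-session-out sketch (illustrative only): hold out each cell of
% T_X/T_y in turn, fit the FD prior on the remaining sessions, and record
% the prior accuracy on the held-out session.
n_tasks = numel(T_X);
prior_acc = zeros(1, n_tasks);
for s = 1:n_tasks
    train_idx = setdiff(1:n_tasks, s);
    m = MT_FD_model('linear', 'n_its', 5, 'verbose', 0);
    m.fit_prior(T_X(train_idx), T_y(train_idx));
    prior_acc(s) = mean(m.prior_predict(T_X{s}) == T_y{s});
end
fprintf('Mean held-out prior accuracy: %.2f\n', mean(prior_acc)*100);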
