
Commit

Improved PairProb
Jinchao Ye committed Mar 18, 2012
1 parent 3c9bdb6 commit f0077ab
Showing 3 changed files with 34 additions and 2 deletions.
Binary file modified PA9/Predictions.mat
34 changes: 32 additions & 2 deletions PA9/RecognizeUnknownActions.m
@@ -28,22 +28,50 @@
for action=1:length(datasetTrain)
  poseData = datasetTrain(action).poseData;
  InitialClassProb = datasetTrain(action).InitialClassProb;
  InitialPairProb = datasetTrain(action).InitialPairProb;
  K = size(InitialClassProb, 2);
  % clustering for initial probabilities
  reshapedData = zeros(size(poseData, 1), size(poseData, 2) * size(poseData, 3));
  for pose=1:size(poseData, 1)
    reshapedData(pose, :) = reshape(poseData(pose, :, :), 1, size(poseData, 2) * size(poseData, 3));
  end
  % Not enough data to fit full covariances for 30-dimensional data, so use diagonal covariance matrices.
  % The result is random: it depends on the initialization of the mixture fit.
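  % gmdistribution.fit (Statistics Toolbox) runs EM for a K-component Gaussian mixture;
  % 'Regularize' adds 1e-4 to each covariance diagonal so the estimates stay positive definite.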
  gm = gmdistribution.fit(reshapedData, K, 'CovType', 'diagonal', 'Regularize', 1e-4); % 'SharedCov', true);
  InitialClassProb = posterior(gm, reshapedData); % NOT ACTUALLY USING THIS => decreases accuracy
  %InitialClassProb = posterior(gm, reshapedData); % NOT ACTUALLY USING THIS => decreases accuracy
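  % The loop below scores every pose against each of the K mixture components and then
  % row-normalizes those scores into soft initial class assignments.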
  for compo = 1:K
    mu = gm.mu(compo,:);
    Sigma = gm.Sigma(:,:,compo);
    % InvSigma = Sigma^(-1);
    % Det = det(Sigma);
    for row = 1:size(reshapedData,1)
      InitialClassProb(row,compo) = exp(logsumexp(lognormpdf(reshapedData(row,:),mu,Sigma)));
    end
  end
  for i=1:size(InitialClassProb,1)
    InitialClassProb(i,:) = InitialClassProb(i,:) / sum(InitialClassProb(i,:));
  end
  % add a small Dirichlet-style pseudocount so no class probability is exactly zero, then renormalize
  InitialClassProb = InitialClassProb + 0.005;
  for i=1:size(InitialClassProb,1)
    InitialClassProb(i,:) = InitialClassProb(i,:) / sum(InitialClassProb(i,:));
  end
  % END clustering
  [pa ll cc pp] = EM_HMM(datasetTrain(action).actionData, poseData, G, InitialClassProb, datasetTrain(action).InitialPairProb, maxIter);
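  % Estimate a K x K transition matrix from the soft class assignments of consecutive
  % poses in each action sequence; its flattened form then seeds every row of
  % InitialPairProb, and EM_HMM is re-run below with that initialization.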
  TransformMatrix = zeros(K,K);
  for i = 1:size(datasetTrain(action).actionData,2)
    for j = 1:size(datasetTrain(action).actionData(i).marg_ind,2)-1
      SourceProb = InitialClassProb(datasetTrain(action).actionData(i).marg_ind(j),:);
      SinkProb = InitialClassProb(datasetTrain(action).actionData(i).marg_ind(j+1),:);
      TransformMatrix = TransformMatrix + SourceProb'*SinkProb;
    end
  end
  for i = 1:K
    TransformMatrix(i,:) = TransformMatrix(i,:)/sum(TransformMatrix(i,:));
  end
  for j = 1:size(InitialPairProb,1)
    InitialPairProb(j,:) = TransformMatrix(:)';
  end
  [pa ll cc pp] = EM_HMM(datasetTrain(action).actionData, poseData, G, InitialClassProb, InitialPairProb, maxIter);
  Ps{action} = pa;
  loglikelihood{action} = ll;
  ClassProb{action} = cc;
@@ -57,6 +85,8 @@
% Accuracy is defined as (#correctly classified examples / #total examples)
% Note that all actions share the same graph parameterization
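% For example (assuming the test set provides ground-truth labels as datasetTest.labels):
%   accuracy = mean(predicted_labels == datasetTest.labels);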

% save('myParameters','Ps','loglikelihood','ClassProb','PairProb');

predicted_labels = [];

for testcase=1:length(datasetTest.actionData)
2 changes: 2 additions & 0 deletions PA9/YourMethod.txt
@@ -4,8 +4,10 @@ Start with training a different model for each of the training actions.

Start by obtaining better initial estimates of the class-assignment probabilities for the poses.
Use the gmdistribution.fit method to find the Gaussian mixture with K components that best fits the pose data.
The resulting Gaussian mixture model is random to some extent (it depends on the initialization of the fit).
Some wrangling was needed to get the clustering to work on the 30-dimensional data.
Then use the posterior probabilities of the component assignments of the poses as the initial hidden-state probabilities.
Once InitialClassProb is initialized, we initialize InitialPairProb accordingly: every row of InitialPairProb is set to the same flattened state-transition matrix, estimated from the soft assignments of consecutive poses.
Then use the EM method to train an HMM.

Then, for each test action, classify it by computing its likelihood under each of the trained models.
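As an illustrative sketch only (not the exact submitted code), the classification step could look roughly like the snippet below. It assumes datasetTest.actionData uses the same marg_ind indexing into a datasetTest.poseData field as the training data, and it assumes a hypothetical helper ComputeActionLogLikelihood that returns the log-likelihood of a pose sequence under one trained HMM.

    predicted_labels = zeros(length(datasetTest.actionData), 1);
    for testcase = 1:length(datasetTest.actionData)
      % poses belonging to this test sequence
      testPoses = datasetTest.poseData(datasetTest.actionData(testcase).marg_ind, :, :);
      lls = zeros(1, length(Ps));
      for action = 1:length(Ps)
        % log-likelihood of the sequence under this action's trained HMM (hypothetical helper)
        lls(action) = ComputeActionLogLikelihood(Ps{action}, G, testPoses);
      end
      [~, predicted_labels(testcase)] = max(lls);  % pick the most likely action
    end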
