Conversation

@laks0209 (Contributor):

  • The command-line argument -saveTensorData dumps the first or all iterations' output tensor results to CSV files (see the example invocation after this list)
  • The output tensor for each iteration is saved in a separate CSV file
  • A Summary.csv file contains the final result of each iteration, the hash of the output tensor, and a pointer to the dumped output tensor file
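For illustration, a hypothetical invocation (the -model flag and the exact -saveTensorData syntax are assumptions based on the description above, not confirmed against the PR):

WinMLRunner.exe -model model.onnx -saveTensorData First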

@laks0209 laks0209 requested a review from a team as a code owner December 18, 2018 06:42
@laks0209 laks0209 force-pushed the feature/OutputTensor branch from ea8b5ea to c217107 on December 18, 2018 06:45
{81EA9CC6-8A26-4583-B1A4-84740EF815C8} = {81EA9CC6-8A26-4583-B1A4-84740EF815C8}
EndProjectSection
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ClassLibrary1", "..\ClassLibrary1\ClassLibrary1.csproj", "{12E5A5A7-E32C-4D2E-84DC-E937BE0A9DA8}"
Contributor:

Was a C# project named ClassLibrary1 supposed to be added to this solution in this PR?

<Link>
<SubSystem>Console</SubSystem>
- <AdditionalDependencies>dxgi.lib;d3d12.lib;windowsapp.lib;%(AdditionalDependencies)</AdditionalDependencies>
+ <AdditionalDependencies>dxgi.lib;d3d12.lib;windowsapp.lib;mscoree.lib;%(AdditionalDependencies)</AdditionalDependencies>
Contributor:

Why is mscoree.lib needed as a dependency? From my understanding, it is related to the .NET Framework.

<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v141</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
<CLRSupport>false</CLRSupport>
Contributor:

Why do we need to specify CLRSupport? Isn't this related to .NET?

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 7 times, most recently from 11e29f8 to aecbc38 on December 26, 2018 20:12
@laks0209 (Contributor, Author) left a comment:

.NET Framework support has been removed. Please review the new commit.

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 5 times, most recently from ae33af0 to c9de335 on January 7, 2019 18:42
@laks0209 (Contributor, Author) commented Jan 7, 2019:

Made modifications for the per-iteration performance dump. The summary.csv file now contains the per-iteration performance results and the final result. Please review the latest commit.

{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|ARM64.ActiveCfg = Debug|Win32
- {E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x64.ActiveCfg = Debug|x64
{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x64.Build.0 = Debug|x64
+ {E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x64.ActiveCfg = Release|x64
Contributor:

When you change this, it builds Release when Debug is specified.

Release|x86 = Release|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|Any CPU.ActiveCfg = Debug|Win32
@ryanlai2 (Contributor) Jan 7, 2019:

With this change, when we specify Debug|Any CPU, it will build Debug|Win32.

Contributor:

Also, Debug|Any CPU.Build.0 is missing. If we click "Build solution" with the Debug|Any CPU configuration, then WinMLRunner won't build.
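Roughly what the missing entry would look like (assuming the Debug|Any CPU to Debug|Win32 mapping shown above is intended):

{81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|Any CPU.ActiveCfg = Debug|Win32
{81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|Any CPU.Build.0 = Debug|Win32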

{81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|ARM64.Build.0 = Debug|ARM64
- {81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|x64.ActiveCfg = Debug|x64
{81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|x64.Build.0 = Debug|x64
+ {81EA9CC6-8A26-4583-B1A4-84740EF815C8}.Debug|x64.ActiveCfg = Release|x64
Contributor:

When you change this, it builds Release when Debug is specified.

{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x64.Build.0 = Release|x64
{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x86.ActiveCfg = Debug|Win32
{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Debug|x86.Build.0 = Debug|Win32
{E9D4AC92-8295-4FB4-BF7D-3FAF74B564E8}.Release|Any CPU.ActiveCfg = Release|Win32
@ryanlai2 (Contributor) Jan 7, 2019:

With this change, when we specify Release|Any CPU, it will build Release|Win32.

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 7 times, most recently from d5254f3 to 21c5fd3 on January 7, 2019 22:36
@ryanlai2 (Contributor) commented Jan 8, 2019:

Can we get some tests added to verify that this works properly? Thanks!

com_ptr<ITensorNative> itn = results.Lookup(desc.Name()).as<ITensorNative>();
std::string* Tensor;
uint32_t uCapacity;
HRESULT(itn->GetBuffer(reinterpret_cast<BYTE**>(&Tensor), &uCapacity));
Member:

This is not supported for TensorString types. The GetBuffer method will return ERROR_INVALID_FUNCTION here.
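For illustration only, a minimal sketch (not the PR's code) of guarding the buffer access by tensor kind; string tensors can be read through TensorString::GetAsVectorView() instead of ITensorNative::GetBuffer:

// Assumes the winrt and Windows::AI::MachineLearning namespaces used elsewhere in the file.
if (desc.as<TensorFeatureDescriptor>().TensorKind() == TensorKind::String)
{
    // GetBuffer is not supported for string tensors; read values via WinRT instead.
    auto view = results.Lookup(desc.Name()).as<TensorString>().GetAsVectorView();
    for (const winrt::hstring& value : view)
    {
        // write each string value to the CSV row
    }
}
else
{
    com_ptr<ITensorNative> itn = results.Lookup(desc.Name()).as<ITensorNative>();
    BYTE* buffer = nullptr;
    uint32_t capacity = 0;
    winrt::check_hresult(itn->GetBuffer(&buffer, &capacity));
}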

float* Tensor;
uint32_t uCapacity;
HRESULT(itn->GetBuffer(reinterpret_cast<BYTE**>(&Tensor), &uCapacity));
hash = winrt::impl::hash_data(Tensor, uCapacity);
Member:

I don't think the winrt::impl namespace is safe here; the SDK team changes the impl internals quite frequently.

@laks0209 (Contributor, Author):

Thanks @smk2007. Do you have any suggestions for a hash function? Please feel free to comment or provide alternatives.
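One possible alternative: a minimal sketch of a stable 64-bit FNV-1a hash over the raw tensor bytes, which does not depend on winrt::impl internals (HashTensorBuffer is an illustrative name, not from the PR):

#include <cstdint>

// 64-bit FNV-1a over the raw tensor buffer.
uint64_t HashTensorBuffer(const unsigned char* data, uint32_t byteCount)
{
    uint64_t hash = 14695981039346656037ULL; // FNV offset basis
    for (uint32_t i = 0; i < byteCount; ++i)
    {
        hash ^= data[i];
        hash *= 1099511628211ULL; // FNV prime
    }
    return hash;
}

// Hedged usage against the buffer obtained above:
// hash = HashTensorBuffer(reinterpret_cast<unsigned char*>(Tensor), uCapacity);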

for (uint32_t i = 0; i < numIterations; i++)
{
- bool captureIterationPerf = (args.PerfCapture() && (!args.IgnoreFirstRun() || i > 0)) || (args.PerIterCapture());
+ bool captureIterationPerf = (args.PerfCapture() && (!args.IgnoreFirstRun() || i > 0)) || args.SaveTensor() || args.PerIterCapture();
@ryanlai2 (Contributor) Jan 10, 2019:

Why do we need to capture performance when we want to save tensor output? What about the scenario when we just want to save tensor?

try
{
- model = LoadModel(path, args.PerfCapture() || args.PerIterCapture(), output, args, 0);
+ model = LoadModel(path, args.PerfCapture() || args.SaveTensor() || args.PerIterCapture(), output, args, 0);
Contributor:

Why do we want to capture load model performance if we only want to save tensor?

for (auto deviceCreationLocation : deviceCreationLocations)
{
- if (args.PerfCapture() || args.PerIterCapture())
+ if (args.PerfCapture() || args.SaveTensor() || args.PerIterCapture())
Contributor:

Why do we want to worry about performance when saving tensors?
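For illustration, a hedged sketch of the decoupling these comments suggest (splitting the flags this way is my assumption, not the PR's code):

// Decide perf capture and tensor saving independently, so that
// -saveTensorData alone does not switch on performance capture.
bool capturePerf = args.PerfCapture() || args.PerIterCapture();
bool saveTensors = args.SaveTensor();
if (capturePerf || saveTensors)
{
    // run the iteration; start timers/counters only when capturePerf is true,
    // and dump output tensors only when saveTensors is true
}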

@ryanlai2 (Contributor):

Hey @laks0209, here are some tensor test ideas that @pmbrown1055 and I spoke about:

Run WinMLRunner to output tensor CSV files, and compare them against an expected tensor CSV file to make sure the values are correct.

With combinations of

  • CPU / GPU
  • Input image as PNG
  • Input image as CSV
  • Garbage data that is seeded so the output tensor is deterministic

A percentage threshold of error tolerance may be needed; a sketch of such a comparison follows.
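A minimal sketch of that comparison, assuming single-column CSV files of floats (the file layout and the tolerance value are assumptions, not from the PR):

#include <algorithm>
#include <cmath>
#include <fstream>
#include <string>
#include <vector>

// Read a single-column CSV of floats (assumed layout; one value per line).
std::vector<float> ReadCsvColumn(const std::string& path)
{
    std::vector<float> values;
    std::ifstream file(path);
    for (std::string line; std::getline(file, line);)
    {
        if (!line.empty())
        {
            values.push_back(std::stof(line));
        }
    }
    return values;
}

// Compare an actual tensor dump against an expected one within a relative tolerance.
bool TensorsMatch(const std::string& actualPath, const std::string& expectedPath,
                  float relativeTolerance = 0.01f)
{
    const auto actual = ReadCsvColumn(actualPath);
    const auto expected = ReadCsvColumn(expectedPath);
    if (actual.size() != expected.size())
    {
        return false;
    }
    for (size_t i = 0; i < actual.size(); ++i)
    {
        const float scale = std::max(std::fabs(expected[i]), 1.0f);
        if (std::fabs(actual[i] - expected[i]) > relativeTolerance * scale)
        {
            return false;
        }
    }
    return true;
}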

@laks0209 (Contributor, Author):

Thanks, Ryan, for the suggestions and corrections. I will update the code and add test cases per the conversation.

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 4 times, most recently from 7af8c3e to ac2e9de on January 24, 2019 19:01
@laks0209 (Contributor, Author) commented Jan 24, 2019:

Hi @ryanlai2, please review the latest commit with the changes and the tests.

@laks0209 laks0209 force-pushed the feature/OutputTensor branch from ac2e9de to 7b6cb13 on January 30, 2019 23:05
{
LearningModel model = nullptr;
output.PrintLoadingInfo(path);
model = LoadModelCryptography(path);
Contributor:

Why do we need this for saving output tensor results? Also, won't the line of code below:

model = LearningModel::LoadFromFilePath(path);

overwrite the model loaded here?

@laks0209 (Contributor, Author):

I have made the correction in the latest commit.

Assert::AreEqual(static_cast<size_t>(2), GetOutputCSVLineCount());
}


Contributor:

Nit: Whitespace

@laks0209 (Contributor, Author):

I have made the correction in the latest commit.

}
break;

case TensorKind::Int64:
@ryanlai2 (Contributor) Feb 1, 2019:

Can we add a test to test this case if we're going to add it?

Contributor:

Is there a model that we can use to test Int64, to verify that this code path works?

output.PrintLoadingInfo(path);
model = LoadModelCryptography(path);

model = LearningModel::LoadFromFilePath(path);
Contributor:

We already load the model below on line 25, so I think this line would be redundant. What do you think?

@laks0209 (Contributor, Author):

Yes, that's right. Sorry, I have updated it.

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 2 times, most recently from fae6075 to 28967ed on February 2, 2019 00:45
}

- void SetDefaultCSVFileNamePerIteration()
+ void SetDefaultFolder()
@ryanlai2 (Contributor) Feb 4, 2019:

Can we change this method name to capture the idea that it sets the folder for per-iteration run data?

Maybe: "SetDefaultPerIterationFolder"?

@ryanlai2 (Contributor) left a comment:

Would it be possible to include a model that takes an Int64 TensorKind as input, so that we can test that the code path works?

@laks0209 laks0209 force-pushed the feature/OutputTensor branch 2 times, most recently from b347e38 to 517a544 on February 5, 2019 23:05
@laks0209 laks0209 force-pushed the feature/OutputTensor branch from 517a544 to 253771b on February 5, 2019 23:16
@laks0209 (Contributor, Author) commented Feb 5, 2019:

Removed the Int64 TensorKind and changed the folder naming. Please review the latest commit :-)

@ryanlai2 ryanlai2 merged commit d2b1522 into microsoft:master Feb 6, 2019