I discovered an outlier while parsing a PE file where the table definition for a StandAloneSignature token is significantly larger than the file itself: in my case, 2 bytes per row and 286331153 rows, or about 572.7 MB needed, while the source file is only 532992 bytes. The parser reads well past the end of the file (like the other issue I highlighted).
The root cause is in the List ParseTableDefinitions() function of the MetaDataTablesHdr class, when the number of rows per table is set. There needs to be an escape or a sanity check of the row counts against the file size, and possibly another one later when the offsets of the tables are set. (Not really sure, but right now it just keeps filling up memory until it runs out.)
The next table definition has an offset of 572672716
The next table after that one has an offset of 2290659634
For me, it gets triggered when checking whether a PE instance's TypeRefHash property is null (doing it in an if condition).
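For context, a minimal sketch of how that access looks on my end; the file path is hypothetical and I'm assuming the usual PeFile constructor that takes a path:

using System;
using PeNet; // assumption: the namespace that exposes the PeFile type used below

class Repro
{
    static void Main()
    {
        // Hypothetical path to the sample referenced below. Per the report,
        // simply touching TypeRefHash kicks off the metadata table parsing.
        var peFile = new PeFile(@"C:\samples\fa15e258.bin");

        if (peFile.TypeRefHash != null)
        {
            Console.WriteLine(peFile.TypeRefHash);
        }
    }
}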
A simple workaround is to ignore headers with incorrect/invalid row counts, i.e. any table whose row count is larger than the length of the file minus the startOfTableDefinitions offset. I don't see a quick proper fix, but ignoring those tables works. The same sanity check could be added further down in the function, after the block size is extracted, to deal with situations where the bytes read would exceed the same limit (sketched after the workaround below). Regardless, at least this way it won't read to the point of running out of memory.
This is the workaround I used:

// This comes after the if statement where the tables[i].Name property is set
// (around line 260 of MetaDataTablesHdr).
if (tables[i].RowCount > PeFile.Length - startOfTableDefinitions)
{
    // The row count can't possibly be valid, so zero it out.
    // Is there a way to calculate the real value from an anchor such as the next section offset?
    tables[i].RowCount = 0;
    // There should also be a way to flag this so it's clear there was a data structure error.
}
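For completeness, a hedged sketch of the second check mentioned above, placed inside the same function after the block size of a table is known; blockSize and tableOffset are placeholder names, not the library's actual locals:

// Placeholder sketch: skip tables whose data would run past the end of the file.
var bytesNeeded = (long)tables[i].RowCount * blockSize; // blockSize: bytes per row, hypothetical local
if (tableOffset + bytesNeeded > PeFile.Length)          // tableOffset: start of this table's rows, hypothetical local
{
    // Reading this table would exceed the file length; treat it as invalid too.
    tables[i].RowCount = 0;
}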
Sample to recreate the issue: fa15e258ba4d3bf1ebc527f59c845e963014f7b39ade2590ddabb43fe420757e
Every table now contains an indicator of whether it could be parsed, for example MdtHdr?.TableDefinitions[(int)MetadataToken.TypeReference].IsInvalid. TypeRefHash uses this property to check whether the TypeRef table is invalid before computing the hash; in that case, null is returned instead of a hash.
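A minimal sketch of how a caller could combine the new flag with the existing null check; how MdtHdr is reached from a parsed file is an assumption on my side, I'm only following the expression above:

// mdtHdr stands in for however the metadata tables header is exposed on the parsed file.
var mdtHdr = peFile.MetaDataTablesHdr; // assumed property name, based on the class name in the report
var typeRefTable = mdtHdr?.TableDefinitions[(int)MetadataToken.TypeReference];

if (typeRefTable == null || typeRefTable.IsInvalid)
{
    // The TypeRef table is missing or unparsable, so TypeRefHash returns null instead of a hash.
    Console.WriteLine("TypeRef table invalid - no TypeRefHash available.");
}
else
{
    Console.WriteLine(peFile.TypeRefHash);
}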
@Bobbymac001 Could you give feedback on whether that solves the issue? The new version will be out in the next few hours.