Conversation

@LeicongLi
@LeicongLi LeicongLi commented Feb 19, 2019

What changes were proposed in this pull request?

  1. Added 5 fields (inputDimMasks, inputScales, outputDimMasks, outputScales, isMklInt8Enabled) to the protobuf definition and updated the related auto-generated code.
  2. Added the trait MklInt8Convertible, which holds the data and methods needed to compute the information required for converting from fp32 to int8.
  3. Added functions to serialize and deserialize the MKL-DNN Int8 related values.
  4. Made the MKL-DNN Linear module inherit the MKL-DNN Int8 trait.
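Point 2 above can be sketched as a plain-Scala trait. The field names below follow the protobuf fields listed in point 1, but the bodies are hypothetical illustrations, not the actual BigDL implementation:

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch only: field names mirror the protobuf fields above
// (input/output dim masks and scales); the real MklInt8Convertible trait
// in BigDL may differ in both shape and behavior.
trait Int8ConvertibleSketch {
  protected var inputDimMask: Int = 0
  protected var outputDimMask: Int = 0
  protected val inputScales: ArrayBuffer[Array[Float]] = ArrayBuffer.empty
  protected val outputScales: ArrayBuffer[Array[Float]] = ArrayBuffer.empty

  def setInputDimMask(mask: Int): Unit = inputDimMask = mask
  def getInputDimMask: Int = inputDimMask
  def appendInputScales(scale: Array[Float]): Unit = inputScales += scale
  def getInputScales: Seq[Array[Float]] = inputScales.toSeq
}
```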

How was this patch tested?

  1. Construct a module (Linear in this case).
  2. Generate and hold random values.
  3. Assign the random values to the fields above.
  4. Save the module.
  5. Load the module.
  6. Verify that the values stored in the fields match the values assigned in step 3.
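The round trip described above can be sketched in plain Scala. Java serialization stands in for BigDL's protobuf save/load path, and FakeInt8Module is a hypothetical stand-in for the Linear module, so this only illustrates the shape of the check, not the actual test code:

```scala
import java.io._

// Hypothetical stand-in for a module carrying the new Int8 fields; the real
// test would construct a BigDL Linear(3, 4) and use the module save/load path.
case class FakeInt8Module(inputDimMask: Int, inputScales: Array[Float])

// Save to bytes and load back, mimicking steps 4-5 above.
def roundTrip(m: FakeInt8Module): FakeInt8Module = {
  val buf = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buf)
  out.writeObject(m)
  out.close()
  val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
  in.readObject().asInstanceOf[FakeInt8Module]
}
```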


Related links or issues (optional)

fixed https://github.com/intel-analytics/BigDL/issues/XXX

@LeicongLi LeicongLi changed the title from "Scaleandmask" to "Scale and Mask" on Feb 19, 2019

// ================================= Internal APIs ===========================================

private var protobufTest : String = Integer.toHexString(java.util.UUID.randomUUID().hashCode())
Contributor


What is this for?

(implicit ev: TensorNumeric[T]): Unit = {

val protobufModel = context.bigdlModule
module.setProtobufTest(protobufModel.getProtobufTest)
Contributor


Not needed?

// mask == 0 means we use a single scale, which is the max of the whole tensor (input/output).
// scalesOfInput maintains the max value of the input tensor
// scalesOfOutput maintains the max value of the output tensor
var scalesOfInput: Int8ScalesAndMask = new Int8ScalesAndMask(Int8ScalesAndMask.SINGLE_SCALE)
Contributor


I prefer not to initialize it here, as it's only used for Int8; it should stay null in the other cases.
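As a sketch of the mask == 0 (single-scale) case described in the code comment above: the scale reduces to the max absolute value over the whole tensor. This is a plain-Scala illustration over a flat array, not the BigDL implementation:

```scala
// mask == 0: one scale for the whole tensor, taken as the max absolute value.
// A nonzero mask would instead select dimensions and produce per-slice scales;
// only the single-scale case is sketched here.
def singleScale(tensor: Array[Float]): Array[Float] =
  Array(tensor.map(math.abs).max)
```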

@wzhongyuan
Contributor

Please add unit tests for it

@jason-dai
Contributor

I don't think we should put these in AbstractModule; instead, define a subclass like MklInt8Module, similar to QuantizedModule or MklDnnLayer

@wzhongyuan
Contributor

@jason-dai yes, that sounds better

* @tparam B Output data type
* @tparam T The numeric type in this module parameters.
*/

Contributor


What are these added blank lines for?

* @return Unit
*/
private def updateScalesHelper(scales: ArrayBuffer[Array[Float]],
scale: Array[Float], index: Int): Unit = {
Contributor


The indentation of scale is not right.
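The review point is that continuation parameters should align under the first parameter. A correctly aligned version might look like the sketch below; the body is hypothetical (grow the buffer as needed and store the scale at the index), and the private modifier is dropped so the sketch compiles standalone:

```scala
import scala.collection.mutable.ArrayBuffer

// Continuation lines aligned under the first parameter, per the review comment.
// The body is an illustrative guess, not the actual BigDL helper.
def updateScalesHelper(scales: ArrayBuffer[Array[Float]],
                       scale: Array[Float],
                       index: Int): Unit = {
  while (scales.length <= index) scales += Array.empty[Float]
  scales(index) = scale
}
```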

*/
protected def createSerializeBigDLModule[T: ClassTag](modelBuilder : BigDLModule.Builder,
context: SerializeContext[T])
(implicit ev: TensorNumeric[T]): SerializeResult = {
Contributor


The syntax is a little strange.

Author


Thanks. I will fix these syntax issues.

*/
trait MklInt8Convertible {
// input dimension mask
protected var inDimMask: Int = 0
Contributor


By BigDL convention, I prefer to use inputDimMask; similarly, outputDimMask below.


"Saving and loading scale and mask" should "work properly" in {

val myModel = Linear(3, 4)
Contributor


Add a unit test for empty scales.

@wzhongyuan
Contributor

I think there are other layers that need to implement this new trait. We should also add it to the mkl-blas layers. @i8run

@LeicongLi LeicongLi closed this Mar 12, 2019
dding3 pushed a commit to dding3/BigDL that referenced this pull request Nov 17, 2021
