Automatic update of fbcode/foxi to 8f74bc4df3a4cfc69b1a3eadf62aa29d9961c72d AND update Glow AND update C2 (pytorch#19792)

Summary:
Pull Request resolved: pytorch#19792

This diff also contains the contents of D15092641 and D15090411, so as not to let c2, foxi, and glow get out of sync.

Previous import was 81e1683d6348eee4b5ed1145222dc2c41be4269c

Included changes:
- **[8f74bc4](houseroad/foxi@8f74bc4)**: Small fixes (#12) <Jack Montgomery>
- **[72097e4](houseroad/foxi@72097e4)**: Add multiple quantization params per tensor (#11) <Jack Montgomery>
- **[b681fe0](houseroad/foxi@b681fe0)**: Merge pull request #10 from jackm321/add_autoinstrument_graph_prop <Jack Montgomery>
- **[a68d835](houseroad/foxi@a68d835)**: Add ONNXIFI_GRAPH_PROPERTY_AUTO_INSTRUMENT_NODES <Jack Montgomery>

Reviewed By: rdzhabarov, zrphercule

Differential Revision: D15086794

fbshipit-source-id: 8df02c62303b580e16a218d6be7791747e3d7213
jackm321 authored and facebook-github-bot committed Apr 26, 2019
1 parent 7a8bc85 commit 48d5ab5
Showing 1 changed file with 9 additions and 7 deletions: caffe2/operators/onnxifi_op.cc
@@ -48,9 +48,10 @@ void SetInputTensorDescriptorTypeAndBuffer(
     CAFFE_THROW(
         "Unsupported Int8Tensor type in ONNXIFI: ", cpu_tensor.dtype().name());
   }
-  desc->is_quantized = true;
-  desc->scale = cpu_int8tensor.scale;
-  desc->bias = cpu_int8tensor.zero_point;
+  desc->quantizationParams = 1;
+  desc->quantizationAxis = 1;
+  desc->scales = &cpu_int8tensor.scale;
+  desc->biases = &cpu_int8tensor.zero_point;
 }
 
 TypeMeta OnnxifiTypeToDataType(uint64_t onnxifi_type) {
@@ -89,9 +90,10 @@ void SetOutputTensorDescriptorTypeAndBuffer(
 
   desc->buffer = reinterpret_cast<onnxPointer>(
       cpu_tensor->raw_mutable_data(OnnxifiTypeToDataType(onnxifi_type)));
-  desc->is_quantized = true;
-  desc->scale = cpu_int8tensor->scale;
-  desc->bias = cpu_int8tensor->zero_point;
+  desc->quantizationParams = 1;
+  desc->quantizationAxis = 1;
+  desc->scales = &cpu_int8tensor->scale;
+  desc->biases = &cpu_int8tensor->zero_point;
 }
 void BlobToTensorDescriptor(
     const std::string& name,
@@ -131,7 +133,7 @@ void BlobToTensorDescriptor(
     desc->dimensions = shape.size();
     shapes->emplace_back(shape.cbegin(), shape.cend());
     desc->shape = shapes->back().data();
-    desc->is_quantized = 0;
+    desc->quantizationParams = 0;
   }
 }
 } // namespace
