
Allow changing the tensor dimension type (to 32b) (#3542)

Summary:
The previously used size_t is host-dependent, and a 64b dimension width is clearly overkill for most networks. A 32b data type adds more data-level parallelism for computations on tensor dimension data and simplifies address computations.

The patch adds a new typedef, glow::dim_t, with a signed counterpart, glow::sdim_t, both defined to a fixed-width type.

As a side effect, this simplifies the OpenCL kernel interface: because the dimension type is now fixed width, there is no need to pass the host size_t width to the kernels.

The 32b dimension type can be enabled via cmake -DTENSOR_DIMS_32_BITS=ON.
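
For illustration, here is a minimal sketch of what the migration looks like at a call site; the helper function and shape below are hypothetical, not taken from the patch, and only glow::dim_t from the new header is assumed:

    #include "glow/Base/DimType.h"
    #include <vector>

    using namespace glow;

    // Count the elements of a hypothetical shape using the fixed-width
    // dimension type instead of the host-dependent size_t.
    static dim_t countElements(const std::vector<dim_t> &shape) {
      dim_t n = 1;
      for (dim_t d : shape) {
        n *= d; // dim_t is 32b or 64b depending on TENSOR_DIMS_32_BITS.
      }
      return n;
    }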
Pull Request resolved: #3542

Test Plan: Tested with LLVM 7 (CPU backend) and OpenCL (pocl-phsa, x86_64), both with 32b and 64b index types. Passes the basic **ninja check** in both configurations.

Reviewed By: gcatron

Differential Revision: D18486877

Pulled By: bertmaher

fbshipit-source-id: 07f2963727568fa98a26960fe8dbac26f0439199
pjaaskel authored and facebook-github-bot committed Dec 3, 2019
1 parent 79be561 commit d2794f48c2eeeab3fb279a55f29a43257059ef4a
Showing with 3,148 additions and 2,983 deletions.
  1. +9 −0 CMakeLists.txt
  2. +11 −11 examples/char-rnn.cpp
  3. +11 −10 examples/fr2en.cpp
  4. +1 −1 examples/lenet-loader.cpp
  5. +3 −3 examples/mnist.cpp
  6. +14 −14 examples/ptb.cpp
  7. +1 −1 examples/resnet-runtime.cpp
  8. +8 −8 examples/training/resnet50/main.cpp
  9. +50 −0 include/glow/Base/DimType.h
  10. +43 −43 include/glow/Base/Tensor.h
  11. +1 −1 include/glow/Base/Traits.h
  12. +72 −58 include/glow/Base/Type.h
  13. +20 −20 include/glow/Graph/Graph.h
  14. +1 −1 include/glow/Graph/Node.h
  15. +1 −1 include/glow/Graph/NodeValue.h
  16. +2 −2 include/glow/Graph/Nodes.h
  17. +5 −5 include/glow/IR/IRBuilder.h
  18. +19 −18 include/glow/Importer/CommonOperatorLoader.h
  19. +2 −2 include/glow/Importer/ProtobufLoader.h
  20. +8 −0 include/glow/LLVMIRCodeGen/LLVMIRGen.h
  21. +5 −5 include/glow/Quantization/Base/Base.h
  22. +3 −3 lib/Backend/Backend.cpp
  23. +3 −2 lib/Backends/CPU/CMakeLists.txt
  24. +26 −29 lib/Backends/CPU/CPUBackend.cpp
  25. +4 −4 lib/Backends/CPU/CPULLVMIRGen.cpp
  26. +4 −4 lib/Backends/CPU/Transforms.cpp
  27. +591 −596 lib/Backends/CPU/libjit/libjit.cpp
  28. +147 −154 lib/Backends/CPU/libjit/libjit_conv.cpp
  29. +10 −8 lib/Backends/CPU/libjit/libjit_defs.h
  30. +50 −0 lib/Backends/CPU/libjit/libjit_dim_t.h
  31. +28 −28 lib/Backends/CPU/libjit/libjit_matmul.cpp
  32. +24 −25 lib/Backends/Interpreter/Interpreter.cpp
  33. +1 −1 lib/Backends/Interpreter/InterpreterFunction.cpp
  34. +1 −1 lib/Backends/Interpreter/InterpreterFunction.h
  35. +373 −372 lib/Backends/Interpreter/InterpreterNodes.cpp
  36. +5 −6 lib/Backends/OpenCL/ClassGen/OpenCLSpecificNodesVerification.h
  37. +26 −14 lib/Backends/OpenCL/OpenCL.cpp
  38. +3 −1 lib/Backends/OpenCL/OpenCLTensorLayout.cpp
  39. +2 −2 lib/Backends/OpenCL/Transforms.cpp
  40. +35 −55 lib/Backends/OpenCL/kernels.cl
  41. +19 −18 lib/Base/Image.cpp
  42. +26 −25 lib/Base/Tensor.cpp
  43. +1 −1 lib/Converter/Float16Converter.cpp
  44. +4 −4 lib/Exporter/ONNXModelWriter.cpp
  45. +6 −6 lib/Graph/Grad.cpp
  46. +67 −70 lib/Graph/Graph.cpp
  47. +1 −1 lib/Graph/Hook.cpp
  48. +1 −1 lib/Graph/Node.cpp
  49. +1 −1 lib/Graph/NodeValue.cpp
  50. +66 −67 lib/Graph/Nodes.cpp
  51. +3 −1 lib/Graph/TensorLayout.cpp
  52. +11 −11 lib/IR/IRBuilder.cpp
  53. +2 −2 lib/IR/IRGen.cpp
  54. +2 −2 lib/IR/IRUtils.cpp
  55. +18 −18 lib/Importer/Caffe2ModelLoader.cpp
  56. +17 −18 lib/Importer/ONNXModelLoader.cpp
  57. +6 −6 lib/LLVMIRCodeGen/BundleSaver.cpp
  58. +7 −1 lib/LLVMIRCodeGen/GlowJIT.cpp
  59. +1 −1 lib/LLVMIRCodeGen/LLVMBackend.cpp
  60. +134 −117 lib/LLVMIRCodeGen/LLVMIRGen.cpp
  61. +3 −3 lib/Onnxifi/Base.cpp
  62. +31 −31 lib/Optimizer/GraphOptimizer/GraphOptimizer.cpp
  63. +13 −13 lib/Optimizer/GraphOptimizer/Lower.cpp
  64. +5 −5 lib/Optimizer/IROptimizer/IROptimizer.cpp
  65. +3 −3 lib/Quantization/Base/Base.cpp
  66. +4 −4 tests/benchmark/AddBench.cpp
  67. +12 −12 tests/benchmark/BERTProxyLayerBench.cpp
  68. +16 −16 tests/benchmark/BatchGemmBench.cpp
  69. +8 −6 tests/benchmark/Bench.h
  70. +8 −8 tests/benchmark/GemmBench.cpp
  71. +5 −5 tests/benchmark/GemmParallelBench.cpp
  72. +13 −14 tests/benchmark/SLSBench.cpp
  73. +15 −15 tests/benchmark/TransposeBench.cpp
  74. +10 −0 tests/models/CMakeLists.txt
  75. +1 −1 tests/models/onnxModels/ArgMaxKeepDim.onnxtxt
  76. +1 −1 tests/models/onnxModels/maxPoolWithArgmax.onnxtxt
  77. +15 −14 tests/stress/ParameterSweepTest.cpp
  78. +1 −1 tests/stress/SparseLengthsSumTest.cpp
  79. +37 −37 tests/unittests/BackendCorrectnessTest.cpp
  80. +4 −4 tests/unittests/BackendTest.cpp
  81. +26 −26 tests/unittests/BackendTestUtils.cpp
  82. +9 −9 tests/unittests/BackendTestUtils.h
  83. +41 −41 tests/unittests/Caffe2ImporterTest.cpp
  84. +3 −3 tests/unittests/GemmTest.cpp
  85. +39 −41 tests/unittests/GradCheckTest.cpp
  86. +8 −9 tests/unittests/GraphGradTest.cpp
  87. +61 −61 tests/unittests/GraphOptzTest.cpp
  88. +17 −8 tests/unittests/GraphTest.cpp
  89. +11 −11 tests/unittests/HyphenTest.cpp
  90. +6 −6 tests/unittests/ImageTest.cpp
  91. +1 −1 tests/unittests/ImporterTestUtils.h
  92. +53 −55 tests/unittests/MLTest.cpp
  93. +6 −6 tests/unittests/OCLTest.cpp
  94. +78 −78 tests/unittests/OnnxImporterTest.cpp
  95. +1 −1 tests/unittests/OperatorGradTest.cpp
  96. +402 −405 tests/unittests/OperatorTest.cpp
  97. +17 −1 tests/unittests/PartitionerTest.cpp
  98. +11 −12 tests/unittests/QuantizationTest.cpp
  99. +39 −40 tests/unittests/RecommendationSystemTest.cpp
  100. +24 −24 tests/unittests/TensorsTest.cpp
  101. +4 −4 tests/unittests/TraceEventsTest.cpp
  102. +13 −18 tests/unittests/TypeAToTypeBFunctionConverterTest.cpp
  103. +4 −1 tools/ClassGen/InstrBuilder.cpp
  104. +13 −22 tools/ClassGen/InstrGen.cpp
  105. +3 −0 tools/ClassGen/MemberType.cpp
  106. +5 −0 tools/ClassGen/MemberType.h
  107. +5 −2 tools/ClassGen/NodeBuilder.cpp
  108. +4 −4 tools/ClassGen/NodeGen.cpp
  109. +4 −4 tools/loader/ImageClassifier.cpp
  110. +12 −12 tools/loader/TextTranslator.cpp
  111. +5 −5 tools/loader/XModelBuilder.cpp
  112. +2 −0 torch_glow/tests/functionality/weight_freezing_test.py
@@ -15,6 +15,7 @@ option(GLOW_BUILD_PYTORCH_INTEGRATION "Build integration for PyTorch" OFF)
option(GLOW_BUILD_TESTS "Build the tests" ON)
option(GLOW_WITH_BUNDLES "Build bundles" OFF)
option(LINK_PROTOBUF_AS_DLL "Link against protobuf build as dynamic libray." OFF)
option(TENSOR_DIMS_32_BITS "Set the max bitwidth of the tensor dimension and related indices to 32b instead of 64b." OFF)

set(CMAKE_CXX_STANDARD 14)
set(CXX_STANDARD_REQUIRED ON)
@@ -139,6 +140,14 @@ if (GLOW_WITH_HABANA)
"${HL_THUNK_LIB}")
endif ()

if (TENSOR_DIMS_32_BITS)
add_definitions(-DDIM_T_32)
set(LLVMCPURuntimeExtraFlags "-DDIM_T_32")
message(STATUS "Using 32b tensor dimensions.")
else()
message(STATUS "Using 64b tensor dimensions.")
endif ()

# Top level setup for external backends
ExternalBackendsInit()

@@ -96,9 +96,9 @@ static void loadText(Tensor &inputText, Tensor &nextChar, llvm::StringRef text,
// |ello| |World
// |llo |W|orld
// |lo W|o|rld
for (size_t i = 0; i < B; i++) {
for (size_t j = 0; j < S; j++) {
size_t c = clipASCII(text[i + j]);
for (dim_t i = 0; i < B; i++) {
for (dim_t j = 0; j < S; j++) {
dim_t c = clipASCII(text[i + j]);

IH.at({i, j, c}) = 1.0;
if (train) {
@@ -125,14 +125,14 @@ PseudoRNG &getRNG() {
/// to one. The algorithm that we use here picks a random number between zero
/// and one. Then, we scan the tensor and accumulate the probabilities. We stop
/// and pick the index when sum is greater than the selected random number.
static char getPredictedChar(Tensor &inputText, size_t slice, size_t word) {
static char getPredictedChar(Tensor &inputText, dim_t slice, dim_t word) {
auto IH = inputText.getHandle();

// Pick a random number between zero and one.
double x = std::abs(getRNG().nextRand());
double sum = 0;
// Accumulate the probabilities into 'sum'.
for (size_t i = 0; i < 128; i++) {
for (dim_t i = 0; i < 128; i++) {
sum += IH.at({slice, word, i});
// As soon as we cross the threshold return the index.
if (sum > x) {
@@ -159,8 +159,8 @@ static std::unique_ptr<llvm::MemoryBuffer> loadFile(llvm::StringRef filename) {
/// Creates a new RNN network. The network answers the question, given N chars
/// of input, what is the character following each one of these chars.
static Function *createNetwork(Module &mod, PlaceholderBindings &bindings,
size_t minibatchSize, size_t numSteps,
size_t hiddenSize) {
dim_t minibatchSize, dim_t numSteps,
dim_t hiddenSize) {
Function *F = mod.createFunction("main");

auto *X = mod.createPlaceholder(
@@ -212,10 +212,10 @@ int main(int argc, char **argv) {
LOG(INFO) << "Loaded " << text.size() << " chars.\n";
PlaceholderBindings inferBindings, trainingBindings;

const size_t numSteps = 50;
const size_t minibatchSize = 32;
const size_t batchSize = text.size() - numSteps;
const size_t hiddenSize = 256;
const dim_t numSteps = 50;
const dim_t minibatchSize = 32;
const dim_t batchSize = text.size() - numSteps;
const dim_t hiddenSize = 256;

CHECK_GT(text.size(), numSteps) << "Text is too short";
TrainingConfig TC;
@@ -175,7 +175,7 @@ struct Model {
Placeholder *embedding_fr_, *embedding_en_;
Node *encoderHiddenOutput_;

Placeholder *loadEmbedding(llvm::StringRef langPrefix, size_t langSize) {
Placeholder *loadEmbedding(llvm::StringRef langPrefix, dim_t langSize) {
auto &mod = EE_.getModule();
auto *result =
mod.createPlaceholder(ElemKind::FloatTy, {langSize, EMBEDDING_SIZE},
@@ -298,7 +298,7 @@ void Model::loadDecoder() {
auto *input = mod.createPlaceholder(ElemKind::Int64ITy, {batchSize_},
"decoder.input", false);
auto *inputTensor = bindings.allocate(input);
for (size_t i = 0; i < batchSize_; i++) {
for (dim_t i = 0; i < batchSize_; i++) {
inputTensor->getHandle<int64_t>().at({i}) = en_.word2index_["SOS"];
}

@@ -310,11 +310,12 @@ void Model::loadDecoder() {
ElemKind::FloatTy, {EMBEDDING_SIZE, HIDDEN_SIZE}, "decoder.w_hh", false);
auto *bHh = mod.createPlaceholder(ElemKind::FloatTy, {HIDDEN_SIZE},
"decoder.b_hh", false);
auto *outW = mod.createPlaceholder(ElemKind::FloatTy,
{EMBEDDING_SIZE, en_.index2word_.size()},
"decoder.out_w", false);
auto *outB = mod.createPlaceholder(
ElemKind::FloatTy, {en_.index2word_.size()}, "decoder.out_b", false);
auto *outW = mod.createPlaceholder(
ElemKind::FloatTy, {EMBEDDING_SIZE, (dim_t)en_.index2word_.size()},
"decoder.out_w", false);
auto *outB =
mod.createPlaceholder(ElemKind::FloatTy, {(dim_t)en_.index2word_.size()},
"decoder.out_b", false);
loadMatrixFromFile("fr2en/decoder_w_ih.bin", *bindings.allocate(wIh));
loadMatrixFromFile("fr2en/decoder_b_ih.bin", *bindings.allocate(bIh));
loadMatrixFromFile("fr2en/decoder_w_hh.bin", *bindings.allocate(wHh));
@@ -362,7 +363,7 @@ void Model::translate(const std::vector<std::string> &batch) {
Tensor seqLength(ElemKind::Int64ITy, {batchSize_});
input.zero();

for (size_t j = 0; j < batch.size(); j++) {
for (dim_t j = 0; j < batch.size(); j++) {
std::istringstream iss(batch[j]);
std::vector<std::string> words;
std::string word;
@@ -372,7 +373,7 @@ void Model::translate(const std::vector<std::string> &batch) {

CHECK_LE(words.size(), MAX_LENGTH) << "sentence is too long.";

for (size_t i = 0; i < words.size(); i++) {
for (dim_t i = 0; i < words.size(); i++) {
auto iter = fr_.word2index_.find(words[i]);
CHECK(iter != fr_.word2index_.end()) << "Unknown word: " << words[i];
input.getHandle<int64_t>().at({j, i}) = iter->second;
@@ -387,7 +388,7 @@ void Model::translate(const std::vector<std::string> &batch) {
auto OH = bindings.get(output_)->getHandle<int64_t>();
for (unsigned j = 0; j < batch.size(); j++) {
for (unsigned i = 0; i < MAX_LENGTH; i++) {
int64_t wordIdx = OH.at({i, j});
dim_t wordIdx = OH.at({i, j});
if (wordIdx == en_.word2index_["EOS"])
break;

@@ -57,6 +57,6 @@ int main() {

// Read output and find argmax.
auto out = bindings.get(output)->getHandle<float>();
printf("digit: %zu\n", out.minMaxArg().second);
printf("digit: %zu\n", (size_t)out.minMaxArg().second);
return 0;
}
@@ -158,16 +158,16 @@ void validateModel(ExecutionEngine &EE, PlaceholderBindings &bindings,
::glow::convertPlaceholdersToConstants(F, bindings, {inputPH, outputPH});
EE.compile(CompilationMode::Infer);

size_t rightAnswer = 0;
size_t offset = numIterations * minibatchSize;
dim_t rightAnswer = 0;
dim_t offset = numIterations * minibatchSize;
size_t sampleCounter = offset;
size_t iterations = 10;
std::vector<Tensor> estimates;
evalBatch(EE, bindings, iterations, sampleCounter, inputPH, outputPH,
imageInputs, labelInputs, F->getName(),
[&](const Tensor &sampleIn, const Tensor &sampleOut,
const Tensor &label, size_t sampleIndex) {
auto correct = label.getHandle<int64_t>().at({0, 0});
auto correct = label.getHandle<sdim_t>().at({0, 0});
auto guess = sampleOut.getHandle().minMaxArg().second;
rightAnswer += (guess == correct);
if (sampleIndex < offset + minibatchSize) {
@@ -55,8 +55,8 @@ llvm::cl::opt<std::string> dumpTrainingGraphDAGFileOpt(

} // namespace

unsigned loadPTB(Tensor &inputWords, Tensor &targetWords, size_t numSteps,
size_t vocabSize, size_t minibatchSize, size_t maxNumWords) {
unsigned loadPTB(Tensor &inputWords, Tensor &targetWords, dim_t numSteps,
dim_t vocabSize, dim_t minibatchSize, dim_t maxNumWords) {

std::ifstream ptbInput("ptb/simple-examples/data/ptb.train.txt");
CHECK(ptbInput.is_open()) << "Error loading ptb.train.txt";
@@ -112,9 +112,9 @@ unsigned loadPTB(Tensor &inputWords, Tensor &targetWords, size_t numSteps,
}

// Load the PTB database into two 3d tensors for word inputs and targets.
size_t batchLength = numWords / minibatchSize;
size_t numBatches = (batchLength - 1) / numSteps;
size_t numSequences = minibatchSize * numBatches;
dim_t batchLength = numWords / minibatchSize;
dim_t numBatches = (batchLength - 1) / numSteps;
dim_t numSequences = minibatchSize * numBatches;

// While we dont have embedding, we are using one-hot encoding to represent
// input words. To limit the size of the data we use an upper bound on the
@@ -125,7 +125,7 @@ unsigned loadPTB(Tensor &inputWords, Tensor &targetWords, size_t numSteps,
auto TIH = targetWords.getHandle<int64_t>();
for (unsigned batch = 0; batch < minibatchSize; batch++) {
for (unsigned iter = 0; iter < numBatches; iter++) {
size_t sequence = batch + iter * minibatchSize;
dim_t sequence = batch + iter * minibatchSize;
for (unsigned step = 0; step < numSteps; step++) {
int wordCounterId = step + iter * numSteps + batch * batchLength;
const std::string word1 = words[wordCounterId];
@@ -169,13 +169,13 @@ void testPTB() {
Tensor inputWords;
Tensor targetWords;

const size_t minibatchSize = 10;
const size_t numSteps = 10;
const size_t numEpochs = 20;
const dim_t minibatchSize = 10;
const dim_t numSteps = 10;
const dim_t numEpochs = 20;

const size_t hiddenSize = 20;
const size_t vocabSize = 500;
const size_t maxNumWords = 10000;
const dim_t hiddenSize = 20;
const dim_t vocabSize = 500;
const dim_t maxNumWords = 10000;

float learningRate = .1;

@@ -272,11 +272,11 @@ void testPTB() {

runBatch(EE, bindings, 1, sampleCounter, {X, Y},
{&inputWordsBatch, &targetWordsBatch}, tfName);
for (size_t step = 0; step < numSteps; step++) {
for (dim_t step = 0; step < numSteps; step++) {
for (unsigned int i = 0; i < minibatchSize; i++) {
auto T =
result->getHandle<float>().extractSlice(step * minibatchSize + i);
size_t correct = targetWords.getHandle<int64_t>().at(
dim_t correct = targetWords.getHandle<dim_t>().at(
{minibatchSize * batch + i, step});
float soft_guess = -std::log(T.getHandle<float>().at({correct}));
perplexity += soft_guess;
@@ -147,7 +147,7 @@ int main(int argc, char **argv) {

// Load model, create a context, and add to HostManager.

std::vector<size_t> inputShape{1, 3, 224, 224};
std::vector<dim_t> inputShape{1, 3, 224, 224};

Placeholder *input;
PlaceholderList phList;
@@ -119,7 +119,7 @@ void loadImagesAndLabels(Tensor &images, Tensor &labels) {
labelsH.at({n, 0}) = static_cast<unsigned long>(dbInput.get());
// ResNet50 model got trained in NCHW format.
for (unsigned c = 0; c < IMAGE_COLORS; ++c) {
auto bgrc = IMAGE_COLORS - 1 - c;
dim_t bgrc = IMAGE_COLORS - 1 - c;
for (unsigned h = 0; h < 32; ++h) {
for (unsigned w = 0; w < 32; ++w) {
// ResNet BGR color space vs CIFAR RGB.
@@ -139,11 +139,11 @@ int main(int argc, char **argv) {
" ResNet50 Training Example\n\n");

// We expect the input to be NCHW.
std::vector<size_t> allImagesDims = {CIFAR_NUM_IMAGES, IMAGE_COLORS,
IMAGE_HEIGHT, IMAGE_WIDTH};
std::vector<size_t> initImagesDims = {1, IMAGE_COLORS, IMAGE_HEIGHT,
IMAGE_WIDTH};
std::vector<size_t> allLabelsDims = {CIFAR_NUM_IMAGES, 1};
std::vector<dim_t> allImagesDims = {CIFAR_NUM_IMAGES, IMAGE_COLORS,
IMAGE_HEIGHT, IMAGE_WIDTH};
std::vector<dim_t> initImagesDims = {1, IMAGE_COLORS, IMAGE_HEIGHT,
IMAGE_WIDTH};
std::vector<dim_t> allLabelsDims = {CIFAR_NUM_IMAGES, 1};

ExecutionEngine EE(executionBackend);
auto &mod = EE.getModule();
@@ -201,7 +201,7 @@ int main(int argc, char **argv) {
// These tensors allocate memory for all images and labels prepared for
// training.
Tensor images(ElemKind::FloatTy, allImagesDims);
Tensor labels(ElemKind::Int64ITy, allLabelsDims);
Tensor labels(IndexElemKind, allLabelsDims);

loadImagesAndLabels(images, labels);

@@ -223,7 +223,7 @@ int main(int argc, char **argv) {
EE.run(bindings, tfName);
timer.stopTimer();

auto correct = labels.getHandle<int64_t>().raw(0);
auto correct = labels.getHandle<sdim_t>().raw(0);
auto guess = findMaxIndex(result->getHandle(), 10);
score += guess == correct;
++total;
@@ -0,0 +1,50 @@
/**
* Copyright (c) 2017-present, Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef GLOW_DIMENSION_TYPE_H
#define GLOW_DIMENSION_TYPE_H

#include <cinttypes>
#include <cstddef>
#include <cstdint>

namespace glow {

#ifdef DIM_T_32
// The dimensions of Tensors are stored with this type. Note: The same
// fixed width type is used both in the host and the possible co-processors
// handling tensor data. The bit width should be chosen carefully for maximum
// data level parallel execution.
using dim_t = uint32_t;
using sdim_t = int32_t;

#define PRIdDIM PRId32
#define PRIuDIM PRIu32

#else // DIM_T_32
using dim_t = uint64_t;
using sdim_t = int64_t;

#define PRIdDIM PRId64
#define PRIuDIM PRIu64

#endif // DIM_T_32

constexpr unsigned DIM_T_BITWIDTH = sizeof(dim_t) * 8;
constexpr unsigned SDIM_T_BITWIDTH = sizeof(sdim_t) * 8;

} // namespace glow

#endif
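
A small usage sketch for the new header, assuming only the typedefs, format macros, and bit-width constants shown above:

    #include "glow/Base/DimType.h"
    #include <cstdio>

    int main() {
      glow::dim_t d = 224;
      // PRIuDIM expands to the correct format specifier for either bit width.
      std::printf("dim = %" PRIuDIM ", bits = %u\n", d, glow::DIM_T_BITWIDTH);
      // sdim_t is the signed counterpart, e.g. for offsets that may be negative.
      glow::sdim_t offset = -1;
      std::printf("offset = %" PRIdDIM "\n", offset);
      return 0;
    }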
