
Commit

Apply clang-tidy fixes (#102)
Apply fixes to examples
Apply fixes to client headers
Change datatype enums
Apply fixes to core headers
Change constant format in clang-tidy
Apply to batching and bindings
Apply to buffers
Apply to clients
Apply to core
Apply to observation
Apply to pre_post
Apply to servers
Apply to util
Apply to workers
Apply to tests
Apply more passes
Apply iwyu changes

Signed-off-by: Varun Sharma <varun.sharma@amd.com>
varunsh-xilinx committed Jan 4, 2023
1 parent 28b3c8f commit 1accec6
Showing 158 changed files with 3,261 additions and 2,737 deletions.
18 changes: 7 additions & 11 deletions .clang-tidy
@@ -18,6 +18,7 @@ Checks: >
   *,
   -altera-*,
   -cppcoreguidelines-non-private-member-variables-in-classes,
+  -cppcoreguidelines-pro-bounds-array-to-pointer-decay,
   -cppcoreguidelines-pro-bounds-pointer-arithmetic,
   -cppcoreguidelines-pro-type-reinterpret-cast,
   -fuchsia-*,
@@ -38,6 +39,9 @@ Checks: >
 # cppcoreguidelines-non-private-member-variables-in-classes
 # We allow protected member variables in virtual classes.
 #
+# cppcoreguidelines-pro-bounds-array-to-pointer-decay
+# This gets triggered on using assert() in clang-tidy-10
+#
 # cppcoreguidelines-pro-bounds-pointer-arithmetic
 # We use pointer math to get at underlying buffer data
 #
@@ -91,23 +95,17 @@ CheckOptions:
   - key: readability-identifier-naming.ClassCase
     value: CamelCase
   - key: readability-identifier-naming.ClassConstantCase
-    value: CamelCase
-  - key: readability-identifier-naming.ClassConstantPrefix
-    value: k
+    value: lower_case
   - key: readability-identifier-naming.ClassMemberCase
     value: lower_case
   - key: readability-identifier-naming.ClassMemberSuffix
     value: _
   - key: readability-identifier-naming.ClassMethodCase
     value: camelBack
   - key: readability-identifier-naming.ConstantCase
-    value: CamelCase
-  - key: readability-identifier-naming.ConstantPrefix
-    value: k
+    value: lower_case
   - key: readability-identifier-naming.ConstantMemberCase
-    value: CamelCase
-  - key: readability-identifier-naming.ConstantMemberPrefix
-    value: k
+    value: lower_case
   - key: readability-identifier-naming.ConstantParameterCase
     value: lower_case
   - key: readability-identifier-naming.ConstantPointerParameterCase
@@ -124,8 +122,6 @@ CheckOptions:
     value: CamelCase
   - key: readability-identifier-naming.EnumConstantCase
     value: CamelCase
-  - key: readability-identifier-naming.EnumConstantPrefix
-    value: k
   - key: readability-identifier-naming.FunctionCase
     value: camelBack
   - key: readability-identifier-naming.GlobalConstantCase
2 changes: 1 addition & 1 deletion LICENSE
@@ -240,7 +240,7 @@ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 --------------------------------------------------------------------------------
-For src/amdinfer/util/ctpl.h
+For src/amdinfer/util/ctpl.hpp
 
 Copyright (C) 2014 by Vitaliy Vitsentiy
 Copyright (C) 2021 Xilinx, Inc.
2 changes: 1 addition & 1 deletion docs/dependencies.rst
@@ -231,7 +231,7 @@ The following files are included in the AMD Inference Server repository under th
 bicycle-384566_640.jpg,`Pixabay <https://pixabay.com/photos/bicycle-bike-biking-sport-cycle-384566/>`__,`bicycle-384566_640.jpg <https://cdn.pixabay.com/photo/2014/07/05/08/18/bicycle-384566_640.jpg>`__,`Pixabay License <https://pixabay.com/service/license/>`_,Used for testing\ :superscript:`d 0`
 CodeCoverage.cmake,:github:`bilke/cmake-modules`,`CodeCoverage.cmake <https://github.com/bilke/cmake-modules/blob/master/CodeCoverage.cmake>`__,BSD-3,CMake module for test coverage measurement\ :superscript:`d 0`
 crowd.jpg,`Flickr <https://www.flickr.com/photos/mattmangum/2306189268/>`__,`2306189268_88cc86b30f_z.jpg <https://farm3.staticflickr.com/2009/2306189268_88cc86b30f_z.jpg>`__,`CC BY 2.0 <https://creativecommons.org/licenses/by/2.0/legalcode>`_,Used for testing\ :superscript:`d 0`
-ctpl.h,:github:`vit-vit/CTPL`,`ctpl.h <https://github.com/vit-vit/CTPL/blob/master/ctpl.h>`__,Apache 2.0,C++ Thread pool library\ :superscript:`a 0`
+ctpl.hpp,:github:`vit-vit/CTPL`,`ctpl.h <https://github.com/vit-vit/CTPL/blob/master/ctpl.h>`__,Apache 2.0,C++ Thread pool library\ :superscript:`a 0`
 dog-3619020_640.jpg,`Pixabay <https://pixabay.com/photos/dog-spitz-smile-ginger-home-pet-3619020/>`__,`dog-3619020_640.jpg <https://cdn.pixabay.com/photo/2018/08/20/14/08/dog-3619020_640.jpg>`__,`Pixabay License <https://pixabay.com/service/license/>`_,Used for testing\ :superscript:`d 0`
 nine_9273.jpg,`Keras MNIST dataset <https://keras.io/api/datasets/mnist/>`__,?,`CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0/legalcode>`__,Used for testing\ :superscript:`d 0`
 amdinferConfig.cmake,:github:`alexreinking/SharedStaticStarter`,`SomeLibConfig.cmake <https://github.com/alexreinking/SharedStaticStarter/blob/master/packaging/SomeLibConfig.cmake>`__,MIT,CMake module for installing libraries\ :superscript:`a 0`
2 changes: 1 addition & 1 deletion docs/example_resnet50_cpp.rst
@@ -198,7 +198,7 @@ You can also make a single asynchronous request to the server where you get back
    :start-after: +validate
    :end-before: -validate
    :language: cpp
-   :dedent: 2
+   :dedent: 6
 
 There are also some helper methods that wrap the basic inference APIs provided by the client.
 The ``inferAsyncOrdered`` method accepts a vector of requests, makes all the requests asynchronously using the ``modelInferAsync`` API, waits until each request completes, and then returns a vector of responses.
2 changes: 1 addition & 1 deletion docs/quickstart_inference.rst
@@ -94,7 +94,7 @@ Each tensor must have a name, a data type, an associated shape and the data itse
    amdinfer::InferenceRequestInput input_tensor;
    // depending on the implementation, the string used here may be significant
    input_tensor.setName("input_0");
-   input_tensor.setDatatype(amdinfer::DataType::INT64);
+   input_tensor.setDatatype(amdinfer::DataType::Int64);
    input_tensor.setShape({2, 3});
    // the data should be flattened
    std::vector<uint64_t> data{{1, 2, 3, 4, 5, 6}};
136 changes: 75 additions & 61 deletions examples/resnet50/migraphx.cpp
@@ -19,11 +19,13 @@
  * online for discussion around this example.
  */
 
 #include <algorithm>          // for copy, max
+#include <array>              // for array
 #include <cassert>            // for assert
 #include <chrono>             // for duration
 #include <cstdint>            // for uint64_t
 #include <cstdlib>            // for exit, getenv
+#include <exception>          // for exception
 #include <filesystem>         // for path, oper...
 #include <initializer_list>   // for initialize...
 #include <iostream>           // for operator<<
@@ -55,6 +57,7 @@ Images preprocess(const std::vector<std::string>& paths) {
   const std::array<float, 3> mean{0.485F, 0.456F, 0.406F};
   const std::array<float, 3> std{4.367F, 4.464F, 4.444F};
   const auto image_size = 224;
+  const auto convert_scale = 1 / 255.0;
 
   // this example uses a custom image preprocessing function. You may use any
   // preprocessing logic or skip it entirely if your input data is already
@@ -70,7 +73,7 @@ Images preprocess(const std::vector<std::string>& paths) {
   options.color_code = cv::COLOR_BGR2RGB;
   options.convert_type = true;
   options.type = CV_32FC3;
-  options.convert_scale = 1.0 / 255.0;
+  options.convert_scale = convert_scale;
   return amdinfer::pre_post::imagePreprocess(paths, options);
 }
 
@@ -106,8 +109,9 @@ std::vector<amdinfer::InferenceRequest> constructRequests(const Images& images,
 
   for (const auto& image : images) {
     requests.emplace_back();
+    // NOLINTNEXTLINE(google-readability-casting)
     requests.back().addInputTensor((void*)image.data(), shape,
-                                   amdinfer::DataType::FP32);
+                                   amdinfer::DataType::Fp32);
   }
 
   return requests;
@@ -182,64 +186,74 @@ Args getArgs(int argc, char** argv) {
 }
 
 int main(int argc, char* argv[]) {
-  std::cout << "Running the MIGraphX example for ResNet50 in C++\n";
-
-  Args args = getArgs(argc, argv);
-
-  const auto http_port_str = std::to_string(args.http_port);
-  const auto server_addr = "http://" + args.ip + ":" + http_port_str;
-  amdinfer::HttpClient client{server_addr};
-
-  std::optional<amdinfer::Server> server;
-  if (args.ip == "127.0.0.1" && !client.serverLive()) {
-    std::cout << "No server detected. Starting locally...\n";
-    server.emplace();
-    server.value().startHttp(args.http_port);
-  } else if (!client.serverLive()) {
-    throw amdinfer::connection_error("Could not connect to server at " +
-                                     server_addr);
+  try {
+    std::cout << "Running the MIGraphX example for ResNet50 in C++\n";
+
+    Args args = getArgs(argc, argv);
+
+    const auto http_port_str = std::to_string(args.http_port);
+    const auto server_addr = "http://" + args.ip + ":" + http_port_str;
+    amdinfer::HttpClient client{server_addr};
+
+    std::optional<amdinfer::Server> server;
+    if (args.ip == "127.0.0.1" && !client.serverLive()) {
+      std::cout << "No server detected. Starting locally...\n";
+      server.emplace();
+      server.value().startHttp(args.http_port);
+    } else if (!client.serverLive()) {
+      std::cerr << "Could not connect to server at " << server_addr << "\n";
+      return 1;
+    } else {
+      // the server is reachable so continue on
+    }
+
+    std::cout << "Waiting until the server is ready...\n";
+    amdinfer::waitUntilServerReady(&client);
+
+    std::cout << "Loading worker...\n";
+    std::string endpoint = load(&client, args);
+
+    std::vector<std::string> paths = resolveImagePaths(args.path_to_image);
+    Images images = preprocess(paths);
+
+    std::vector<amdinfer::InferenceRequest> requests =
+      constructRequests(images, args.input_size);
+
+    assert(paths.size() == requests.size());
+    const auto num_requests = requests.size();
+
+    std::cout << "Making inference...\n";
+    auto start = std::chrono::high_resolution_clock::now();
+
+    // +validate:
+    // migraphx.cpp
+    std::vector<amdinfer::InferenceResponse> responses =
+      amdinfer::inferAsyncOrdered(&client, endpoint, requests);
+    assert(num_requests == responses.size());
+
+    for (auto i = 0U; i < num_requests; ++i) {
+      const amdinfer::InferenceResponse& response = responses[i];
+      assert(!response.isError());
+
+      std::vector<amdinfer::InferenceResponseOutput> outputs =
+        response.getOutputs();
+      // for resnet50, we expect a single output tensor
+      assert(outputs.size() == 1);
+      std::vector<int> top_indices = postprocess(outputs[0], args.top);
+      printLabel(top_indices, args.path_to_labels, paths[i]);
+    }
+    // -validate:
+    auto stop = std::chrono::high_resolution_clock::now();
+    std::chrono::duration<double, std::milli> duration = stop - start;
+    std::cout << "Time taken for inference, postprocessing and printing: "
+              << duration.count() << " ms\n";
+
+    return 0;
+  } catch (const amdinfer::runtime_error& e) {
+    std::cerr << e.what() << "\n";
+    return 1;
+  } catch (const std::exception& e) {
+    std::cerr << e.what() << "\n";
+    return 1;
   }
-
-  std::cout << "Waiting until the server is ready...\n";
-  amdinfer::waitUntilServerReady(&client);
-
-  std::cout << "Loading worker...\n";
-  std::string endpoint = load(&client, args);
-
-  std::vector<std::string> paths = resolveImagePaths(args.path_to_image);
-  Images images = preprocess(paths);
-
-  std::vector<amdinfer::InferenceRequest> requests =
-    constructRequests(images, args.input_size);
-
-  assert(paths.size() == requests.size());
-  const auto num_requests = requests.size();
-
-  std::cout << "Making inference...\n";
-  auto start = std::chrono::high_resolution_clock::now();
-
-  // +validate:
-  // migraphx.cpp
-  std::vector<amdinfer::InferenceResponse> responses =
-    amdinfer::inferAsyncOrdered(&client, endpoint, requests);
-  assert(num_requests == responses.size());
-
-  for (auto i = 0U; i < num_requests; ++i) {
-    const amdinfer::InferenceResponse& response = responses[i];
-    assert(!response.isError());
-
-    std::vector<amdinfer::InferenceResponseOutput> outputs =
-      response.getOutputs();
-    // for resnet50, we expect a single output tensor
-    assert(outputs.size() == 1);
-    std::vector<int> top_indices = postprocess(outputs[0], args.top);
-    printLabel(top_indices, args.path_to_labels, paths[i]);
-  }
-  // -validate:
-  auto stop = std::chrono::high_resolution_clock::now();
-  std::chrono::duration<double, std::milli> duration = stop - start;
-  std::cout << "Time taken for inference, postprocessing and printing: "
-            << duration.count() << " ms\n";
-
-  return 0;
 }
