Build a Triton Inference Server with MNIST Example: Part (1/4)
Author(s): Hao WEI. Originally published on Towards AI.

A step-by-step coding practice. In this series of blogs, we will walk through how to build a Triton Inference Server, host it on AWS ECS, and accelerate inference using TensorRT. We will use …