* Fix Docker image build on Linux
* Build Docker images in CI
* Fix bash syntax
* Only load, not push
* Parallelize Docker build
It's currently the slowest step.
* Only build Linux images
This PR adds the ability to estimate the per-deployment and per-allocation memory usage of NLP transformer models. It uses torch.profiler to log the peak memory usage during inference.
This information is then used in Elasticsearch to provision models with sufficient memory (elastic/elasticsearch#98874).
Co-authored-by: David Olaru <dolaru@elastic.co>
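A minimal sketch of the profiling approach described above, assuming a toy stand-in model (the real code profiles actual transformer inference; the model, shapes, and aggregation here are illustrative only):

```python
# Measure memory allocated during a single inference pass with torch.profiler.
# The Linear layer is a hypothetical stand-in for an NLP transformer model.
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(128, 64)  # stand-in model, not the real workload
inputs = torch.randn(8, 128)

with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    with torch.no_grad():
        model(inputs)

# Rough memory figure: total CPU memory allocated by ops during the pass.
# (The real estimator's aggregation may differ.)
peak_bytes = sum(
    e.self_cpu_memory_usage
    for e in prof.key_averages()
    if e.self_cpu_memory_usage > 0
)
print(peak_bytes)
```

A figure like this, taken per deployment and per allocation, is what Elasticsearch can then use to provision sufficient memory.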
* Reduce Docker image size from 4.8GB to 2.2GB
* Use torch+cpu variant if target platform is linux/amd64
Avoids downloading the large, unnecessary NVIDIA dependencies declared by the default package on PyPI
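A sketch of how the CPU-only wheel could be selected in a Dockerfile, assuming PyTorch's public CPU wheel index (the base image and exact install line are illustrative, not the PR's actual Dockerfile):

```dockerfile
# Hypothetical excerpt: on linux/amd64, install the CPU-only torch wheel
# so the image skips the NVIDIA/CUDA dependencies of the default PyPI wheel.
FROM python:3.10-slim
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
```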
* Build linux/arm64 image using buildx and QEMU
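The cross-build step could look roughly like the following, assuming a hypothetical image tag `my-image:arm64` (the real CI wiring and tags differ):

```shell
# Register QEMU binfmt handlers so the host can emulate arm64 binaries.
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Create and select a buildx builder, then cross-build for linux/arm64.
docker buildx create --use
docker buildx build --platform linux/arm64 -t my-image:arm64 --load .
```

`--load` keeps the result in the local image store rather than pushing it, matching the "Only load, not push" change above.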