Catarina Pereira
Deployed ML model with FastAPI and Docker. Inference time under 100ms! Used ONNX Runtime for optimized inference. Implemented batching for throughput optimization. Horizontal scaling with Kubernetes handles load spikes. From Jupyter notebook to production in 2 weeks. MLOps maturity achieved! #mlops #deployment #ai #fastapi
2 months ago
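The post names the stack (FastAPI + ONNX Runtime) but shows no code. A minimal sketch of what such a service typically looks like, assuming a hypothetical model file "model.onnx", input tensor name, and request schema, since none of these are given in the post:

```python
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the ONNX session once at startup; ONNX Runtime sessions are
# safe to share across requests for inference.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

class PredictRequest(BaseModel):
    features: list[float]  # hypothetical flat feature vector

@app.post("/predict")
def predict(req: PredictRequest):
    # Shape the input as a batch of one; dtype must match the exported model.
    x = np.asarray([req.features], dtype=np.float32)
    outputs = session.run(None, {input_name: x})
    return {"prediction": outputs[0].tolist()}
```

Served locally with `uvicorn main:app`, this is also the layer that gets wrapped in a Docker image and scaled horizontally behind Kubernetes, as the post describes.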
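The post does not say how batching was implemented. One common approach is server-side micro-batching: buffer incoming requests briefly, then run a single batched forward pass. A sketch under that assumption, reusing the `session` and `input_name` from the snippet above; `MAX_BATCH` and `MAX_WAIT_S` are illustrative values, not from the post:

```python
import asyncio
import numpy as np

MAX_BATCH = 32        # illustrative cap on batch size
MAX_WAIT_S = 0.005    # flush after 5 ms even if the batch is not full

queue: asyncio.Queue = asyncio.Queue()

async def infer(x: np.ndarray) -> list:
    # Called from a request handler: enqueue the input, await the result.
    fut = asyncio.get_running_loop().create_future()
    await queue.put((x, fut))
    return await fut

async def batch_worker():
    # Start once at app startup, e.g. asyncio.create_task(batch_worker()).
    while True:
        items = [await queue.get()]  # block for the first request
        deadline = asyncio.get_running_loop().time() + MAX_WAIT_S
        while len(items) < MAX_BATCH:
            timeout = deadline - asyncio.get_running_loop().time()
            if timeout <= 0:
                break
            try:
                items.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        # One batched forward pass amortizes per-request overhead.
        batch = np.stack([x for x, _ in items]).astype(np.float32)
        outputs = session.run(None, {input_name: batch})[0]
        for (_, fut), row in zip(items, outputs):
            fut.set_result(row.tolist())
```

The trade-off is a small added latency bound (here up to 5 ms of waiting) in exchange for higher throughput per replica, which is consistent with the sub-100 ms latency target the post reports.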

No replies yet!
