Sofia Marino
Deployed ML model with FastAPI and Docker. Inference time under 100ms! Used ONNX Runtime for optimized inference. Implemented batching for throughput optimization. Horizontal scaling with Kubernetes handles load spikes. From Jupyter notebook to production in 2 weeks. MLOps maturity achieved! #mlops #deployment #ai #fastapi
1 month ago
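The post doesn't include any code, but a minimal sketch of the kind of FastAPI + ONNX Runtime endpoint it describes might look like the following. The model filename (`model.onnx`), the single input tensor, and the float32 feature-vector input are illustrative assumptions, not details from the post:

```python
# Minimal sketch of a FastAPI + ONNX Runtime inference endpoint.
# Assumptions (not from the post): a model file "model.onnx", one
# input tensor, and float32 feature vectors.
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the session once at startup; an InferenceSession is reusable
# across requests, so per-request model loading is avoided.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Shape (1, n_features): one example per request.
    x = np.asarray([req.features], dtype=np.float32)
    outputs = session.run(None, {input_name: x})
    return {"prediction": outputs[0].tolist()}
```

Serve it with `uvicorn app:app`. Loading the session once at import time, rather than per request, is usually the first step toward the sub-100ms latencies the post claims.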
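The post also doesn't say how the batching was implemented. One common pattern is server-side micro-batching: concurrent requests are queued for a few milliseconds and run through the model as one tensor. A hedged sketch of that pattern follows; the batch size, wait window, and `predict_batched` helper are illustrative assumptions:

```python
# Illustrative micro-batching loop: collect requests for a short
# window, then run one batched inference call for all of them.
import asyncio
import numpy as np

MAX_BATCH = 32       # assumption: tune against your latency budget
MAX_WAIT_S = 0.005   # assumption: 5 ms collection window

queue: asyncio.Queue = asyncio.Queue()

async def batch_worker(session, input_name):
    # Runs forever: take one request, drain more until MAX_BATCH is
    # reached or the wait window closes, then infer once for all.
    loop = asyncio.get_running_loop()
    while True:
        batch = [await queue.get()]
        deadline = loop.time() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        xs = np.stack([features for features, _ in batch])
        # Run the blocking ONNX call in a thread so the event loop
        # keeps accepting new requests while the model executes.
        outputs = await loop.run_in_executor(
            None, lambda: session.run(None, {input_name: xs})[0]
        )
        # Hand each caller its own row of the batched output.
        for (_, fut), row in zip(batch, outputs):
            fut.set_result(row)

async def predict_batched(features: np.ndarray):
    # Callers enqueue their input and await their row of the result.
    fut = asyncio.get_running_loop().create_future()
    await queue.put((features, fut))
    return await fut
```

Start the worker once at app startup (e.g. `asyncio.create_task(batch_worker(session, input_name))`) and have the endpoint `await predict_batched(...)`. The short wait window trades a few milliseconds of per-request latency for much better hardware utilization under load, which is what makes batching a throughput optimization.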
