🚀 You Trained the Model. Now What?
Training is step one. Deploying is the real challenge.

You built the model.
Cross-validation looks great.
Notebook saved.
...Then what?
If no one ever showed you how to move from notebook to production, you're not alone.
😓 Why This Hurts
A model in a notebook doesn’t deliver value.
Until it’s in production, it’s just a demo.
And yet, most ML courses stop at training.
So let’s walk through the missing step: deployment.
✅ Micro-Guide: From Model to Production
Here are 3 simple options — from quick to production-ready.
1. Export + Load with joblib or pickle
Save your trained model:

```python
import joblib

joblib.dump(model, 'model.pkl')
```

Load it later:

```python
model = joblib.load('model.pkl')
prediction = model.predict(X_new)
```
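The same round trip works with the standard library's pickle, which is what joblib builds on. A minimal, self-contained sketch; `ThresholdModel` is a made-up stand-in for your trained estimator (any picklable object with a `predict()` method behaves the same way):

```python
import pickle

# Hypothetical stand-in for a trained estimator.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        return [1 if x > self.threshold else 0 for x in X]

model = ThresholdModel(threshold=0.5)

# Save the "trained" model to disk...
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# ...and load it back later, e.g. in a separate process.
with open('model.pkl', 'rb') as f:
    restored = pickle.load(f)

print(restored.predict([0.2, 0.9]))  # [0, 1]
```

Remember the caveat above: never unpickle files from untrusted sources, since loading executes arbitrary code.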
🟢 Fast and simple
🔴 Not scalable or safe for untrusted environments
2. Wrap It in a Basic API with Flask
Turn your model into a lightweight web service.
```python
from flask import Flask, request, jsonify
import joblib

model = joblib.load('model.pkl')
app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json()
    prediction = model.predict([data['features']])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(port=5000)
```
Run it, send JSON, get predictions.
🟢 Great for demos, internal tools
🔴 Needs containerization for production
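Once the service is running, clients just POST JSON to `/predict`. A sketch of the request payload; the feature values here are made up, so shape them to match whatever your model was trained on:

```python
import json

# Hypothetical feature vector for a 4-feature model.
payload = json.dumps({'features': [5.1, 3.5, 1.4, 0.2]})

# Send it with any HTTP client, e.g.:
#   curl -X POST http://localhost:5000/predict \
#        -H 'Content-Type: application/json' \
#        -d '{"features": [5.1, 3.5, 1.4, 0.2]}'
#
# The service answers with JSON like {"prediction": [0]}.
print(payload)
```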
3. Use MLflow for Lifecycle Management
MLflow helps you track, package, and serve models.
```bash
mlflow models serve -m runs:/<run-id>/model -p 5000
```
✅ Version control
✅ Model registry
✅ REST API built-in
Perfect for teams, or for anyone working within an MLOps stack.
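The served model listens on a REST `/invocations` endpoint. A sketch of the request body; recent MLflow versions accept an `inputs` key, but the exact schema depends on your MLflow version, so treat this payload shape as an assumption and check the docs:

```python
import json

# Assumed payload shape for MLflow's scoring server ('inputs' key);
# the feature values are made up.
payload = json.dumps({'inputs': [[5.1, 3.5, 1.4, 0.2]]})

# Send it to the server started above, e.g.:
#   curl -X POST http://localhost:5000/invocations \
#        -H 'Content-Type: application/json' \
#        -d '{"inputs": [[5.1, 3.5, 1.4, 0.2]]}'
print(payload)
```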
⚡ Bonus Tip: Start with One Use Case
Don’t aim for enterprise-grade deployment on day one.
Just pick one use case:
- Internal dashboard
- Batch scoring job
- API for another team
Deploy one model end-to-end.
Then improve from there.
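A batch scoring job, for instance, can be just a few dozen lines: load the model, score each input row, write the results. A self-contained sketch using only the standard library; `MeanModel` and the file names are made up for the demo, and in a real job the model and CSVs would come from your pipeline:

```python
import csv
import pickle

class MeanModel:
    # Hypothetical stand-in model: predicts the mean of each feature row.
    def predict(self, rows):
        return [sum(r) / len(r) for r in rows]

def batch_score(model_path, in_csv, out_csv):
    """Load a pickled model, score each row of in_csv, append a prediction column."""
    with open(model_path, 'rb') as f:
        model = pickle.load(f)
    with open(in_csv, newline='') as src, open(out_csv, 'w', newline='') as dst:
        reader, writer = csv.reader(src), csv.writer(dst)
        writer.writerow(next(reader) + ['prediction'])
        for row in reader:
            features = [float(v) for v in row]
            writer.writerow(row + [model.predict([features])[0]])

# Demo with throwaway files.
with open('model.pkl', 'wb') as f:
    pickle.dump(MeanModel(), f)
with open('in.csv', 'w', newline='') as f:
    csv.writer(f).writerows([['a', 'b'], [1, 3], [2, 6]])

batch_score('model.pkl', 'in.csv', 'out.csv')

with open('out.csv', newline='') as f:
    print(list(csv.reader(f)))  # [['a', 'b', 'prediction'], ['1', '3', '2.0'], ['2', '6', '4.0']]
```

Swap the stand-in for your real pickled model and point it at your real data, and you have an end-to-end deployment you can schedule with cron.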
📊 Poll
Have you deployed a model to production?
Click here to vote — curious to see where everyone’s at.