Service overview
The RedAI lab maintains two MLflow tracking servers so you can choose the right balance between discoverability and privacy. The public tracker is read-only by default—anyone can browse runs without credentials, but logging requires the shared write password. The internal tracker sits behind per-project credentials for work that must stay within the lab.
- Public dashboard: https://vision.csi.miamioh.edu/mlflow/public/ — open for viewing; you are prompted for credentials only when you log runs.
- Internal tracker: https://vision.csi.miamioh.edu/mlflow/internal/ — requires the per-project username/password before you can view or log.
Internal access reminder: the internal tracker only accepts requests that set MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD. Lab staff issues these credentials per project, so request them before your first run and keep them in your private .env file.
Quick start (Python + dotenv)
- Pick an experiment name that matches your repo/course (“project” level).
- Set the tracking URI to the endpoint you need.
- Log runs for each configuration you try; MLflow keeps every run distinct under the experiment.
Prefer environment variables so CLI tools, scripts, and notebooks stay consistent. Drop the values into a .env file and load them with python-dotenv. If you omit the username/password, requests are treated as read-only; include them to log runs. Set MLFLOW_EXPERIMENT_NAME to your repo or dataset name so runs stay grouped.
# Pick ONE of the following blocks for your .env file.

# Public endpoint (anyone can view; logging needs the shared write credentials)
MLFLOW_TRACKING_URI=https://vision.csi.miamioh.edu/mlflow/public/
MLFLOW_TRACKING_USERNAME=<public-write-username>
MLFLOW_TRACKING_PASSWORD=<public-write-password>
MLFLOW_EXPERIMENT_NAME=use-a-real-name-not-this-you-lazy-bum

# Internal endpoint (per-project credentials required for viewing and logging)
MLFLOW_TRACKING_URI=https://vision.csi.miamioh.edu/mlflow/internal/
MLFLOW_TRACKING_USERNAME=<internal-username>
MLFLOW_TRACKING_PASSWORD=<internal-password>
MLFLOW_EXPERIMENT_NAME=use-a-real-name-not-this-you-lazy-bum
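For a one-off shell session (for example, a quick mlflow CLI call), you can export the same variables directly instead of going through a .env file. This is a sketch with placeholder values; substitute your issued endpoint and credentials:

```shell
# Export the variables for the current shell session only.
# Angle-bracket values are placeholders for your issued credentials.
export MLFLOW_TRACKING_URI=https://vision.csi.miamioh.edu/mlflow/public/
export MLFLOW_TRACKING_USERNAME='<public-write-username>'
export MLFLOW_TRACKING_PASSWORD='<public-write-password>'
export MLFLOW_EXPERIMENT_NAME=my-project
```

Exported variables vanish when the shell exits, which is why the .env file is the recommended default.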
from dotenv import load_dotenv
import mlflow
import os

# Load the MLFLOW_* variables from the .env file in the working directory.
load_dotenv()

mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])
experiment_name = os.environ.get(
    "MLFLOW_EXPERIMENT_NAME",
    "use-a-real-name-not-this-you-lazy-bum",
)
mlflow.set_experiment(experiment_name)

# Each start_run() creates a distinct run under the experiment.
with mlflow.start_run():
    mlflow.log_param("model", "efficientnet-b2")
    mlflow.log_metric("val_acc", 0.9123)
    mlflow.log_artifact("training_curves.png")
MLflow automatically reads MLFLOW_TRACKING_USERNAME/MLFLOW_TRACKING_PASSWORD for HTTP basic auth, so no extra client configuration is necessary.
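If you ever want to sanity-check what the client actually sends, the header MLflow derives from those two variables is standard HTTP basic auth: base64 of username:password. A minimal illustration (the credentials below are made up):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value, i.e. what the
    MLflow client derives from MLFLOW_TRACKING_USERNAME/PASSWORD."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Made-up credentials for illustration only.
print(basic_auth_header("alice", "secret"))  # → Basic YWxpY2U6c2VjcmV0
```

This is only for debugging (e.g. replaying a request with curl); in normal use the environment variables are all you need.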
Tips
- Create a dedicated experiment per repo or dataset split so runs stay organized.
- Think of experiments as the “project” bucket and runs as each training configuration; MLflow will keep every run visible under the same experiment.
- Use mlflow.sklearn.autolog(), mlflow.pytorch.autolog(), etc. to capture models with minimal code changes.
- Artifacts live on the NAS; large uploads take longer but can be downloaded later via the UI or CLI.
- Public endpoint is viewable by anyone on campus, but logging requires the shared write credentials (load them via .env when needed).
- If you see “Invalid Host header” errors, double-check that you are using the HTTPS URL above (the proxy enforces host validation).
- Need a new project password or help debugging a run? Reach out to the RedAI infrastructure contact listed on the Resources page.
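If python-dotenv happens to be unavailable on a cluster node, a small stand-in can load the same .env file. This is a sketch, not part of the lab tooling, and it assumes your .env contains only simple KEY=VALUE lines (no quoting or variable interpolation):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ,
    skipping blank lines and comments. Existing variables win
    (mirrors python-dotenv's default of not overriding)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

For anything beyond this happy path (quoted values, multi-line entries), install python-dotenv rather than extending the sketch.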