Infrastructure
- On-prem GPU stack (4× GTX 1080 Ti) for hosting models served through a LiteLLM proxy and for small-batch inference (see the client sketch after this list).
- 8× NVIDIA A30 node for multi-GPU training runs (see the launch sketch after this list).
- 2× Titan-class node for low-latency local experiments.
- Access to RedHawk, TALON, and OSC clusters for large-scale ML workloads.
- 360° capture rigs, panoramic tripod heads, and field data kits.
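A minimal client sketch for the LiteLLM deployment mentioned above: the LiteLLM proxy exposes an OpenAI-compatible endpoint, so the standard OpenAI Python client can point at it. The proxy address, API key, and model alias below are placeholders for illustration, not actual deployment details.

```python
"""Minimal sketch: calling a model served behind the on-prem LiteLLM proxy.

Assumes the proxy listens on http://localhost:4000 and exposes a model alias
named "local-llama"; both are placeholders, not confirmed deployment values.
"""
from openai import OpenAI

# LiteLLM's proxy speaks the OpenAI-compatible API, so the stock client works.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

response = client.chat.completions.create(
    model="local-llama",  # hypothetical model alias configured in the proxy
    messages=[{"role": "user", "content": "Summarize the latest capture session notes."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```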
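And a hedged sketch of how a multi-GPU training run might be launched on the 8× A30 node using torchrun with PyTorch DistributedDataParallel. The model, data, batch size, and script name are stand-ins, not a description of any actual training pipeline.

```python
"""DDP sketch for an 8-GPU node (placeholder model and synthetic data).

Launch with: torchrun --standalone --nproc_per_node=8 train_ddp.py
"""
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group("nccl")             # one process per GPU, NCCL backend
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 10).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):                                # stand-in training loop
        x = torch.randn(32, 512, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```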
Active / Recent Support
Summarize internal seed funding, industry collaborations, or external grants here.
Partnerships
Use this space to acknowledge civic agencies, cultural heritage partners, or corporate sponsors contributing data and feedback.