Use this worksheet to decide your GPU types, price ceilings, and providers. After paying, reply to your PayPal confirmation email with your answers — I'll configure everything and send you a Telegram confirmation within 60 minutes.
| GPU | Provider | List Rate | Market Low |
|---|---|---|---|
| H100 NVL 94GB | Lambda / RunPod | $3.40/hr | $1.52/hr |
| A100 80GB SXM | RunPod | $2.30/hr | $1.35/hr |
| H200 SXM 141GB | Lambda | $4.50/hr | $2.80/hr |
| A100 40GB | RunPod | $1.60/hr | $0.89/hr |
| RTX 6000 Ada | Vast.ai | $0.80/hr | $0.55/hr |
Prices verified at the time of writing. Market low = the lowest observed secondary-market price, not guaranteed to be available. Open any Vast.ai link to cross-check in real time; no login needed.
**Telegram username.** I'll add you to the monitoring bot within 60 minutes of receiving this; no @ symbol needed.
**GPU models.** List the GPU models you actually use. Most founding members monitor 1–3 types.
**Price ceiling.** Set the maximum you'll pay per GPU type. The alerter fires when the market price drops to or below your ceiling. Set it where you'd actually book, not at the absolute floor.
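The trigger rule above is just a comparison against your ceiling. A minimal sketch (function and variable names are illustrative, not the bot's actual code):

```python
def should_alert(market_price: float, ceiling: float) -> bool:
    """Fire when the observed market price is at or below the user's ceiling."""
    return market_price <= ceiling

# Ceiling of $1.40/hr for an A100 80GB:
print(should_alert(1.35, 1.40))  # at or below the ceiling -> True, alert fires
print(should_alert(1.52, 1.40))  # above the ceiling -> False, no alert
```

Note the comparison is inclusive: a price exactly at your ceiling triggers an alert.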
**Regions (optional).** Leave blank to monitor all regions. Specify if latency matters for your workloads.
**Providers.** The alerter monitors all four by default. You can specify a subset if you only use certain providers.
You run distributed training jobs across 8 A100 80GB nodes. You care about total cost per run.
You fine-tune models on a single A100 40GB or RTX 6000 Ada. You want a 3AM deal without monitoring manually.
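For the distributed-training profile above, cost per run is simple arithmetic: node count × hourly GPU rate × run duration. A hypothetical back-of-envelope helper (it assumes one GPU per node; multiply by GPUs-per-node otherwise):

```python
def run_cost(nodes: int, rate_per_gpu_hr: float, hours: float) -> float:
    """Total cost of one distributed training run, assuming one GPU per node."""
    return nodes * rate_per_gpu_hr * hours

# 8 nodes at the A100 80GB market low of $1.35/hr, for a 12-hour run:
print(f"${run_cost(8, 1.35, 12):.2f}")  # -> $129.60
```

Compare that against the same run at the $2.30/hr list rate to see what a well-placed ceiling saves you.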
Reply to your PayPal confirmation email (or email miloantaeus@gmail.com) with:
I'll configure your thresholds within 60 minutes and send a Telegram confirmation. Your first alert fires the next time any of your targets are hit — day or night.
PayPal · miloantaeus@gmail.com · Full refund within 7 days if it never triggers