Mapping Canadian AI Compute: Why We Built the Zeever Compute Index
May 2, 2026
Founder, Developer, AI Researcher
Today we launched Canada’s AI Compute Landscape — a national view of GPU infrastructure, AI service layers, and sovereign AI capacity built on a verified inventory of 39 providers. This post explains why the page exists, how the methodology was put together, and what we learned during the research that surprised us.
The page is the public surface. The work behind it is methodology.
The gap that prompted this
Canadian organizations choosing where to run AI workloads face a fragmented market with no single comparison surface. Three patterns kept showing up:
- Vendor websites list rates that don’t reflect Canadian regional availability.
- Sovereign Canadian providers — the providers most relevant to regulated industries and federal procurement — don’t publish list prices at all.
- Hyperscalers list flagship GPUs in their Canadian catalogs that, in practice, may not be deliverable from a Canadian region today.
The result is that procurement teams compare what’s easy to compare (US hyperscaler list prices) and ignore what they can’t (sovereign Canadian capacity, non-US foreign operators, marketplace pricing). The decisions get made on incomplete information.
The Zeever AI Compute Index is an attempt to fix that. Not by writing a buyer’s guide, but by maintaining a verified inventory, defining a transparent normalization formula, and publishing the methodology rather than just the conclusions.
Scope: what counts as Canadian GPU compute
A vendor is in scope for the inventory if it meets at least one of three tests:
- Sells GPU compute to Canadian organizations, regardless of vendor headquarters.
- Operates a Canadian data centre region offering GPU instances.
- Maintains Canadian sovereign AI capacity — even if not commercially priced (e.g. Mila’s academic allocation under the Pan-Canadian AI Strategy).
The current inventory tracks 39 vendors split across six categories: 11 sovereign Canadian, 13 US-headquartered hyperscalers in Canada, 6 non-US foreign operators with Canadian regions, 3 marketplace and distributed providers, 3 specialty or unverified Canadian-presence vendors, and 1 sovereign-partial (Cohere, which is Canadian-owned but operates via a US-managed compute partner).
Why we normalized on H100 USD/GPU·hr
The index is opinionated about one thing: published, verified pricing matters. Vendors that publish list rates appear in the cost ranking. Vendors that operate on quote-only pricing are flagged separately as “Opaque” — not because they’re hiding something, but because their pricing model genuinely doesn’t translate to a per-hour comparison.
For vendors with verified, published H100 rates in a Canadian region, the formula is:
index = vendor_h100_usd_per_hour ÷ ceiling_h100_usd_per_hour

where the ceiling is $7.50/GPU·hr, the Microsoft Azure Canada Central rate and the most expensive published rate at compilation. A lower index means cheaper. The current range runs from 0.18 (Thunder Compute, $1.38) to 1.00 (Azure CA, $7.50).
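The normalization is a single division against the ceiling. A minimal sketch, using only the figures quoted above:

```python
CEILING_USD_PER_GPU_HR = 7.50  # Azure Canada Central, the published ceiling at compilation

def compute_index(vendor_rate_usd_per_hour: float) -> float:
    """Normalize a verified H100 USD/GPU·hr rate to the 0-1 index."""
    return round(vendor_rate_usd_per_hour / CEILING_USD_PER_GPU_HR, 2)

print(compute_index(1.38))  # Thunder Compute -> 0.18
print(compute_index(7.50))  # Azure Canada Central -> 1.0
```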
Why H100 specifically
The H100 is the de facto current-generation training and high-end inference GPU as of Q2 2026. Most serious AI workloads target it directly or compare against it. Normalizing to a single GPU SKU eliminates per-instance configuration variance — vCPU, memory, storage — that would otherwise make the comparison incoherent.
H200 and B200 rates are recorded in the inventory where published, but H100 is the anchor. As newer generations become standard, the anchor will move; the formula stays the same.
Currency and conversion
All index values are computed in USD. The most significant CAD-denominated rate in the ranking is ISAIC at CAD $2.50/hour, which converts at approximately 1.37 CAD/USD to roughly USD $1.83/hour. The conversion rate is recorded with each row and refreshed quarterly.
The sovereignty taxonomy
Cost alone doesn’t answer the question Canadian procurement teams actually ask. The other half is jurisdiction. The “Tier” column on the public table classifies each vendor on three axes:
- Canadian-owned (Yes / No / Unknown)
- Canada-hosted (Yes / Partial / Marketplace / No / Unknown)
- US jurisdiction risk (Yes / No / Partial — based on whether the operating entity is US-headquartered and therefore subject to the US CLOUD Act)
The combinations produce six tiers: Sovereign (11 vendors), Sovereign-partial (1), Non-US foreign operator (6), US CLOUD Act (13), Marketplace (2), and Unverified (6).
Sovereignty is a procurement criterion, not a value judgment. A US hyperscaler with Canadian data centres is appropriate for many workloads; a sovereign Canadian provider is appropriate for others. The taxonomy makes the trade-off legible.
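One plausible way the three axes could fold into the six tiers is sketched below. The axis values and tier names are the index's; the precedence and exact mapping are an illustrative assumption, not the published methodology:

```python
def classify_tier(canadian_owned: str, canada_hosted: str, us_jurisdiction: str) -> str:
    """Hypothetical mapping from the three taxonomy axes to a tier label.

    The real mapping lives in the index methodology; this only shows the shape
    of the decision. Axis values follow the column definitions in the post.
    """
    if canadian_owned == "Unknown" or canada_hosted == "Unknown":
        return "Unverified"
    if canada_hosted == "Marketplace":
        return "Marketplace"
    if us_jurisdiction == "Yes":
        return "US CLOUD Act"
    if canadian_owned == "Yes" and canada_hosted == "Yes":
        return "Sovereign"
    if canadian_owned == "Yes" and canada_hosted == "Partial":
        return "Sovereign-partial"
    return "Non-US foreign operator"

print(classify_tier("Yes", "Yes", "No"))          # a sovereign Canadian provider
print(classify_tier("No", "Yes", "Yes"))          # a US hyperscaler's Canadian region
print(classify_tier("Yes", "Partial", "Partial")) # the Cohere-style sovereign-partial case
```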
Why “Opaque” matters more than another estimate
The hardest methodology decision was what to do with vendors that don’t publish list pricing. Seven of the 11 sovereign Canadian providers fall into this bucket: BUZZ HPC, TELUS, Bell AI Fabric, Hypertec, Consensus Core, CoEvo, ThinkOn.
The easy path would be to estimate. Pull a few public reference points, eyeball the cost structure, publish a number with a footnote. We chose not to.
The reason is that the readers of this page are procurement teams making real spend decisions. A fabricated estimate, even with a hedge, gets quoted back as fact. So we mark these vendors Opaque in the index column and surface four of them as sovereign anchors on the cost frontier visualization without a numeric position. The page tells you who they are, what they offer, and that the only way to compare them on price is to request a quote.
That’s less satisfying than a clean ranked list. It’s also more honest.
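In data terms, the decision above amounts to letting the index column be absent rather than estimated. A minimal sketch of that data model (field names are illustrative, not the CSV's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorRow:
    name: str
    h100_usd_per_hour: Optional[float]  # None when the vendor is quote-only

    @property
    def index(self) -> Optional[float]:
        """0-1 index against the $7.50 ceiling; None renders as 'Opaque'."""
        if self.h100_usd_per_hour is None:
            return None
        return round(self.h100_usd_per_hour / 7.50, 2)

print(VendorRow("Thunder Compute", 1.38).index)  # -> 0.18
print(VendorRow("BUZZ HPC", None).index)         # -> None, shown as "Opaque"
```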
What we found in the research
Hyperscaler Canadian H100 availability is thinner than the catalogs suggest
Google Cloud Canada lists H100 in northamerica-northeast1 (Montreal), but founder reports indicate that instances fail to launch in practice. Oracle Cloud lists BM.GPU.H100 in some regions; Canada-specific availability and pricing weren't pulled cleanly in this verification pass. IBM Cloud Canada launched the Toronto MZR in 2023, but current-gen GPU SKU availability remains limited.
Three hyperscalers are tracked but not indexed because the catalog says yes and the console says no. If those become verifiable, they’ll move into the ranked tier.
The cheapest published H100 in Canada is a sovereign provider
ISAIC, the AI Sandbox at the University of Alberta, publishes CAD $2.50/hour (≈ USD $1.83/hour) — cheaper than every commercial neocloud in the inventory except Thunder Compute and Oblivus. It’s also Canadian-owned, Canada-hosted, and outside US CLOUD Act reach. The catch: it’s a research sandbox without commercial SLAs. The trade-off is real, but the data point upends the assumption that sovereign Canadian capacity is automatically the premium option.
13 of 39 vendors are US CLOUD Act exposed
A third of the Canadian-region GPU market is operated by US-headquartered companies. AWS, Azure, GCP, Oracle, IBM, CoreWeave, DigitalOcean, Vultr, RunPod, Paperspace, Akamai/Linode, plus DataHive (now Cologix). The data lives in Canada; the operator answers to a US legal regime. For most workloads this doesn’t change anything. For regulated workloads it changes everything.
Two provinces host 95% of vendor presence
Quebec and Ontario together account for nearly all verified Canadian GPU compute. Alberta and British Columbia have meaningful sovereign and hyperscaler presence. Saskatchewan and New Brunswick have limited capacity. Manitoba, Nova Scotia, and Newfoundland have effectively zero verified GPU compute. The province scorecard on the page makes the regional imbalance visible — and underlines that “Canadian sovereign AI” today is mostly Quebec hydroelectric and Ontario Bell-and-Hypertec capacity.
Limitations we’re explicit about
Honest methodology means saying what it can’t do well. The page calls out:
- Canadian H100 availability changes faster than this page updates. Capacity that was constrained last quarter may be available next; quoted rates change with cloud-provider promotions.
- The Azure Canada Central H100 ceiling of $7.50/hour is a midpoint estimate. Azure pricing varies by SKU, region, and reservation tier; production decisions should pull from the Azure Pricing API directly.
- Oracle Cloud and IBM Cloud Canadian H100 rates were not pulled in this version. They’re tracked in the inventory but their indexed values are blank.
- Marketplace pricing is real but not directly comparable. A Vast.ai host may offer H100 at $1.00/hour with no SLA. SaladCloud delivers consumer-grade GPU compute at sub-$0.30/hour rates. These are legitimate options for the right workloads but aren’t comparable to enterprise-grade dedicated capacity.
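The Azure limitation above points readers at the pricing API. A hedged sketch of building a query against Azure's public Retail Prices endpoint follows; the endpoint and `$filter` syntax are Azure's documented retail-prices interface, but the exact `skuName` substring that matches H100 instances is an assumption to verify:

```python
from urllib.parse import urlencode

# Public, unauthenticated Azure Retail Prices endpoint.
BASE = "https://prices.azure.com/api/retail/prices"

# OData $filter per the Retail Prices API; the 'H100' skuName substring
# is an assumption -- check the actual SKU naming before relying on it.
params = {
    "$filter": "armRegionName eq 'canadacentral' and contains(skuName, 'H100')"
}
url = f"{BASE}?{urlencode(params)}"
print(url)

# To actually pull rates (left commented so this sketch stays offline):
# import requests
# items = requests.get(url).json()["Items"]
```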
Data sources
Every claim on the page is sourced. The vendor inventory is compiled from:
- Vendor pricing pages verified directly via web request on the compilation date.
- Vendor press releases and announcements for capacity, partnership, and regional launch claims.
- Regulatory filings including CPPIB infrastructure investment disclosures, Bell Canada/BCE filings, and HIVE Digital Technologies (TSXV: HIVE) public filings for BUZZ HPC capacity claims.
- Industry audits including Founder Reality’s Canadian GPU audit for hyperscaler ground-truth pricing and availability.
- News reporting from The Globe and Mail, The Logic, Data Center Dynamics, HPCwire, and Connect CRE for Canadian sovereign AI capacity announcements.
- Government program documentation including Innovation, Science and Economic Development Canada’s $2-billion Sovereign AI Compute Strategy disclosures.
Per-row source URLs are recorded in the v3 CSV under the source_urls column. The full CSV is available for download from the landscape page.
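Since per-row sources live in the CSV, auditing them is a few lines of standard-library Python. The `source_urls` column name is from the post; the sample rows and the semicolon delimiter inside the column are assumptions for illustration:

```python
import csv
import io

# Two stand-in rows for the real v3 CSV; column names beyond source_urls
# and the semicolon URL delimiter are assumed, not the actual schema.
sample = io.StringIO(
    "vendor,index,source_urls\n"
    "Thunder Compute,0.18,https://example.com/pricing\n"
    "BUZZ HPC,Opaque,https://example.com/filing;https://example.com/press\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    urls = [u for u in row["source_urls"].split(";") if u]
    print(row["vendor"], "->", len(urls), "source URL(s)")
```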
What’s next
This is v3 of the inventory. The next compilation pass will tighten three areas:
- Pull verified H100 rates for Oracle Cloud Canada, IBM Cloud Canada, and Vultr Toronto so they can move into the ranked tier.
- Re-verify Google Cloud Montreal H100 launch behaviour. If practice catches up to catalog, they enter the index. If not, the limitation note stays.
- Add reserved/committed-use pricing where vendors publish it. The current ranking is on-demand only; long-running training workloads get materially different economics with a 1- or 3-year commit.
If you find an error, want to add a vendor, or have pricing data we haven’t verified, email [email protected]. The inventory is maintained as a public good for the Canadian AI ecosystem — corrections are reviewed and the page is republished.
See it
Canada’s AI Compute Landscape · Full methodology · Download v3 CSV
Frequently asked questions
What is the Zeever AI Compute Index?
The Zeever AI Compute Index is a public, auditable inventory of GPU providers serving the Canadian market. It tracks 39 vendors and ranks the 14 with verified, published H100 USD per GPU-hour pricing on a normalized 0–1 scale, where 0.18 is the cheapest published rate and 1.00 is Microsoft Azure Canada Central at the ceiling of $7.50/hour.
Why H100 and not a broader GPU comparison?
The H100 is the de facto current-generation training and high-end inference GPU as of Q2 2026. Most serious AI workloads target it directly or compare against it. Normalizing to a single GPU SKU eliminates per-instance configuration variance — vCPU, memory, storage — that would otherwise muddy the comparison. H200 and B200 rates are recorded in the inventory where published, but H100 is the anchor.
What does "Opaque" mean in the index?
Opaque means the vendor does not publish list pricing. Most sovereign Canadian providers — TELUS, Bell AI Fabric, BUZZ HPC, Hypertec, Consensus Core, CoEvo, ThinkOn — sell through enterprise quote, not a published per-hour rate. Rather than fabricate a comparison number, the index records what is verifiable. Opaque is not "expensive." It means list pricing isn't public, and the only way to know the rate is to ask.
Why does sovereignty matter for AI compute?
Sovereignty is a procurement criterion, not a value judgment. A Canadian region operated by a US-headquartered hyperscaler is still subject to the US CLOUD Act, which lets US authorities compel disclosure of data regardless of where it physically resides. For regulated industries — healthcare, federal government, critical infrastructure — that exposure changes the procurement calculus. The taxonomy makes the trade-off legible without prescribing a choice.
How often is the inventory updated?
The full inventory is verified quarterly. Each row carries a last_verified_date field. Material vendor announcements — new Canadian regions, sovereignty milestones, major pricing changes — trigger interim updates between quarterly refreshes.
How can I correct or contribute to the inventory?
Email [email protected] with corrections, new vendors, or pricing updates. Every claim on the page is sourced to public vendor pages, regulatory filings, or named industry audits. Per-row source URLs are recorded in the v3 CSV available on the page. The index is maintained as a public good for the Canadian AI ecosystem and is not a paid placement service.