Data centers in space are hard — not just because of weight or cost, but because of this equation: P = εσT⁴A. That’s how much heat a surface radiates into the void. Temperature to the fourth power. Double the temperature, sixteen times the radiation. This whole post is basically explaining how thoroughly that T⁴ drives the entire discussion.

In space, with no atmosphere, radiation is your only cooling option. Yes, there’s weight and launch cost, but thermal rejection is the constraint most people disregard — and the one I believe points in a different direction than current efforts. Managing heat in space is nothing new; managing this much heat is. I’ve spent time on both sides: I worked on space hardware as a postdoc, and I co-founded an AI company that operates billions of inference runs. I’ve been asked to run due diligence on this three times now, and this is the gist, shared openly. TL;DR: yes, there’s a high-risk, high-reward, capital-intensive play here. Great for VCs, better still if someone points them toward running hot chips.


The conversation has been picking up fast. Startups like Starcloud have discussed flying commercial GPUs to orbit. Google researchers have explored solar-powered compute in sun-synchronous orbit under Project Suncatcher. Companies like Axiom Space are openly planning orbital data infrastructure. The Economist recently called the idea “less crazy than you think.” And In-Q-Tel’s reported backing of some of these startups tells you the defense-intelligence community is paying close attention.

The pitch makes sense: perpetual solar power, no land permits, tons of space (pun intended) and a space market that’s growing fast.

Let’s take the H100 as an example — many companies specifically promise this chip. Its thermal design power is 700W. Silicon chips degrade above ~150°C, and most throttle or shut down well before 100°C. So you have to dissipate 700 watts per chip. On Earth, you conduct heat into heatsinks, push it into liquid loops, blow air across fins. In space, eventually, you must radiate it — there’s nothing else. A perfect blackbody at 80°C (run the equation at the top) dissipates about 880 W/m². Real spacecraft radiators do worse: after accounting for emissivity (~0.9), absorbed solar flux, and Earth IR backload, practical heat rejection at these temperatures runs roughly 350–500 W/m². NASA’s ISS thermal control system rejects about 70 kW across its external loops with ~14 tons of hardware — that’s 5 watts per kilogram. Using 450 W/m² — optimistic but achievable with careful radiator design — one H100 needs 700 / 450 ≈ 1.6 m² of radiator surface. The chip itself is 814 mm² — a large postage stamp. You need to spread 700 watts from that postage stamp across a surface ~2,000 times larger. That means heat pipes, loop heat pipes, working fluid, manifolds, and pumps — all of which draw their own power and add failure modes. At the ISS mass ratio, the thermal system for a single H100 weighs 140 kg. That’s just the plumbing to keep one tiny chip from melting.
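The arithmetic is worth making explicit. A minimal sketch, using only the figures from the paragraph above (the helper and variable names are mine):

```python
# Back-of-envelope radiator sizing for one H100-class chip.
# Inputs from the text: 700 W TDP, ~450 W/m² practical rejection,
# ISS-class thermal hardware at ~5 W per kg.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m²K⁴

def blackbody_flux(t_celsius, emissivity=1.0):
    """Radiated power per unit area, P/A = εσT⁴ (T in kelvin)."""
    t_k = t_celsius + 273.15
    return emissivity * SIGMA * t_k**4

chip_power_w = 700.0                # H100 thermal design power
ideal = blackbody_flux(80)          # perfect blackbody at 80 °C
practical = 450.0                   # after emissivity, solar flux, Earth IR
area_m2 = chip_power_w / practical  # required radiator area
mass_kg = chip_power_w / 5.0        # at the ISS ratio of ~5 W/kg

print(f"{ideal:.0f} W/m² ideal, {area_m2:.1f} m² radiator, {mass_kg:.0f} kg")
# → 882 W/m² ideal, 1.6 m² radiator, 140 kg
```

The 882 W/m², 1.6 m², and 140 kg match the numbers in the text above.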

But most demonstrators using these chips don’t have these outsized radiators. My prediction for what’s actually happening — or will happen soon — when someone flies an H100 to orbit: they’re throttling it hard. A 700W chip on a thermally constrained spacecraft won’t run at 700W. It’ll be capped at 300–350W to keep junction temperatures survivable with the radiator area they can actually fly. And even then, they’ll likely duty-cycle: run the GPU for a few minutes, then pause and let the spacecraft radiate the accumulated heat before the next burst. The “H100 in orbit” won’t deliver ~990 dense FP16 TFLOPS. It’ll probably deliver a fraction of that, with idle gaps baked into the thermal budget.
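What that does to delivered compute can be sketched with two assumed knobs, a power cap and a duty cycle. Both values here are my illustrative guesses, and the linear power-to-performance scaling is a crude simplification:

```python
# Rough sustained-throughput estimate for a thermally capped H100.
# Assumptions (mine, for illustration): throughput scales roughly
# linearly with the power cap, plus a burst-and-cool duty cycle.
peak_tflops = 990.0    # H100 dense FP16, per the text
tdp_w = 700.0
power_cap_w = 325.0    # midpoint of the 300–350 W cap above (assumed)
duty_cycle = 0.6       # compute 60% of the time, radiate the rest (assumed)

sustained = peak_tflops * (power_cap_w / tdp_w) * duty_cycle
print(f"~{sustained:.0f} sustained TFLOPS from a {peak_tflops:.0f} TFLOPS part")
# → well under a third of nameplate throughput
```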

IMO they should fly an older, lower-power chip. A V100 — a 2017 GPU — draws 300W and delivers ~125 TFLOPS in FP16 tensor. It needs a substantially smaller radiator, about 0.7 m². In other words, the most expensive GPU on the market, thermally capped, may deliver no more sustained performance than a nine-year-old one.

If data centers in space want to compete with the ones on the ground, what would it take to run an H100 at full power all the time? The problem shifts from thermodynamics to logistics: how do you fit all this inside a rocket? A Falcon 9 fairing is 5.2 meters wide with about 145 m³ of usable volume. Starship’s payload bay is about 8 meters across with roughly 1,000 m³. One GPU needs ~1.6 m² of radiator, which folds small. But scale it: a DGX node — 8 GPUs — needs ~13 m² of radiator and ~1 ton of thermal hardware, and a modest 64-GPU cluster (eight nodes) needs ~105 m² and ~9 tons. The volume is manageable — ISS-era deployable panels stow at about 5–15 m²/m³, so ~105 m² packs into roughly 7–21 m³. On a Falcon 9, that’s about 10% of your volume — but roughly half of your mass capacity gone, just for thermal plumbing. On Starship, it’s ~2% of volume and ~9% of mass, at a launch cost of roughly $900,000 at an Elon-level-optimistic $100/kg to LEO. And that’s before power: solar arrays typically launch at ~150 W/kg, and once you add batteries for eclipse and power management, figure roughly another ton per node. I’d say that’s a 60:1 mass ratio — sixty kilograms of support hardware for every kilogram of compute silicon (a DGX node weighs ~35 kg; at ~2 tons of thermal and power infrastructure per node, the ratio is stark). Stuff that needs to work in zero gravity, surviving launch vibration and cosmic ray bombardment, deploying reliably, lasting years without maintenance. Yet, this is feasible. Hard, but so is space.
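Re-deriving that launch budget from the per-chip figures (the rocket capacities are rough public numbers I am assuming here: ~100 t to LEO for Starship, ~17.4 t for a reusable Falcon 9):

```python
# Launch budget for an eight-node, 64-GPU cluster, built up from the
# per-GPU figures in the text: 1.6 m² of radiator and 140 kg of
# thermal hardware per GPU, panels stowing at 5–15 m²/m³, $100/kg to LEO.
gpus = 64
radiator_m2 = gpus * 1.6           # total deployed radiator area
thermal_kg = gpus * 140            # total thermal-hardware mass
stow_m3 = (radiator_m2 / 15, radiator_m2 / 5)  # best/worst stowed volume
launch_cost = thermal_kg * 100     # dollars, optimistic $100/kg

starship_mass_frac = thermal_kg / 100_000  # assumed ~100 t to LEO
falcon9_mass_frac = thermal_kg / 17_400    # assumed ~17.4 t reusable to LEO

print(f"{radiator_m2:.0f} m², {thermal_kg / 1000:.1f} t thermal, "
      f"{stow_m3[0]:.0f}–{stow_m3[1]:.0f} m³ stowed, ${launch_cost:,}")
print(f"mass fraction: Starship ~{starship_mass_frac:.0%}, "
      f"Falcon 9 ~{falcon9_mass_frac:.0%}")
```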

And the linchpin is that fourth power. If you double the radiator’s surface area, you double the power you can dissipate. But if you can run the radiator at double the temperature, T⁴ scaling means you can dissipate the same heat with 1/16th the surface. Space thermal systems massively favor hot components, and silicon’s junction temperature ceiling locks you into the worst part of the curve. This is a materials problem. And that’s where the interesting due diligence begins — “what if you designed for orbit instead of adapting Earth hardware?”


Here’s what could go right. Silicon carbide (SiC) and gallium nitride (GaN) are wide-bandgap semiconductors. They operate at junction temperatures up to 600°C. Run the same T⁴ math: a surface at 600°C (873 K) radiates about 32,000 W/m² — roughly 37× more than silicon’s 880 W/m² at 80°C. That chip that needed 1.6 m² of deployable origami radiator? At SiC temperatures, it needs less than 220 cm² — a 15-by-15-centimeter panel, about the size of a smartphone. You can bolt the radiator directly to the chip. No deployment, no booms, no origami, no moving parts.
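The same T⁴ arithmetic at wide-bandgap temperatures, as a quick check (function and variable names are mine):

```python
# Stefan-Boltzmann flux for an ideal blackbody (ε = 1) at silicon
# versus SiC junction temperatures, and the radiator area a 700 W
# chip would need at the hotter operating point.
SIGMA = 5.670e-8  # W/m²K⁴

def flux(t_celsius):
    t_k = t_celsius + 273.15
    return SIGMA * t_k**4

si, sic = flux(80), flux(600)
ratio = sic / si
area_cm2 = 700 / sic * 1e4  # radiator area for 700 W, in cm²
print(f"{sic:.0f} W/m², {ratio:.0f}× the 80 °C figure, {area_cm2:.0f} cm²")
# → ~33,000 W/m², ~37×, ~212 cm² — consistent with “less than 220 cm²” above
```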

And the thermal story is only half of it. Wide-bandgap semiconductors are inherently more radiation-hard than silicon: SiC’s wider bandgap (~3.3 eV vs silicon’s 1.1 eV) means cosmic rays deposit less charge per strike, making single-event upsets (SEUs) orders of magnitude less likely. SiC also withstands a far higher total ionizing dose (TID) before performance degrades. Silicon GPUs in orbit need heavy radiation shielding or must accept high error rates; SiC runs hot and shrugs off the radiation environment that silicon struggles with. One material solves both of orbit’s hardest semiconductor problems.

Is that technology ready? NASA Glenn ran SiC integrated circuits continuously at 500°C for over a year. GaN is already flying in space for RF and power conversion. The jump from wide-bandgap power electronics to dense digital logic — the kind of transistor density you see in modern GPUs — is non-trivial. Those NASA SiC chips had about 200 transistors; the latest generation targets ~3,000. An H100 has 80 billion. But we don’t need an H100 equivalent in SiC. We need to optimize for the thermal envelope — chips designed for the workloads that generate the most value per FLOP per watt in orbit. I believe the sweet spot is tens of millions of transistors, where silicon was in the late 1990s. SiC today is roughly where silicon was in the 1960s; closing the gap to that sweet spot means covering some 40 years of progress at Moore’s Law pace, but we can compress the timeline by applying decades of silicon process learning to SiC. Ten years? That’s a VC time-horizon bet.

But even before the materials catch up, the right question isn’t “can we run a training cluster in orbit?” — it’s “what workloads fit this thermal envelope?” And it turns out some of the highest-value workloads in AI are also the least thermally expensive. Embedding creation — turning raw data into vector representations — costs a fraction of the compute of training or full-precision inference, yet yields extremely actionable information, including for monitoring assets and activities on the ground. This is what we build at LGND — embedding-based search over satellite imagery using foundation model representations and linear probes, no heavyweight decoder required. We routinely adapt our embedding factory to run wherever compute is cheapest; adapting it to a factory of embeddings in space would not be a stretch.

That’s a workload that fits inside an orbital thermal budget today, not after a decade of SiC R&D. So does feature extraction from satellite imagery, quantized inference on compact models, and RAG pipelines where the heavy lifting is a vector similarity search, not a forward pass through 70 billion parameters. These workloads generate enormous value per FLOP, and in orbit, where every watt is a thermal problem, value-per-FLOP is the metric that matters.
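To put rough numbers on that, here is an illustrative comparison using the common rule of thumb of ~2 FLOPs per parameter per token for a dense forward pass. The model sizes are my assumptions for illustration, not a description of any particular stack:

```python
# Rough FLOPs-per-token: a compact embedding model versus a
# 70B-parameter decoder, via the ~2 FLOPs/parameter/token rule of thumb.
def flops_per_token(params):
    return 2 * params  # dense forward pass, rule of thumb

embedder = flops_per_token(100e6)  # an assumed ~100M-parameter embedder
decoder = flops_per_token(70e9)    # a 70B-parameter LLM forward pass

print(f"embedding: {embedder / 1e9:.1f} GFLOP/token, "
      f"decoder: {decoder / 1e9:.0f} GFLOP/token, "
      f"~{decoder / embedder:.0f}× gap")
# → 0.2 GFLOP/token vs 140 GFLOP/token: ~700× less compute per token
```

In a thermal budget measured in watts, a ~700× compute gap is the difference between a workload that fits and one that does not.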

Value per FLOP/Watt is the metric that matters for AI workloads in orbit.

The metric that collapses the entire ground-vs-orbit debate into one number is TFLOPS per watt. On Earth, the electricity bill dominates; in orbit, that bill becomes another radiator pipe. The chart below plots every major chip family — NVIDIA, AMD, Intel, Apple, both CPU and GPU — on two axes: how much compute it delivers (horizontal) and how much heat it produces when running constantly (vertical, inverted — up is better). Axes are log-log. Arrows connect successive chip generations. The diagonal lines are iso-efficiency contours: chips along the same diagonal deliver the same GFLOPS per watt. The industry’s flagship training GPUs (P100→B200) march relentlessly toward more FLOPS and more watts — each generation is a worse thermal problem in orbit. But NVIDIA’s inference line (T4→L4) goes almost purely sideways: double the compute, same 70-watt thermal envelope. That’s the trajectory that works in space. The shaded region marks where wide-bandgap semiconductors (SiC, GaN) could operate at 600°C junction temperatures — where Stefan-Boltzmann’s T⁴ scaling means each watt needs 37× less radiator surface. Not because the chips use less power, but because they run hot enough to radiate it efficiently. The T4 and L4 are today’s best orbital payload. SiC/GaN is the decade-horizon escape from silicon’s thermal ceiling.

[Chart legend: NVIDIA train, NVIDIA inference, AMD GPU, Apple M, Server CPU, SiC/GaN]

Fig 1. Compute versus power for major chip families. Top-right is the sweet spot: more compute, less heat. Arrows connect successive generations; dashed diagonals are iso-efficiency lines (same GFLOPS/W). NVIDIA’s training chips (P100→B200) maximize throughput at any thermal cost. Its inference chips (T4→L4) maximize useful output per watt, sitting at ~70W — the inference design philosophy is the orbital compute philosophy. The shaded region shows where SiC/GaN could operate at 600°C. GPU data uses FP16 dense tensor TFLOPS; CPUs and Apple M-series use FP32 peak, which flatters them for this comparison. Sources: NVIDIA, AMD, Apple.
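For concreteness, the sideways move in the chart can be checked against published specs. I am assuming 65 dense FP16 tensor TFLOPS at 70 W for the T4 and 121 at 72 W for the L4; worth verifying against NVIDIA's datasheets:

```python
# Efficiency of NVIDIA's inference line: roughly double the compute
# across one generation, in essentially the same thermal envelope.
# Spec values are assumptions taken from public datasheets.
chips = {"T4": (65.0, 70), "L4": (121.0, 72)}  # (dense FP16 TFLOPS, watts)

for name, (tflops, watts) in chips.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS/W at {watts} W")
```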


Design your pipeline for the thermal budget and the physics starts working for you instead of against you. Now layer on the strategic advantages that only orbit provides:

Security by isolation. An orbital node is air-gapped by vacuum — no physical intrusion vector, no insider threat at the cage level, and no way to physically seize the hardware. (Jurisdiction still follows the flag: under Article VIII of the Outer Space Treaty, the state of registry retains legal authority over its space objects.) But the physical security model is unmatched: for classified intelligence, financial models, genomic data, or proprietary training runs, the attack surface is far smaller, and exploiting it demands far more sophistication.

Sensor-to-insight at the point of capture. Process the image, the signals intercept, the environmental reading the moment it’s collected. Transmit answers, not raw data. For persistent ISR, disaster response, maritime awareness, and real-time agricultural monitoring, this is the difference between actionable intelligence and yesterday’s data — and it sidesteps the downlink bandwidth bottleneck that limits what ground-based processing can do with orbital sensors.

Resilience. Orbital compute doesn’t care about grid failures, undersea cable cuts, floods, or geopolitical instability on the ground. A distributed constellation provides compute capacity that survives scenarios that would take every terrestrial data center in a region offline.

Each of these is a use case where the thermal constraints of orbit stop being a liability and start being a feature — if you design for them.

So, where could we first find actual commercial traction? Defense and intelligence are the near-term customers. They already pay space premiums and are willing to fund secure compute and low-latency capture-to-insight pipelines. The In-Q-Tel investment in Starcloud signals exactly this. The commercial path follows the defense path, as it usually does with space technology.

The main downside is the integration moat, which clearly favors SpaceX. Vertical integration across hot-semiconductor design, thermal architecture, and launch operations will be nearly impossible to replicate — and SpaceX already owns launch and thermal engineering, even if they haven’t moved into compute silicon. No other player holds even two of those three capabilities.


In the late 1990s, the semiconductor industry hit an interconnect wall. Aluminum wiring couldn’t carry enough current at shrinking geometries. IBM bet on copper — a harder material to integrate, requiring entirely new barrier layers and deposition techniques. The transition took a decade of R&D, was painful and expensive, but it worked, and the rest of the industry followed. Orbital compute is at a similar inflection. And the “hot chip” story might come back down to Earth: if SiC or GaN logic matures for orbit, the ability to run chips at much higher temperatures could simplify terrestrial data center cooling and shrink its footprint dramatically.

Orbital data centers boil down to strategies that get the timing right on three questions:

  • What’s your sustained FLOPS per orbit?
  • How well can you fold your thermal system?
  • What’s your roadmap for non-silicon compute?