Point2’s cables are made up of eight e-Tube fibers, each carrying more than 200 gigabits of data per second.
Summary
- In data-center terms, scaling out involves linking computers, while scaling up packs more GPUs into a computer, challenging copper’s physical limits.
- Copper cables face a phenomenon at high data rates that necessitates wider wires and more power, complicating a data center’s dense connections.
- Point2 and AttoTude propose radio-based cables, offering longer reach, lower power consumption, and narrower cables than copper, without the cost and complexity of optics.
- Startups aim to directly integrate radio cables with GPUs, easing cooling needs and enhancing data-center efficiency.
How fast you can train gigantic new AI models boils down to two words: up and out.
In data-center terms, scaling out means increasing how many AI computers you can link together to tackle a big problem in chunks. Scaling up, on the other hand, means jamming as many GPUs as possible into each of those computers, linking them so that they act like a single gigantic GPU, and allowing them to do bigger pieces of a problem faster.
The two domains rely on two different physical connections. Scaling out mostly relies on photonic chips and optical fiber, which together can sling data hundreds or thousands of meters. Scaling up, which results in networks that are roughly 10 times as dense, is the domain of much simpler and less costly technology—copper cables that often span no more than a meter or two.
But the increasingly high GPU-to-GPU data rates needed to make more powerful computers work are coming up against the physical limits of copper. As the bandwidth demands on copper cables approach the terabit-per-second realm, physics demands that they be made shorter and thicker, says David Kuo, vice president of product marketing and business development at the data-center-interconnect startup Point2 Technology. That’s a big problem, given the congestion inside computer racks today and the fact that Nvidia, the leading AI hardware company, plans an eightfold increase in the maximum number of GPUs per system, from 72 to 576 by 2027.
“We call it the copper cliff,” says Kuo.
The industry is working on ways to unclog data centers by extending copper’s reach and bringing slim, long-reaching optical fiber closer to the GPUs themselves. But Point2 and another startup, AttoTude, advocate for a solution that’s simultaneously in between the two technologies and completely different from them. They claim the tech will deliver the low cost and reliability of copper as well as some of the narrow gauge and distance of optical—a combination that will handily meet the needs of future AI systems.
Their answer? Radio.
Later this year, Point2 will begin manufacturing the chips behind a 1.6-terabit-per-second cable consisting of eight slender polymer waveguides, each capable of carrying 448 gigabits per second using two frequencies, 90 gigahertz and 225 GHz. At each end of the waveguide are plug-in modules that turn electronic bits into modulated radio waves and back again. AttoTude is planning essentially the same thing, but at terahertz frequencies and with a different kind of svelte, flexible cable.
Both companies say their technologies can easily outdo copper in reach—spanning 10 to 20 meters without significant loss, which is certainly long enough to handle Nvidia’s announced scale-up plans. And in Point2’s case, the system consumes one-third of optical’s power, costs one-third as much, and offers as little as one-thousandth the latency.
According to its proponents, radio is more reliable and easier to manufacture than optics, so it might beat photonics in the race to bring low-energy processor-to-processor connections all the way to the GPU, eliminating some copper even on the printed circuit board.
What’s wrong with copper?
So, what’s wrong with copper? Nothing, so long as the data rate isn’t too high and the distance it has to go isn’t too far. At high data rates, though, conductors like copper fall prey to what’s called the skin effect.
A 1.6-terabit-per-second e-Tube cable has half the area of a 32-gauge copper cable and has up to 20 times the reach. Point2 Technology
The skin effect occurs because the signal’s rapidly changing current leads to a changing magnetic field that tries to counter the current. This countering force is concentrated at the middle of the wire, so most of the current is confined to flowing at the wire’s outer edge—the “skin”—which increases resistance. At 60 hertz—the mains frequency in many countries—most of the current is in the outer 8 millimeters of copper. But at 10 GHz, the skin is just 0.65 micrometers deep. So to push high-frequency data through copper, the wire needs to be wider, and you need more power. Both requirements work against packing more and more connections into a smaller space to scale up computing.
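For readers who want to check those figures, here is a minimal sketch of the standard skin-depth formula, using textbook values for copper’s resistivity and permeability. The constants are assumptions drawn from reference tables, not numbers supplied by Point2 or AttoTude.

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    # delta = sqrt(rho / (pi * f * mu)); defaults approximate copper
    mu = mu_r * 4e-7 * math.pi          # permeability, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

for f in (60.0, 10e9):
    print(f"{f:>14,.0f} Hz -> skin depth {skin_depth_m(f) * 1e6:,.2f} um")

# Roughly 8,400 um (8.4 mm) at 60 Hz and about 0.65 um at 10 GHz,
# matching the figures in the paragraph above.
```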
To counteract the skin effect and other signal-degrading issues, companies have developed copper cables with specialized electronics at either end. With the most promising, called active electrical cables, or AECs, the terminating chip is called a retimer (pronounced “re-timer”). This IC cleans up the data signal and the clock signal as they arrive from the processor. The circuit then retransmits them down the copper cable’s wire pairs, or lanes—typically eight of them, with a second set carrying traffic in the other direction. At the other end, the chip’s twin takes care of any noise or clock issues that accumulate during the journey and sends the data on to the receiving processor. Thus, at the cost of electronic complexity and power consumption, an AEC can extend the distance that copper can reach.
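As a rough illustration of that cleanup step, the toy sketch below samples a noisy, attenuated binary waveform at the bit centers, decides each bit against a threshold, and retransmits clean full-swing levels. It is purely conceptual; real retimer ICs do this with analog clock-and-data recovery circuits, and every signal level and noise figure here is invented for illustration.

```python
import numpy as np

# Toy illustration of a retimer's job: slice a degraded NRZ waveform at
# the recovered bit centers and regenerate clean levels. All values are
# invented; this is not how retimer silicon is actually built.

rng = np.random.default_rng(seed=1)
bits = rng.integers(0, 2, size=32)                    # data from the processor
samples_per_bit = 8
tx = np.repeat(bits.astype(float), samples_per_bit)   # ideal NRZ waveform

# What arrives after a lossy copper run: attenuated, with added noise
rx = 0.4 * tx + rng.normal(0.0, 0.03, size=tx.size)

# "Retiming": sample at mid-bit instants, threshold, and retransmit
mid = np.arange(bits.size) * samples_per_bit + samples_per_bit // 2
decisions = (rx[mid] > 0.2).astype(int)
regenerated = np.repeat(decisions.astype(float), samples_per_bit)

assert np.array_equal(decisions, bits)                # data recovered intact
```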
Don Barnetson, senior vice president and head of product at Credo, which provides network hardware to data centers, says his company has developed an AEC that can deliver 800 Gb/s as far as 7 meters—a distance that’s likely needed as computers hit 500 to 600 GPUs and span multiple racks. The first use of AECs will probably be to link individual GPUs to the network switches that form the scale-out network. This first stage in the scale-out network is important, says Barnetson, because “it’s the only nonredundant hop in the network.” Losing that link, even momentarily, can cause an AI training run to collapse.
But even if retimers manage to push the copper cliff a bit farther into the future, physics will eventually win. Point2 and AttoTude are betting that point is coming soon.
Terahertz radio’s reach
AttoTude grew out of founder and CEO Dave Welch’s deep investigations into photonics. A cofounder of Infinera, an optical telecom–equipment maker purchased by Nokia in 2025, Welch developed photonic systems for decades. He knows the technology’s weaknesses well: It consumes too much power (about 10 percent of a data center’s compute budget, according to Nvidia); it’s extremely sensitive to temperature; getting light into and out of photonics chips requires micrometer-precision manufacturing; and the technology’s lack of long-term reliability is notorious. (There’s even a term for it: “link flap.”)
“Customers love fiber. But what they hate is the photonics,” says Welch. “Electronics have been demonstrated to be inherently more reliable than optics.”
Fresh off Nokia’s US $2.3 billion purchase of Infinera, Welch asked himself some fundamental questions as he contemplated his next startup, beginning with “If I didn’t have to be at [an optical wavelength], where should I be?” The answer was the highest frequency that’s achievable purely with electronics—the terahertz regime, 300 to 3,000 GHz.
“You start with passive copper, and you do everything you can to run in passive copper as long as you can.” —Don Barnetson, Credo
So Welch and his team set about building a system that consists of a digital component to interface with the GPU, a terahertz-frequency generator, and a mixer to encode the data on the terahertz signal. An antenna then funnels the signal into a narrow, flexible waveguide.
As for the waveguide, it consists of a dielectric core, which channels the terahertz signal, surrounded by cladding. One early version was just a narrow, hollow copper tube. Welch says that the second-generation cable—made up of fibers only about 200 µm across—points to a system with losses down to 0.3 decibels per meter—a small fraction of the loss from a typical copper cable carrying 224 Gb/s.
Welch predicts this waveguide will be able to carry data as far as 20 meters. That “happens to be a beautiful distance for scale-up in data centers,” he says.
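A back-of-envelope check, assuming the 0.3-decibel-per-meter figure quoted above, shows why a 20-meter run is plausible:

```python
# Rough link budget, assuming the ~0.3 dB/m loss figure quoted above.
loss_per_meter_db = 0.3
length_m = 20

total_loss_db = loss_per_meter_db * length_m        # 6 dB over 20 meters
power_fraction = 10 ** (-total_loss_db / 10)        # ~0.25 of launched power

print(f"{total_loss_db:.1f} dB total -> {power_fraction:.0%} of the power arrives")
```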
So far, AttoTude has made the individual components—the digital data chip, the terahertz-signal generator, the circuit that mixes the two—along with a couple of generations of waveguides. But the company hasn’t yet integrated them into a single pluggable form. Still, Welch says, the combination delivers enough bandwidth for at least 224 Gb/s transmission, and the startup demonstrated 4-meter transmission at 970 GHz last April at the Optical Fiber Communications Conference, in San Francisco.
Radio’s reach in the data center
Point2 has been aiming to bring radio to the data center longer than AttoTude has. Formed nine years ago by veterans of Marvell, Nvidia, and Samsung, the startup has pulled in $55 million in venture funding, most notably from Molex, a maker of computer cables and connectors. Molex’s backing “is critical, because they’re a major part of the cable-and-connector ecosystem,” says Kuo. Molex has already shown that it can make Point2’s cable without modifying its existing manufacturing lines, and now Foxconn Interconnect Technology, which makes cables and connectors, is partnering with the startup. The support could be a big selling point for the hyperscalers who would be Point2’s customers.
Nvidia’s GB200 NVL72 rack-scale computer relies on many copper cables to link its 72 processors together. NVIDIA
Each end of the Point2 cable, called an e-Tube, consists of a single silicon chip that converts the incoming digital data into modulated millimeter-wave frequencies and an antenna that radiates into the waveguide. The waveguide itself is a plastic core with metal cladding, all wrapped in a metal shield. A 1.6-Tb/s cable, called an active radio cable (ARC), is made up of eight e-Tube cores. At 8.1 millimeters across, that cable takes up half the volume of a comparable AEC cable.
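As a quick arithmetic check, the 1.6-Tb/s aggregate lines up with the roughly 200-gigabit-per-second per-core payload cited in the e-Tube caption above (the per-core figure is taken from that caption, not from a Point2 specification sheet):

```python
# Sanity check on the ARC's aggregate rate, assuming the ~200 Gb/s
# per-core payload figure from the e-Tube caption above.
cores = 8
payload_gbps_per_core = 200

aggregate_tbps = cores * payload_gbps_per_core / 1000
print(f"{cores} cores x {payload_gbps_per_core} Gb/s = {aggregate_tbps:.1f} Tb/s")  # 1.6 Tb/s
```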
One of the benefits of operating at RF frequencies is that the chips that handle them can be made in a standard silicon foundry, says Kuo. A collaboration between engineers at Point2 and the Korea Advanced Institute of Science and Technology, reported this year in the IEEE Journal of Solid-State Circuits, used 28-nanometer CMOS technology, which hasn’t been cutting edge since 2010.
The scale-up network market
As promising as their tech sounds, Point2 and AttoTude will have to overcome the data-center industry’s long history with copper. “You start with passive copper,” says Credo’s Barnetson. “And you do everything you can to run in passive copper as long as you can.”
The boom in liquid cooling for data-center computing is evidence of that, he says. “The entire reason people have gone to liquid cooling is to keep [scaling up] in passive copper,” Barnetson says. To connect more GPUs in a scale-up network with passive copper, they must be packed in at densities too high for air cooling alone to handle. Getting the same kind of scale-up from a more spread-out set of GPUs connected by millimeter-wave ARCs would ease the need for cooling, suggests Kuo.
Meanwhile, both startups are also chasing a version of the technology that will attach directly to the GPU.
Nvidia and Broadcom recently deployed optical transceivers that live inside the same package as a processor, separating the electronics and optics by micrometers rather than centimeters or meters. Right now, the technology is limited to the network-switch chips that connect to a scale-out network. But big players and startups alike are trying to extend its use all the way to the GPU.
Both Welch and Kuo say their companies’ technologies could have a big advantage over optical tech in such a transceiver-processor package. Nvidia and Broadcom—separately—had to do a mountain of engineering to make their systems possible to manufacture and reliable enough to exist in the same package as a very expensive processor. One of the many challenges is how to attach an optical fiber to a waveguide on a photonic chip with micrometer accuracy. Because of its short wavelength, infrared laser light must be lined up very precisely with the core of an optical fiber, which is only around 10 µm across. By contrast, millimeter-wave and terahertz signals have a much longer wavelength, so you don’t need as much precision to attach the waveguide. In one demo system it was done by hand, says Kuo.
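A short calculation makes that tolerance argument concrete. The sketch below computes free-space wavelengths for the infrared, terahertz, and millimeter-wave carriers mentioned in this article; the 1,550-nanometer telecom band is an assumed value for the optical case, while the radio frequencies come from the text above.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

# Free-space wavelengths for the carriers discussed in this article.
# The 1,550 nm (~193 THz) telecom band is an assumption for the infrared
# case; the 970 GHz and 225 GHz figures come from the text above.
carriers = {
    "infrared laser, 1550 nm telecom band": 193e12,
    "AttoTude terahertz carrier (970 GHz)": 970e9,
    "Point2 millimeter-wave carrier (225 GHz)": 225e9,
}
for name, freq_hz in carriers.items():
    print(f"{name}: {C / freq_hz * 1e6:,.1f} um")

# ~1.6 um for infrared versus ~310 um and ~1,330 um for the radio carriers,
# which is why sub-micrometer alignment matters only on the optical side.
```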
Pluggable connections will be the technology’s first use, but radio transceivers co-packaged with processors are “the real prize,” says Welch.