According to Semiconductor Today, at Supercomputing 2025 (SC25) in St. Louis from November 16-21, Avicena Tech Corp announced a major performance leap for its LightBundle optical interconnect technology. The Sunnyvale-based firm’s micro-LED-based links now operate at 4 gigabits per second per lane. Crucially, they’re achieving this with transmitter currents as low as 100 microamps per LED, which translates to a raw link energy consumption of just 80 femtojoules per bit. This was accomplished without forward error correction, enabled by newly developed high-sensitivity receiver technology built with the company’s manufacturing partners. Avicena frames this as the world’s lowest-power optical interconnect, targeting next-generation AI infrastructure where bandwidth and energy efficiency are paramount.
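The headline numbers hang together on a quick back-of-envelope check. Note the assumptions here: that the 80 fJ/bit figure covers only transmitter drive, and that the implied voltage corresponds to a GaN-typical LED forward voltage; neither detail is stated in the announcement.

```python
# Sanity-check the quoted figures: 4 Gbps/lane, 100 uA drive, 80 fJ/bit.
# Assumption (not from the article): 80 fJ/bit reflects transmitter
# drive power alone, i.e. energy/bit = lane power / bit rate.
bitrate = 4e9            # bits/s per lane (quoted)
i_led = 100e-6           # A, LED drive current (quoted)
energy_per_bit = 80e-15  # J/bit (quoted)

power_per_lane = energy_per_bit * bitrate        # W
implied_voltage = power_per_lane / i_led         # V across the LED

print(f"Implied lane power:    {power_per_lane * 1e6:.0f} uW")
print(f"Implied drive voltage: {implied_voltage:.1f} V")
```

The implied ~3.2 V is plausible for a blue GaN micro-LED, which is one reason the three quoted numbers look internally consistent rather than cherry-picked.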
Why Micro-LEDs Change The Game
Here’s the thing: the entire optical interconnect world has been chasing lasers and silicon photonics for years. They’re fast, but they come with baggage. Lasers have a lasing threshold—a minimum power they need just to turn on—which sets a hard floor on energy use. Silicon photonics often relies on splitting a single external laser’s light, which simplifies the light source but adds packaging complexity. Avicena’s bet on micro-LEDs is fundamentally different. These tiny emitters generate their own light and can scale their power down to almost nothing, limited mostly by how well the receiver can detect a faint signal. No temperature stabilization, no complex control loops. It’s a simpler, more native approach to turning an electrical signal into light. And when you’re talking about thousands of these links in an AI cluster, simplicity and low idle power aren’t just nice-to-haves; they’re economic necessities.
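The threshold argument is easy to put in numbers. This sketch uses hypothetical values for the laser (a 1 mA threshold bias and 1.8 V drive are illustrative, not from the article) to show why a laser's energy per bit has a hard floor while an LED's can keep scaling down with drive current:

```python
# Illustrative comparison (laser numbers are hypothetical): a laser
# must stay biased above its lasing threshold, so threshold current
# sets a floor on energy per bit regardless of how little light the
# receiver actually needs. An LED has no such floor; its current can
# shrink toward the receiver's sensitivity limit.
def energy_per_bit_fj(current_a, voltage_v, bitrate_bps):
    """Energy per bit in femtojoules for a given DC drive."""
    return current_a * voltage_v * 1e15 / bitrate_bps

bitrate = 4e9            # bits/s per lane, from the article

laser_floor = energy_per_bit_fj(1e-3, 1.8, bitrate)    # 1 mA threshold (hypothetical)
led_ebit = energy_per_bit_fj(100e-6, 3.2, bitrate)     # 100 uA micro-LED drive

print(f"Laser floor from threshold bias: {laser_floor:.0f} fJ/bit")
print(f"Micro-LED at quoted drive:       {led_ebit:.0f} fJ/bit")
```

Under these assumed numbers the laser's threshold alone costs several times the micro-LED's entire budget, before any modulation or control-loop overhead.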
The Broader Market Shakeup
So who wins and who loses if this tech gains traction? If Avicena’s claims hold up in large-scale deployment, it puts serious pressure on traditional players in the co-packaged optics (CPO) and high-speed pluggable module space. Companies heavily invested in complex laser arrays and silicon photonics modulators might find themselves competing on a cost and power curve that’s harder to match. The winners could be the hyperscale data centers and AI accelerator builders Avicena is partnering with. For them, ripping out power-hungry electrical copper cables and replacing them with these dense, low-power optical links could be the key to building those sprawling, multi-rack “scale-up” clusters they all want. It’s not just about saving on the electricity bill; it’s about enabling architectures that were previously thermally or power-limited.
The Road Ahead And Real Hurdles
Now, let’s be a bit skeptical. Announcing a lab milestone at a conference is one thing; volume production and integration into actual AI server racks is a whole other beast. The collaboration with ams OSRAM on production is a vital step, proving they’re thinking about manufacturability. But the real test is in the ecosystem. Can they get their chiplet-based transceivers designed into next-gen GPU or memory architectures? Can their “parallel data” approach, which avoids high-speed serialization, be adopted widely enough to become a standard? The demo at 235°C is a fantastic reliability data point, but data center operators will want years of mean-time-between-failure stats. Still, the potential is massive. If they can truly deliver “tens of femtojoules” at scale, they’re not just incrementally improving interconnects—they’re potentially redefining the power budget for moving data. And in the AI arms race, that’s everything.
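To see what the "parallel data" approach means in practice, consider the lane math. The per-lane rate and energy figure come from the announcement; the 1 Tbps aggregate target is a hypothetical round number for illustration:

```python
# Sizing a "wide and slow" parallel link: many low-speed lanes
# instead of a few serialized high-speed ones.
lane_rate = 4e9          # bits/s per lane (from the announcement)
energy_per_bit = 80e-15  # J/bit (from the announcement)
target_bw = 1e12         # 1 Tbps aggregate -- a hypothetical target

lanes = target_bw / lane_rate           # lanes needed
link_power_w = energy_per_bit * target_bw  # raw optical-link power

print(f"Lanes needed:     {int(lanes)}")
print(f"Raw link power:   {link_power_w * 1e3:.0f} mW")
```

A terabit per second for tens of milliwatts of raw link power is the kind of budget that makes multi-rack scale-up plausible, but it also shows the catch: you need hundreds of lanes, which is exactly why dense micro-LED arrays and ecosystem buy-in on parallel interfaces matter so much.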
