Age of Interconnection in AI Semiconductors
AI value is shifting from raw compute to memory, packaging, networking, optics, and power delivery.
AI workloads are increasingly constrained by data movement rather than arithmetic. Companies that control bandwidth, the proximity of memory to compute, power efficiency, and scale-up or scale-out connectivity can capture a growing share of semiconductor profits.
| Instrument | Side | Reason |
|---|---|---|
| NVDA | Long | NVIDIA controls a vertically integrated AI infrastructure stack spanning GPUs, NVLink, networking, systems, and rack-scale architectures, giving it leverage as data movement becomes central to AI performance. |
| AVGO | Long | Broadcom benefits from Ethernet switching, custom AI silicon partnerships, and connectivity silicon that becomes more valuable as hyperscale AI clusters expand. |
| ALAB | Long | Astera Labs sells PCIe, CXL, and retimer products that help AI servers move data across chips and memory tiers, positioning it directly in the interconnect bottleneck. |
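The data-movement claim above can be made concrete with a back-of-envelope roofline check: a kernel is bandwidth-bound when its arithmetic intensity (FLOPs per byte moved) falls below the ratio of peak compute to peak memory bandwidth. The hardware numbers below are illustrative assumptions, not any vendor's published specifications.

```python
# Roofline sketch: classify a kernel as compute-bound or bandwidth-bound.
# PEAK_FLOPS and PEAK_BANDWIDTH are assumed, illustrative figures.
PEAK_FLOPS = 1.0e15          # 1 PFLOP/s of dense math (assumed)
PEAK_BANDWIDTH = 4.0e12      # 4 TB/s of HBM bandwidth (assumed)

def bound(flops: float, bytes_moved: float) -> str:
    """Classify a kernel by arithmetic intensity (FLOPs per byte)."""
    intensity = flops / bytes_moved
    ridge = PEAK_FLOPS / PEAK_BANDWIDTH   # intensity where the two limits cross
    return "compute-bound" if intensity >= ridge else "bandwidth-bound"

# A large fp16 matrix multiply reuses each byte thousands of times:
n = 8192
gemm_flops = 2 * n**3                 # multiply-adds for C = A @ B
gemm_bytes = 3 * n * n * 2            # read A and B, write C, 2 bytes/element
print(bound(gemm_flops, gemm_bytes))  # compute-bound

# A memory-heavy op moving ~1 byte per FLOP hits the bandwidth wall:
print(bound(1e12, 1e12))              # bandwidth-bound
```

Under these assumed peaks the ridge point sits at 250 FLOPs per byte, which is why dense matrix math saturates compute while attention-style, memory-heavy operations sit on the bandwidth roof.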
Advanced packaging becomes a strategic AI supply constraint.
CoWoS, hybrid bonding, interposers, and substrates are becoming decisive because modern AI accelerators need HBM placed close to compute and multiple dies integrated inside one package.
| Instrument | Side | Reason |
|---|---|---|
| TSM | Long | TSMC can monetize scarce integration capacity through CoWoS and next-generation packaging as leading AI chip designers compete for access to HBM-adjacent compute. |
HBM turns memory from commodity into architecture.
AI accelerators are increasingly designed around memory bandwidth, capacity, and hierarchy. HBM scarcity and HBM4 adoption can support unusually strong pricing power for leading memory suppliers.
| Instrument | Side | Reason |
|---|---|---|
| 000660.KS | Long | SK hynix has leading HBM share and can benefit from sustained AI memory scarcity, premium HBM4 pricing, and rising bandwidth requirements in accelerator designs. |
| 005930.KS | Long | Samsung can gain if it improves execution in HBM and uses its scale in memory manufacturing to capture demand from hyperscaler and accelerator customers. |
Optical interconnect moves closer to the chip package.
Copper links face reach, power, and signal-integrity limits at AI cluster speeds. Co-packaged optics and optical engines can become essential for high-bandwidth, energy-efficient AI networking.
| Instrument | Side | Reason |
|---|---|---|
| MRVL | Long | Marvell has positioned itself in optical scale-up and AI connectivity through silicon photonics, custom silicon, and the Celestial AI acquisition. |
| COHR | Long | Coherent can benefit from rising optical component demand as AI data centers require higher-speed and lower-power links beyond the limits of copper. |
| LITE | Long | Lumentum is exposed to optical communications demand that can expand as AI clusters require dense, high-speed optical connectivity. |
AI rack power delivery becomes a semiconductor profit pool.
Megawatt-scale AI racks make legacy low-voltage power distribution inefficient. 800V DC architectures, GaN, SiC, VRMs, and high-efficiency power conversion can become strategic infrastructure for AI factories.
| Instrument | Side | Reason |
|---|---|---|
| MPWR | Long | Monolithic Power Systems can benefit from rising demand for efficient power management and conversion inside high-density AI servers and racks. |
| IFX.DE | Long | Infineon has power semiconductor exposure that can gain from high-voltage AI rack architectures, SiC, GaN, and data-center power conversion. |
| TXN | Long | Texas Instruments can participate in the AI power chain through analog and power-management components needed for efficient voltage conversion and control. |
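The physics behind the 800V DC shift is Ohm's law: for a fixed power draw, current scales inversely with voltage, and resistive loss scales with the square of the current. The rack power and path resistance below are assumed, illustrative values for a single rack feed.

```python
# I^2*R loss in a distribution path for fixed delivered power.
# P and R are illustrative assumptions, not measured rack data.
def i2r_loss_watts(power_w: float, volts: float, resistance_ohm: float) -> float:
    """Resistive loss when delivering power_w at volts through resistance_ohm."""
    current = power_w / volts        # I = P / V
    return current**2 * resistance_ohm

P = 120_000   # 120 kW rack (assumed)
R = 0.002     # 2 milliohm distribution path (assumed)

loss_48v = i2r_loss_watts(P, 48, R)     # 2500 A feed -> 12.5 kW lost
loss_800v = i2r_loss_watts(P, 800, R)   # 150 A feed  -> 45 W lost
print(round(loss_48v), round(loss_800v))
```

Raising the distribution voltage from 48V to 800V cuts the loss by a factor of (800/48)^2, roughly 278x in this sketch, which is why high-voltage conversion silicon becomes a profit pool at megawatt rack scale.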
The content on this page is for informational purposes only and does not constitute financial advice. Stoquate is not a licensed financial advisor. Always conduct your own research and consult a qualified professional before making any investment decisions.