Open Access Paper
Chip scale package fiber optic transceiver integration for harsh environments
Chuck Tabbert and Charles Kuznia
Proceedings Volume 10563, International Conference on Space Optics — ICSO 2014; 1056335 (17 November 2017); https://doi.org/10.1117/12.2304174
Event: International Conference on Space Optics — ICSO 2014, Tenerife, Canary Islands, Spain
Abstract
We present fiber optic technology for 850 nm, VCSEL-based embedded optical computing solutions. We introduce concepts for compact, rugged fiber optic transceivers that provide multi-channel operation at 12.5 Gbps per channel. The transceiver can be placed in close proximity to high performance ASICs to provide direct optical I/O between components. The transceiver is packaged with materials having matched coefficients of thermal expansion (CTE) and an expanded beam optical interface – these features offer survivability and operation over wide temperature ranges.

Introduction

There is considerable interest in the commercial markets in reducing the power consumption associated with data communications over copper interconnect. The availability of high performance ASICs, such as FPGAs, with tens of channels operating at data rates above 10 Gbps has created a trend to place optical transceivers near the ASIC. The objective is to minimize the signal loss and power consumption associated with driving high speed signals across copper traces. The traditional PCB layout places the optical transceivers near the edge of the PCB, far away from the centrally located ASICs. In this situation, more than 50% of the power consumed in both the ASIC and the transceiver is dedicated to driving high speed signals across the PCB. Optical interconnect also allows an increase in channel density without the EMI-related crosstalk penalties.

This solution requires a unique packaging approach compared to traditional fiber optic modules. Several companies are developing advanced packaging to make embedded optical modules (EOM) possible [1-4]. These efforts have produced compact 1 x 12 transmitter and receiver components (called microPODs™), with each channel operating at 10 Gbps.

However, the EOM components may be placed in such close proximity to the ASIC that the local temperature is much higher than the typical 70 C rating of fiber optic transceivers. High performance computing ASICs can draw ~100 W of power, raising the operating temperature of nearby components. There is also a trend for data centers to operate equipment at higher temperatures to reduce the cost of cooling. A transceiver that can operate reliably at higher temperature (~100 C) is needed for these applications.

Chip-Scale-Packaged Embedded Optical Modules

To operate at high temperatures, the EOM must be constructed of materials that can withstand high temperature (ideally compatible with solder reflow) and maintain efficient optical coupling over a wide temperature range.

The CORE is an ‘optical engine’ that performs the electrical-to-optical signal conversion. The CORE has an electrical wire-bond interface to the PCB. The CORE (see Figure 1) contains the transceiver ASIC, VCSEL (4x) and PIN (4x) arrays, collimating optics and mechanical features for alignment to a fiber connector. The CORE footprint is 8 x 8 mm2 and the height is 1.2 mm. The VCSEL has an efficient thermal path to the bottom of the CORE (with a measured ΔT < 8 C between the VCSEL and the package case in the configuration described in this paper).
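To put the measured rise in perspective, the minimal sketch below estimates the VCSEL temperature when the package case sits next to a hot ASIC. The per-channel dissipation is an assumed, illustrative value; the paper itself reports only the < 8 C rise.

```python
# Sketch: what the measured VCSEL-to-case rise implies for operation next to
# a hot ASIC. The per-channel VCSEL dissipation below is an assumed,
# illustrative value; the paper reports only the < 8 C rise.

P_VCSEL_W = 0.015          # assumed dissipation per VCSEL channel (~15 mW)
DELTA_T_C = 8.0            # measured worst-case VCSEL-to-case rise

r_th_jc = DELTA_T_C / P_VCSEL_W   # implied junction-to-case resistance, ~530 C/W

def vcsel_temp_c(case_temp_c, p_w=P_VCSEL_W):
    """Estimated VCSEL temperature for a given package-case temperature."""
    return case_temp_c + r_th_jc * p_w

# With the case at 100 C near a high-power ASIC, the VCSEL still sits near
# 108 C rather than tens of degrees hotter.
print(f"Rth(j-c) ~ {r_th_jc:.0f} C/W, VCSEL at 100 C case: {vcsel_temp_c(100):.0f} C")
```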

Figure 1:

The CORE is a flip-chip assembled optoelectronic component with integrated coupling optics.


The cross-section of the CORE in an FR-4 arrangement with a ruggedized vertical connector (RVCON™) is shown in Figure 2. The CORE-to-RVCON™ optical interface uses collimated beams (i.e., an expanded beam interface, but at a micro-scale). This relaxes the alignment tolerance at the connector interface. The interface uses four ‘expansion joints’ to accommodate the CTE mismatch between the RVCON™ and CORE materials. It has been verified over thermal cycling from -55 C to 125 C to be mechanically sound, with less than 1 dB loss and less than 1 dB variation in optical coupling.
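A short sketch illustrates why an expanded beam interface is needed across this temperature range. The CTE values and the span from the mechanical datum are illustrative assumptions, not measured properties of the RVCON™ or CORE materials.

```python
# Sketch: lateral misalignment from CTE mismatch between connector and CORE.
# CTE values and span below are illustrative assumptions only.

def cte_walk_off_um(cte_a_ppm, cte_b_ppm, span_mm, delta_t_c):
    """Differential expansion (micrometers) of two parts of length span_mm
    whose CTEs differ, over a temperature excursion delta_t_c."""
    delta_cte = abs(cte_a_ppm - cte_b_ppm) * 1e-6   # 1/C
    return delta_cte * (span_mm * 1e3) * delta_t_c  # um

# Example: a plastic connector body (~50 ppm/C) against a silicon/glass
# optical bench (~3 ppm/C), 4 mm from the mechanical datum, cycled over
# the full -55 C to +125 C range (180 C excursion).
shift_um = cte_walk_off_um(50, 3, 4.0, 180)
print(f"worst-case lateral walk-off ~ {shift_um:.0f} um")
# ~34 um: a large fraction of a 50 um MMF core, enough for multi-dB loss in a
# directly butt-coupled joint, but small compared to the diameter of a
# collimated beam, which is why the expansion joints plus the expanded beam
# interface hold coupling variation under ~1 dB.
```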

Figure 2:

The CORE cross-section with a top-down RVCON™ connector. The CORE is made of materials with matching CTEs, designed to survive temperature extremes, including solder reflow. The expanded beam interface to the RVCON offers tolerance to optical misalignment caused by the CTE mismatch between the CORE and RVCON.


Requirements for Embedded Optical Modules

The requirements for EOM components are much different from those of traditional pluggable fiber optic components (e.g., SFP, XFP, active optical cables). Placing fiber optic transceivers near the ASIC will require new approaches to packaging, thermal control, fiber connectors and qualification. While standard specifications have not been established for EOM components, we can anticipate the expected requirements.

Data rate – For communications between boards and over longer distances (i.e., rack-to-rack), the industry has moved to optical interconnects at rates of 10 Gbps and above (per channel). The lowest cost, most mature solution for aggregate bandwidths of 100 Gbps and link lengths up to 100 m is VCSEL-based short-reach technology (see Figure 3). The 100 m reach covers ~90% of the optical links in a typical data center application. The predominant arrangement is 12 channels operating in parallel. This technology is well qualified today, due to the large volume of 10 G to 14 G Ethernet/Fibre Channel products shipped into the commercial market.

Figure 3:

Normalized 100G link cost using SR, DFB LR and WDM (Courtesy B. Koley, Google Inc.)


The next node is expected to be 25 Gbps per channel. At 25 Gbps, the amount of signal-compensating electronics needed rises sharply, along with the costs of PCB material and connectors. Designers are looking to embed optical modules close to the electronics and move data optically between components on the board (‘chip-to-chip’). At this transition point copper interconnects become more expensive and optics more favorable at 25 Gbps, both inside and outside the system chassis. The technology for 25 Gbps has been demonstrated by several groups and is in active development. This development includes the VCSEL devices and the circuitry that performs equalization and forward error correction (FEC) on the MMF link.

Power consumption – The power consumption (or energy per bit transferred) of photonic links is dominated by the circuitry used to drive the light source and to detect the optical signal. This is true for any photonic technology currently in development (VCSELs, silicon-based modulators, or ring resonators): the circuitry accounts for ~90% of link power consumption. The most recent result from IBM shows 1.37 pJ/bit at 15 Gbps for a full link (see Figure 4) [5]. Note: in this discussion a ‘link’ is the electrical->optical->electrical conversion (no SERDES). VCSEL technology is currently the lowest power technology, since SMF components require a TEC for stabilization over temperature and their optical coupling is less efficient. However, much research is being performed on SMF technologies (e.g., silicon photonics and ring resonators), and they are expected to achieve similar power efficiency at the 40 Gbps to 50 Gbps per channel nodes.
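A minimal worked example, using the cited 1.37 pJ/bit at 15 Gbps [5], shows where a multi-channel EOM power budget lands; the 12-channel module count is an illustrative assumption.

```python
# Sketch: converting the quoted link efficiency into per-lane and per-module
# power. The 1.37 pJ/bit at 15 Gbps figure is the IBM result cited in the
# text [5]; the 12-lane module count is illustrative.

def lane_power_mw(energy_pj_per_bit, rate_gbps):
    """Electrical power of one optical lane (mW): pJ/bit * Gbit/s = mW."""
    return energy_pj_per_bit * rate_gbps

p_lane = lane_power_mw(1.37, 15)   # ~20.6 mW per lane
p_module = 12 * p_lane             # ~0.25 W for a 1 x 12 module
p_circuits = 0.9 * p_lane          # ~90% of a lane is driver/receiver circuitry
print(f"per lane: {p_lane:.1f} mW, 12-lane module: {p_module/1e3:.2f} W, "
      f"circuitry share per lane: {p_circuits:.1f} mW")
```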

Figure 4:

Power efficiency of a VCSEL link in 90-nm CMOS (Courtesy Clint Schow, IBM Research)


Temperature – The key issues facing EOM deployment will be the thermal operating environment and concerns about optics reliability in high heat environments. High performance ASICs can reach temperatures of 110 C, where most optics have reliability issues. The trend in large data centers is to raise internal system temperatures because of the high cost of cooling: increasing the temperature can reduce data center energy consumption by 2-5% per degree [6].

High heat environments can accelerate device failures (especially in lasers) and shift optical sub-assembly alignments.
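The thermal acceleration of wear-out can be estimated with a standard Arrhenius model, as in the sketch below; the 0.7 eV activation energy is a typical assumption for VCSEL wear-out mechanisms, not a value from this paper.

```python
# Sketch: Arrhenius acceleration of a thermally activated failure mechanism
# at elevated temperature. The 0.7 eV activation energy is an assumed,
# typical value for VCSEL wear-out, not a result from this work.
import math

K_BOLTZMANN_EV = 8.617e-5  # eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Acceleration factor between a use temperature and a hotter stress
    temperature (both in degrees C)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Moving a transceiver rated for a 70 C case next to an ASIC running ~100 C:
print(f"70 C -> 100 C wear-out acceleration ~ {arrhenius_af(70, 100):.1f}x")
```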

EOM based on Semiconductor Packaging

Figure 5 shows a transceiver component that is packaged within a standard ASIC package (100-pin QFN). The elimination of leads allows for high speed operation. The part is assembled by wirebonding the CORE component into the QFN package and adding support for the fiber cable attachment. The transceiver has room for additional ASICs and can be configured with built-in optical time domain reflectometry (OTDR) using an external micro-controller, or as a standard transceiver (no OTDR) with an integrated microcontroller.

Figure 5:

Transceiver in a 100-pin QFN package.


There are ancillary benefits to a flip-chip assembly approach to building transceivers. The package is designed for signal integrity (reduced signal loss and crosstalk) and supports data rates at future nodes of 10, 17 and 25 Gbps (defined by the commercial VCSEL market). Quad transmit and receive ASICs for data rates up to 12.5 Gbps are available in a form factor designed for CORE packaging. Figure 6 shows the transmitter evaluation for 10 GbE applications.

Figure 6:

10.3125 Gbps operation of circuits designed for incorporation within next generation CORE components.


Built-in-Test – As fiber becomes more prevalent for short distance links, fiber networks can include many short-span links (chip-to-chip, board-to-board, etc.). In a large scale deployment, the fiber system may be vulnerable to fiber faults, especially at the connection interfaces. To address the cost associated with fiber system maintenance and to enhance overall network availability, NAVAIR initiated programs to develop built-in-test (BIT) within the transceiver components. BIT capability can detect and isolate faults within the transceiver and along the fiber path, allowing for quick and accurate resolution.

BIT technology can monitor both transmit and receive average signal strength (link loss) and the amplitude of the eye opening (valid signal). Advanced BIT can perform OTDR by incorporating a timing ASIC (with pulse generation and detection capability). Figure 7 shows an OTDR measurement using this timing ASIC coupled to an optical transceiver.
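The sketch below shows the simple time-of-flight relation behind such an OTDR and the timing resolution implied by the < 5 cm fault localization of Figure 7; the fiber group index used is a typical assumed value, not one given in the paper.

```python
# Sketch: OTDR time-of-flight to distance conversion. The group index below
# is a typical assumed value for multimode fiber, not taken from the paper.

C_VACUUM = 2.998e8   # m/s
N_GROUP = 1.48       # assumed group index of the fiber

def fault_distance_m(round_trip_ns):
    """Distance to a reflective fault from the round-trip echo delay."""
    return (round_trip_ns * 1e-9) * C_VACUUM / (2.0 * N_GROUP)

def required_timing_ps(resolution_m):
    """Round-trip timing resolution needed for a given distance resolution."""
    return 2.0 * N_GROUP * resolution_m / C_VACUUM * 1e12

print(f"1 ns round trip -> fault at {fault_distance_m(1.0):.3f} m")
print(f"5 cm resolution -> ~{required_timing_ps(0.05):.0f} ps timing resolution")
```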

Figure 7:

High-resolution (< 5 cm) OTDR measurement using next generation transceiver ASICs.


Removable pigtail – A fiber connector allows the user to swap out a damaged fiber pigtail without replacing the entire transceiver (see Figure 8). Since fiber cables are not rated for the temperature profile of solder reflow, the removable pigtail allows creation of a transceiver that can survive the pick-and-place solder reflow process. There is a cost savings associated with the ability to assemble, replace and re-work transceivers using a standard reflow process.

Figure 8:

EOM transceiver with removable fiber connector.


Next Generation EOM with Chip Scale Packaging

Next generation EOMs can bypass the semiconductor package and implement the transceiver with chip scale packaging (CSP). The CORE, as a stand-alone transceiver, eliminates extraneous parts and offers electrical I/O paths that will support bandwidths of 25 Gbps. The concept is shown in Figure 9.

Figure 9:

CSP transceiver design. This is a component that can be soldered directly to a PCB.


The component is assembled on a transparent carrier, which can be either sapphire or glass. We currently use sapphire (as our existing component includes some support circuitry, implemented in silicon-on-sapphire), but we plan to migrate to glass to reduce costs. The transceiver ASIC, OE devices and electrical I/O are on the bottom side of the carrier. A lens component is aligned and attached to the top of the carrier. This component is a stack containing sealed collimating lenses and a silicon layer. The silicon layer has mechanical features for attachment of a fiber connector and openings to allow the light to pass.

In this configuration, the transparent carrier has electrical signal routing that interconnects the ASICs, OE chips and copper posts. The carrier is created in a wafer-level process that forms copper posts (sometimes called ‘pillars’) topped with solder caps. This process was developed to support flip-chip ASIC packaging and is a variant of IBM’s C4 (controlled collapse chip connection) process.

The carrier size is 7.7 mm x 8.3 mm with 80 electrical I/O. While the current layout is for a 4+4 format, we have reserved the area for the additional I/O needed for a 1 x 12 format (either a 1 x 12 transmitter or a 1 x 12 receiver). Therefore, the carrier will support either the 4+4 or the 1 x 12 format with a simple change to the routing metallization on the carrier.

The electrical connections on the carrier must support high speed routing. The copper-post spacing and routing were designed for a 50-ohm impedance match. We modeled the electrical crosstalk between channels and found better than 30 dB of isolation.
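For context, the sketch below translates that isolation figure into a coupled signal amplitude; the 400 mV driver swing is an illustrative assumption, not a specification from this paper.

```python
# Sketch: what 30 dB of channel-to-channel isolation means in signal terms.
# The 400 mV swing below is an assumed, illustrative driver output level.

def coupled_fraction(isolation_db):
    """Voltage fraction coupled onto a victim trace for a given isolation."""
    return 10 ** (-isolation_db / 20.0)

swing_mv = 400.0
frac = coupled_fraction(30.0)   # ~3.2% of the aggressor amplitude
print(f"coupled amplitude: {frac*100:.1f}% of the aggressor, "
      f"or ~{swing_mv*frac:.0f} mV on a {swing_mv:.0f} mV swing")
```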

Early integration of this CSP approach is shown in Figure 10 below.

Figure 10:

CSP transceiver (4 x 4 mm).


Conclusion

We have presented methods of creating compact fiber optic transceivers that can operate over a wide temperature range. This approach promises to significantly reduce the cost of transceiver components and assembly processes, bringing the cost in line with that of current commercial transceivers. The approach also enables the incorporation of advanced built-in-test and solderable transceiver components.

Acknowledgements

This work was performed under the following SBIR contracts:

  • Army Research Lab: Michael Gerhold (TPOC)

  • Dept of Energy: Rich Carlson (TPOC)

  • NAVAIR: Mark Beranek (TPOC)

References

[1] L. Huff, “State of the Short-Reach Optics Market,” in Optical Fiber Communication Conference (2011).

[2] L. Schares, “Optics in Future Data Center Networks,” in High Performance Interconnects (HOTI), 2010 IEEE 18th Symposium, 104–108 (2010).

[3] X. Zheng, “BGA package integration of electrical, optical, and capacitive interconnects,” in Electronic Components and Technology Conference, 59th, 191–195 (2009).

[4] T. E. Wilson, “1000-Gb/s Hybrid Macro for Direct-on-Die Integration of Terabit Optical Link Interconnect in Large Silicon CMOS SOCs,” in 36th Annual GOMACTech Conference, March 21-24 (2011).

[5] C. P. Lai, C. L. Schow, A. V. Rylyakov, B. G. Lee, F. E. Doany, R. A. John, and J. A. Kash, “20-Gb/s Power-Efficient CMOS-Driven Multimode Links,” in Proceedings of OFC (2011).

[6] N. El-Sayed, “Temperature Management in Data Centers: Why Some (Might) Like It Hot,” SIGMETRICS, 12 (2012).