Verizon sees an opportunity to substantially reduce the cost of delivering data services on shorter routes using equipment supporting speeds of 200 Gbps on a single fiber. The carrier anticipates deploying the technology in the second half of 2014 after it is released to the market in the first half of the year, said Steven Gringeri, Verizon engineering fellow, in an interview.
Telecompetitor checked in with Verizon after learning that the company had successfully completed a trial of a 200G connection using pre-production software on Ciena optical networking equipment. The trial used a single wavelength on an 88-wavelength system between New York and Boston that also carried production traffic. The production network supports speeds of 100 Gbps per wavelength, and the system uses traditional 50GHz wavelength spacing, Gringeri said.
The higher-speed connection comes with a trade-off, Gringeri explained. While a 100G wavelength can carry signals for 2,000-2,500 kilometers without regeneration, that range is reduced to 500-800 km for a 200G wavelength, he said. The 200G connection uses 16QAM technology, which doubles the amount of data that can be carried in comparison with the QPSK technology used with 100G. But the 16QAM approach is more sensitive to noise, thereby reducing reach.
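The doubling follows directly from the modulation formats: QPSK encodes 2 bits per symbol, while 16QAM encodes 4. A minimal sketch of that arithmetic, using an assumed symbol rate typical of 100G coherent systems (the article does not give one):

```python
import math

# Bits carried per transmitted symbol for a given constellation size.
def bits_per_symbol(constellation_points: int) -> int:
    return int(math.log2(constellation_points))

qpsk = bits_per_symbol(4)    # QPSK: 4 constellation points -> 2 bits/symbol
qam16 = bits_per_symbol(16)  # 16QAM: 16 points -> 4 bits/symbol

# Holding the symbol rate and 50GHz channel spacing constant, moving from
# QPSK to 16QAM doubles the data rate on the same wavelength. The ~32 Gbaud
# figure and dual polarization below are illustrative assumptions.
symbol_rate_gbaud = 32
polarizations = 2
print(symbol_rate_gbaud * polarizations * qpsk)   # -> 128 raw Gbps (~100G after overhead)
print(symbol_rate_gbaud * polarizations * qam16)  # -> 256 raw Gbps (~200G after overhead)
```

Packing more bits into each symbol places the constellation points closer together, which is why the denser 16QAM signal tolerates less accumulated noise and reaches only 500-800 km unregenerated.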
“We don’t want to have to regenerate,” said Gringeri. Doing so, he said, would negate some of the benefits of using 200G.
He added, however, that “there is a lot of demand in the 500- to 800-km range.”
The eastern U.S., in particular, has numerous city pairs that exchange high volumes of traffic with one another and are located between 500 and 800 km apart, he said.
On those routes, Verizon sees the potential for major cost savings from using 200G. “The cost of hardware is incrementally higher,” said Gringeri. And that, he said, means that “the cost per bit is nearly half.”
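The cost-per-bit claim is simple arithmetic: capacity per wavelength doubles while the hardware premium is small. A quick sketch, with the premium figure below an illustrative assumption rather than a number from the article:

```python
# Capacity doubles (100G -> 200G) on the same wavelength, while hardware
# cost rises only incrementally. The 15% uplift is an assumed example.
capacity_100g, capacity_200g = 100, 200  # Gbps per wavelength
cost_100g = 1.0                          # normalized hardware cost
cost_200g = cost_100g * 1.15             # assumed incremental premium

cpb_100g = cost_100g / capacity_100g     # cost per Gbps at 100G
cpb_200g = cost_200g / capacity_200g     # cost per Gbps at 200G
print(cpb_200g / cpb_100g)               # -> 0.575, i.e. cost per bit nearly halved
```

Any premium well under 100% leaves the 200G cost per bit below the 100G figure, which is why the savings hold on routes short enough to avoid regeneration.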
As network operators seek to boost network speeds, they will increasingly find themselves weighing the trade-offs between reach and data rates. Orange Business Services, for example, has deployed 400 Gbps connectivity using two wavelengths and equipment from Alcatel-Lucent on a high-traffic 560-km route between Paris and Lyon.
Equipment developers also are looking at boosting bandwidth by minimizing the spacing between wavelengths and combining multiple wavelengths into super-channels. Some network operators have done trials of speeds as high as 1 Tbps using that approach.
Good to see Moore's law is alive and well in transport. Now if we could get Metcalfe's law (the network effect) working everywhere, we could have a real explosion in the last mile (wired and wireless access). The real problem with transport pricing is a disconnect between total supply and demand. This is both a relative and an absolute pricing issue. In relative terms, it is stock vs. flow pricing. In absolute terms, it is pricing that reflects marginal cost and gets as much demand elasticity as possible into the ecosystem to amortize the investment rapidly. Average-cost-based pricing is non-generative and market-limiting.