Results 1 to 5 of 5
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,966

    [EN] AT&T: 400 Gbps Ethernet connection between NY, DC

    400GbE Connection Established During First Phase of Trials

    AT&T Inc.
    Mar 20, 2017


    DALLAS, March 20, 2017 /PRNewswire/ -- AT&T* successfully completed the first phase of a multi-phase trial testing 400 gigabit Ethernet data speeds. This brings us one step closer to quadrupling network speeds for businesses.

    In the field trial, we established a 400GbE connection between New York and Washington, D.C. This proved the AT&T nationwide software-centric network is ready for next-generation speeds.

    400GbE end-to-end service was transported across the network, which was carrying live traffic. A software-defined network (SDN) controller created a service along the direct path between the two cities, and through software control rerouted the service to a second path to simulate a response to a network failure.
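    The rerouting step is conceptually simple: the controller knows a primary and a backup path for the service and switches the service over when a failure is reported. Below is a minimal sketch of that control flow in Python; the path names and the simulated failure are purely illustrative, since the press release does not describe AT&T's actual controller interface.

```python
# Minimal sketch of SDN-style path protection for a point-to-point service.
# Path names and the failure event are hypothetical; this only illustrates
# the "provision on the direct path, reroute to a second path on failure" flow.

class Service:
    def __init__(self, name, primary_path, backup_path):
        self.name = name
        self.primary_path = primary_path
        self.backup_path = backup_path
        self.active_path = None

    def provision(self):
        # The controller installs the service on the direct (primary) path.
        self.active_path = self.primary_path
        print(f"{self.name}: provisioned on {self.active_path}")

    def handle_failure(self, failed_path):
        # On a failure notification affecting the active path, the controller
        # switches the service to the pre-computed backup path.
        if self.active_path == failed_path:
            self.active_path = self.backup_path
            print(f"{self.name}: rerouted to {self.active_path}")


svc = Service("NY-DC-400GbE",
              primary_path="NY->Philadelphia->DC",      # hypothetical route
              backup_path="NY->Pittsburgh->DC")         # hypothetical route
svc.provision()
svc.handle_failure("NY->Philadelphia->DC")              # simulated failure
```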

    Late last year, we announced our intention to be the first in the industry to demonstrate 400GbE service across our production network, aligning with our shift toward a software-centric network.

    Traffic on the AT&T network continues to grow. 400GbE speeds will allow our business customers to transport massive amounts of data faster than ever. That also means faster uploads and downloads and ultra-fast video streaming.

    "Our approach to roll out the next generation of Ethernet speeds is working. We continue to see enormous data growth on our network, fueled by video. And this will help with that growth," said Rick Hubbard, senior vice president, AT&T Network Product Management.

    Next-generation speeds like 400GbE can help transform the way our customers do business.

    We're moving on to the second phase – a 400GbE end-to-end service transported across the AT&T OpenROADM metro network to the customer. This will show the network is ready for 400GbE to serve customers in metro areas.

    Phase 3 will test the first instance of a 400GbE open router platform. The "disaggregated router" platform uses merchant silicon and open source software – another industry first.

    *AT&T products and services are provided or offered by subsidiaries and affiliates of AT&T Inc. under the AT&T brand and not by AT&T Inc.

    http://www.prnewswire.com/news-relea...300426096.html

  2. #2
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,966

    Adva targets new data center interconnection opportunities with 600G technology

    Sean Buckley
    Mar 20, 2017

    Adva Optical Networking is looking to further its position in the data center interconnection (DCI) space, introducing its TeraFlex terminal for its flagship FSP 3000 CloudConnect solution.

    The TeraFlex solution is capable of transporting 600 Gbps of data over a single wavelength, delivering total duplex capacity of 3.6 Tbps in a single-rack unit.

    Adva claims this density is 50% greater than that of competing technology.
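    The headline figures imply a simple wavelength count per rack unit. Here is a quick back-of-the-envelope check in Python; the six-carrier split and the competitor figure are inferred from the stated numbers, not given by Adva.

```python
# Back-of-the-envelope check of the TeraFlex density figures quoted above.
per_wavelength_gbps = 600        # stated capacity per wavelength
per_ru_duplex_tbps = 3.6         # stated duplex capacity per rack unit

wavelengths_per_ru = per_ru_duplex_tbps * 1000 / per_wavelength_gbps
print(wavelengths_per_ru)        # -> 6.0 carriers of 600G in one rack unit

# A "50% more density" claim implies the nearest competitor fits roughly
# 2.4 Tbit/s in the same space (an inference, not an Adva figure):
competitor_tbps = per_ru_duplex_tbps / 1.5
print(competitor_tbps)           # -> 2.4
```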

    Stephan Rettenberger, SVP of marketing and investor relations for ADVA Optical Networking, told FierceTelecom that the latest version builds upon its previous 200G capabilities. Adva is working closely with Acacia on the digital signal processing (DSP) front.

    “Acacia is introducing a new generation of technology that will give us 600 Gbps on a single wavelength,” Rettenberger said. “We believe that is the very best use of the technology that money can buy and we will be the early adopters of that technology.”

    Rettenberger added that these new technologies will help raise the bar in the DCI market segment.

    “I think this is the new industry benchmark and it’s the Acacia piece that makes the difference in this solution,” Rettenberger said. “It’s an interesting footprint for those who care about maximum spectral efficiency, highest possible throughput, and lowest power consumption and that’s the hero box where everyone is trying to make noise.”

    DCI momentum growing

    Having provided optical solutions to the data center market for over 20 years, Adva is no stranger to the DCI market.

    However, it’s clear from the vendor’s fourth-quarter earnings report that DCI is becoming a larger part of its revenue stream.

    Adva reported annual 2016 revenues of 608.3 million, up 28.2% year-over-year. According to Ovum, these record sales have given the vendor global market leadership in several areas of the growing DCI market.

    “We grew our 2016 top line revenue, and close to 30% was organic,” Rettenberger said. “A lot of that came from internet content providers and data center interconnect applications.”

    Open line systems (OLS) provide flexibility

    One of the key elements of the new system is flexibility. Featuring open APIs and management interfaces, the ADVA FSP 3000 CloudConnect platform supports all known DCI architectures.

    But what’s even more compelling about the new product is that it is available as a complete solution or as a disaggregated Open Line System (OLS).

    By decoupling the terminal functions from the line system, customers are able to evolve and optimize each network layer separately, according to their specific innovation cycles.

    There are various options to disaggregate optical functions. One option is to separate transponders/modems from the line equipment such as ROADMs and amplifiers. Because this requires interoperability between the different equipment types, IHS Markit says service providers gain the advantage of not being locked into one equipment vendor. It is also expected to allow for more flexibility in equipment upgrade cycles, and to reduce capex costs.
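    One way to picture that disaggregation is as two independently sourced and independently upgradeable layers with an interoperable boundary between them. The toy model below is only illustrative; the vendor names and fields are invented and do not describe any specific product.

```python
# Toy model of a disaggregated optical network: transponders/modems are
# decoupled from the open line system (ROADMs, amplifiers) and can be
# sourced and upgraded on independent cycles. All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transponder:
    vendor: str
    line_rate_gbps: int      # e.g. 200, 400, 600
    modulation: str          # e.g. "16QAM", "64QAM"

@dataclass
class OpenLineSystem:
    vendor: str
    roadm_degrees: int
    amplifier_spans: int
    attached: List[Transponder] = field(default_factory=list)

    def attach(self, t: Transponder):
        # Interoperability lives at this boundary: any compliant transponder
        # can light a wavelength on this line system, regardless of vendor.
        self.attached.append(t)

ols = OpenLineSystem(vendor="VendorA", roadm_degrees=9, amplifier_spans=12)
ols.attach(Transponder(vendor="VendorB", line_rate_gbps=600, modulation="64QAM"))
ols.attach(Transponder(vendor="VendorC", line_rate_gbps=400, modulation="16QAM"))
print(len(ols.attached), "transponders from", sorted(t.vendor for t in ols.attached))
```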

    Heidi Adams, senior research director of transport networks at IHS Markit, told FierceTelecom that the adoption of flexible coherent technology and higher-speed wavelengths is driving change into optical line systems.

    “Data center interconnect (DCI) is emerging both as a growth market for optical equipment and as an application that is powering innovation in optical transmission equipment and operations,” Adams said. “Optical disaggregation and open line systems (OLS) have also sparked many debates over the course of the past year.”



    Adams said that optical disaggregation has been getting the attention of a number of Tier 1 service providers.

    “AT&T is a strong supporter of this type of approach and has backed the OpenROADM MSA (multi-source agreement),” Adams said. “Our recent service provider survey indicated this is currently the most preferred ‘variant’ of optical disaggregation with that audience.”

    However compelling OLS may be, IHS Markit says that a number of service providers remain undecided on how they will use the new technology.

    The research firm said that “one-third of survey respondents indicated that they are considering the use of OLS in their networks, but half said they are undecided or not familiar with the technology.”

    http://www.fiercetelecom.com/telecom...gth-technology

  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,966

    Ciena’s Anthony McLachlan on Submarine Cable Systems

    Rob Powell
    March 21st, 2017

    The world of submarine cable systems has been evolving quite rapidly in recent years. With new cables coming online and big content increasingly driving things, equipment providers are shifting to meet new requirements and keep up with the pace of bandwidth demand. With us today to give his perspective on the subject is Anthony McLachlan, VP & GM Asia Pacific at Ciena. Anthony came to Ciena via Nortel, and looks after the company’s global submarine business as well as the APAC region.

    TR: What has been the biggest technology driver over the last decade or so in the submarine cable business?

    AM: I would say coherent modems were a bit of a game changer, because it was a nice first foray into driving a form of openness in subsea cables. It had typically been an original vendor provider who both built the wet plant and then also the custom transport stuff for you in the network. Coherent networks allowed us to open these things up, embracing different sorts of capacities on the cable and having a meaningful effect on how capacity is consumed across the network.

    TR: What do you see as the next drivers of technological change in the submarine cable business?

    AM: We’re getting into this highly connected world, and so we're going to need lots more connectivity: new cables, different types of cables, and plenty of innovation to help open them up and get at the bandwidth. We were at 2.5G then 10G then 40G then 100G and now in some cases 200G. Are we going to continue to mine the spectrum for the best performance? We've done it on existing cables, and as new cables come out, we can put more capacity on those. But it will be a combination of spectral efficiency and lowering the cost of bandwidth carried. I think we're also going to see a blurring of terrestrial and submarine. In both cases, it's going to start to be about how you connect all these things together to build more resilient networks, and how you drive a lot more software automation and control into the networks. Just look at our customers, and their customers, and the webscale guys and how they consume bandwidth. It's not a surprise you will see that in submarine, because it's where we see the direction going in terrestrial. Sure, it's still going under the water, but we're all going toward this datacenter-to-datacenter model. There's no reason the submarine industry needs to be a laggard in that space. I see a lot of work being done in the software end of town.

    TR: What do you mean by moving toward a data center-to-data center model for submarine networks?

    AM: In terrestrial networks we have long talked about datacenter-to-datacenter or datacenter-to-user, but in submarine we started out being beach-to-beach and then PoP-to-PoP. But it's data center-to-data center that we are heading toward now. I think it's fairly ubiquitous that it should be about seamless networks and resilience.

    TR: Where do you see software playing a bigger role?

    AM: When you get into the software piece, there are lots of opportunities: automation of provisioning services to restoration service, the mining of data for analytics, and the prediction of traffic flows and network performance. From an underlying technology perspective, a lot of the toolsets are available but a lot of the software is not yet. I think it will all enable a more resilient network and take out a lot of the costs in the process. Additionally, some of the techniques we use in terrestrial packet networking to get more bandwidth and subscription services in there are coming into submarine. There should be a range of client interfaces and flexibility, leveraging all the subscription side of packet networking where you can oversubscribe and get at unused bandwidth in a predictive, deterministic way. I think we'll be able to create a whole range of new operational and commercial options.

    TR: How will Ciena’s approach differ between the submarine and terrestrial technologies going forward?

    AM: They are very much alike. Our software engines are coming through Blue Planet and will be on the same platform. There is of course some secret sauce that we put in for submarine to make sure we can do greater distances and get the best efficiency out of the transponders and DSPs and modulation schemes that we use. There will also be some applications unique to submarine, but from a general family of how we open up the interfaces and APIs, it will be the same, and reasonably seamless for our customers. I do think the application of open networks and that whole competitive environment is a healthy thing. Open networking allows our customers to be in control and have a choice around best-of-breed options, and Ciena is a big advocate for that. We don't provide wet plant, for instance, and we don't foresee changing that. We are conditioned to work over the top of whatever wet plant provider may be there.

    TR: How has the rise of big content players operating global networks affected the direction of Ciena’s product set?

    AM: The webscale or hyperscale guys are very big consumers of bandwidth and they have a consumption-based view of network bandwidth, moving workloads back and forth. They have probably become, by any measure, the biggest consumer of it, and they are changing how the consortium structures of cables are built—and are building some of their own cables as well. Their networking requirements aren't always the same as some of the classic telecoms requirements, but are sometimes simpler. They may in some cases just want Ethernet, in others they still want OTN, but at the core it is around how to improve spectral efficiency and deliver the lowest cost per bit carried. They are asking a lot of probing questions around performance and how we build products. Ciena has gone to market with a product tailored to webscale providers, Waveserver. It uses the same WaveLogic DSP technology, but in a different form factor so the webscale guys can consume it differently. So it is creeping into how we build networks, but the way we are approaching it is that the fundamental building blocks around our WaveLogic DSP and all the goodness that sits there will be carried through, and it's really the packaging and presentation that may change depending on the customer type and needs.

    TR: What’s on deck for Ciena technology-wise in submarine cable systems?

    AM: Continued focus on how we drive scale and performance, mining the spectrum to drive more bandwidth across, a lot of new techniques coming to market, more choice around modulation types, software automation, and more controls in the hands of customers. We'll look at other techniques around C and L band to ensure that we are adaptive to what the market needs there. We'll do this both for existing cables and the new cable markets.

    TR: What’s the biggest challenge or hurdle you see ahead of the industry?

    AM: With the webscale players, the dynamic of the industry, compared to the past, is that there is a continued strong appetite for bandwidth for the foreseeable future. The challenge for us as an industry is to be able to ensure we can meet those demands in a timely fashion. That will cause new cables to be built, but it will also be about how we adapt and continue to use the cables we already have for the best return for customers. That means finding ways to reduce costs in the network and looking at new techniques and ways of doing business.

    TR: How close are the physical limits on bandwidth in fiber really?

    AM: Everyone talks about these limits, but I look at it like an Olympic race. I’m always surprised every Olympics, when there’s another unbreakable record broken. You think “how fast can these guys go?” You are always surprised to see the continued innovation pushing the envelope every year. Sure there’s a degree of physics there, but I think the techniques are still going to be looked at. Progress will come from a combination of several different things: cable design and capability, electronics, modulation schemes, and how all these things gel together. I think it’s about how you mine the spectrum. For example, we’ve introduced flexible grid, optimizing the amount of bandwidth across the spectrum and getting more efficiency. Ciena will continue to work at how we can extract bandwidth out of a network over and above individual links. As a collective, there’s an opportunity to extract more bandwidth by knowing where the traffic is used.
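    The flexible-grid point can be made concrete. On a fixed grid, every channel occupies the same 50 GHz slot whether it needs it or not; on a flexible grid, each carrier is allocated only as many 12.5 GHz slices as its modulation requires, which frees spectrum for more carriers. The sketch below uses representative channel widths, not Ciena figures.

```python
# Simplified comparison of fixed-grid vs flexible-grid channel packing across
# a ~4.8 THz C-band. Channel widths are representative values, not Ciena's.

C_BAND_GHZ = 4800

# Fixed grid: every channel gets a 50 GHz slot regardless of what it needs.
fixed_slot_ghz = 50
print("fixed-grid channels:", C_BAND_GHZ // fixed_slot_ghz)          # 96

# Flexible grid: channels are sized in 12.5 GHz slices to fit each carrier.
# Example mix: sixty 37.5 GHz carriers plus thirty 75 GHz carriers.
demands_ghz = [37.5] * 60 + [75.0] * 30
used = sum(demands_ghz)
print("flex-grid carriers:", len(demands_ghz),
      "fit:", used <= C_BAND_GHZ, f"({used:.0f} GHz used)")
```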

    TR: Thank you for talking with Telecom Ramblings.

    http://www.telecomramblings.com/2017...hlan-submarine

  4. #4
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,966

    Nokia & Facebook Push Undersea Fiber to 32 Tbit/s

    The Nokia experiment used a 5,500km fiber pair that Facebook had purchased.

    Craig Matsumoto
    3/21/2017

    Nokia and Facebook have announced a test where they've jammed 32 Tbit/s down one undersea cable, more than double the bandwidth normally achieved today.

    The technology, involving a Bell Labs invention called probabilistic constellation shaping (PCS), wouldn't be commercialized for at least three years, estimates Kyle Hollasch, director of product marketing for optical at Nokia Corp.

    Rather, the announcement is a classic OFC post-deadline paper -- a late research result submitted in hopes of getting a Thursday evening presentation slot. Post-deadline papers often feature what are called hero experiments -- eye-popping numbers related to the next generation of speed, size, power or bandwidth.

    Nokia and Facebook's result "shows these cables have a lot of running room," Hollasch says.

    Web-scale operators are running their own undersea networks. They'd been participating in consortia -- pooling resources with other companies to invest in undersea fiber -- but are now taking sole ownership of some fiber spans or even laying their own cables under the sea.

    You might think they'd be better off leasing capacity, but that model doesn't work for them, said Vijay Vusirikala, a network architect at Google, during yesterday's OIDA Executive Forum. Capacity isn't always available in places where Google needs it. And the pricing of leased capacity doesn't fit Google's traffic patterns, which involve unpredictable bursts of demand.

    The Nokia experiment used a 5,500km fiber pair that Facebook had purchased. According to Hollasch, the project started when Stephen Grubb, Facebook's global optical network architect, was impressed by Nokia's 1Tbit/s terrestrial experiment at ECOC last fall. (See Nokia, DT Break Terabit Barrier.)

    "He said, 'I have a 5,500km cable under the ocean -- how'd you like to test on that?" Hollasch says. "You can imagine how a bunch of engineers reacted to that. This was Christmas and their birthday all at once."

    Typical undersea fiber capacity using 100Gbit/s wavelengths is 13 Tbit/s, Hollasch says. The Nokia team pushed that to 17 Tbit/s using the Photonic Service Engine 2 (PSE-2) chip (the star of the ECOC paper) and got up to 32 Tbit/s by applying PCS.
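    Since the fiber pair and the wet plant did not change between these results, the jump from 13 to 17 to 32 Tbit/s is entirely a gain in spectral efficiency. Here is a rough check of what the quoted totals imply; only the totals come from the article, and the wavelength count is an assumption for illustration.

```python
# Relative capacity gains implied by the figures quoted above, all on the
# same 5,500 km fiber pair. Only the totals are from the article.
baseline_tbps = 13.0   # typical capacity with plain 100G wavelengths
pse2_tbps     = 17.0   # with the PSE-2 chip
pcs_tbps      = 32.0   # with probabilistic constellation shaping added

print(f"PSE-2 gain over baseline: {pse2_tbps / baseline_tbps:.2f}x")   # ~1.31x
print(f"PCS gain over baseline:   {pcs_tbps / baseline_tbps:.2f}x")    # ~2.46x

# If the baseline corresponds to roughly 130 x 100G wavelengths (assumption),
# the PCS result implies about 246 Gbit/s per wavelength on average.
wavelengths = baseline_tbps * 1000 / 100
print(f"implied rate per wavelength with PCS: "
      f"{pcs_tbps * 1000 / wavelengths:.0f} Gbit/s")
```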

    PCS was invented in the early 2000s but was shelved after the dotcom bust. It's a modulation method that tunes the optical signal based on the length of the fiber. Hollasch likens it to a bicycle drive train, where different gears allow better performance under different conditions -- only PCS provides a theoretically infinite number of gears.

    "It lets you eke out every bit of performance," he says.

    For those who want to geek out on details: PCS uses 64-QAM modulation -- that is, a signal using 64 combinations of phase and amplitude. (This is what the PSE-2 chip does.) Normal QAM assigns an equal probability to all 64 points; PCS favors the points with lower amplitude. If you graph what's happening in 3D, you'd see a small hill -- a 3-D Gaussian curve, essentially. Nokia is showing off that graph at OFC, and Hollasch assures us it looks really cool.
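    The non-uniform point probabilities are usually drawn from a Maxwell-Boltzmann-style distribution over the constellation, so low-energy points are transmitted more often than high-energy ones. The sketch below illustrates that general idea for 64-QAM; it is not Nokia's implementation, and the shaping parameter nu is an arbitrary value chosen for the example.

```python
# Generic illustration of probabilistic constellation shaping on 64-QAM:
# give each constellation point a probability that falls off with its energy
# (Maxwell-Boltzmann-like), then compare average transmit power and entropy
# against uniform 64-QAM. Not Nokia's implementation; `nu` is arbitrary.
import math
from itertools import product

# 64-QAM: in-phase and quadrature amplitudes drawn from {-7, -5, ..., 7}.
levels = [-7, -5, -3, -1, 1, 3, 5, 7]
points = list(product(levels, levels))
energy = [i * i + q * q for i, q in points]

def stats(probs):
    avg_power = sum(p * e for p, e in zip(probs, energy))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_power, entropy

uniform = [1 / 64] * 64                        # plain 64-QAM: all points equal

nu = 0.02                                      # shaping strength (arbitrary)
weights = [math.exp(-nu * e) for e in energy]  # favor low-amplitude points
total = sum(weights)
shaped = [w / total for w in weights]

for name, probs in [("uniform", uniform), ("shaped", shaped)]:
    power, bits = stats(probs)
    print(f"{name:8s} avg power = {power:6.2f}   entropy = {bits:.2f} bits/symbol")
# The shaped distribution carries slightly fewer bits per symbol but at a much
# lower average power; that margin is what PCS trades back for reach or capacity.
```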

    http://www.lightreading.com/optical/.../d/d-id/731313
    Last edited by 5ms; 21-03-2017 at 14:46.

  5. #5
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,966

    500G DWDM Muxponder

    PR: Nokia Unveils PSE-2 Chip, Expands 1830 PSS Product Family


    3/16/2016

    ...

    The PSE-2 is the world's most sophisticated and highly integrated electro-optic chipset. It gives network operators the power to nimbly balance wavelength capacity and reach, maximizing the efficiency of every fiber in their network. The Nokia-designed PSE-2 is available in two versions:

    • The PSE-2 Super Coherent (PSE-2s) provides the ultimate in performance and flexibility for applications with very high traffic demands and potentially challenging distance requirements. It can be programmed with seven unique modulation formats to support optimized 100G to 500G transport wavelength capacities, and distances for applications ranging from metro to ultra-long haul - including the industry's first 400G single carrier, the first 200G long haul and the first 100G ultra-long haul. The PSE-2s lowers cost per bit per kilometer by maximizing capacity for every distance, while using 50 percent less power.


    • The PSE-2 Compact (PSE-2c) is optimized for 100G DWDM applications where density, space and low power are paramount, including metro access and aggregation networks. The PSE-2c design creates more compact line cards that support "pay as you grow" pluggable optics, while consuming 66 percent less power.


    ...

    Powered by the PSE-2s and its variable modulation capabilities, the 1830 PSS 500G DWDM Muxponder gives network operators unprecedented capacity, reach, and wavelength flexibility. It also offers operators investment protection for their 1830 PSS platforms with an instant capacity upgrade, carrying as many as five 100G services per line card. The 500G line card is available and being delivered to customers now.
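    The "reprogram capacity against reach" behavior described here amounts to choosing a modulation format, and therefore a wavelength capacity, that matches the distance a service has to cover. The lookup below is a hypothetical illustration of that kind of trade-off; the reach classes and their pairing with capacities are not Nokia's published specifications.

```python
# Hypothetical capacity-vs-reach trade-off for a variable-modulation line card,
# to illustrate the reprogramming described above. Reach classes are invented.
capacity_reach = {
    500: "metro / regional",
    400: "regional",
    300: "long haul",
    200: "extended long haul",
    100: "ultra-long haul / subsea",
}
REACH_ORDER = ["metro / regional", "regional", "long haul",
               "extended long haul", "ultra-long haul / subsea"]

def pick_capacity(required_reach):
    # Choose the highest wavelength capacity whose reach class covers at least
    # the required distance class.
    needed = REACH_ORDER.index(required_reach)
    for gbps in sorted(capacity_reach, reverse=True):
        if REACH_ORDER.index(capacity_reach[gbps]) >= needed:
            return gbps
    return None

print(pick_capacity("metro / regional"))          # -> 500
print(pick_capacity("extended long haul"))        # -> 200
print(pick_capacity("ultra-long haul / subsea"))  # -> 100
```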

    ...

    Daniel Melzer, CTO of DE-CIX, said: "By operating the world's leading Internet exchange with peak traffic of more than 5 Terabits per second, DE-CIX is seeing an increased need to dynamically interconnect 100G router ports to handle our changing bandwidth needs. We are excited to see Nokia introducing the optical innovations of the 1830 PSS 500G Muxponder, which can be reprogrammed quickly to multiple transport wavelength capacity and distance configurations. This unprecedented flexibility on a single optical line card will deliver a highly cost-effective solution that can support both raw capacity at 500G and maximum long haul distance at 200G."

    ...

    http://www.lightreading.com/optical/.../d/d-id/721919




    1830 PSS 500G DWDM Muxponder

    Benefit from a DWDM line card that provides unprecedented capacity, reach, and wavelength flexibility. Our 1830 PSS 500G Muxponder gives you:

    • Flexible 100G–500G super coherent transport wavelengths
    • 2x more 200G distance
    • Efficient 500G superchannels for metro and regional distances
    • More 100G distance for ultra-long haul and direct connect to subsea networks
    • Five 100G services per line card


    https://networks.nokia.com/products/...ent-technology
    Last edited by 5ms; 21-03-2017 at 15:05.
