ESnet gives Cisco Nerd Lunch talk, learns televangelism is harder than it seems


As science transitions from a lab-oriented activity to a distributed, computational and data-intensive one, the research and education (R&E) networking community is tracking the growing data needs of scientists. Huge instruments like the Large Hadron Collider are being planned and built. These projects require global-scale collaborations and contributions from thousands of scientists, and as the data deluge from the instruments grows, even more scientists are interested in analyzing it for the next breakthrough discovery. Suffice it to say that even though worldwide video consumption on the Internet is driving a similar increase in commercial bandwidth, the scale, characteristics, and requirements of scientific data traffic are quite different.

And this is why ESnet got invited to Cisco Systems’ headquarters last week to talk about how we handle data as part of their regular Nerd Lunch talk series. What I found interesting, although not surprising, was that with Cisco being a big evangelist of telepresence, more employees attended the talk from their desks than in person. This was a first for me, and I came away with a new appreciation for the challenges of collaborating across distances.

From a speaker’s perspective, the lesson I learned was to brush up on my acting skills. My usual preparation is to rehearse the difficult transitions and focus on remembering the few important points to make on every slide. When presenting, the slide-presentation portion of my brain goes on auto-pilot while my focus turns to gauging the impact on the audience. When speaking at a podium, one can observe when someone in the audience opens a notebook to jot down a thought, when their attention drifts to email on the laptop in front of them, or when a puzzled look appears on someone’s face as they try to work out the implications of the point I’m making. But these visual cues go missing with a largely webcast audience, making it harder to know when to stop driving home a point or when to explain it further. In the future, I’ll have to be better at keeping the talk interesting without the usual cues from my audience.

Maybe the next innovation in virtual-reality telepresence is just waiting to happen?

Notwithstanding the challenges of presenting to a remote audience, enabling remote collaboration is extremely important to ESnet. Audio, video, and web collaboration are key services we offer to the DOE labs. ESnet employees use video extensively in our day-to-day operations. The “ESnet watercooler”, a 24×7 open video bridge, is used internally by our distributed workforce to discuss technical issues as well as to hold ad-hoc meetings on topics of interest. As science goes increasingly global, scientists are also using this important ESnet service for their collaborations.

With my brief stint on stage now over, it is back to ESnet and then on to the 100G invited panel/talk at the IEEE ANTS conference in Mumbai. Wishing all of you a very Happy New Year!

Inder Monga

Why this spiking network traffic?


ESnet November 2010 Traffic

Last month was the first in which the ESnet network crossed a major threshold: over 10 petabytes of traffic! Traffic volume was 40% higher than the prior month and 10 times higher than just a little over 4 years ago. But what’s behind this dramatic increase in network utilization? Could it be, we wondered, the extreme loads ESnet circuits carried for SC10?

Breaking down the ESnet traffic highlighted a few things. It turns out it wasn’t all that demonstration traffic sent across thousands of miles to the Supercomputing Conference in New Orleans (151.99 TB delivered), since that accounted for only slightly more than 1% of November’s ESnet-borne traffic. We observed, for the first time, significant volumes of genomics data traversing the network as the Joint Genome Institute sent over 1 petabyte of data to NERSC. JGI alone accounted for about 10% of last month’s traffic volume. And as we’ve seen since it went live in March, the Large Hadron Collider continues to churn out massive datasets as it increases its luminosity, which ESnet delivers to researchers across the US.

Summary of Total ESnet Traffic, Nov. 2010

Total Bytes Delivered: 10.748 PB
Total Bytes OSCARS Delivered: 5.870 PB
Percentage Delivered via OSCARS: 54.72%
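For the curious, here is a quick back-of-the-envelope check of those figures, sketched in Python. It assumes decimal units (1 PB = 1000 TB); the small difference from the quoted 54.72% presumably comes from the byte counts above being rounded to three decimal places.

```python
# Sanity-check the November traffic summary above.
total_pb = 10.748   # total bytes delivered (PB)
oscars_pb = 5.870   # bytes delivered over OSCARS circuits (PB)
sc10_tb = 151.99    # SC10 demo traffic (TB), quoted earlier in the post

print(f"OSCARS share: {oscars_pb / total_pb:.2%}")          # ~54.61%
print(f"SC10 share:   {sc10_tb / (total_pb * 1000):.2%}")   # ~1.41%, "slightly more than 1%"
```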

What is really going on is quite prosaic but, to us, exciting. We can follow the progress of distributed scientific projects such as the LHC by tracking the growth of our network traffic, as the month-to-month traffic volume on ESnet correlates with the day-to-day conduct of science. Currently, Fermi and Brookhaven LHC data continue to dominate the volume of network traffic, but as we can see, production and sharing of large data sets by the genomics community is picking up steam. What the stats are predicting: as science continues to become more data-intensive, the role of the network will become ever more important.


A few grace notes to SC10


As SC10 wound down, ESnet started disassembling the network of connections that brought experimental data from the rest of the country (and at least a bit of the universe as well) to New Orleans. We detected harbingers of 100Gbps in all sorts of places, and we will be sharing our observations on promising and significant networking technologies in blogs to come.

We were impressed by the brilliant young people we saw at the SC Student Cluster Competition, organized collaboratively as part of SC Communities, which brings together programs designed to support emerging leaders and groups that have traditionally been under-represented in computing. Teams came from U.S. universities, including Purdue, Florida A&M, SUNY Stony Brook, and the University of Texas at Austin, as well as universities from China and Russia.

Florida A&M team

Nizhni Novgorod State University team

At ESnet, we are always looking for bright, committed students interested in networking internships (paid!). We are also still hiring.

 

As SC10 concluded, the computer scientists and network engineers on the streets of the city dissipated, replaced by a conference of anthropologists. SC11 is scheduled for Seattle. But before we go, a note of appreciation to New Orleans.

Katrina memorial

Across from the convention center is a memorial to the people lost to Katrina: a sculpture of a wrecked house pinioned in a tree. But if you walk down the street to the corner of Bourbon and Canal, each night you will hear the trumpets of the ToBeContinued Brass Band. The band is a group of friends who met in their high school marching bands and played together for years until scattered by Katrina. Like the city, they are regrouping, and are profiled in a new documentary.

Our mission at ESnet is to help scientists to collaborate and share research. But a number of ESnet people are also musicians and music lovers, and we draw personal inspiration from the energy, technical virtuosity and creativity of artists as well as other engineers and scientists. We are not alone in this.

New Orleans is a great American city, and we wish it well.

100G: it may be voodoo, but it certainly works


SC10, Thursday morning.

During the SC10 conference, NASA, NOAA, ESnet, the Dutch Research Consortium, US LHCNet and CANARIE announced that they would transmit 100Gbps of scientific data between Chicago and New Orleans. Using fourteen 10GigE interconnects, researchers attempted to completely fill the 100 Gbps of available bandwidth by producing up to twelve 8.5-to-10Gbps individual data flows.

Brian Tierney reports: “We are very excited that a team from NASA Goddard completely filled the 100G connection from the show floor to Chicago. It is certainly the first time at the supercomputing conference that a single wavelength over the WAN achieved 100Gbps. The other thing that is so exciting about it is that they used a single sending host to do it.”

“Was this just voodoo?” asked NERSC’s Brent Draney.

Tierney assures us that indeed it must have been… but whatever they did, it certainly works.

Visit Jon Dugan’s BoF on network measurement


ESnet’s Jon Dugan will lead a BoF on network measurement at 12:15 p.m. Thursday in rooms 278-279 at SC10. Functional networks are critical to high performance computing, but achieving optimal performance requires measuring networks accurately. Jon will open up the session to discuss ideas on measurement tools such as perfSONAR, emerging standards, and the latest research directions.

The circuits behind all those SC10 demos


It is midafternoon Wednesday at SC10 and the demos are going strong. Jon Dugan supplied an automatically updating graph, in psychedelic colors (http://bit.ly/9HUrqL), of the traffic ESnet is carrying over all the circuits we set up. Getting this far required many hours of work from a lot of ESnet folks to accommodate the virtual circuit needs of both ESnet sites and SCinet customers using the OSCARS IDC software. As always, the SCinet team has put in long hours in a volatile environment to deliver a high performance network that meets the needs of the exhibitors.

Catch ESnet roundtable discussions today at SC10, 1 and 2 p.m.


Wednesday Nov. 17 at SC10:

At 1 p.m. at Berkeley Lab booth 2448, catch ESnet’s Inder Monga’s roundtable discussion on OSCARS virtual circuits. OSCARS, the acronym for On-Demand Secure Circuits and Advance Reservation System, allows users to reserve guaranteed bandwidth. Many of the demos at SC10 are being carried over OSCARS virtual circuits, which were developed by ESnet with DOE support. Good things to come: ESnet anticipates the rollout of OSCARS 0.6 in early 2011. Version 0.6 will offer greatly expanded capabilities and versatility, such as a modular architecture enabling easy plug-and-play of the various functional modules and a flexible path computation engine (PCE) workflow architecture.
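For readers who haven’t seen a guaranteed-bandwidth reservation before, here is a minimal sketch of the kind of information such a request carries. The field and endpoint names are illustrative assumptions for this post, not the actual OSCARS API:

```python
# Hypothetical sketch of a guaranteed-bandwidth circuit request; the field
# names are illustrative, not the real OSCARS interface.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    source: str          # ingress endpoint, e.g. a site's border router port
    destination: str     # egress endpoint
    bandwidth_mbps: int  # bandwidth to guarantee for the reservation window
    start: datetime      # when the circuit should come up
    end: datetime        # when the reservation expires

# Reserve 5 Gb/s for a four-hour transfer window starting an hour from now.
now = datetime.utcnow()
request = CircuitRequest(
    source="site-a-edge",        # hypothetical endpoint name
    destination="site-b-edge",   # hypothetical endpoint name
    bandwidth_mbps=5000,
    start=now + timedelta(hours=1),
    end=now + timedelta(hours=5),
)
print(request)
```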

Then stick around, because at 2 p.m. Brian Tierney from ESnet will lead a roundtable on the research coming out of the ARRA-funded Advanced Networking Initiative (ANI) testbed.

In 2009, the DOE Office of Science awarded ESnet $62 million in recovery funds to establish ANI, a next-generation 100Gbps network connecting DOE’s largest unclassified supercomputers, as well as a reconfigurable network testbed for researchers to test new networking concepts and protocols.

Brian will discuss progress on the 100Gbps network, update you on the several research projects already underway on the testbed, describe the testbed’s capabilities, and explain how to get access to it. He will also answer your questions on how to submit proposals for the next round of testbed network research.

In the meantime, some celeb-spotting at the LBNL booth at SC10.

Inder Monga

Brian Tierney

We’ve got a new standard: IEEE P802.3az Energy-Efficient Ethernet ratified


GUEST BLOG: We’ve got EEE. Now what?

ESnet fully supports the drive for energy efficiency to reduce the amount of emissions caused by information and communication technologies (ICT). IEEE just announced that Energy-Efficient Ethernet (EEE), or IEEE P802.3az, is the new standard enabling copper interfaces to reduce energy use when the network link is idle. Energy-saving mechanisms of EEE can be applied in systems beyond the Ethernet physical interface, e.g. the PCI Express bus. New hardware is required to benefit from EEE, however, so its full impact won’t be realized for a few years. ESnet is in the middle of the Advanced Networking Initiative to deploy a cross-country 100G network, and we would like to explore end-to-end power-saving possibilities, including 40G and 100G Ethernet interfaces. Here’s why:

In 2006, articles began to appear discussing the ever-increasing consumption of energy by ICT, as well as how data center giants such as Google and Microsoft were locating new data centers based on the availability and cost of energy. Meanwhile, the IEEE was attempting to create a specification to reduce network energy usage, and four years later ratified P802.3az, or Energy-Efficient Ethernet (EEE).

Earlier this year, the ITU World Summit on the Information Society reported that electricity demand by the ICT sector in industrialized countries is between 5 percent and 10 percent of total demand. But about half of that electricity is wasted by powered-on equipment that is idle. So while completion of this project seems timely, the question remains how “triple-e” will impact energy use for Ethernet consumers. EEE defines a protocol to reduce energy usage during periods of low utilization for copper and backplane interfaces up to 10Gb/s. It also reuses a couple of other IEEE protocols to allow uninterrupted communication between link partners. While this combination of protocols can save energy, it is uncertain how much time the typical Ethernet link operates at low utilization, especially since P802.3ba, the 40G and 100G Ethernet standard, was just ratified in June, suggesting relief for pent-up demand for bandwidth.

So why isn’t there an energy-efficient version of the higher-speed version of Ethernet?

The answer depends on the type of Ethernet interface and its purpose in the network, as an interface in a home desktop computer will likely be idle much longer than an uplink interface in a data center switch. A key feature of the new standard is Low Power Idle (LPI). As the name suggests, during idle time the non-critical components of the interface go to sleep. The link partner is activated by a wake-up signal, allowing the receiver time to prepare for an incoming frame.
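To make the mechanism concrete, here is a toy model of what LPI buys you: the interface draws full power only while transmitting and drops to a low-power state when idle. The wattage figures are illustrative assumptions, not numbers from the standard, and the model ignores wake/sleep transition overhead.

```python
# Toy LPI model: average power as a function of link utilization.
def average_power(utilization, active_w=4.0, lpi_w=0.4):
    """Average draw (W) when busy time costs active_w and idle time lpi_w."""
    return utilization * active_w + (1 - utilization) * lpi_w

LEGACY_W = 4.0  # a non-EEE PHY draws roughly full power regardless of load

for u in (0.01, 0.10, 0.50):
    p = average_power(u)
    print(f"{u:4.0%} utilized: {p:.2f} W vs {LEGACY_W:.1f} W legacy "
          f"-> {1 - p / LEGACY_W:.0%} saved")
```

Even this crude model shows why the home desktop is the sweet spot: at one percent utilization nearly all the energy is saved, while a busy uplink saves far less.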

Consider the utilization plot shown below:

File Server Bandwidth Utilization Profile

Not all links are the same

This window on a file server in an enterprise network shows plenty of idle periods. While there are several peaks over 500 Mb/s, the server is mostly idle, with average utilization under one percent. On the other hand, there are many examples of highly utilized links as well (just look at some of ESnet’s utilization plots). In those cases less energy is saved, but the energy is being used to do something useful, like transferring information.

But when considering the number of triple-speed copper Ethernet interfaces deployed, the energy savings start to add up. The P802.3az Task Force members estimated that power savings in the US alone could reach 5 terawatt-hours per year, or enough energy to power 6 million 100W light bulbs. This translates into a reduction of the ICT carbon footprint by roughly 5 million tons per year.
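That bulb equivalence is easy to verify; here is a quick check, assuming the bulbs burn continuously all year:

```python
# 5 TWh/year expressed as continuously-burning 100 W light bulbs.
savings_wh_per_year = 5e12   # 5 terawatt-hours in watt-hours
bulb_w = 100
hours_per_year = 365 * 24    # 8760

bulbs = savings_wh_per_year / (bulb_w * hours_per_year)
print(f"{bulbs / 1e6:.1f} million bulbs")  # ~5.7 million, i.e. roughly 6 million
```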

Since EEE is built into the physical interface, new hardware will be required to take advantage of this feature and it will take a few years to reach 100% market saturation.

Getting back to the question about energy efficiency for 40G and 100G Ethernet, there are a few reasons why LPI was not specified in P802.3ba. That project overlapped with P802.3az, making it difficult to specify an energy-efficient method for the new speeds, given the record size of the project and the lack of P802.3az resources to work on optical interfaces. This leads to another question: should there be an energy-efficient version of 40G and 100G Ethernet? Or should there be an energy-efficient version of optical and P802.3ba interfaces?

To decide the scope of the P802.3az project, we examined the magnitude of power consumed and the number of interfaces in the market. The power consumed by a 1000BASE-T interface is less than that used by a 10GBASE-T interface, but there are orders of magnitude more of the former. On the other hand, early in the project not many 10GBASE-T interfaces existed in the market, but those interfaces consumed on the order of 10W-15W each. These numbers are reduced by each new improvement in process technology, but they are still significant.

Considering that first-generation 100G transceivers can consume more than 20W each and that there are millions of optical Ethernet interfaces in the market, further standards development is worth pursuing.

Mike Bennett is a senior network engineer for LBLnet and chair of P802.3az. He can be reached at MJBennett@lbl.gov

ESnet recognized for outstanding performance


ESnet’s Evangelos Chaniotakis and Chin Guok received Berkeley Lab’s Outstanding Performance Award for their work in promoting technical standards for international scientific networking. Their work is notable because their open-source software development and new technical standards for network interoperability set the stage for scientists around the world to better share research and collaborate.

Guok and Chaniotakis worked extensively within the DICE community on development of the Inter-domain Controller Protocol (IDCP). They are taking the principles and lessons gained from years of development efforts and applying them to international standards bodies such as the Open Grid Forum (OGF), as well as consortia such as the Global Lambda Integrated Facility (GLIF).

So far, the IDCP has been adopted by more than a dozen Research and Education (R&E) networks around the world, including Internet2 (the leading US higher education network), GEANT (the trans-European R&E network), NORDUnet (Scandinavian R&E network) and USLHCNet (high speed trans-Atlantic network for the LHC community).

Guok and Chaniotakis have also advanced the wide-scale deployment of OSCARS (On-Demand Secure Circuits and Advance Reservation System), ESnet’s virtual circuit service. OSCARS, developed with DOE support, enables networks to schedule and move the increasingly vast amounts of data generated by large-scale scientific collaborations. Since last year, ESnet has seen a 30% increase in the use of virtual circuits. OSCARS virtual circuits now carry over 50% of ESnet’s monthly production traffic. The increased use of virtual circuits was a major factor enabling ESnet to easily handle a nearly 300% rise in traffic from June 2009 to May 2010.

Why are we reincarnating OSCARS?


OSCARS ESnet traffic patterns

Some recent developments in virtual circuits covered here, such as Fenius and cloud computing with Google, Internet2’s announcement of its ION service, and the recently funded DYNES proposal, are all powered by OSCARS, the On-Demand Secure Circuits and Advance Reservation System, a software engine developed with DOE funding. This open-source software engine provides us with the capability of building a network with highly dynamic, traffic-engineered flows that meet the research data transport needs of scientists. The current release, 0.5.2, has been deployed as a production service within ESnet for the past three years. We are currently enhancing the software and plan to release version 0.5.3 in the Q4 2010 time frame.

In the course of running this software as a production service and interacting with scientists, network researchers, and the standards community at OGF, we realized we had to redesign the software architecture to make it a much more robust and extensible platform. We wanted to be able to easily add new features to the OSCARS platform that would cater to a variety of network engineers and researchers. With this in mind, the re-architected OSCARS is planned as release version 0.6. Like any successful product, transitioning from a deployed release to a new one involves thorny issues like backward compatibility and feature parity. Hence the current balancing act: taking something that is quite good and proven (0.5.2) and making it even better, a.k.a. 0.6.

Here are four good reasons why OSCARS 0.6 is the way to go:

1. It can meet production requirements: The modular architecture enables features to be added through the use of distinct modules. This allows specific deployment requirements to be easily integrated into the service. For example, if it is necessary to support a federated authentication and authorization (AA) implementation, the AA modules can be replaced with ones that are compliant with that AA framework (e.g. Shibboleth). Another example would be high availability (HA): the 0.6 architecture helps provide HA on a per-component basis, ensuring that the critical components do not fail.

2. It provides new complex features: As end-sites and their operators become comfortable with point-to-point provisioning of virtual circuits, we are getting increased requests for complex feature enhancements. The OSCARS 0.5 software architecture is not especially suitable for new features like multi-point circuits and/or multi-layer provisioning, but these new feature requests increase the urgency of moving to the 0.6 release, which has been designed with such enhancements in mind. Moreover, the multi-layer ARCHSTONE research project funded by DOE will use 0.6 as its base research platform.

3. Research/GENI and other testbeds: The research community is a major constituent for OSCARS and its continuing development. This community is now conducting experiments on real infrastructure testbeds like ANI and GENI. To really leverage the power of those testbeds, researchers want to build on the OSCARS software base and framework while innovating on particular algorithms and testing them. The OSCARS 0.6 platform’s modular architecture enables a researcher to replace any component with a new algorithmic research module. For example, with the new PCE engine redesign, one can write a flexible workflow of custom PCEs (see the sketch after this list). This flexibility does not exist in the purpose-built but monolithic architecture of the OSCARS 0.5 codebase.

4. NSI protocol/standards: As the European and Asian research and education communities move toward interoperability with the US, it is important to leverage a common understanding arrived at through standards. The NSI protocol being standardized in the OGF NSI working group (http://ogf.org/gf/group_info/view.php?group=nsi-wg) needs to be implemented by network middleware open-source projects like OSCARS. We feel that 0.6 is the right platform to upgrade to the standard NSI protocol whenever it is ready.
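To illustrate the kind of PCE composability point 3 refers to, here is a minimal sketch of chaining path computation stages, each narrowing the candidate set. The topology, capacities, and function names are made up for the example; the real 0.6 PCE framework defines its own interfaces.

```python
# A hypothetical sketch of a pluggable PCE workflow; names and topology are
# invented for illustration, not taken from the OSCARS 0.6 codebase.
from typing import Callable, Dict, List, Tuple

Path = List[str]                         # a path as an ordered list of node names
PCE = Callable[[List[Path]], List[Path]]

# Toy topology: available capacity (Gb/s) on each link.
CAPACITY: Dict[Tuple[str, str], float] = {
    ("a", "b"): 10.0, ("b", "c"): 40.0, ("a", "c"): 100.0,
}

def link_capacity(u: str, v: str) -> float:
    return CAPACITY.get((u, v), CAPACITY.get((v, u), 0.0))

def bandwidth_pce(min_gbps: float) -> PCE:
    """Stage that keeps only paths with enough headroom on every hop."""
    def stage(paths: List[Path]) -> List[Path]:
        return [p for p in paths
                if all(link_capacity(u, v) >= min_gbps for u, v in zip(p, p[1:]))]
    return stage

def hop_limit_pce(max_hops: int) -> PCE:
    """Stage that drops paths longer than a policy limit."""
    return lambda paths: [p for p in paths if len(p) - 1 <= max_hops]

def run_workflow(paths: List[Path], stages: List[PCE]) -> List[Path]:
    for stage in stages:                 # each stage narrows the candidate set
        paths = stage(paths)
    return paths

candidates: List[Path] = [["a", "b", "c"], ["a", "c"]]
print(run_workflow(candidates, [bandwidth_pce(40.0), hop_limit_pce(2)]))
# -> [['a', 'c']]: the a-b hop has only 10 Gb/s of headroom
```

The appeal of this pipeline pattern is that every stage consumes and produces the same candidate-path type, so a researcher can swap in an experimental stage without touching the rest of the workflow.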

At ESnet, we invest considerable time in new technology development, but we balance this with our operational responsibilities. We invite the community to join in developing OSCARS 0.6, which has greatly improved capabilities over OSCARS 0.5.2. With your participation in the development process, we can bring the 0.6 software to production quality as soon as possible. If this excites you, we welcome you to contribute to the next stage of the OSCARS open source project.

–Chin Guok
