On our way to Joint Techs 2011 in Clemson, S.C.


We are looking forward to the upcoming Winter 2011 ESCC/Internet2 Joint Techs meeting from January 30 to February 3, 2011 at Clemson University in South Carolina.  The meeting is cosponsored by ESnet and Internet2.

Write us into your schedule. All Joint Techs talks will take place in the auditorium, and we look forward to your questions and comments.

  • January 31, from 9:10 to 9:30 a.m., Evangelos (Vangelis) Chaniotakis, ESnet, will talk about “Automated GOLEs and Fenius: Pragmatic Interoperability,” covering the GNI API task force, the collaboration behind Fenius, GLIF as a forum for interoperability, demonstrations done at Geneva and SC10, and future plans.
  • February 1, from 3:50 to 4:10 p.m., Eli Dart, ESnet, will talk about “The Science DMZ – A well-known location for high-performance network services.” Eli will discuss the critical need for network performance as science becomes increasingly data-intensive. Performance problems frequently occur very near the endpoints of a data transfer, and Eli will propose a simple network architecture, dubbed the “Science DMZ,” for simplifying network performance tuning.
  • February 2, from 8:30 to 8:50 a.m., Steve Cotter, head of ESnet, will present an update on ESnet’s projects in 2010, including the Advanced Networking Initiative and the related 100 gigabits-per-second testbed, and give an overview of ESnet’s plans for 2011.

Other Lawrence Berkeley National Laboratory staff will be giving talks at the meeting, including:

  • January 31, from 9:30 to 9:50 a.m., Mike Bennett, LBNL IT Division, will present “Green Ethernet.” Bennett will provide an overview of the recently approved IEEE standard for Energy-Efficient Ethernet, IEEE Std 802.3az-2010, which specifies energy-efficient modes for copper interfaces. He will discuss the operation of Low Power Idle and illuminate its anticipated benefits, as well as identify opportunities for innovation beyond the scope of the standard.
  • February 1, from 11:20 to 11:40 a.m., Dan Klinedinst, LBNL IT Division, will discuss “3D Modeling and Visualization of Real-Time Security Events,” introducing Gibson, a tool for modeling real-time security events and information in 3D. The tool lets users watch a visual representation of threats, defenses, and compromises on their systems as they occur, or record them for later analysis and forensics.
  • February 1, from 4:10 to 4:30 p.m., Dan Gunter, Advanced Computing for Science Department, LBNL, will discuss “End-to-End Data Transfer Performance with Periscope and NetLogger.” Gunter will examine the data currently being collected by the NetLogger toolkit for end-to-end bulk data transfers performed by the “Globus Online” service (between GridFTP servers), and describe the integration of that data with the Periscope service, which provides on-demand caching and analysis of perfSONAR data.

At the ESCC meeting that follows, talks will focus on specific issues pertaining to ESnet’s goals for the coming year, as well as how they implement the DOE Office of Science vision. By the way, if you are an ESCC member and can’t make it to the meeting, ESnet will videostream it using the ESnet ECS Ad-hoc bridge.

To access the ESCC meeting remotely:

1) Open a browser to mcu1.es.net
2) In the Conf ID field, type 372211 (no service prefix)
3) Next to “Streaming rate,” select RealPlayer or QuickTime
4) Press <Stream this conference>

ESCC – February 2, 2011

  • 1:10 p.m., Greg Bell, ESnet, will review the Site Cloud/Remote Services template discussed at the July ESCC meeting and provide a brief summary of recent conversations with cloud-services vendors.
  • 1:30 p.m., Vince Dattoria, ESnet program manager for DOE/SC/ASCR, will give the view from Washington.
  • 2:00 p.m., Steve Cotter will present an ESnet update.
  • 2:45 p.m., Sheila Cisko, FNAL, will give an update on ESnet Collaborative Services.
  • 3:30 p.m., Brian Tierney, ESnet, will provide the rundown on ANI, ESnet’s ARRA-funded Advanced Networking Initiative. The ANI testbed has so far accepted a second round of research proposals. Brian will describe some highlights of recent experiments and discuss further research opportunities.
  • 3:50 p.m., Steve Cotter will discuss site planning for the upcoming 100G production backbone.
  • 4:20 p.m., Eli Dart will talk about Science Drivers for ESnet Planning and discuss the Science DMZ.
  • 6:30 p.m., Kevin Oberman will participate in the evening focus session on IP Address Management issues, emphasizing IPv6.

ESCC- February 3, 2011

  • 8:30 a.m. onward, panels and discussions on aspects of IPv6.
  • 1:35 p.m., Joe Metzger, ESnet, will participate in the session on R&D Monitoring Efforts, discussing DICE monitoring directions.

See you in South Carolina!

Engineering mixed traffic on ANI testbed


The first crop of experiments using ESnet’s Advanced Networking Initiative testbed is now in full swing. In a project funded by the DOE Office of Science, Prof. Malathi Veeraraghavan and post-doc Zhenzhen Yan at the University of Virginia, along with consultant Prof. Admela Jukan, are investigating the role of hybrid networking in ESnet’s next-generation 100 Gbps network.
Their goal is to learn how to optimize a hybrid network composed of two components, an IP datagram network and a high-speed optical dynamic circuit network, to best serve users’ data communication needs. ESnet deployed a hybrid network in 2006, based on an IP routed network and the “science data network” (SDN), a dynamic virtual circuit network. “It is a question of efficiency, which essentially boils down to cost,” Veeraraghavan notes. “IP networks have to be operated at low utilization for the performance requirements of all types of flows to be met. With hybrid networks, it is feasible to meet performance requirements while still operating the network at higher utilization.”

Data flows have certain characteristics that make them suited to certain types of networks, and matching flows with the “right” network is a complex problem. In the ESnet core network, one can identify flows by looking at multiple fields in packet headers, according to Veeraraghavan, but one cannot know the size of a flow (in bytes) or whether it is long or short. A challenge of this project is to predict characteristics of data flows based on prior history. To do this, the researchers are using machine learning techniques. Flows are classified based on size and duration. Large-sized (“elephant”) flows are known to consume a higher share of bandwidth and thus adversely affect small-sized (“mice”) flows, so they are good candidates to redirect to the SDN. If SDN circuits are to be established dynamically, i.e., after a router starts seeing packets in a flow that history indicates is a good candidate for SDN, then the flow needs to be not only large-sized but also long-duration (“tortoise”), because circuit setup takes minutes. Short-duration (“dragonfly”) flows are not good candidates for dynamic circuits, but if they are of large size and occur frequently, static circuits could be used.
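As a rough sketch of this taxonomy, the classification amounts to two independent thresholds, one on size and one on duration. The threshold values below are illustrative assumptions for the sketch, not parameters from the project:

```python
def classify_flow(size_bytes, duration_secs,
                  size_threshold=10**9,      # assumed: "elephant" above ~1 GB
                  duration_threshold=300):   # assumed: "tortoise" above ~5 min
    """Return (size_class, duration_class) per the animal taxonomy above."""
    size_class = "elephant" if size_bytes >= size_threshold else "mouse"
    duration_class = "tortoise" if duration_secs >= duration_threshold else "dragonfly"
    return size_class, duration_class

def candidate_for_dynamic_circuit(size_bytes, duration_secs):
    # Dynamic SDN circuit setup takes minutes, so only flows that are
    # both large (elephant) and long-lived (tortoise) are worth it.
    return classify_flow(size_bytes, duration_secs) == ("elephant", "tortoise")
```

A 5 GB, hour-long transfer classifies as an elephant-tortoise and qualifies for a dynamic circuit; a large but seconds-long burst is an elephant-dragonfly and, per the article, would call for a static circuit instead.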

Boston is notorious for its mixed traffic

The concept of different lanes handling different types of traffic is seen commonly in other contexts. For example, Veeraraghavan notes that on some urban streets with mixed traffic, “separate lanes are set aside for buses, cars, motorcycles, and bicycles.” Also, grocery store checkouts have separate express lanes for the equivalent of “mice flows”.  To support this concept, the researchers developed several modules of a system called Hybrid Network Traffic Engineering Software (HNTES), and tested these modules on the ANI Testbed.

HNTES, diagrammed

Their experiments use two computers loaded with the HNTES software, and two Juniper routers. The HNTES software configures the routers to mirror packets of certain pre-determined flows to a server that runs a flow-monitoring module (part of HNTES); the determination of which flows to mirror is made with an offline flow analysis tool that analyzes previously collected NetFlow data to find flow identifiers of elephant and tortoise flows. Upon detecting such a flow, HNTES reconfigures the router to redirect packets from the flow to a circuit (a different path). In future versions, if dynamic circuits are deemed feasible, an HNTES module called the IDC Interface Module will send messages to an OSCARS IDC server to reserve and provision a circuit before reconfiguring the router.

So far Veeraraghavan and her colleagues have completed phase I of the software implementation and demonstrated it, presenting the demonstrations at an Oct. 2010 DOE annual review meeting with previously recorded Camtasia video. The next step will be to improve several features in the software to understand what happens to a flow when it is redirected: do packets get lost, or does redirection cause out-of-order arrivals at the destination? They are also doing a theoretical study with flow simulations, to see if taking the trucks off the parkway and putting them on the freeway really benefits the flow of “traffic.”
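The offline identification step can be sketched as follows. The record fields, the thresholds, and the rule that every historical flow for an identifier must qualify are all illustrative assumptions for this sketch, not the actual HNTES implementation:

```python
from collections import defaultdict

# Sketch: scan previously collected flow records and pick out flow
# identifiers whose history marks them as elephant/tortoise candidates
# for redirection to the circuit network. Fields and thresholds are
# assumptions, not the HNTES schema.
def find_redirect_candidates(flow_records, min_bytes=10**9, min_secs=300):
    history = defaultdict(list)
    for rec in flow_records:
        key = (rec["src"], rec["dst"])   # flow identifier from packet headers
        history[key].append(rec)

    candidates = set()
    for key, recs in history.items():
        # Require every observed flow for this identifier to be both
        # large (elephant) and long-lived (tortoise).
        if all(r["bytes"] >= min_bytes and r["secs"] >= min_secs for r in recs):
            candidates.add(key)
    return candidates
```

The resulting identifier set is what a tool like this would hand to the router configuration step, which mirrors matching packets to the flow monitor.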

ESnet publishes design guide for high-performance data movers


As science becomes more and more data-intensive, demand for the capability of moving large data sets between sites over high-performance networks keeps increasing.  Now ESnet engineers Eric Pouyoul and Roberto Morelli have designed a powerful, yet inexpensive data transfer host that can function as a test server for network troubleshooting or as a data mover for scientific applications.

Assemble your own machine for that D.I.Y. glow of accomplishment

“We have right now a very fast network, but some users have difficulty realizing its full potential,” said Pouyoul. “We are in the process of increasing network bandwidth from 10Gbps to 100Gbps. But with computers, any time you multiply speed by a factor of 10, something is going to break. We also need to provide enough compute power to simulate the large data transfers that occur in normal scientific research, so we are locating boxes at different places on the ESnet network.”
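A back-of-the-envelope calculation shows why the tenfold jump matters for data-intensive science. The data-set size and the assumption of full link utilization below are illustrative:

```python
def transfer_hours(data_terabytes, link_gbps, efficiency=1.0):
    """Hours to move a data set at a given line rate.

    Assumes the link runs at the stated efficiency for the whole
    transfer; real transfers are usually slower due to protocol
    overhead, disk I/O, and competing traffic.
    """
    bits = data_terabytes * 8 * 10**12          # decimal TB -> bits
    seconds = bits / (link_gbps * 10**9 * efficiency)
    return seconds / 3600

# A hypothetical 100 TB data set on a fully utilized link:
#   at 10 Gbps:  ~22 hours
#   at 100 Gbps: ~2.2 hours
```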

So far, ESnet has built three test hosts based on the design, and deployed them at different points in the network so that users can test their own installations using the ESnet hosts as a reference. “This way, ESnet and the community can experiment with data transfers over different parts of the network,” Pouyoul says. The test hosts are available to resources located on networks devoted to scientific research.

Pouyoul and Morelli’s box is powerful, yet affordable. Anybody can build one using off-the-shelf components. Pouyoul points out that while the I/O speed of their creation is comparable to a half-million-dollar machine, the footprint is much smaller. And parts, excluding labor, cost only around 10 grand. “We wanted to make it cheap and easy for do-it-yourself deployments,” Pouyoul said. “Our next step is to publish documentation to encourage people to build and install one on their own network. This is close to a production-level machine for the R&D community.” Pouyoul has overseen the building and deployment of several machines by students.

You can find more about Pouyoul and Morelli’s innovation at http://fasterdata.es.net/dtn.html. Instructions on how to run your own tests on our three public test hosts are at: http://fasterdata.es.net/disk_pt.html