The authors found that there is significant room for improvement in the data transfer capabilities currently in place for CMIP5, both in terms of workflow mechanics and in data transfer performance. In particular, the paper notes that performance improvements of at least an order of magnitude are within technical reach using current best practices.
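To make the "order of magnitude" claim concrete, a back-of-the-envelope calculation shows what that improvement means in wall-clock terms. The dataset size and throughput figures below are illustrative assumptions, not numbers from the paper:

```python
# Illustrative estimate: how long a bulk climate-data transfer takes at
# different achieved throughputs. All numbers here are assumptions.

def transfer_days(dataset_tb: float, throughput_gbps: float) -> float:
    """Days needed to move `dataset_tb` terabytes at `throughput_gbps` Gb/s."""
    bits = dataset_tb * 1e12 * 8              # terabytes -> bits
    seconds = bits / (throughput_gbps * 1e9)  # bits / (bits per second)
    return seconds / 86400

# A hypothetical 100 TB replica at 0.5 Gbps (an untuned transfer) versus
# 5 Gbps (an order of magnitude better, within reach of best practices):
print(f"{transfer_days(100, 0.5):.1f} days vs {transfer_days(100, 5.0):.1f} days")
```

An order-of-magnitude throughput gain turns a multi-week replication into a job that finishes within a couple of days.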
Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.
Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)
Step 10: Here’s a cool idea, come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)
Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)
Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)
The booths have been dismantled, the routers and switches shipped back home and the SC14 conference in New Orleans officially ended Nov. 21, but many attendees are still reflecting on important connections made during the annual gathering of the high performance computing and networking community.

Among those helping make the right connections were ESnet staff, who used ESnet’s infrastructure to bring a combined network capacity of 400 gigabits per second (Gbps) into the Ernest Morial Convention Center. Those links accounted for one third of SC14’s total 1.22 Tbps connectivity, provided by SCinet, the conference’s network infrastructure designed and built by volunteers. The network links were used for a number of demonstrations between booths on the exhibition floor and sites around the world.
A quick review of the ESnet traffic patterns at https://my.es.net/demos/sc14#/summary shows that traffic apparently peaked at 12:15 p.m. Thursday, Nov. 20, with 79.2 Gbps of inbound data and 190 Gbps flowing out.
The Naval Research Laboratory (NRL), in collaboration with the DOE’s Energy Sciences Network (ESnet), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Center for Data Intensive Science (CDIS) at the University of Chicago, the Open Cloud Consortium (OCC) and significant industry support, has conducted a 100 gigabits per second (100G) remote I/O demonstration at the SC14 supercomputing conference in New Orleans, LA.
The remote I/O demonstration illustrates a pipelined distributed processing framework and software defined networking (SDN) between distant operating locations. The demonstration shows the capability to dynamically deploy a production quality 4K Ultra-High Definition Television (UHDTV) video workflow across a nationally distributed set of storage and computing resources that is relevant to emerging Department of Defense data processing challenges.
Corsa Technology, ESnet, and REANNZ have successfully demonstrated the first international Software Defined Networking (SDN)-only IP transit network of three Autonomous Systems (AS) managed as SDN domains. The partners took the approach of building and testing an Internet-scale SDN solution that not only embodies the SDN vision of separation of control and data, but enables seamless integration of SDN networks with the Internet.
This first implementation passed through 3 AS domains, namely the Energy Sciences Network (ESnet) at Berkeley, REANNZ at Wellington, and a Google research deployment at Victoria University, Wellington (NZ). ESnet’s node used the Corsa DP6420 640Gbps data plane as the OpenFlow hardware packet forwarder, controlled by the open-source VANDERVECKEN SDN controller stack (based on RouteFlow and Quagga).
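The core of the SDN split described above is that the controller installs forwarding rules while the hardware only performs lookups. A minimal Python model of an OpenFlow-style match-action table sketches the idea; this is purely an illustration, not the VANDERVECKEN or Corsa API:

```python
# Minimal model of OpenFlow-style separation of control and data planes.
# The control plane installs match->action rules; the data plane only
# looks them up, packet by packet. Purely illustrative.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_fn, action)

    def install(self, priority, match_fn, action):
        """Control plane: add a rule; higher priority is consulted first."""
        self.rules.append((priority, match_fn, action))
        self.rules.sort(key=lambda r: -r[0])

    def forward(self, packet):
        """Data plane: the first matching rule decides the action."""
        for _, match_fn, action in self.rules:
            if match_fn(packet):
                return action
        return "send-to-controller"  # table miss -> punt to control plane

table = FlowTable()
table.install(10, lambda p: p["dst"].startswith("198.129."), "port-1")  # a transit prefix
table.install(1,  lambda p: True, "drop")                               # default rule

print(table.forward({"dst": "198.129.4.2"}))  # hits the transit rule
print(table.forward({"dst": "10.0.0.1"}))     # falls through to the default
```

Real OpenFlow matches operate on header fields in hardware at line rate; the point of the sketch is only the division of labor between the two planes.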
ESnet Chief Technologist Inder Monga is co-author of “Optical Networks Come of Age,” which has just been published in the September 2014 issue of Optics and Photonics News.
Although fiber optic transmission capacity has grown by seven orders of magnitude in just 20 years, these systems serve mainly as the “fat pipes,” the large-scale plumbing of the Internet, according to the article. But that is changing.
“Greater use of optical networks—particularly in network edge applications that carry less aggregated, more ‘bursty’ service traffic—and continued traffic growth will soon revise this picture,” the authors write. “A changing landscape in fiber optic communication technologies is stimulating a resurgence of interest in optical switching. These trends are coming together in ways that hold promise for the long-anticipated widespread deployment of optically switched fiber networks that respond in real time to changing traffic and operator requirements.”
“The ultimate mission is to enable the next-generation Internet—one that can support terabit-per-second speeds, but that remains economical and energy efficient,” the authors write.
In addition to Monga, the authors are Daniel Kilper, University of Arizona; Keren Bergman, Columbia University; Vincent W.S. Chan, MIT; George Porter, University of California, San Diego; and Kristin Rauschenbach, Notchway Solutions LLC.
Henrique Rodrigues, a Ph.D. student in computer science at the University of California, San Diego, who is working with ESnet, won the best student paper award at the Hot Interconnects conference held Aug. 26-28 in Mountain View, Calif. Known formally as the 2014 IEEE 22nd Annual Symposium on High-Performance Interconnects, Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales.
“Special thanks to ESnet that gave me the opportunity to work on such an important and interesting topic,” Rodrigues wrote to his ESnet colleagues. “Also to the reviewers of my endless drafts, making themselves available to provide feedback at all times. I hope to continue with the good collaboration moving forward!”
At the recent Supercomputing11 conference, the bubbly was flowing. ESnet launched its ANI 100 gigabit-per-second network, and marked a quarter century of networking for DOE science. That big news may have overshadowed another milestone—SC11 was the first time OSCARS 0.6 was publicly demonstrated in a production environment. Now we’d like to give OSCARS its due.
OSCARS, or On-Demand Secure Circuits and Advance Reservation System, allows users to set up virtual circuits on demand to reserve bandwidth, streamlining the transfer of massive data sets across multiple network domains. OSCARS originated at ESnet, but we open-sourced it to the community long ago. Last spring the more modular OSCARS version 0.6 was released for testers and early adopters.
The performance of OSCARS 0.6 at SC11 showed us that we met our design goal of creating a flexible and modular framework. This was reflected in the demos, which were easy for folks to customize according to their needs. In the demo “Enabling Large Scale Science using Inter-domain Circuits over OpenFlow,” Tom Lehman of ISI used OSCARS to provide the functionality to control OpenFlow switches. Thanks to the flexibility to customize software built into OSCARS 0.6, ESnet’s Eric Pouyoul was able to produce a variation of that application, customizing OSCARS 0.6 for resource brokering. OSCARS also played a part in the successful demonstration of Internet2’s Dynamic Network System (DYNES). The goal of DYNES is to work with regional networks and campuses, using OSCARS to schedule and support scientific data flows from the LHC and other data-intensive science programs such as LIGO, Virtual Observatory, and other large-scale sky surveys.
Most of the 100 Gbps demos at SC were supported by both the ANI 100 Gbps network and the 100 Gbps SCinet showfloor network. OSCARS 0.6 was used to schedule all eight of the demos using the 100 Gbps ANI network, which included complex visualizations of climate models, the Large Hadron Collider and the VERY early history of the Universe, 13.5 billion years ago (or 100 billion in dog years). OSCARS also controlled the approximately 100 different connections at SCinet, as well as connecting to three other OSCARS instances on the show floor.
We used OSCARS 0.6 to provision the network, scheduling user time-slices of the 100 gigabit-per-second ANI and SCinet network, 24 hours a day, over the period of a week so they could test the demos in advance without having to get up at 3:00 a.m. to do it.
OSCARS 0.6 ended up making certain network engineers’ lives much easier. According to my colleague Evangelos Chaniotakis, a.k.a. Vangelis, who was involved in the gritty details of setting up OSCARS 0.6 at the show, his team was required to make last-minute changes to the pre-existing network framework to work with the new hardware but didn’t receive the equipment until the week before the conference. The modularity ESnet built into OSCARS 0.6 helped the team get the network working at short notice.
Less of a Software, More of a Service
Every year the number of reservations and circuits at SC continues to grow. The SC11 network required roughly twice the number of VLANs over the previous year. While the bandwidth wasn’t much bigger, and there were approximately the same number of customers, this year’s users definitely had more requirements. “On the whole, OSCARS 0.6 was really stable,” Vangelis reports. “It worked fine.” But the lessons learned at SC11 made us rethink the OSCARS 0.6 service module and requirements. In the near future, we intend to tweak OSCARS 0.6 to provide users more flexibility, making it less of a software and more of a service.
ESnet and its collaborators successfully completed three days of demonstrating its End-to-End Circuit Service at Layer 2 (ECSEL) software at the Open Networking Summit held at Stanford a couple of weeks ago. Our goal is to build “zero-configuration circuits” to help science applications seamlessly use networks for optimized end-to-end data transport. ECSEL, developed in collaboration with NEC, Indiana University, and the University of Delaware, builds on some exciting new conceptual thinking in networking.
Wrangling Big Data
To put ECSEL in context, the proliferating tide of scientific data flows – anticipated at 2 petabytes per second as planned large-scale experiments get in motion – is already challenging networks to be exponentially more efficient. Wide area networks have vastly increased bandwidth and enable flexible, distributed, scientific workflows that involve connecting multiple scientific labs to a supercomputing site, a university campus, or even a cloud data center.
The increasing adoption of distributed, service-oriented computing means that resource and vendor independence for service delivery is a key priority for users. Users expect seamless end-to-end performance and want the ability to send data flows on demand, no matter how many domains and service providers are involved. The hitch is that even though the Wide Area Network (WAN) can have turbocharged bandwidth, at these exponentially increasing rates of network traffic even a small blockage in the network can seriously impair the flow of data, trapping users in a situation resembling commute conditions on sluggish California freeways. These scientific data transport challenges that we and other R&E networks face are just a taste of what the commercial world will encounter with the increasing popularity of cloud computing and service-driven cloud storage.
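The "small blockage" problem comes down to a simple property of end-to-end paths: achievable throughput is capped by the slowest element, no matter how fast the backbone is. A tiny sketch with made-up numbers:

```python
# End-to-end throughput is governed by the slowest element on the path:
# one congested segment caps the whole flow, however fast the WAN
# backbone is. Segment names and capacities are illustrative.

def end_to_end_gbps(segments):
    """Achievable rate over a path = minimum of per-segment capacities."""
    return min(segments.values())

path = {
    "lab LAN": 10.0,
    "site border firewall": 1.0,   # the "small blockage"
    "WAN backbone": 100.0,
    "campus DMZ": 10.0,
}
print(end_to_end_gbps(path))  # the 1 Gbps firewall caps the transfer
```

This is why approaches like the Science DMZ and end-to-end circuit services focus on the whole path, not just backbone capacity.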
Abstracting a solution
One key piece of feedback from application developers, scientists and end users is that they do not want to deal with complexity at the infrastructure level while still accomplishing their mission. At ESnet, we are exploring various ways to make networks work better for users. A couple of concepts could be game-changers, according to Open Network Summit conference presenter and Berkeley professor Scott Shenker: 1) using abstraction to manage network complexity, and 2) extracting and exposing simplicity out of the network. Shenker himself cites Barbara Liskov’s Turing Lecture as inspiration.
ECSEL is leveraging OSCARS and OpenFlow within the Software Defined Networking (SDN) paradigm to elegantly prevent end-to-end network traffic jams. OpenFlow is an open standard to allow application-driven manipulation of network flows. ECSEL is using OSCARS-controlled MPLS virtual circuits with OpenFlow to dynamically stitch together a seamless data plane delivering services over multi-domain constructs. ECSEL also provides an additional level of simplicity to the application, as it can discover host-network interconnection points as necessary, removing the requirement of applications being “statically configured” with their network end-point connections. It also enables stitching of the paths end-to-end, while allowing each administrative entity to set and enforce its own policies. ECSEL can be easily enhanced to enable users to verify end-to-end performance, and dynamically select application-specific protocol forwarding rules in each domain.
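The end-to-end stitching idea can be sketched simply: each administrative domain contributes a segment between its ingress and egress points, and segments chain together when one domain's egress matches the next domain's ingress. The code below is an illustration of that concept only, with made-up domain and endpoint names, not the ECSEL API:

```python
# Sketch of end-to-end path stitching across administrative domains.
# Each domain supplies a (domain, ingress, egress) segment; segments
# chain when one domain's egress equals the next domain's ingress.
# Names are hypothetical; this is not the ECSEL implementation.

def stitch(segments):
    """Validate and chain ordered per-domain segments into one path."""
    path = []
    for i, (domain, ingress, egress) in enumerate(segments):
        if i > 0 and ingress != segments[i - 1][2]:
            raise ValueError(f"gap before {domain}: {ingress!r} does not "
                             f"meet {segments[i - 1][2]!r}")
        path.append((domain, ingress, egress))
    return path

# Host A in one campus, across a WAN domain, to host B in another campus:
circuit = stitch([
    ("campus-A", "hostA", "peering-1"),
    ("wan",      "peering-1", "peering-2"),
    ("campus-B", "peering-2", "hostB"),
])
print(" -> ".join(domain for domain, _, _ in circuit))
```

In the real system each domain also enforces its own policies when it provisions its segment; the stitching step only has to agree on the interconnection points.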
The OpenFlow capabilities, whether in an enterprise/campus setting or within the data center, were demonstrated with the help of NEC’s ProgrammableFlow Switch (PFS) and ProgrammableFlow Controller (PFC). We leveraged a special interface NEC developed to program a virtual path from ingress to egress of the OpenFlow domain. ECSEL accessed this special interface programmatically when executing the end-to-end path stitching workflow.
Our anticipated next step is to develop ECSEL as an end-to-end service by making it an integral part of a scientific workflow. The ECSEL software will essentially act as an abstraction layer, where the host (or virtual machine) doesn’t need to know how it is connected to the network–the software layer does all the work for it, mapping out the optimum topologies to direct data flow and make the magic happen. To implement this, ECSEL is leveraging the modular architecture and code of the new release of OSCARS 0.6. Developing this demonstration yielded sufficient proof that well-architected and modular software with simple APIs, like OSCARS 0.6, can speed up the development of new network services, which in turn validates the value-proposition of SDN. But we are not the only ones who think that ECSEL virtual circuits show promise as a platform for spurring further innovation. Vendors such as Brocade and Juniper, as well as other network providers attending the demo were enthusiastic about the potential of ECSEL.
But we are just getting started. We will reprise the ECSEL demo at SC11 in Seattle, this time with a GridFTP application using Remote Direct Memory Access (RDMA) which has been modified to include the XSP (eXtensible Session Protocol) that acts as a signaling mechanism enabling the application to become “network aware.” XSP, conceived and developed by Martin Swany and Ezra Kissel of Indiana University and University of Delaware, can directly interact with advanced network services like OSCARS – making the creation of virtual circuits transparent to the end user. In addition, once the application is network aware, it can then make more efficient use of scalable transport mechanisms like RDMA for very large data transfers over high capacity connections.
We look forward to seeing you there and exchanging ideas. Until Seattle, any questions or proposals on working together on this or other solutions to the “Big Data Problem,” don’t hesitate to contact me.
Eric Pouyoul, Vertika Singh (summer intern), Brian Tierney: ESnet
We are proud to announce that two of ESnet’s projects have received IDEA (Internet2 Driving Exemplary Applications) awards in Internet2’s 2011 annual competition for innovative network applications that have had the most positive impact and potential for adoption within the research and education community. (see: Internet2’s press release).
Internet2 recognized OSCARS (On-Demand Secure Circuits and Advance Reservation System), developed by the ESnet team led by Chin Guok, including Evangelos Chaniotakis, Andrew Lake, Eric Pouyoul and Mary Thompson. Contributing partners also included Internet2, USC ISI and DANTE.
ESnet’s MAVEN (Monitoring and Visualization of Energy consumed by Networks) proof of concept application was also recognized with an IDEA award in the student category. MAVEN was prototyped by Baris Aksanli during his summer internship at ESnet. Baris is a Ph.D. student at the University of California, San Diego conducting research at the System Energy Efficiency Lab with his thesis advisor, Dr. Tajana Rosing. Baris worked closely with his summer advisor, Inder Monga, and Jon Dugan to implement MAVEN as part of ESnet’s new Green Networking Initiative.
The idea behind OSCARS
OSCARS enables researchers to automatically schedule and guarantee end-to-end delivery of scientific data across networks and continents. For scientists, being able to count on reliable data delivery is critical as scientific collaborations become more expansive, often global. Meanwhile, in disciplines ranging from high-energy physics to climate, scientists are using powerful, geographically dispersed instruments like the Large Hadron Collider that are producing increasingly massive bursts of data, challenging the capabilities of traditional IP networks.
OSCARS virtual circuits can reliably schedule time-sensitive data flows – like those from the LHC – round the clock across networks, enabling research and education networks to seamlessly meet user needs. OSCARS code is also being deployed by R&E networks worldwide to support an ever-growing user base of researchers with data-intensive collaboration needs. Internet2, U.S. LHCnet, NORDUNet, RNP in Brazil as well as over 10 other regional and national networks have currently implemented OSCARS for virtual circuit services. Moreover, Internet2’s NSF-funded DyGIR and DYNES projects will in 2012 deploy over 60 more instances of OSCARS at university campuses and regional networks to support scientists involved in LHC, Laser Interferometer Gravitational-Wave Observatory (LIGO), Large Synoptic Survey Telescope (LSST) and Electronic Very-Long Baseline Interferometry (eVLBI) programs.
We are proud of the hard work and dedication the OSCARS development team has demonstrated since the start of this project. Just as importantly, we are proud to see this work paying off with new scientific collaborations and discoveries.
The potential of MAVEN
The Monitoring and Visualization of Energy consumed by Networks (MAVEN) project is a brand new prototype portal that will help network operators and researchers better track live network energy consumption and environmental conditions. MAVEN – implemented by Baris during his summer internship – is a first major step for ESnet in instrumenting our network with the tools to understand these operational dynamics. As networks continue to get bigger and faster, they will require more power and cooling in an era of decreased energy resources. To address this pressing challenge, ESnet is leading a new generation of research aimed at understanding how networks can operate in a more energy-efficient manner. We are grateful for Baris’ significant contributions in leading the development of MAVEN and glad to see that his talent is being recognized by the R&E networking community through this award.