How the World’s Fastest Science Network Was Built


Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

Funded by DOE’s Office of Science and managed by the Lawrence Berkeley National Laboratory (Berkeley Lab), ESnet moves about 51 petabytes of scientific data every month. This 13-step guide traces how ESnet has evolved over 30 years.

Step 1: When fusion energy scientists inherit a cast-off supercomputer, add 4 dialup modems so the people at the Princeton lab can log in. (1975)


Step 2: When landlines prove too unreliable, upgrade to satellites! Data screams through space. (1981)


Step 3: Whose network is best? High Energy Physics (HEPnet)? Fusion Physics (MFEnet)? Why argue? Merge them into one, the Energy Sciences Network (ESnet), run by the Department of Energy! Go ESnet! (1986)


Step 4: Make it even faster with DUAL Satellite links! We’re talking 56 kilobits per second! Except for the Princeton fusion scientists – they get 112 Kbps! (1987)


Step 5:  Whoa, when an upgrade to 1.5 MEGAbits per second isn’t enough, add ATM (not the money machine, but Asynchronous Transfer Mode) to get more bang for your buck. (1995)


Step 6: Duty now for the future—roll out the very first IPv6 address to ensure there will be enough Internet addresses for decades to come. (2000)


Step 7: Crank up the fastest links in the network to 10 GIGAbits per second—16 times faster than the old gear—a two-generation leap in network upgrades at one time. (2003)


Step 8: Work with other networks to develop really cool tools, like the perfSONAR toolkit for measuring and improving end-to-end network performance and OSCARS (On-Demand Secure Circuits and Advance Reservation System), so you can reserve a high-speed, end-to-end connection to make sure your data is delivered on time. (2006)


Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)


Step 10: Here’s a cool idea, come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)



Step 11: We’re all in this science thing together, so let’s build faster ties to Europe. ESnet adds three 100G lines (and a backup 40G link) to connect researchers in the U.S. and Europe. (2014)


Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)


Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)


ESnet Connections Peaking at 270 Gbps Flow In, Out of SC14 Conference

The booths have been dismantled, the routers and switches shipped back home and the SC14 conference in New Orleans officially ended Nov. 21, but many attendees are still reflecting on important connections made during the annual gathering of the high performance computing and networking community.

Among those helping make the right connections were ESnet staff, who used ESnet’s infrastructure to bring a combined network capacity of 400 gigabits per second (Gbps) into the Ernest N. Morial Convention Center. Those links accounted for one-third of SC14’s total 1.22 Tbps connectivity, provided by SCinet, the conference’s network infrastructure designed and built by volunteers. The network links were used for a number of demonstrations between booths on the exhibition floor and sites around the world.

A quick review of ESnet’s traffic patterns shows that traffic peaked at 12:15 p.m. Thursday, Nov. 20, with 79.2 Gbps of inbound data and 190 Gbps flowing out.

Among the largest single users of ESnet’s bandwidth was a demo by the Naval Research Laboratory, which used ESnet’s 100 Gbps testbed to conduct a 100 Gbps remote I/O demonstration at SC14. Read the details at:

NRL and Collaborators Conduct 100 Gigabit/Second Remote I/O Demonstration


The Naval Research Laboratory (NRL), in collaboration with the DOE’s Energy Sciences Network (ESnet), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Center for Data Intensive Science (CDIS) at the University of Chicago, the Open Cloud Consortium (OCC) and significant industry support, has conducted a 100 gigabits per second (100G) remote I/O demonstration at the SC14 supercomputing conference in New Orleans, LA.

The remote I/O demonstration illustrates a pipelined distributed processing framework and software defined networking (SDN) between distant operating locations. The demonstration shows the capability to dynamically deploy a production quality 4K Ultra-High Definition Television (UHDTV) video workflow across a nationally distributed set of storage and computing resources that is relevant to emerging Department of Defense data processing challenges.

Visit the My ESnet Portal to view real-time network traffic on ESnet.

Read more:

ESnet partners with Corsa, REANNZ and Google in first end-to-end trans-Pacific SDN BGP multi-AS network

Corsa Technology, ESnet, and REANNZ have successfully demonstrated the first international Software Defined Networking (SDN)-only IP transit network of three Autonomous Systems (AS) managed as SDN domains.  The partners took the approach of building and testing an Internet-scale SDN solution that not only embodies the SDN vision of separation of control and data, but enables seamless integration of SDN networks with the Internet.

This first implementation passed through three AS domains: the Energy Sciences Network (ESnet) at Berkeley, REANNZ at Wellington, and a Google research deployment at Victoria University, Wellington (NZ). ESnet’s node used the Corsa DP6420 640 Gbps data plane as the OpenFlow hardware packet forwarder, controlled by the open-source VANDERVECKEN SDN controller stack (based on RouteFlow and Quagga).

Read more.

ESnet’s Inder Monga Co-authors Article on Growing Role of Optical Networks

ESnet Chief Technologist Inder Monga is co-author of “Optical Networks Come of Age,” which has just been published in the September 2014 issue of Optics & Photonics News.

Although fiber optic transmission capacity has grown by seven orders of magnitude in just 20 years, these systems serve mainly as the “fat pipes,” the large-scale plumbing of the Internet, according to the article. But that is changing.

“Greater use of optical networks—particularly in network edge applications that carry less aggregated, more ‘bursty’ service traffic—and continued traffic growth will soon revise this picture,” the authors write. “A changing landscape in fiber optic communication technologies is stimulating a resurgence of interest in optical switching. These trends are coming together in ways that hold promise for the long-anticipated widespread deployment of optically switched fiber networks that respond in real time to changing traffic and operator requirements.

“The ultimate mission is to enable the next-generation Internet—one that can support terabit-per-second speeds, but that remains economical and energy efficient,” the authors write.

In addition to Monga, the authors are Daniel Kilper, University of Arizona; Keren Bergman, Columbia University; Vincent W.S. Chan, MIT; George Porter, University of California, San Diego; and Kristin Rauschenbach, Notchway Solutions LLC.

A PDF of the article can be found at:

ESnet Chief Technologist Inder Monga

ESnet Student Assistant Henrique Rodrigues Wins Best Student Paper Award at Hot Interconnects

Henrique Rodrigues, a Ph.D. student in computer science at the University of California, San Diego, who is working with ESnet, won the best student paper award at the Hot Interconnects conference held Aug. 26-28 in Mountain View, Calif. Known formally as the 2014 IEEE 22nd Annual Symposium on High-Performance Interconnects, Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales.

Rodrigues’ paper, “Traffic Optimization in Multi-Layered WANs using SDN,” was co-authored by Inder Monga, Chin Guok and Eric Pouyoul of ESnet, Abhinava Sadasivarao and Sharfuddin Syed of Infinera Corp. and Tajana Rosing of UC San Diego.

“Special thanks to ESnet that gave me the opportunity to work on such an important and interesting topic,” Rodrigues wrote to his ESnet colleagues. “Also to the reviewers of my endless drafts, making themselves available to provide feedback at all times. I hope to continue with the good collaboration moving forward!”

OSCARS 0.6 hits the limelight

At the recent SC11 conference, the bubbly was flowing. ESnet launched its ANI 100 gigabit-per-second network and marked a quarter century of networking for DOE science. That big news may have overshadowed another milestone—SC11 was the first time OSCARS 0.6 was publicly demonstrated in a production environment. Now we’d like to give OSCARS its due.

OSCARS, or On-Demand Secure Circuits and Advance Reservation System, allows users to set up virtual circuits on demand to reserve bandwidth, streamlining the transfer of massive data sets across multiple network domains. OSCARS originated at ESnet, but we open-sourced it to the community long ago. Last spring the more modular OSCARS version 0.6 was released for testers and early adopters.
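The core idea behind advance reservation can be sketched in a few lines. The code below is an illustrative sketch only, not the OSCARS codebase or its API: a new request is admitted only if the link’s committed bandwidth stays within capacity for every moment the request overlaps existing reservations.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    start: int      # start time (e.g., epoch seconds)
    end: int        # end time
    gbps: float     # reserved bandwidth

class Link:
    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.reservations: list[Reservation] = []

    def committed(self, t0: int, t1: int) -> float:
        """Bandwidth already committed to reservations overlapping [t0, t1)."""
        return sum(r.gbps for r in self.reservations
                   if r.start < t1 and r.end > t0)

    def reserve(self, req: Reservation) -> bool:
        # Conservative admission check: sum the rate of every reservation
        # that overlaps the request window at any point.
        if self.committed(req.start, req.end) + req.gbps <= self.capacity:
            self.reservations.append(req)
            return True
        return False

link = Link(capacity_gbps=100.0)
assert link.reserve(Reservation(0, 3600, 40.0))         # accepted
assert link.reserve(Reservation(1800, 5400, 50.0))      # accepted (peak 90G)
assert not link.reserve(Reservation(2000, 3000, 20.0))  # would exceed 100G
```

A real multi-domain reservation, of course, must run this kind of check in every domain along the path and commit only if all of them agree, which is the part OSCARS’ inter-domain machinery handles.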

Other famous OSCARS

The performance of OSCARS 0.6 at SC11 showed us that we met our design goal of creating a flexible and modular framework. This was reflected in the demos, which were easy for folks to customize according to their needs. In the demo “Enabling Large Scale Science using Inter-domain Circuits over OpenFlow,” Tom Lehman of ISI used OSCARS to provide the functionality to control OpenFlow switches. Thanks to the flexibility to customize software built into OSCARS 0.6, ESnet’s Eric Pouyoul was able to produce a variation of that application, customizing OSCARS 0.6 for resource brokering. OSCARS also played a part in the successful demonstration of Internet2’s Dynamic Network System (DYNES). The goal of DYNES is to work with regional networks and campuses, using OSCARS to schedule and support scientific data flows from the LHC and other data-intensive science programs such as LIGO, the Virtual Observatory, and other large-scale sky surveys.

Most of the 100 Gbps demos at SC were supported by both the ANI 100 Gbps network and the 100 Gbps SCinet showfloor network. OSCARS 0.6 was used to schedule all eight of the demos using the 100 Gbps ANI network, which included complex visualizations of climate models, the Large Hadron Collider and the VERY early history—13.5 billion years ago, or 100 billion in dog years—of the Universe. OSCARS also controlled the approximately 100 different connections at SCinet, as well as connecting to three other OSCARS instances on the show floor.

OSCAR the Grouch

We used OSCARS 0.6 to provision the network, scheduling user time-slices of the 100 gigabit-per-second ANI and SCinet network, 24 hours a day, over the period of a week so they could test the demos in advance without having to get up at 3:00 a.m. to do it.

OSCARS 0.6 ended up making certain network engineers’ lives much easier. According to my colleague Evangelos Chaniatakis a.k.a. Vangelis, who was involved in the gritty details of setting up OSCARS 0.6 at the show, his team was required to make last-minute changes to the pre-existing network framework to work with the new hardware but didn’t receive the equipment until the week before the conference. The modularity ESnet built into OSCARS 0.6 helped the team get the network working at short notice.

 Less of a Software, More of a Service

Every year the number of reservations and circuits at SC continues to grow. The SC11 network required roughly twice the number of VLANs of the previous year. While the bandwidth wasn’t much bigger, and there were approximately the same number of customers, this year’s users definitely had more requirements. “On the whole, OSCARS 0.6 was really stable,” Vangelis reports. “It worked fine.” But the lessons learned at SC11 made us rethink the OSCARS 0.6 service module and requirements. In the near future, we intend to tweak OSCARS 0.6 to provide users more flexibility, making it less of a software and more of a service.

ECSEL leverages OpenFlow to demonstrate new network directions

ESnet and its collaborators successfully completed three days of demonstrating its End-to-End Circuit Service at Layer 2 (ECSEL) software at the Open Networking Summit held at Stanford a couple of weeks ago. Our goal is to build “zero-configuration circuits” to help science applications seamlessly use networks for optimized end-to-end data transport. ECSEL, developed in collaboration with NEC, Indiana University, and the University of Delaware, builds on some exciting new conceptual thinking in networking.

Wrangling Big Data 

To put ECSEL in context, the proliferating tide of scientific data flows – anticipated at 2 petabytes per second as planned large-scale experiments get in motion – is already challenging networks to be exponentially more efficient. Wide area networks have vastly increased bandwidth and enable flexible, distributed, scientific workflows that involve connecting multiple scientific labs to a supercomputing site, a university campus, or even a cloud data center.

Heavy network traffic to come

The increasing adoption of distributed, service-oriented computing means that resource and vendor independence for service delivery is a key priority for users. Users expect seamless end-to-end performance and want the ability to send data flows on demand, no matter how many domains and service providers are involved.  The hitch is that even though the Wide Area Network (WAN) can have turbocharged bandwidth, at these exponentially increasing rates of network traffic even a small blockage in the network can seriously impair the flow of data, trapping users in a situation resembling commute conditions on sluggish California freeways. These scientific data transport challenges that we and other R&E networks face are just a taste of what the commercial world will encounter with the increasing popularity of cloud computing and service-driven cloud storage.
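The freeway analogy has real teeth. The well-known Mathis et al. model bounds steady-state TCP throughput by rate ≤ (MSS/RTT) · (1.22/√p), where p is the packet loss rate. A quick back-of-the-envelope calculation (illustrative numbers, not ESnet measurements) shows that a single flow on a 100 ms cross-country path with just 0.01% loss is capped around 14 Mbps, no matter how fat the pipe:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Upper bound on loss-limited TCP throughput (Mathis model, C ~= 1.22)."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss))

# 1460-byte segments, 100 ms round-trip time, 0.01% packet loss:
rate = mathis_throughput_bps(1460, 0.100, 1e-4)
print(f"{rate / 1e6:.1f} Mbps")  # ~14.2 Mbps -- even on a 100 Gbps link
```

This is why a “small blockage” that drops one packet in ten thousand can starve a big science transfer, and why keeping loss off the end-to-end path matters so much.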

Abstracting a solution

One key piece of feedback from application developers, scientists and end users is that they do not want to deal with complexity at the infrastructure level while still accomplishing their mission. At ESnet, we are exploring various ways to make networks work better for users. A couple of concepts could be game-changers, according to Open Network Summit conference presenter and Berkeley professor Scott Shenker: 1) using abstraction to manage network complexity, and 2) extracting and exposing simplicity out of the network. Shenker himself cites Barbara Liskov’s Turing Lecture as inspiration.

ECSEL is leveraging OSCARS and OpenFlow within the Software Defined Networking (SDN) paradigm to elegantly prevent end-to-end network traffic jams.  OpenFlow is an open standard to allow application-driven manipulation of network flows. ECSEL is using OSCARS-controlled MPLS virtual circuits with OpenFlow to dynamically stitch together a seamless data plane delivering services over multi-domain constructs.  ECSEL also provides an additional level of simplicity to the application, as it can discover host-network interconnection points as necessary, removing the requirement of applications being “statically configured” with their network end-point connections. It also enables stitching of the paths end-to-end, while allowing each administrative entity to set and enforce its own policies. ECSEL can be easily enhanced to enable users to verify end-to-end performance, and dynamically select application-specific protocol forwarding rules in each domain.
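The stitching idea can be illustrated with a toy sketch (hypothetical names throughout, not the ECSEL code): each administrative domain provisions only its own segment, via its own machinery, whether OSCARS circuits or OpenFlow rules, and the segments are then chained at the discovered interconnect points, with each domain free to enforce its own policy.

```python
def stitch(domains, src, dst):
    """Chain per-domain segments into one end-to-end path.

    `domains` is an ordered list of dicts, each naming a domain and its
    ingress/egress interconnect points (discovered dynamically in ECSEL's
    model rather than statically configured into the application).
    """
    path, prev_egress = [], src
    for d in domains:
        if d["ingress"] != prev_egress:
            raise ValueError(f"{d['name']}: ingress does not meet "
                             f"previous egress {prev_egress}")
        # Each domain provisions its own segment under its own policy.
        path.append((d["name"], d["ingress"], d["egress"]))
        prev_egress = d["egress"]
    if prev_egress != dst:
        raise ValueError("path does not terminate at the destination")
    return path

domains = [
    {"name": "campus-A", "ingress": "hostA",    "egress": "xpoint-1"},
    {"name": "wan",      "ingress": "xpoint-1", "egress": "xpoint-2"},
    {"name": "campus-B", "ingress": "xpoint-2", "egress": "hostB"},
]
print(stitch(domains, "hostA", "hostB"))
```

The validation step is the point: the end-to-end service exists only if every adjacent pair of segments actually meets, which is exactly what discovering interconnection points buys the application.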

The OpenFlow capabilities, whether in an enterprise/campus network or within the data center, were demonstrated with the help of NEC’s ProgrammableFlow Switch (PFS) and ProgrammableFlow Controller (PFC). We leveraged a special interface developed by NEC to program a virtual path from ingress to egress of the OpenFlow domain. ECSEL accessed this special interface programmatically when executing the end-to-end path stitching workflow.

Our anticipated next step is to develop ECSEL as an end-to-end service by making it an integral part of a scientific workflow. The ECSEL software will essentially act as an abstraction layer, where the host (or virtual machine) doesn’t need to know how it is connected to the network–the software layer does all the work for it, mapping out the optimum topologies to direct data flow and make the magic happen. To implement this, ECSEL is leveraging the modular architecture and code of the new release of OSCARS 0.6.  Developing this demonstration yielded sufficient proof that well-architected and modular software with simple APIs, like OSCARS 0.6, can speed up the development of new network services, which in turn validates the value-proposition of SDN. But we are not the only ones who think that ECSEL virtual circuits show promise as a platform for spurring further innovation. Vendors such as Brocade and Juniper, as well as other network providers attending the demo were enthusiastic about the potential of ECSEL.

But we are just getting started. We will reprise the ECSEL demo at SC11 in Seattle, this time with a GridFTP application using Remote Direct Memory Access (RDMA) which has been modified to include the XSP (eXtensible Session Protocol) that acts as a signaling mechanism enabling the application to become “network aware.”  XSP, conceived and developed by Martin Swany and Ezra Kissel of Indiana University and University of Delaware,  can directly interact with advanced network services like OSCARS – making the creation of virtual circuits transparent to the end user. In addition, once the application is network aware, it can then make more efficient use of scalable transport mechanisms like RDMA for very large data transfers over high capacity connections.

We look forward to seeing you there and exchanging ideas. Until Seattle, any questions or proposals on working together on this or other solutions to the “Big Data Problem,” don’t hesitate to contact me.

–Inder Monga

ECSEL Collaborators:

Eric Pouyoul, Vertika Singh (summer intern), Brian Tierney: ESnet

Samrat Ganguly, Munehiro Ikeda: NEC

Martin Swany, Ahmed Hassany: Indiana University

Ezra Kissel: University of Delaware

Idea Power: Two ESnet Projects are Honored With Internet2 IDEA Awards

We are proud to announce that two of ESnet’s projects have received IDEA (Internet2 Driving Exemplary Applications) awards in Internet2’s 2011 annual competition for innovative network applications that have had the most positive impact and potential for adoption within the research and education community. (see: Internet2’s press release).

Internet2 recognized OSCARS (On-Demand Secure Circuits and Advance Reservation System), developed by the ESnet team led by Chin Guok, including Evangelos Chaniotakis, Andrew Lake, Eric Pouyoul and Mary Thompson. Contributing partners also included Internet2, USC ISI and DANTE.

ESnet’s MAVEN (Monitoring and Visualization of Energy consumed by Networks) proof of concept application was also recognized with an IDEA award in the student category. MAVEN was prototyped by Baris Aksanli during his summer internship at ESnet. Baris is a Ph.D. student at the University of California, San Diego, conducting research at the System Energy Efficiency Lab with his thesis advisor, Dr. Tajana Rosing. Baris worked closely with his summer advisor, Inder Monga, and Jon Dugan to implement MAVEN as part of ESnet’s new Green Networking Initiative.

The idea behind OSCARS

OSCARS enables researchers to automatically schedule and guarantee end-to-end delivery of scientific data across networks and continents. For scientists, being able to count on reliable data delivery is critical as scientific collaborations become more expansive, often global. Meanwhile, in disciplines ranging from high-energy physics to climate, scientists are using powerful, geographically dispersed instruments like the Large Hadron Collider that are producing increasingly massive bursts of data, challenging the capabilities of traditional IP networks.

OSCARS virtual circuits can reliably schedule time-sensitive data flows – like those from the LHC – round the clock across networks, enabling research and education networks to seamlessly meet user needs. OSCARS code is also being deployed by R&E networks worldwide to support an ever-growing user base of researchers with data-intensive collaboration needs. Internet2, U.S. LHCnet, NORDUNet, RNP in Brazil as well as over 10 other regional and national networks have currently implemented OSCARS for virtual circuit services. Moreover, Internet2’s NSF-funded DyGIR and DYNES projects will in 2012 deploy over 60 more instances of OSCARS at university campuses and regional networks to support scientists involved in LHC, Laser Interferometer Gravitational-Wave Observatory (LIGO), Large Synoptic Survey Telescope (LSST) and Electronic Very-Long Baseline Interferometry (eVLBI) programs.

We are proud of the hard work and dedication the OSCARS development team has demonstrated since the start of this project. Just as importantly, we are proud to see this work paying off with new scientific collaborations and discoveries.

The potential of MAVEN

The Monitoring and Visualization of Energy consumed by Networks (MAVEN) project is a brand new prototype portal that will help network operators and researchers better track live network energy consumption and environmental conditions. MAVEN – implemented by Baris during his summer internship – is a first major step for ESnet in instrumenting our network with the tools to understand these operational dynamics. As networks continue to get bigger and faster, they will require more power and cooling in an era of decreased energy resources. To address this pressing challenge, ESnet is leading a new generation of research aimed at understanding how networks can operate in a more energy-efficient manner. We are grateful for Baris’ significant contributions in leading the development of MAVEN and glad to see that his talent is being recognized by the R&E networking community through this award.

Baris is now back in school at UCSD, completing his Ph.D. in computer science. Congratulations Baris!

The path to interoperability passes through Rio


Museu de Arte Moderna do Rio de Janeiro (MAM)

Photo: Embratur

This week, Inder Monga is representing ESnet at the 11th Annual Global LambdaGrid Workshop. The GLIF hosts a meeting of research & education (R&E) network operators, network vendors and researchers that support the paradigm of lambda networking. The GLIF worldwide network is based around a number of lambdas–dedicated high-capacity circuits based on optical wavelengths, and which terminate at exchange points known as GOLEs (GLIF Open Lightpath Exchanges). On Monday, a smaller subset of GLIF members, GLIF Americas, will meet to share the various developments in their own R&E networks. ESnet will present the exciting new developments in the Advanced Networking Initiative, including leading work on measuring and sharing network power consumption. 

On Tuesday, September 13, at the Museum of Modern Art, ESnet participates in a Network Services Interface (NSI) protocol “plugfest” with OSCARS, its award-winning On-Demand Secure Circuits and Advance Reservation System software, testing it against other bandwidth reservation software to determine its level of interoperability and find any issues with specifications. It is encouraging to note that seven independent implementations of NSI are participating in the “plugfest.” OSCARS currently implements the Inter-Domain Control protocol (IDCP) developed jointly with the DICE working group to accomplish inter-domain connections today. Converging on a standard NSI protocol will enable the larger GLIF community to participate in federated, multi-domain virtual circuits. For more information on OSCARS and NSI, you can reach Chin Guok, technical lead of OSCARS software development within ESnet, Evangelos Chaniotakis, developer of NSI protocol for the plugfest or Inder himself who is co-chair of the NSI working group in OGF.

Seven implementations and hard working NSI developers from around the world

On Wednesday, September 14, the NSI session at GLIF will discuss the current state of the Network Services Interface (NSI) 1.0 standards specification and the work ahead for the community in getting production instances of the protocol deployed. Until now, NSI has been largely an academic exercise, but that is changing with the plugfest.

Also that day, Inder will be giving a talk titled “Networks & Power–ESnet’s Initiatives towards Green.” The talk will focus on the recent design and prototype of a network power measurement tool that was developed by Baris Aksanli, a UCSD summer intern, under Inder’s mentorship. It will also give a preview of joint theoretical network energy efficiency research with Baris and his advisor Tajana Rosing at UCSD that is currently being submitted as a conference paper. Research into energy-efficient networking is important to ESnet. Energy efficiency is an issue that will assume international importance as the volume of data carried by scientific networks is relentlessly expanding, putting greater demands on networks in an era of rising energy costs.