Patrick Dorn, a network engineer who joined ESnet in 2011, has been named the new leader of ESnet’s Network Engineering Group. He has held the job in an acting capacity since last September.
During his time with ESnet, Dorn spent a year at CERN in Switzerland, working to establish high-speed links between CERN and the U.S. research community.
Before joining ESnet, Dorn was a senior network engineer at the National Center for Supercomputing Applications in Urbana-Champaign, Illinois. At NCSA he held both technical and management roles.
While at NCSA, Dorn served as the SC08 conference chair for SCinet, the high-speed network that provides wired and wireless connectivity for the thousands of attendees. SCinet is entirely volunteer-driven and takes more than a year to plan and deploy.
Twice a year, ESnet staff meet with managers and researchers associated with each of the DOE Office of Science program offices to look toward the future of networking requirements and then take the planning steps to keep networking capabilities out in front of those demands.
Network engineers and researchers at DOE national labs take a similar forward-looking approach. Earlier this year, DOE’s SLAC National Accelerator Laboratory (SLAC) teamed up with AIC and Zettar and tapped into ESnet’s 100G backbone network to repeatedly transfer 1-petabyte files in 1.4 days over a 5,000-mile portion of ESnet’s production network. Even with the transfer bandwidth capped at 80 Gbps, the milestone demo achieved transfer rates five times faster than comparable technologies. The demo data accounted for a third of all ESnet traffic during the tests. Les Cottrell from SLAC presented the results at the ESnet Site Coordinators (ESCC) meeting held at Lawrence Berkeley National Laboratory in May 2017.
The tests ran over a 5,000-mile loop from the Department of Energy’s SLAC National Accelerator Laboratory in Menlo Park, Calif., across the country to Atlanta and back to SLAC. The data transfers are part of an effort to prepare for the amounts of data expected from experiments at SLAC’s planned Linac Coherent Light Source II (LCLS-II).
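As a rough sanity check on those figures, moving a petabyte in 1.4 days works out to an effective rate comfortably under the 80 Gbps cap (this sketch assumes a decimal petabyte, 10^15 bytes; a binary pebibyte would give a slightly higher number):

```python
# Effective throughput of the SLAC/Zettar demo: 1 PB in 1.4 days.
# Assumes a decimal petabyte (10**15 bytes).
petabyte_bits = 10**15 * 8            # total bits transferred
seconds = 1.4 * 24 * 60 * 60          # 1.4 days in seconds
gbps = petabyte_bits / seconds / 1e9  # effective rate in gigabits/second
print(f"{gbps:.1f} Gbps")             # ~66 Gbps, below the 80 Gbps cap
```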
“Collaborations like this provide the networking community with an opportunity to use a production network for testing new technologies and seeing how they perform in a real-world scenario,” said ESnet Director Inder Monga. “At the same time, ESnet also gets to learn about leading-edge products as part of our future planning process.”
Soon after John Paul Jones moved from Idaho to California in 1983, he and his wife visited the Berkeley Hat Company, where he bought a royal blue beret. Since then, during his 33+ years at Lawrence Livermore and Lawrence Berkeley national labs, the flat blue hat has become part of Jones’ persona.
But when he retires from ESnet at the end of June 2017, Jones said he may also think about hanging up that hat. Around the house, he said, he usually wears his blue and gold Golden State Warriors cap.
In 1995, the Department of Energy made the decision to move ESnet and NERSC from Livermore to Lawrence Berkeley National Laboratory. Jones knew people who were part of the ESnet team at Livermore and it piqued his interest when ESnet’s then-manager Jim Leighton called him in to talk about joining the group.
“He unrolled this big network map and showed it to me,” Jones recalled. “I said, ‘What!? Oh yeah – I am definitely in!’”
When ESnet made the move in 1996, Jones joined the group that configured, installed, maintained and did troubleshooting on the routers that powered the national network.
As he prepares to retire this month after more than 28 years at Lawrence Berkeley National Laboratory, Brian Tierney, head of ESnet’s Advanced Network Technologies Group, still remembers the exact moment when he knew where his career path would lead.
“I met Bill Johnston at San Francisco State and on the very first day of his Computer Graphics class, he told us ‘Anybody who gets an A in my class gets an internship in my group,’” Tierney recalled. “A light bulb went off and I knew I was going to get an A. I literally thought ‘That’s what I might do for the next 30 years.’”
Tierney started in Johnston’s Graphics Group as a graduate student assistant in 1988, and a year later he became a career staff member.
Among the key projects Tierney has contributed to are perfSONAR, the network performance toolkit, and fasterdata.es.net, a collection of tips and tools for, well, faster data transfers.
Ten students from the IT Academy at Richmond’s Kennedy High School spent the first week of their summer vacation getting hands-on experience in high-speed networking and getting first-hand advice on planning their future.
The students and IT Academy lead teacher LaRue Moore participated in the June 12-16 pilot workshop introducing them to networking for science. The five-day workshop followed a sequence developed by Sowmya Balasubramanian of ESnet: a 30-minute instructional presentation followed by 30 minutes of hands-on work. Topics included configuring IP addresses, tracing packets, assessing network performance and locating bottlenecks.
On the last day, students were given the assignment: You are a network administrator. You have five Raspberry Pis that serve as data transfer nodes. They are connected to a switch that can process 1000 megabits/second. The Raspberry Pis themselves can transfer at 100 megabits/second. A user wants to use one of the data transfer nodes and has approached you for help in finding the best node. You need to run tests to find which is the best node.
Working in teams, the students measured the round-trip time for each node and then balanced speed against packet loss to determine which node performed best. Students then presented their findings to the group, as if making a recommendation to an IT expert.
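The ranking step of that exercise can be sketched as a small program. The measurements below are made up for illustration; in the workshop the students gathered real round-trip times and loss figures with standard tools such as ping:

```python
# Sketch of the students' exercise: given per-node measurements
# (mean round-trip time in milliseconds and packet-loss fraction),
# pick the best data transfer node. Packet loss hurts throughput far
# more than a fraction of a millisecond of latency, so we rank on
# loss first and use RTT only as a tiebreaker.

def best_node(measurements):
    """Return the node with the lowest loss, then the lowest RTT."""
    return min(measurements, key=lambda m: (m["loss"], m["rtt_ms"]))

# Hypothetical results for three of the five Raspberry Pi nodes.
nodes = [
    {"name": "pi1", "rtt_ms": 1.2, "loss": 0.00},
    {"name": "pi2", "rtt_ms": 0.9, "loss": 0.02},  # fastest, but lossy
    {"name": "pi3", "rtt_ms": 1.5, "loss": 0.00},
]
print(best_node(nodes)["name"])  # pi1: loss-free and faster than pi3
```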
“This was extremely valuable for our students and now I want to see how we can scale it up to 40 students,” Moore said. “This week has given them both more knowledge and more confidence.”
ESnet, Indiana University, and Internet2 are hosting a virtual version of the next Operating Innovative Networks (OIN) workshop on Wednesday and Thursday, June 21-22. This completely online event offers hands-on training in Science DMZ architecture, software-defined networking, perfSONAR, Data Transfer Nodes, and science engagement.
The series is designed to help lab and campus network engineers deploy next-gen research networks that can effectively support data-intensive science. Sessions will be available throughout the day on the topics typically presented in an in-person workshop, but will be modified for the virtual audience. Due to time zone considerations, the presentations will run between 6 a.m. and 6 p.m. EDT over the two days, and will also be recorded for future use.
The two-day workshop will present material for building and deploying Science DMZs, Software Defined Networks, perfSONAR, Data Transfer Nodes, and Science Engagement. The content will be particularly useful for NSF Campus Cyberinfrastructure awardees that are being funded to upgrade their networks with these technologies, or those looking to prepare for the next CC* solicitation. By the end of the event, attendees will have a better understanding of the requirements for supporting scientific use of the network, architectural strategies that can simplify these interactions, and knowledge of tools that can mitigate problems users may encounter.
While the transatlantic high-speed links develop and expand, a group of network specialists from R&E networking organizations and exchange point operators from around the world are also collaborating to ensure researchers see the end-to-end performance results their science requires, not just locally but globally. To achieve this, network architects from around the globe have developed a set of global principles and technical guidelines for collaboration, as well as for sharing costs and aligning investments.
The news announcement also noted that the ANA Collaboration is in compliance with the Global Network Architecture (GNA) initiative’s Reference Architecture, first released in January 2017. That document is defining a reference architecture and creating a roadmap for both national and regional research & education networks to more seamlessly support research on an end-to-end basis.
On Monday, May 1, the Department of Veterans Affairs (VA) and the Department of Energy (DOE) announced the formation of a new partnership focused on the secure analysis of large digital health and genomic data, or so-called “big data,” from the VA and other federal sources to help advance health care for Veterans and others in areas such as suicide prevention, cancer and heart disease, while also driving DOE’s next-generation supercomputing designs.
Known as the VA-DOE Big Data Science Initiative, the partnership will be based within DOE’s national laboratory system, one of the world’s top resources for supercomputing. The effort will leverage the latest DOE expertise and technologies in big data, artificial intelligence and high-performance computing to identify trends that will support the development of new treatments and preventive strategies.
DOE’s high-speed Energy Sciences Network, or ESnet, will continue to work with the VA and national labs to create a secure, high-performance connection for data transfer and seamless networking. In particular, ESnet is sharing its expertise in the Science DMZ architecture and perfSONAR network performance characterization to help improve the end-to-end flow of data, which is often the largest obstacle to moving large data sets.
At present, combined DOE/VA teams, including scientists from ANL, LANL, LLNL and ORNL, are working with ESnet, DOE’s international research network that connects all of its science laboratories and facilities. Managed by Lawrence Berkeley National Laboratory, ESnet is coordinating with an existing protected health information (PHI) enclave at ORNL that will serve as the initial environment for scientific analyses.
Several VA electronic health record (EHR) and MVP data assets have been moved to this secure enclave allowing investigators to access the data and compute resources while ensuring the protection of our Veterans’ data. Direct, high-speed networks between the VA and DOE facilities are expected to be established by June 2017.
The Department of Energy’s Energy Sciences Network (ESnet), Indiana University, Internet2 and GÉANT today announced the release of a new version of the open-source tool perfSONAR, which stands for Performance Service Oriented Network Monitoring Architecture.
perfSONAR is a jointly developed and widely deployed test and measurement infrastructure that is used by science networks and facilities around the world to measure and ensure network performance. As research and education institutions become increasingly reliant on networking, open-source tools like perfSONAR allow network engineers to test and measure network performance, with the ability to archive data in order to pinpoint and solve service problems that may span multiple networks and international boundaries.
“As research is becoming increasingly data-intensive, the ability to pinpoint and eliminate network bottlenecks is critical in order to make the most effective use of networks,” said Inder Monga, ESnet director. “This new release leverages an open-source, time-series graphing package developed by ESnet that allows for easy exploration of measurement data.”
The updates in version 4.0 include:
new scheduling software called pScheduler that supports community-developed test tools
results archiving and analysis
new interactive time-series graphs for improved human analysis
Scientists around the world are increasingly collaborating to address global issues such as clean energy, medicine and protecting the environment. Their ability to share and analyse data is essential for advancing research, and as the size of those datasets grows, the need for high-speed global network connectivity becomes ever more critical.
Collaborating Across the Atlantic
That is why research and education (R&E) networks in Europe and North America have joined forces to find new ways to facilitate and enable scientific collaboration. Between them, the R&E networks on the two continents have now deployed links providing a total bandwidth of 740 gigabits per second (Gbps).
This record-breaking connectivity and resilience is the work of the Advanced North Atlantic (ANA) Collaboration. Started in 2013, ANA consists of six leading R&E networks: CANARIE (Canada), ESnet (USA), GÉANT (Europe), Internet2 (USA), NORDUnet (European Nordics), and SURFnet (The Netherlands).
“We’ve seen a tremendous growth in transatlantic connectivity since we set up the first 100 Gbps R&E transatlantic link at TNC 2013,” said Erwin Bleumink, CEO of SURFnet. “I am very pleased with the success of this international collaboration, in which SURFnet has been involved from the beginning.”
“Collaborations between research and education networks are unique and enable us as a community to address the exponentially growing data needs of science collaborations worldwide,” said Inder Monga, director of the U.S. Department of Energy’s ESnet, which deployed four trans-Atlantic links with a combined capacity of 340 Gbps in December 2014. “The combined capability offered to the research and education community far exceeds what any single organization can provide and moves us many steps forward towards accomplishing our vision of ‘scientific progress being completely unconstrained’.”