Across the North Atlantic for Research and Education


Amsterdam, Berkeley, Bloomington, Copenhagen, Ottawa, Utrecht, Washington D.C., April 2018 – Addressing the exponentially growing needs of international science collaborations, connectivity providers for research and education (R&E) continue to find new ways to join forces.

In 2016, a six-party collaboration between North American and European R&E network providers secured a record-breaking 640 Gbit/s of bandwidth for R&E across the North Atlantic. This alliance has now strengthened its cooperation to bring even more bandwidth, resiliency, and redundancy to networking for research and education, supporting global research collaborations that are advancing knowledge to help solve our most pressing problems.

MoU signed

A Memorandum of Understanding (MoU) has been signed between the partners involved in the Advanced North Atlantic (ANA) collaboration (CANARIE, ESnet, GÉANT, Internet2, NORDUnet, and SURF) and the National Science Foundation (NSF)-funded Networks for European, American, and African Research (NEAAR) Project. The MoU paves the way for a future cooperative partnership offering a combined capability to the R&E community that far exceeds what any single organization can provide. The collaboration is based on capacity sharing, reciprocal backup agreements, and joint operations of high-speed 100 Gbit/s interconnects through Global R&E Exchange Points (GXPs). Moreover, with the additional NEAAR link now available, the transatlantic R&E bandwidth has risen to 800 Gbit/s, with the new link further strengthening the system's resilience.

The MoU between the ANA collaboration and the NEAAR Project takes global collaboration in R&E networking to a new and unprecedented level of interaction and resource sharing, in order to meet the international R&E community's increasing need for high-speed global network connectivity.

The partners

The Advanced North Atlantic (ANA) collaboration started in 2012 as a technology project to challenge the market to deliver 100 Gbit/s trans-Atlantic circuits. Since then, it has grown into a strong collaboration between six R&E networking organizations from North America (CANARIE, ESnet, and Internet2) and Europe (GÉANT, NORDUnet, and SURF), with a steering group and working groups on operations, technology, cost sharing, and legal matters.

The NEAAR Project is a cross-organizational initiative providing services and bandwidth to connect researchers within the United States with their counterparts in Europe and Africa. Indiana University jointly leads the NEAAR collaboration with GÉANT, the European research and education network (REN), and with the African regional RENs. In addition to a 100 Gbit/s lambda between the United States and Europe, the NEAAR Project facilitates science engagement with researchers throughout Europe and Africa.

New level of collaboration

The ANA collaboration has inspired similar initiatives in other regions. Looking to the future, the ANA collaboration aims to inspire more intercontinental collaborations and will work with the telecommunications industry to create powerful, resilient and sustainable intercontinental transmission systems for research and education.

Howard Pfeffer, President and CEO, Internet2: “We believe collaboration has been, and will continue to be, the core mission of supporting global research and education. All the partners involved in this collaboration continue to demonstrate their unique ability to work collectively toward a shared vision. Internet2 looks forward to continuing this global effort in support of advancing knowledge and scholarship.”

René Buch, CEO, NORDUnet: “Collaboration through projects like ANA is the cornerstone in the creation of a global R&E infrastructure. Joining resources both financially and organizationally is the only viable way forward for the NRENs to provide researchers and students with a global infrastructure that has sufficient bandwidth and reach to support research and education, not only today but also in the future. NORDUnet has a strong commitment to this common goal and is investing significant resources in facilitating this paradigm on a global scale.”

Inder Monga, Executive Director, ESnet: “This private-public collaboration has been exemplary in proving how multiple partners, in different countries, can work toward the common good for the science, research and education community. We welcome NEAAR as ANA’s new partner and continued collaborative efforts on science engagement across continents.”

Erwin Bleumink, Member, SURF Board: “Research and educational institutions are increasingly using large research facilities, educational content, and e-infrastructure services delivered by providers or fellow institutions all over the world. It is essential to ensure that our network is connected to the rest of the world. Therefore, we’re proud to work with our global partners to design and deliver a Global Network Architecture. The MoU between ANA and NEAAR shows that the principles defined in this architecture lead to successful collaborations.”

Jim Ghadbane, President and CEO, CANARIE: “We are strong proponents of the power of collaborations to build and evolve the network infrastructure that enables global multidisciplinary research and innovation. We look forward to continuing to strengthen this partnership and to applying its principles to future collaborations.”

Erik Huizer, CEO, GÉANT: “The ANA collaboration is a prime example of how international research networks collaborate and share resources to obtain the best possible performance for research and education. GÉANT is proud to be part of that collaboration which delivers unprecedented data capacity to the European research and education community. We are very pleased to be part of the collaborative model of sharing capacity and increasing resilience through reciprocal back-up agreements which further strengthen GÉANT’s global reach. We welcome NEAAR as ANA’s new MoU partner.”

Jennifer Schopf, Principal Investigator, NEAAR Project: “We are pleased to be the latest partner in this wide-reaching collaboration. It is through partnerships and collaborations that we can best support R&E end users efficiently and reliably. We look forward to being a part of this team of community leaders.”

Global Collaboration:

The global ecosystem of Research & Education Networks has brought forward a number of initiatives that emphasize the way these networks work together. Two prominent ones are:

 

ESnet Staff Take Expertise on the Road to Help Universities Operate Innovative Networks


ESnet’s Jason Zurawski (standing center) presents at the OIN workshop held in March 2016 at Millersville University in Pennsylvania.

Although ESnet is well known for its expertise in supporting the transfer of datasets across the country and around the globe, for the past four years the facility's staff have also been transferring their networking expertise to staff at other research and education organizations.

Partnering with Internet2 and Indiana University, ESnet co-led 23 workshops as part of the Operating Innovative Networks (OIN) series. Through both in-person and online workshops, the organizers reached an estimated 750 network employees at 360 institutions in 39 states and 38 other nations.

Each workshop was held at a different location, and sites were usually chosen by working with regional research and education (R&E) networks. This allowed smaller organizations to tap into the combined expertise of the workshop leaders and also made the workshops more accessible to staff at institutions without large travel budgets, said Jason Zurawski, ESnet's lead for the workshops.

The workshops followed a standard format. The first day covered the Science DMZ architecture, data transfer node tuning and the perfSONAR measurement software. The second day was devoted to some of the concepts behind software-defined networking.

“The OIN workshops have a good balance of lecture and hands-on practical experience. I got the most out of the exercises where the instructors, like Jason Zurawski, took real-world performance issues from the attendees’ live perfSONAR servers and demonstrated how to analyze, explain, and, in several cases, resolve throughput limitations,” said Network Manager Brian Jemes of Information Technology Services at the University of Idaho. “In addition, the OIN workshops provided an opportunity to meet and share information with research information technology teams outside our institution.”

Zurawski said that on a couple of occasions, the instructors diverted from the planned instruction to address real-time issues, such as debugging network performance problems and rethinking a planned network architecture.

According to Zurawski, the workshop leads conducted surveys after every session, and the responses universally described the workshops as timely and useful. While the larger organizations could more quickly apply the information to their operations, the smaller schools were more likely to use the content for planning and then implement it as resources became available.

“We really tried to include as many institutions as we could in each session and optimize the location to reduce travel,” said Zurawski, who convened all 23 workshops. “We really saw this as an opportunity to talk to the people on the ground who were trying to implement these technologies.”

The value of the workshops was recognized by both the DOE Office of Science, which funded ESnet to participate, and the National Science Foundation, which provided funding under award number 1541421 for the last 10 workshops as part of its Campus Cyberinfrastructure program.

Jemes said his network team had found significant throughput-limiting issues, such as microbursts of packet loss due to a faulty optical card in their service provider's network, that none of their network management tools could identify. At the workshop, they learned that perfSONAR servers “are an indispensable tool for maintaining a high throughput network path for the hosts in our Science DMZ.”

“But despite being well-designed with good documentation and packaged for easy install, perfSONAR is not a turn-key solution,” Jemes said. “To set up and use perfSONAR effectively, you need to do network and server tuning.”

In all, the University of Idaho sent five people to OIN workshops over the past four years and “we found the workshops to be extremely valuable in quickly getting experienced network engineers and server administrators up to speed on the effective tuning and operation of a perfSONAR server and the Science DMZ network,” Jemes said.
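The tuning Jemes mentions is largely standard Linux kernel configuration. As a rough, hypothetical illustration (not taken from the workshop materials), the short Python sketch below compares a host's current TCP settings against example targets of the kind discussed for high-throughput data transfer hosts; the specific values are placeholders that depend on path bandwidth and latency, and ESnet's fasterdata.es.net site carries the current guidance.

    #!/usr/bin/env python3
    # Hypothetical sketch: report how a Linux host's TCP settings compare
    # with example targets for high-throughput transfers. The values below
    # are placeholders, not official recommendations.
    from pathlib import Path

    TARGETS = {
        "net.core.rmem_max": "67108864",             # max receive socket buffer (bytes)
        "net.core.wmem_max": "67108864",             # max send socket buffer (bytes)
        "net.ipv4.tcp_rmem": "4096 87380 33554432",  # min/default/max TCP receive buffer
        "net.ipv4.tcp_wmem": "4096 65536 33554432",  # min/default/max TCP send buffer
        "net.ipv4.tcp_mtu_probing": "1",             # recover gracefully from MTU black holes
    }

    def current(key):
        # sysctl keys map to files under /proc/sys, e.g. net.core.rmem_max
        return " ".join(Path("/proc/sys", *key.split(".")).read_text().split())

    for key, target in TARGETS.items():
        value = current(key)
        status = "ok" if value == target else "differs"
        print(f"{key}: current={value} target={target} [{status}]")

A check like this is only a starting point on a data transfer node or perfSONAR host; NIC settings, firmware, and storage throughput matter just as much.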

In September 2015, Clemson University hosted one of the workshops at the invitation of Kate Petersen Mace, then director of External Partnership Management at Clemson. In that role, she was project manager for the university's NSF Campus Cyberinfrastructure award, which funded the installation of a Science DMZ and the implementation of software-defined networking tools.

“As we completed the project, there was interest by both our network engineers and university researchers in learning more about the work and I also wanted to share this deeper knowledge about the Science DMZ with surrounding universities,” said Mace, who joined ESnet’s Science Engagement Team in December 2015. “It was very well attended and all of the people I talked to said it was very beneficial.”

After joining ESnet, Mace began helping teach at the workshops, in particular talking about the importance of science engagement and security best practices, “not to dictate exactly what to do, but to give them ideas on what to think about when implementing new technologies and capabilities. We discussed that the Science DMZ is meant to serve as a security architecture, not just a way to speed up data transfers.”

Although the workshop series is now on hiatus, Zurawski said the team is considering whether to continue it, adapting the content based on feedback from attendees. And while many universities have gotten up to speed after attending an OIN workshop, there is still a need for new information.

Damian Clarke, the CIO for South Carolina State University (SCSU), was the first representative from his university to attend one of the workshops, which was also the final one, held in December 2017.

“As the CIO of SCSU, I was impressed by the knowledge base of the presenters and the right balance of lecture and hands-on exercises,” Clarke said. “I felt that many topics were covered without feeling overwhelmed or confused. I hope that the workshops continue to be funded as more HBCUs (Historically Black Colleges and Universities) and MSIs (Minority Serving Institutions) need to attend.”

 

Secretary of Energy Rick Perry Visits Berkeley Lab


Under the guidance of ESnet Director Inder Monga and Network Engineer Eli Dart, Secretary Perry transferred 500 GB of data in minutes from the Argonne Leadership Computing Facility (ALCF) to NERSC with Globus software. (Photo by Paul Mueller, Berkeley Lab)

On March 27, 2018, Secretary of Energy Rick Perry visited the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), getting a firsthand view of how Berkeley Lab combines team science with world-class facilities to develop solutions for the scientific, energy, and technological challenges facing the nation.

During his stop at Shyh Wang Hall, Perry learned about Berkeley Lab’s contributions to DOE’s High Performance Computing for Manufacturing program (HPC4Mfg) from Peter Nugent, Berkeley Lab Computational Research Division (CRD) Deputy for Scientific Engagement.

Under the guidance of Energy Sciences Network (ESnet) Director Inder Monga and Network Engineer Eli Dart, Perry also transferred 500 GB of data in minutes from the Argonne Leadership Computing Facility in Lemont, Illinois, to the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, using Globus software.

NERSC Deputy Katie Antypas then took Perry on a tour of NERSC's machine room, where he signed the center's newest supercomputer, Cori.

The Secretary's visit was part of a three-day Bay Area tour that included stops at Lawrence Livermore National Laboratory, Sandia National Laboratories' California campus, Berkeley Lab, and SLAC National Accelerator Laboratory.

For more photos visit our Facebook Page.

Learn more about the visit: https://newscenter.lbl.gov/2018/03/27/secretary-of-energy-perry/

Into the Medical Science DMZ


Speeding research. The Medical Science DMZ expedites data transfers for scientists working on large-scale research such as biomedicine and genomics while maintaining federally required patient privacy.

In a new paper, Lawrence Berkeley National Laboratory (Berkeley Lab) computer scientist Sean Peisert, Energy Sciences Network (ESnet) researcher Eli Dart, and their collaborators outline a “design pattern” for deploying specialized research networks and ancillary computing equipment for HIPAA-protected biomedical data that provides high-throughput network data transfers and high-security protections.

“The original Science DMZ model provided a way of securing high-throughput data transfer applications without the use of enterprise firewalls,” says Dart. “You can protect data transfers using technical controls that don’t impose performance limitations.”

Read More at Science Node: https://sciencenode.org/feature/into-the-science-dmz.php 

Left: Eli Dart, ESnet Engineer | Right: Sean Peisert, Berkeley Lab Computer Scientist

Women in IT Invited to Apply for WINS Program at SC18 Conference


Applications are now being accepted for the Women in IT Networking at SC (WINS) program at the SC18 conference to be held Nov. 11-16 in Houston. WINS seeks qualified female U.S. candidates in their early to mid-career to join the volunteer team to help build and run SCinet, the high-speed network created at each year's conference. Here's how to apply.

WINS was launched to expand the diversity of the SCinet volunteer staff and provide professional development opportunities to highly qualified women in the field of networking. Selected participants will receive full travel support and mentoring by well-known engineering experts in the research and education community.

For the second year in a row, Kate Mace of ESnet’s Science Engagement Team is the WINS chair for SCinet.

Applications are to be submitted using the WINS Application Form. The deadline to apply is 11:59 p.m. Friday, March 23 (Pacific time). More information can be found on the SC18 WINS call for participation.

Each year, volunteers from academia, government and industry work together to design and deliver SCinet. Planning begins more than a year in advance and culminates in a high-intensity, around-the-clock installation in the days leading up to the conference.

Launched in 2015, the WINS program proved successful enough to earn an official three-year award from the National Science Foundation (NSF) and DOE's ESnet. WINS is a joint effort between ESnet, the Keystone Initiative for Network Based Education and Research (KINBER), the University Corporation for Atmospheric Research (UCAR), and SCinet.


CENIC Honors Astrophysics Link to NERSC via ESnet


A star-forming region of the Large Magellanic Cloud (Credit: European Space Agency via the Hubble Space Telescope)

An astrophysics project connecting UC Santa Cruz's Hyades supercomputer cluster to NERSC via ESnet and other networks won the CENIC 2018 Innovations in Networking Award for Research Applications, announced last week.

Through a consortium of Science DMZs and links to NERSC via CENIC's CalREN and the DOE's ESnet, the connection enables UCSC to carry out high-speed transfers of large data sets produced at NERSC, which supports the Dark Energy Spectroscopic Instrument (DESI) and AGORA galaxy simulations, at speeds up to five times previous rates, with the potential to reach 20 times previous rates in 2018. Peter Nugent, an astronomer and cosmologist from the Computational Research Division, was pivotal in the effort. Read UC Santa Cruz's press release.

ESnet’s Inder Monga Featured in Video Recapping Netwerkdag 2017 in the Netherlands


ESnet Director Inder Monga’s keynote talk is among the events highlighted in a new video recapping the events at “Netwerkdag 2017 (Network Day 2017)”, a daylong meeting organized by SURFnet, the national research and education (R&E) network of the Netherlands. The event was held Dec. 14, 2017, in Utrecht under the theme of making connections.

In his talk on the future of R&E networking, Monga outlined a vision for next-generation networks that includes the increasing importance of software and software expertise in building networks, stronger security, and greater telemetry and analytics capability (including research in machine learning for networking) to tackle the growth in data as well as in the number of data-producing devices.

ESnet Workshop Report Outlines Data Management Needs in Metagenomics, Precision Medicine


William Barnett, the chief research informatics officer for the Indiana Clinical and Translational Sciences Institute (CTSI) and the Regenstrief Institute at Indiana University, discusses the promise of precision medicine at the workshop.

Like most areas of research, the bioinformatics sciences community is facing an unprecedented explosion in the size and number of data sets being created, spurred largely by the decreasing cost of genome sequencing technology. As a result, there is a critical need for more effective tools for data management, analysis, and access.

Adding to the complexity, two major fields in bioinformatics – precision medicine and metagenomics – have unique data challenges and needs. To help address the situation, the Department of Energy's Energy Sciences Network (ESnet) organized a workshop in 2016 at Lawrence Berkeley National Laboratory. Held as part of the CrossConnects workshop series, the two-day meeting brought together scientists from metagenomics and precision medicine, along with experts in computing and networking.

A report outlining the findings and recommendations from the workshop was published Dec. 19, 2017 in Standards in Genomic Sciences. The report reflected the input of 59 attendees from 39 organizations.

One driver for publishing the report was the realization that although each of the two focus areas has unique requirements, workshop discussions revealed several areas where the needs overlapped, said ESnet's Kate Mace, lead author of the report. In particular, the issue of data management loomed largest.

Read a summary of the findings and recommendations from the workshop.

ESnet, Globus Experts Design a Better Portal for Scientific Discovery


Globus, Science DMZ provide new architecture to meet demand for accessing shared data

These days, it’s easy to overlook the fact that the World Wide Web was created nearly 30 years ago primarily to help researchers access and share scientific data. Over the years, the web has evolved into a tool that helps us eat, shop, travel, watch movies and even monitor our homes.

Meanwhile, scientific instruments have become much more powerful, generating massive datasets, and international collaborations have proliferated. In this new era, the web has become an essential part of the scientific process, but the most common method of sharing research data remains firmly attached to the earliest days of the web. This can be a huge impediment to scientific discovery.

That’s why a team of networking experts from the Department of Energy’s Energy Sciences Network (ESnet), together with the Globus team from the University of Chicago and Argonne National Laboratory, has designed a new approach that makes data sharing faster, more reliable, and more secure. In an article published Jan. 15 in PeerJ Computer Science, the team describes “The Modern Research Data Portal: a design pattern for networked, data-intensive science.”

“Both the size of datasets and the quantity of data objects has exploded, but the typical design of a data portal hasn’t really changed,” said co-author Eli Dart, a network engineer with the Department of Energy’s Energy Sciences Network, or ESnet. “Our new design preserves that ease of use, but easily scales up to handle the huge amounts of data associated with today’s science.”

Read the full story.

The Modern Research Data Portal design pattern from a network architecture perspective: The Science DMZ includes multiple data transfer nodes (DTNs) that provide high-speed transfer between network and storage. Portal functions run on a portal server located on the institution's enterprise network. The DTNs need only speak the API of the data management service (Globus in this case).
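To give a feel for what “speaking the API of the data management service” looks like, here is a minimal, hypothetical sketch using the Globus Python SDK (globus_sdk): the portal submits a transfer between two DTN endpoints rather than moving any bytes itself. The endpoint UUIDs, paths, and token handling are placeholders; the paper and the Globus documentation describe the full pattern.

    # Hypothetical sketch of the portal-to-Globus interaction; endpoint IDs,
    # paths, and the access token are placeholders.
    import globus_sdk

    ACCESS_TOKEN = "REPLACE-WITH-GLOBUS-TRANSFER-TOKEN"  # obtained via a Globus Auth flow
    SOURCE_ENDPOINT = "UUID-OF-PORTAL-DTN"               # DTN in the Science DMZ
    DEST_ENDPOINT = "UUID-OF-USER-ENDPOINT"              # wherever the user wants the data

    # The portal server only talks to the Globus Transfer service; the DTNs
    # carry out the actual data movement.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(ACCESS_TOKEN)
    )

    task = globus_sdk.TransferData(
        tc, SOURCE_ENDPOINT, DEST_ENDPOINT, label="portal dataset delivery"
    )
    task.add_item("/data/example_dataset/", "/~/example_dataset/", recursive=True)

    result = tc.submit_transfer(task)
    print("Submitted Globus transfer task:", result["task_id"])

Because the portal only issues API calls, it can remain a modest web application on the enterprise network while the DTNs handle the high-speed flows.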

 

Berkeley Lab and ESnet Document Flow, Performance of 56-Terabyte Climate Data Transfer


The simulated storms seen in this visualization are generated from the finite volume version of NCAR’s Community Atmosphere Model. Visualization by Prabhat (Berkeley Lab).

In a recent paper entitled “An Assessment of Data Transfer Performance for Large‐Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6,” experts from Lawrence Berkeley National Laboratory (Berkeley Lab) and ESnet (the Energy Sciences Network) document the data transfer workflow, data performance, and other aspects of transferring approximately 56 terabytes of climate model output data for further analysis.

The data, required for tracking and characterizing extratropical storms, needed to be moved from the distributed Coupled Model Intercomparison Project (CMIP5) archive to the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

The authors found that there is significant room for improvement in the data transfer capabilities currently in place for CMIP5, both in terms of workflow mechanics and in data transfer performance. In particular, the paper notes that performance improvements of at least an order of magnitude are within technical reach using current best practices.

To illustrate this, the authors used Globus to transfer the same raw data set between NERSC and the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory.

Read the Globus story: https://www.globus.org/user-story-lbl-and-esnet
Read the paper: https://arxiv.org/abs/1709.09575