Secretary of Energy Rick Perry Visits Berkeley Lab


Secretary of Energy Rick Perry Learns About ESnet
Under the guidance of ESnet Director Inder Monga and Network Engineer Eli Dart, Secretary Perry transferred 500GB of data in minutes from ALCF to NERSC with Globus software. (Photo by Paul Mueller, Berkeley Lab)

On March 27, 2018, Secretary of Energy Rick Perry visited the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), getting a firsthand view of how Berkeley Lab combines team science with world-class facilities to develop solutions for the scientific, energy, and technological challenges facing the nation.

During his stop at Shyh Wang Hall, Perry learned about Berkeley Lab’s contributions to DOE’s High Performance Computing for Manufacturing program (HPC4Mfg) from Peter Nugent, Berkeley Lab Computational Research Division (CRD) Deputy for Scientific Engagement.

Under the guidance of Energy Sciences Network (ESnet) Director Inder Monga and Network Engineer Eli Dart, Perry also transferred 500GB of data in minutes from the Argonne Leadership Computing Facility in Lemont, Illinois, to the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, using Globus software.

NERSC Deputy Katie Antypas then took Perry on a tour of NERSC’s machine room, where he signed the center’s newest supercomputer, Cori.

The Secretary’s visit was part of a three-day Bay Area tour that included stops at Lawrence Livermore National Laboratory, Sandia National Laboratories’ California site, Berkeley Lab, and SLAC National Accelerator Laboratory.

For more photos visit our Facebook Page.

Learn more about the visit: https://newscenter.lbl.gov/2018/03/27/secretary-of-energy-perry/

Into the Medical Science DMZ


Speeding research. The Medical Science DMZ expedites data transfers for scientists working on large-scale research in fields such as biomedicine and genomics while maintaining federally required patient privacy.

In a new paper, Lawrence Berkeley National Laboratory (Berkeley Lab) computer scientist Sean Peisert, Energy Sciences Network (ESnet) researcher Eli Dart, and their collaborators outline a “design pattern” for deploying specialized research networks and ancillary computing equipment for HIPAA-protected biomedical data, one that provides both high-throughput network data transfers and strong security protections.

“The original Science DMZ model provided a way of securing high-throughput data transfer applications without the use of enterprise firewalls,” says Dart. “You can protect data transfers using technical controls that don’t impose performance limitations.”
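The “technical controls” Dart describes are typically stateless access-control lists applied at the network border, scoped tightly to the Data Transfer Nodes so that no stateful firewall sits in the data path. The fragment below is a hypothetical Cisco-style sketch of that idea; the addresses, port range, and list name are invented for illustration and are not drawn from the paper:

```
! Hypothetical border-router ACL for a DTN at 192.0.2.10
! Permit the GridFTP control channel (2811) and a fixed data-port
! range, but only from a known collaborator network; log and deny
! everything else. Illustrative addresses (RFC 5737 ranges).
ip access-list extended DTN-IN
 permit tcp 198.51.100.0 0.0.0.255 host 192.0.2.10 eq 2811
 permit tcp 198.51.100.0 0.0.0.255 host 192.0.2.10 range 50000 51000
 deny ip any any log
```

Because the ACL is stateless and evaluated in hardware, it imposes none of the throughput penalties of a deep-inspection enterprise firewall while still restricting who can reach the transfer hosts.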

Read More at Science Node: https://sciencenode.org/feature/into-the-science-dmz.php 

Left: Eli Dart, ESnet Engineer | Right: Sean Peisert, Berkeley Lab Computer Scientist

Women in IT Invited to Apply for WINS Program at SC18 Conference


Applications are now being accepted for the Women in IT Networking at SC (WINS) program at the SC18 conference, to be held Nov. 11-16 in Houston. WINS seeks qualified early- to mid-career women from U.S. institutions to join the volunteer team that helps build and run SCinet, the high-speed network created at each year’s conference. Here’s how to apply.

WINS was launched to expand the diversity of the SCinet volunteer staff and provide professional development opportunities to highly qualified women in the field of networking. Selected participants will receive full travel support and mentoring by well-known engineering experts in the research and education community.

For the second year in a row, Kate Mace of ESnet’s Science Engagement Team is the WINS chair for SCinet.

Applications are to be submitted using the WINS Application Form. The deadline to apply is 11:59 p.m. Friday, March 23 (Pacific time). More information can be found on the SC18 WINS call for participation.

Each year, volunteers from academia, government and industry work together to design and deliver SCinet. Planning begins more than a year in advance and culminates in a high-intensity, around-the-clock installation in the days leading up to the conference.

The WINS program was launched in 2015, and its success led to an official three-year award from the National Science Foundation (NSF) and DOE’s ESnet. WINS is a joint effort between ESnet, the Keystone Initiative for Network Based Education and Research (KINBER), the University Corporation for Atmospheric Research (UCAR), and SCinet.


CENIC Honors Astrophysics Link to NERSC via ESnet


A star-forming region of the Large Magellanic Cloud (Credit: European Space Agency via the Hubble Space Telescope)

An astrophysics project connecting UC Santa Cruz’s Hyades supercomputer cluster to NERSC via ESnet and other networks won the CENIC 2018 Innovations in Networking Award for Research Applications, announced last week.

Through a consortium of Science DMZs and links to NERSC via CENIC’s CalREN and DOE’s ESnet, the connection enables UCSC to carry out high-speed transfers of large data sets produced at NERSC, which supports the Dark Energy Spectroscopic Instrument (DESI) and the AGORA galaxy simulations, at speeds up to five times previous rates, with the potential to reach 20 times previous rates in 2018. Peter Nugent, an astronomer and cosmologist in the Computational Research Division, was pivotal in the effort. Read UC Santa Cruz’s press release.

ESnet’s Inder Monga Featured in Video Recapping Netwerkdag 2017 in the Netherlands


ESnet Director Inder Monga’s keynote talk is among the events highlighted in a new video recapping “Netwerkdag 2017 (Network Day 2017),” a daylong meeting organized by SURFnet, the national research and education (R&E) network of the Netherlands. The event was held Dec. 14, 2017 in Utrecht under the theme of making connections.

In his talk on the future of R&E networking, Monga laid out a vision for next-generation networks: the growing importance of software and software expertise in building networks, stronger security, and expanded telemetry and analytics capabilities (including research in machine learning for networking) to cope with the growth in data and in the number of data-producing devices.

ESnet Workshop Report Outlines Data Management Needs in Metagenomics, Precision Medicine


William Barnett, the chief research informatics officer for the Indiana Clinical and Translational Sciences Institute (CTSI) and the Regenstrief Institute at Indiana University, discusses the promise of precision medicine at the workshop.

Like most areas of research, the bioinformatics sciences community is facing an unprecedented explosion in the size and number of data sets being created, spurred largely by the decreasing cost of genome sequencing technology. As a result, there is a critical need for more effective tools for data management, analysis, and access.

Adding to the complexity, two major fields in bioinformatics – precision medicine and metagenomics – have unique data challenges and needs. To help address the situation, a workshop was organized by the Department of Energy’s Energy Sciences Network (ESnet) in 2016 at Lawrence Berkeley National Laboratory. Organized as part of a series of CrossConnects workshops, the two-day meeting brought together scientists from metagenomics and precision medicine, along with experts in computing and networking.

A report outlining the findings and recommendations from the workshop was published Dec. 19, 2017 in Standards in Genomic Sciences. The report reflected the input of 59 attendees from 39 organizations.

One driver for publishing the report was the realization that although each of the two focus areas has unique requirements, workshop discussions revealed several areas where the needs overlap, said ESnet’s Kate Mace, lead author of the report. In particular, the issue of data management loomed largest.

Read a summary of the findings and recommendations from the workshop.

ESnet, Globus Experts Design a Better Portal for Scientific Discovery


Globus, Science DMZ provide new architecture to meet demand for accessing shared data

These days, it’s easy to overlook the fact that the World Wide Web was created nearly 30 years ago primarily to help researchers access and share scientific data. Over the years, the web has evolved into a tool that helps us eat, shop, travel, watch movies and even monitor our homes.

Meanwhile, scientific instruments have become much more powerful, generating massive datasets, and international collaborations have proliferated. In this new era, the web has become an essential part of the scientific process, but the most common method of sharing research data remains firmly attached to the earliest days of the web. This can be a huge impediment to scientific discovery.

That’s why a team of networking experts from the Department of Energy’s Energy Sciences Network (ESnet), together with the Globus team from the University of Chicago and Argonne National Laboratory, has designed a new approach that makes data sharing faster, more reliable, and more secure. In an article published Jan. 15 in PeerJ Computer Science, the team describes its approach in “The Modern Research Data Portal: a design pattern for networked, data-intensive science.”

“Both the size of datasets and the quantity of data objects has exploded, but the typical design of a data portal hasn’t really changed,” said co-author Eli Dart, a network engineer with the Department of Energy’s Energy Sciences Network, or ESnet. “Our new design preserves that ease of use, but easily scales up to handle the huge amounts of data associated with today’s science.”

Read the full story.

The Modern Research Data Portal design pattern from a network architecture perspective: The Science DMZ includes multiple DTNs that provide for high-speed transfer between network and storage. Portal functions run on a portal server, located on the institution’s enterprise network. The DTNs need only speak the API of the data management service (Globus in this case).
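The key move in the design pattern is separating the control plane (the portal server on the enterprise network) from the data plane (DTNs in the Science DMZ). The sketch below illustrates that separation in self-contained Python; the class names, the in-memory "transfer service," and the endpoint strings are invented for illustration, and a real portal would instead call the API of a data management service such as Globus:

```python
from dataclasses import dataclass, field

@dataclass
class TransferService:
    """Stands in for a data management service such as Globus.
    It brokers transfers between DTNs; bulk data never flows
    through the portal server itself."""
    tasks: list = field(default_factory=list)

    def submit(self, src_dtn, dst_dtn, path):
        # Record a transfer request and hand back a task handle.
        task_id = f"task-{len(self.tasks)}"
        self.tasks.append((task_id, src_dtn, dst_dtn, path))
        return task_id

@dataclass
class Portal:
    """Portal server on the enterprise network: handles search,
    authentication, and orchestration -- but not the data."""
    service: TransferService
    catalog: dict  # dataset name -> (source DTN endpoint, path)

    def request_download(self, dataset, user_dtn):
        src_dtn, path = self.catalog[dataset]
        # Control-plane only: the portal issues the request, and the
        # DTNs in the Science DMZ move the bytes directly.
        return self.service.submit(src_dtn, user_dtn, path)

service = TransferService()
portal = Portal(service, {"climate-cmip5": ("dtn.site-a.example", "/data/cmip5")})
task = portal.request_download("climate-cmip5", "dtn.user.example")
print(task)  # a task handle, not the data
```

Because the portal touches only metadata and task handles, it can live behind the enterprise firewall while the DTNs run at full network speed.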

 

Berkeley Lab and ESnet Document Flow, Performance of 56-Terabyte Climate Data Transfer


The simulated storms seen in this visualization are generated from the finite volume version of NCAR’s Community Atmosphere Model. Visualization by Prabhat (Berkeley Lab).

In a recent paper entitled “An Assessment of Data Transfer Performance for Large‐Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6,” experts from Lawrence Berkeley National Laboratory (Berkeley Lab) and ESnet (the Energy Sciences Network) document the data transfer workflow, transfer performance, and other aspects of moving approximately 56 terabytes of climate model output data for further analysis.

The data, required for tracking and characterizing extratropical storms, needed to be moved from the distributed Coupled Model Intercomparison Project (CMIP5) archive to the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

The authors found that there is significant room for improvement in the data transfer capabilities currently in place for CMIP5, both in terms of workflow mechanics and in data transfer performance. In particular, the paper notes that performance improvements of at least an order of magnitude are within technical reach using current best practices.

To illustrate this, the authors used Globus to transfer the same raw data set between NERSC and the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory.
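The stakes of an order-of-magnitude improvement are easy to quantify with back-of-the-envelope arithmetic. The sustained rates below are illustrative round numbers, not figures from the paper:

```python
def transfer_days(size_tb, rate_gbps):
    """Days needed to move size_tb terabytes at a sustained rate_gbps."""
    bits = size_tb * 1e12 * 8           # terabytes -> bits (decimal units)
    seconds = bits / (rate_gbps * 1e9)  # sustained gigabits per second
    return seconds / 86400

# 56 TB at a sustained 200 Mbps versus 2 Gbps (a 10x improvement)
slow = transfer_days(56, 0.2)
fast = transfer_days(56, 2.0)
print(f"{slow:.1f} days vs {fast:.1f} days")  # roughly 26 days vs 2.6 days
```

At these assumed rates, a tenfold speedup turns a transfer that monopolizes most of a month into one that finishes over a weekend, which is why the paper emphasizes that such gains are within reach of current best practices.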

Read the Globus story: https://www.globus.org/user-story-lbl-and-esnet
Read the paper: https://arxiv.org/abs/1709.09575

30 Years Ago This Month, ESnet Rolled Out Its Rollout Plans



Although officially established in 1986, ESnet did not formally begin network operations until 1988, as the Department of Energy’s Magnetic Fusion Energy Network (MFEnet, affectionately known as MuffyNet) and High Energy Physics Network (HEPnet) were gradually melded into a single entity.

In January 1988, then-ESnet head Jim Leighton laid out the plans for the new network in the Buffer, the monthly user newsletter for the National Magnetic Fusion Energy Computing Center (known today as NERSC). At the time, ESnet was managed by the Networking and Engineering Group at the center.

After giving some background on the organization of ESnet, Leighton wrote “Now you are probably saying to yourself that this really is very exciting stuff, but it would be even more exciting if we knew when we could expect to see something running. Well, I just happen to be ready to outline our schedule for the next two years:

“January 1988: We believe that the new approach ESnet is taking will require much closer coordination with people responsible for the local area networking at each site. Accordingly, we are planning to convene a new committee in January, with sites involved in Phase I (see below) of ESnet deployment (“Boy, a new committee, that is exciting!” you are probably saying to yourself.). Additional site members will be added to the committee as the implementation continues.

“Phase 0 (January-March 1988): We expect to bring up all the sites on the X.25 backbone, including Brookhaven National Laboratory (BNL), CERN, Fermi National Accelerator Laboratory (FNAL), Florida State University (FSU), Lawrence Berkeley Laboratory (LBL), Lawrence Livermore National Laboratory (LLNL), and the Massachusetts Institute of Technology (MIT). Additional foreign sites will be added during the year.

“Demonstration (March 1988): During the MFESIG meeting to be held at LBL, we expect to demonstrate some ‘beta release’ capabilities of ESnet.

“Phase I (June-September 1988): We will begin deploying and installing a terrestrial 56-K bits per second backbone for ESnet. Sites affected include Argonne National Laboratory (ANL), FSU, GA Technologies, Los Alamos National Laboratory, LBL, MFECC, Princeton Plasma Physics Laboratory, and the University of Texas at Austin. No sites will be disconnected from MFEnet during this phase.

“Phase II (October-December 1988): We will complete the ESnet backbone and connect additional sites to the backbone. This phase will require some sites to be disconnected from MFEnet. The MFEnet to ESnet transition gateway must be installed during this phase. Additional sites affected include CEBAF, FNAL, MIT, Oak Ridge National Laboratory, and UCLA.

“Phase III (Calendar Year 1989): We will continue to switch major hub sites from MFEnet to MFEnet II, along with all secondary sites connected through those hub sites.”

Read more from the Buffer about the 1988 ESnet launch.

ESnet’s DOE Early-Career Awardee Works to Overcome Roadblocks in Computational Networks


Mariam Kiran, ESnet, speaks to Kennedy High School students and their teacher Dr. LaRue Moore during a networking camp at Lawrence Berkeley National Laboratory.

Like other complex systems, computer networks can break down and suffer bottlenecks. Keeping such systems running requires algorithms that can identify problems and find solutions on the fly so information moves quickly and on time.

Mariam Kiran – a network engineer for the Energy Sciences Network (ESnet), a DOE Office of Science user facility managed by Lawrence Berkeley National Laboratory – is using an early-career research award from DOE’s Office of Science to develop methods combining machine-learning algorithms with parallel computing to optimize such networks.

Read more: http://ascr-discovery.science.doe.gov/2017/12/thinking-networks/