National Science Foundation & Department of Energy’s ESnet Launch Innovative Program for Women Engineers

Women in Networking @SC (WINS) Kicks off this week in Salt Lake City!

WINS Participants
(Left to Right) Julia Locke (LANL), Debbie Fligor (SC15 WINS returning participant, University of Illinois at Urbana-Champaign), Jessica Shaffer (Georgia Tech), Indira Kassymkhanova (LBNL), Denise Grayson (Sandia), Kali McLennan (Univ. of Oklahoma), Angie Asmus (CSU). Not in photo: Amber Rasche (N. Dakota State) and Julia Staats (CENIC).

Salt Lake City, UT – October 26, 2016 – The University Corporation for Atmospheric Research (UCAR) and The Keystone Initiative for Network Based Education and Research (KINBER), together with the Department of Energy’s (DOE) Energy Sciences Network (ESnet), today announced the official launch of the Women in Networking at SC (WINS) program.

Funded through a grant from the National Science Foundation (NSF) and directly by ESnet, the program supports eight early- to mid-career women in the research and education (R&E) network community as they participate in the 2016 setup, build-out and live operation of SCinet, the Supercomputing Conference’s (SC) ultra-high-performance network. SCinet supports large-scale computing demonstrations at SC, the premier international conference on high performance computing, networking, data storage and data analysis, which is attended by over 10,000 of the leading minds in these fields.

The SC16 WINS program kicked off this week as the selected participants from across the U.S. headed to Salt Lake City, the site of the 2016 conference, to begin laying the groundwork for SCinet inside the Salt Palace Convention Center. The WINS participants join the more than 250 volunteers who make up the SCinet engineering team and will work side by side with the team and their mentors to put the network into full production service when the conference begins on November 12. The women will return to Salt Lake City a week before the conference to complete the installation of the network.

“We are estimating that SCinet will be outfitted with a massive 3.5 terabits per second (Tbps) of bandwidth for the conference. It will be built from the ground up with leading-edge network equipment and services (even pre-commercial in some instances) and will be considered the fastest network in the world during its operation,” said Corby Schmitz, SC16 SCinet Chair.

The WINS participants will support a wide range of technical areas that comprise SCinet’s incredible operation, including wide area networking, network security, wireless networking, routing, network architecture and other specialties. 

Several WINS participants hard at work with their mentors configuring routers & switches

“While demand for jobs in IT continues to increase, the number of women joining the IT workforce has been on the decline for many years,” said Marla Meehl, Network Director from UCAR and co-PI of the NSF grant. “WINS aims to help close this gap and help to build and diversify the IT workforce, giving women professionals a truly unique opportunity to gain hands-on expertise in a variety of networking roles while also developing mentoring relationships with recognized technical leaders.”

Funds are being provided by the NSF through a $135,000 grant and via direct funding from ESnet supported by Advanced Scientific Computing Research (ASCR) in the DOE Office of Science. Funding covers all travel expenses related to participating in the setup and operation of SCinet and will also provide travel funds for the participants to share their experiences at events like The Quilt Member Meetings, Regional Networking Member meetings, and the DOE National Lab Information Technology Annual Meeting.

“Not only is WINS providing hands-on engineering training to the participants, but also the opportunity to present their experiences to the broader networking community throughout the year. This experience helps to expand important leadership and presentation skills and grow their professional connections with peers and executives alike,” said Wendy Huntoon, president and CEO of KINBER and co-PI of the NSF grant.

The program also represents a unique cross-agency collaboration between the NSF and DOE. Both agencies recognize that the pursuit of knowledge and scientific discovery that these funding organizations support depends on bringing the best ideas from people of various backgrounds to the table.

“Bringing together diverse voices and perspectives to any team in any field has been proven to lead to more creative solutions to achieve a common goal,” said Lauren Rotman, Science Engagement Group Lead, ESnet. “It is vital to our future that we bring every expert voice, every new idea to bear if our community is to tackle some of our society’s grandest challenges, from understanding climate change to revolutionizing cancer treatment.”

2016 WINS Participants are:

  • Denise Grayson, Sandia National Labs (Network Security Team), DOE-funded
  • Julia Locke, Los Alamos National Lab (Fiber and Edge Network Teams), DOE-funded
  • Angie Asmus, Colorado State (Edge Network Team), NSF-funded
  • Kali McLennan, University of Oklahoma (WAN Transport Team), NSF-funded
  • Amber Rasche, North Dakota State University (Communications Team), NSF-funded
  • Jessica Shaffer, Georgia Institute of Technology (Routing Team), NSF-funded
  • Julia Staats, CENIC (DevOps Team), NSF-funded
  • Indira Kassymkhanova, Lawrence Berkeley National Lab (DevOps and Routing Teams), DOE-funded

The WINS Supporting Organizations:
The University Corporation for Atmospheric Research (UCAR)

The Keystone Initiative for Network Based Education and Research (KINBER)

The Department of Energy’s Energy Sciences Network (ESnet)

How the World’s Fastest Science Network Was Built


Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

Funded by DOE’s Office of Science and managed by the Lawrence Berkeley National Laboratory (Berkeley Lab), ESnet moves about 51 petabytes of scientific data every month. This is a 13-step guide to how ESnet has evolved over 30 years.
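For a sense of scale, that monthly volume implies a substantial sustained rate. A back-of-envelope calculation (assuming a 30-day month and decimal petabytes; the figures are illustrative, not official ESnet statistics):

```python
# Back-of-envelope: average sustained rate implied by ESnet's monthly volume.
PETABYTE = 10**15                  # bytes (decimal petabyte)
monthly_bytes = 51 * PETABYTE
seconds_per_month = 30 * 24 * 3600

avg_bits_per_second = monthly_bytes * 8 / seconds_per_month
print(f"Average sustained rate: {avg_bits_per_second / 1e9:.0f} Gbps")
```

Roughly 157 Gbps around the clock, more than a fully saturated 100 Gbps link.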

Step 1: When fusion energy scientists inherit a cast-off supercomputer, add 4 dialup modems so the people at the Princeton lab can log in. (1975)


Step 2: When landlines prove too unreliable, upgrade to satellites! Data screams through space. (1981)


Step 3: Whose network is best? High Energy Physics (HEPnet)? Fusion Physics (MFEnet)? Why argue? Merge them into one, the Energy Sciences Network (ESnet), run by the Department of Energy! Go ESnet! (1986)


Step 4: Make it even faster with DUAL Satellite links! We’re talking 56 kilobits per second! Except for the Princeton fusion scientists – they get 112 Kbps! (1987)


Step 5: Whoa, when an upgrade to 1.5 MEGAbits per second isn’t enough, add ATM (not the money machine, but Asynchronous Transfer Mode) to get more bang for your buck. (1995)


Step 6: Duty now for the future—roll out the very first IPv6 address to ensure there will be enough Internet addresses for decades to come. (2000)


Step 7: Crank up the fastest links in the network to 10 GIGAbits per second—16 times faster than the old gear—a two-generation leap in network upgrades at one time. (2003)


Step 8: Work with other networks to develop really cool tools, like the perfSONAR toolkit for measuring and improving end-to-end network performance and OSCARS (On-Demand Secure Circuits and Advance Reservation System), so you can reserve a high-speed, end-to-end connection to make sure your data is delivered on time. (2006)


Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)


Step 10: Here’s a cool idea, come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)



Step 11: We’re all in this science thing together, so let’s build faster ties to Europe. ESnet adds three 100G lines (and a backup 40G link) to connect researchers in the U.S. and Europe. (2014)


Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)


Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)


Berkeley Lab Staff to Present Super-facility Science Model at Internet2 Conference

Berkeley Lab staff from five divisions will share their expertise in a panel discussion on “Creating Super-facilities: a Coupled Facility Model for Data-Intensive Science” at the Internet2 Global Summit to be held April 26-30 in Washington, D.C. The panel was organized by Lauren Rotman of ESnet and includes Alexander Hexemer of the Advanced Light Source (ALS), Craig Tull of CRD, David Skinner of NERSC and Rune Stromsness of the IT Division.

The session will highlight the concept of a coupled science facility or “super-facility,” a new model that links together experimental facilities like the ALS with computing facilities like NERSC via a Science DMZ architecture and advanced workflow and analysis software, such as SPOT Suite developed by Tull’s group. The session will share best practices, lessons learned and future plans to expand this effort.

Also at the conference, ESnet’s Brian Tierney will speak in a session on “perfSONAR: Meeting the Community’s Needs.” Co-developed by ESnet, perfSONAR is a tool for end-to-end monitoring and troubleshooting of multi-domain network performance. The session will cover the perfSONAR project, including the 3.4 release, a preview of the 3.5 release, the product plan, and perfSONAR training plans.

Across the Universe: Cosmology Data Management Workshop Draws Stellar Crowd

ESnet’s Eli Dart (left), Salman Habib (center) of Argonne National Lab and Joel Brownstein of the University of Utah compare ideas during a workshop break.

ESnet and Internet2 hosted last week’s CrossConnects Workshop on “Improving Data Mobility & Management for International Cosmology,” a two-day meeting ESnet Director Greg Bell described as the best one yet in the series. More than 50 members of the cosmology and networking research community turned out for the event hosted at Lawrence Berkeley National Laboratory, while another 75 caught the live stream from the workshop.

The Feb. 10-11 workshop provided a forum for discussing the growing data challenges associated with the ever-larger cosmological and observational data sets, which are already reaching the petabyte scale. Speakers noted that network bandwidth is no longer the bottleneck into the major data centers, but storage capacity and performance from the network to storage remain a challenge. In addition, network connectivity to telescope facilities is often limited and expensive due to the remote location of the facilities. Science collaborations use a variety of techniques to manage these issues, but improved connectivity to telescope sites would have a significant scientific benefit in many cases.
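The storage-versus-bandwidth point is easy to see with a quick wall-clock estimate for moving a single petabyte (the link rates below are illustrative assumptions, not figures from the workshop):

```python
# Hours to move 1 PB end to end at a given link rate, ignoring protocol
# overhead; real transfers also depend on storage and host performance.
PETABYTE_BITS = 10**15 * 8

def transfer_hours(link_gbps):
    return PETABYTE_BITS / (link_gbps * 1e9) / 3600

for gbps in (10, 40, 100):
    print(f"{gbps:>3} Gbps: {transfer_hours(gbps):6.1f} hours")
```

Even at 100 Gbps a petabyte takes nearly a full day, which is why end-to-end throughput into storage, not raw link speed, sets the pace.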

In his opening keynote talk, Peter Nugent of Berkeley Lab’s Computational Research Division said that astrophysics is transforming from a data-starved to a data-swamped discipline. Today, when searching for supernovae, one object in the database consists of thousands of images, each 32 MB in size. That data needs to be processed and studied quickly so when an object of interest is found, telescopes around the world can begin tracking it in less than 24 hours, which is critical as the supernovae are at their most visible for just a few weeks. Specialized pipelines have been developed to handle this flow of images to and from NERSC.
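The volumes behind that pipeline add up quickly. As a rough sketch (the talk says “thousands” of images per object, so 2,000 is an assumed figure, and the 10 Gbps link rate is illustrative):

```python
# Estimated size of one supernova candidate and the time to move it.
IMAGE_MB = 32              # per-image size cited in the talk
images_per_object = 2000   # assumed; the talk says "thousands"
link_gbps = 10             # illustrative link rate

object_bytes = images_per_object * IMAGE_MB * 1e6
seconds = object_bytes * 8 / (link_gbps * 1e9)
print(f"One object ~ {object_bytes / 1e9:.0f} GB, "
      f"~ {seconds:.0f} s at {link_gbps} Gbps")
```

Tens of gigabytes per candidate object, which is manageable per transfer but demanding when the 24-hour follow-up window spans many objects per night.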

Salman Habib of Argonne National Laboratory’s High Energy Physics and the Mathematics and Computer Science Divisions opened the second day of the workshop, focused on cosmology simulations and workflows. Habib leads DOE’s Computation-Driven Discovery for the Dark Universe project. Habib pointed out that large-scale simulations are critical for understanding observational data and that the size and scale of simulation datasets far exceed those of observational data. “To be able to observe accurately, we need to create accurate simulations,” he said. Simulations will soon create 100 petabyte sets of raw data, and the limiting factor for handling these will be the amount of available storage, so smaller “snapshots” of the datasets will need to be created. And while one person can run the simulation itself, analyzing the resulting data will involve the whole community.

Reijo Keskitalo of Berkeley Lab’s Computational Cosmology Center described how computational support for the Planck Telescope has relied on HPC to generate the largest and most complete simulation maps of the cosmic microwave background, or CMB. In 2006, the project was the first to run on all 6,000 CPUs of Seaborg, NERSC’s IBM flagship at the time. It took six hours on the machine to produce one map. Now, running on 32,000 CPUs on Edison, the project can generate 10,000 maps in just one hour.

Mike Norman, head of the San Diego Supercomputer Center, cautioned that high performance computing can become distorted by “chasing the almighty FLOP,” or floating point operations per second. “We need to focus on science outcomes, not TOP500 scores.”

Over the course of the workshop, ESnet Director Greg Bell noted that observation and simulation are no longer separate scientific endeavors.

The workshop drew a stellar group of participants. In addition to the leading lights mentioned above, attendees included Larry Smarr, founder of NCSA and current leader of the California Institute for Telecommunications and Information Technology, a $400 million academic research institution jointly run by the University of California, San Diego and UC Irvine; and Ian Foster, who leads the Computation Institute at the University of Chicago and is a senior scientist at Argonne National Lab. Foster is also recognized as one of the inventors of grid computing.

The next step for the workshop organizers is to publish a report and identify areas for further study and collaboration. Looming over them will be the thoughts of Steven T. Myers of the National Radio Astronomy Observatory after describing the data challenges coming with the Square Kilometre Array radio telescope: “The future is now. And the data is scary. Be afraid. But resistance is futile.”

Register Now for Cross-Connects Workshop on Managing Cosmology Data

Registration is now open for a workshop on “Improving Data Mobility and Management for International Cosmology” to be held Feb. 10-11 at Lawrence Berkeley National Laboratory in California. The workshop, one in a series of Cross-Connects workshops, is sponsored by the Dept. of Energy’s ESnet and Internet2.

Early registration is encouraged as attendance is limited and the past two workshops were filled and had waiting lists. Registration is $200, including breakfast, lunch and refreshments for both days. Visit the Cross-Connects Workshop website for more information.

Cosmology data sets are already reaching the petabyte scale, and this trend will only continue, if not accelerate. This data is produced by sources ranging from supercomputing centers—where large-scale cosmological modeling and simulations are performed—to telescopes that are producing data daily. The workshop is aimed at helping cosmologists and data managers who struggle with data workflow, especially as the need for real-time analysis of cosmic events increases.

Renowned cosmology experts Peter Nugent and Salman Habib will give keynote speeches at the workshop.

Nugent, Senior Scientist and Division Deputy for Science Engagement in the Computational Research Division at Lawrence Berkeley National Laboratory, will deliver a talk on “The Palomar Transient Factory” and how observational data in astrophysics, integrated with high-performance computing resources, benefits the discovery pipeline for science.

Habib, a member of the High Energy Physics and Mathematics and Computer Science Divisions at Argonne National Laboratory, a Senior Member of the Kavli Institute for Cosmological Physics at the University of Chicago, and a Senior Fellow in the Computation Institute, will give the second keynote on “Cosmological Simulations and the Data Big Crunch.”

Register now.

Popular Science Looks Ahead to ESnet’s Trans-Atlantic Links

As part of its look at things to expect in 2015, Popular Science magazine highlights ESnet’s new trans-Atlantic links which will have a combined capacity of 340 gigabits per second. The three 100 Gbps and one 40 Gbps connections are being tested and are expected to go live at the end of January.

Read the article at:


Read the original announcement.

ESnet to Boost Big Data Transfers by Extending 100G Connectivity across Atlantic

ESnet, the Department of Energy’s (DOE’s) Energy Sciences Network, is deploying four new high-speed transatlantic links, giving researchers at America’s national laboratories and universities ultra-fast access to scientific data from the Large Hadron Collider (LHC) and other research sites in Europe.

ESnet’s transatlantic extension will deliver a total capacity of 340 gigabits per second (Gbps) and serve dozens of scientific collaborations. To maximize the resiliency of the new infrastructure, ESnet equipment in Europe will be interconnected by dedicated 100 Gbps links from the pan-European networking organization GÉANT.

Funded by the DOE’s Office of Science and managed by Lawrence Berkeley National Laboratory, ESnet provides advanced networking capabilities and tools to support U.S. national laboratories, experimental facilities and supercomputing centers.

Among the first to benefit from the network extension will be U.S. high energy physicists conducting research at the LHC, the world’s most powerful particle collider, located near Geneva, Switzerland. DOE’s Brookhaven National Laboratory and Fermi National Accelerator Laboratory—major U.S. computing centers for the LHC’s ATLAS and CMS experiments, respectively—will make use of the links as soon as they are tested and commissioned.

Read more.


ESnet’s Science DMZ Model Speeds Flow of Cancer Data at University of New Mexico

Data from the University of New Mexico’s Cancer Center’s next-generation genome sequencers is now flying across a 10 Gbps link to the university’s Center for Advanced Research Computing, thanks to the Science DMZ model pioneered by ESnet, according to an article posted by UNM.

According to the article, “This point-to-point connection is a first step toward establishing a campus-wide research network at UNM. The connection is based on the ‘Science DMZ’ model formalized by the Department of Energy’s ESnet in 2010. The new link delivers a low-latency, high-bandwidth, unfiltered connection via UNM’s campus network.”

The article states that the new 10 Gbps link enables fast, reliable, and secure transfer of enormous genome sequence files from the UNM Cancer Center for analysis and subsequent data warehouse archiving. And the model may pave the way for greater research collaborations across the state.

“This project is part of UNM’s larger direction to collaborate across campuses and expand network infrastructure for research here and statewide,” said Chief Information Officer Gil Gonzales. UNM IT works closely with departments and centers at UNM, and with research institutions throughout New Mexico, to provide production, commodity, and research network services.

Read the full story at:

Learn more about ESnet’s Science DMZ architecture at:

Announcing the Keynotes for the Focused Technical Workshop on Climate!

We are pleased to announce three influential keynote speakers for the upcoming Focused Technical Workshop titled “Improving Data Mobility and Management for International Climate Science,” which will be hosted by the National Oceanic and Atmospheric Administration (NOAA) in Boulder, CO, July 14-16, 2014.

The first keynote will be delivered by NOAA’s Dr. Alexander “Sandy” MacDonald, Chief Science Advisor and Director of the Earth System Research Laboratory (ESRL), who is known for his influential work in weather forecasting and high performance computing at NOAA.

Also from NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) and Princeton University, Dr. V. Balaji, Head of the Modeling Systems Group, will share the importance of bridging the worlds between science and software with workshop attendees.

And finally, Eli Dart, a highly-acclaimed network engineer from the Department of Energy’s ESnet who is credited with co-developing the Science DMZ model, will wrap up the workshop with the final keynote focused on how to create cohesive strategies for data mobility across computer systems, networks and science environments.

Inspired by the keynote speakers’ integral roles in climate science, computing and network architectures, the workshop intends to spark lively, interactive discussions between the research and education (R&E) and climate science communities to build long-term relationships and create useful tools and resources for improved climate data transport and management.

We look forward to seeing you in Boulder!

Don’t Forget… Still Time to Register!

Registration is open to the first 100 people only!

Register at

Registration fee:  $200

Visit the FTW website for more information:

International Climate Community Kicks Off a Year of Networking!

This January, the Earth System Grid Federation (ESGF) started a new working group—the International Climate Network Working Group—to help set up and optimize network infrastructures for climate data sites around the world. These sites need network connections that can handle petabytes of modeling and observational data, which will traverse more than 13,000 miles of networks (more than half the circumference of the Earth!), spanning two oceans.

By the end of 2014, the working group aims to achieve at least 4 Gbps of data transfer throughput at five climate data centers: PCMDI/LLNL (US), NCI/ANU (AU), CEDA/STFC (UK), DKRZ (DE), and KNMI (NL). This goal runs in parallel with the Enlighten Your Research Global international networking program award that ESGF received in November 2013. The initiative is led by Dean Williams of Lawrence Livermore National Lab and ESnet’s Science Engagement Team, along with collaborating international network organizations in Australia (AARNet), Germany (DFN), the Netherlands (SURFnet), and the UK (Janet). We are helping to shepherd ESGF’s project and working group to make sure all their climate sites get up and running at proficient network speeds for the petascale climate data expected within the next 5 years.
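Sustained 4 Gbps is a substantial target for a single data center; a quick calculation of the daily volume it implies (an illustrative sketch, not an ESGF figure):

```python
# Daily data volume implied by a sustained 4 Gbps transfer rate.
target_gbps = 4
seconds_per_day = 24 * 3600

daily_terabytes = target_gbps * 1e9 * seconds_per_day / 8 / 1e12
print(f"{target_gbps} Gbps sustained ~ {daily_terabytes:.1f} TB per day per site")
```

That is tens of terabytes per site per day, which is why each location needs tuned, dedicated network paths rather than ordinary campus connectivity.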

As we work closely with ESGF to pave the way for climate science, we look forward to developing a new set of networking best practices to help future climate science collaborations. In all, we are excited to get this started and see their science move forward!

The Enlighten Your Research Global program will set up, optimize and/or troubleshoot 5 ESGF locations in different countries throughout 2014.