ESnet to Demonstrate Science DMZ as a Service, Create Virtual Superfacility at GENI Conference


At the twenty-second GENI Engineering Conference being held March 23-26 in Washington, D.C., ESnet staff will conduct a demonstration of the Science DMZ as a service and show how the technique for speeding the flow of large datasets can be created on demand. The conference is tailor-made for the demonstration as GENI, the Global Environment for Network Innovations, provides a virtual laboratory for networking and distributed systems research and education.

The Science DMZ, developed by ESnet, is a specialized network architecture designed to speed up the flow of large datasets. A Science DMZ is a portion of a network, usually at a university campus, that is configured to take optimal advantage of the campus’s connections to advanced research networks, providing “frictionless” network paths that connect computational power and storage to scientific big data.

Read more.

ESnet’s Science DMZ Architecture is Foundation for New Infrastructure Linking California’s Top Research Institutions


The Pacific Research Platform, a cutting-edge research network infrastructure based on ESnet’s Science DMZ architecture, will link together the Science DMZs of dozens of top research institutions in California. The Pacific Research Platform was announced Monday, March 9, at the CENIC 2015 Annual Conference.

The new platform will link the sites via three advanced networks: the Department of Energy’s Energy Sciences Network (ESnet), CENIC’s California Research & Education Network (CalREN) and Pacific Wave. Initial results for the new infrastructure will be announced in a panel discussion during the conference featuring Eli Dart (ESnet), John Haskins (UC Santa Cruz), John Hess (CENIC), Erik McCroskey (UC Berkeley), Paul Murray (Stanford), Larry Smarr (Calit2), and Michael van Norman (UCLA). The presentation will be live-streamed at 4:20 p.m. Pacific Time on Monday, March 9, and can be watched for free at cenic2015.cenic.org.

Science DMZs are designed to create secure network enclaves for data-intensive science and high-speed data transport. The Science DMZ design was developed by ESnet and NERSC.

“CENIC designed CalREN to have a separate network tier reserved for data-intensive research from the beginning, and the development of the Science DMZ concept by ESnet has enabled that to reach into individual laboratories, linking them together into a single advanced statewide fabric for big-science innovation,” said CENIC President and CEO Louis Fox.  “Of course, CENIC itself also functions as a way to create a fabric of innovation by bringing researchers together to share ideas, making the timing of this announcement at our annual conference just right.”

Read more.

ESnet Takes Science DMZ Architecture to Pennsylvania R&E Community


Jason Zurawski of ESnet’s Science Engagement team will lead a March 4 webinar on “Upgrading Campus Cyberinfrastructure: An Introduction to the Science DMZ Architecture” for research and education organizations in Pennsylvania.

Zurawski will introduce ESnet’s Science DMZ architecture, a network design pattern that streamlines the process of science and improves outcomes for researchers. The design incorporates network monitoring via the perfSONAR framework, along with functional components for managing security and data transfer. The pattern has its roots in high-speed networks at major computing facilities, but it is flexible enough to be deployed and used by institutions of any size. It has been successfully deployed on numerous campuses involved in the NSF CC-IIE and CC-NIE programs, and is a focus area for the upcoming CC-DNI program.
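
To make the security component more concrete, the sketch below shows the flavor of the allow-list policy a Science DMZ typically enforces in front of a data transfer node: a short, stateless access list rather than a stateful deep-inspection firewall. The subnets, port numbers, and policy here are hypothetical illustrations, not part of ESnet’s design or any real deployment; production Science DMZs express this as router ACLs, not application code.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list for a data transfer node (DTN). Real Science DMZs
# implement this as stateless router ACLs, not application code; the subnets
# and ports below are illustrative placeholders (2811 is the GridFTP control
# port, followed by an example data-channel range).
ALLOWED_SOURCES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
ALLOWED_PORTS = {2811} | set(range(50000, 50101))

def dtn_traffic_permitted(src_ip: str, dst_port: int) -> bool:
    """Return True if a flow toward the DTN matches the allow-list policy."""
    src = ip_address(src_ip)
    return dst_port in ALLOWED_PORTS and any(src in net for net in ALLOWED_SOURCES)

if __name__ == "__main__":
    print(dtn_traffic_permitted("192.0.2.17", 2811))   # True: known collaborator
    print(dtn_traffic_permitted("203.0.113.5", 2811))  # False: unknown source
```

Keeping the policy this simple is what lets the DTN path stay fast while remaining restricted to well-defined data-transfer traffic.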

The workshop is presented by the Keystone Initiative for Network Based Education and Research (KINBER), a not-for-profit membership organization that provides broadband connectivity, fosters collaboration, and promotes the innovative use of digital technologies for the benefit of Pennsylvania.

ESnet Opens 40G perfSONAR Host for Network Performance Testing



ESnet has deployed the first public 40 Gbps production perfSONAR host directly connected to an R&E backbone network, allowing research organizations to test and diagnose the performance of network links up to 40 gigabits per second.

The host, located in Boston, Mass., is available to any organization in the R&E (research and education) networking community. More and more, organizations are setting up their own 40 Gbps data transfer nodes to help systems keep up with the increasing size of research datasets.
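
For readers curious what such a test looks like in practice, here is a minimal sketch that drives an iperf3 throughput test from Python. The hostname is a placeholder (the article does not give the Boston host’s address), the result parsing assumes iperf3’s JSON report layout, and in a real deployment perfSONAR’s own measurement tools are the normal way to run and archive such tests.

```python
import json
import subprocess

# Placeholder hostname -- substitute a test host you are authorized to use.
# Multiple parallel streams help fill a high-bandwidth path.
TEST_HOST = "perfsonar-40g.example.net"

def run_throughput_test(host: str, streams: int = 4, seconds: int = 20) -> float:
    """Run an iperf3 client test and return the achieved throughput in Gbps."""
    result = subprocess.run(
        ["iperf3", "-c", host, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"Measured throughput: {run_throughput_test(TEST_HOST):.2f} Gbps")
```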

Read more.

Across the Universe: Cosmology Data Management Workshop Draws Stellar Crowd


ESnet’s Eli Dart (left), Salman Habib (center) of Argonne National Lab and Joel Brownstein of the University of Utah compare ideas during a workshop break.

ESnet and Internet2 hosted last week’s CrossConnects Workshop on “Improving Data Mobility & Management for International Cosmology,” a two-day meeting ESnet Director Greg Bell described as the best one yet in the series. More than 50 members of the cosmology and networking research community turned out for the event hosted at Lawrence Berkeley National Laboratory, while another 75 caught the live stream from the workshop.

The Feb. 10-11 workshop provided a forum for discussing the growing data challenges associated with the ever-larger cosmological and observational data sets, which are already reaching the petabyte scale. Speakers noted that network bandwidth is no longer the bottleneck into the major data centers, but storage capacity and performance from the network to storage remain a challenge. In addition, network connectivity to telescope facilities is often limited and expensive due to the remote location of the facilities. Science collaborations use a variety of techniques to manage these issues, but improved connectivity to telescope sites would have a significant scientific benefit in many cases.

In his opening keynote talk, Peter Nugent of Berkeley Lab’s Computational Research Division said that astrophysics is transforming from a data-starved to a data-swamped discipline. Today, when searching for supernovae, a single object in the database consists of thousands of images, each 32 MB in size. That data needs to be processed and studied quickly so that when an object of interest is found, telescopes around the world can begin tracking it in less than 24 hours, which is critical as supernovae are at their most visible for just a few weeks. Specialized pipelines have been developed to handle this flow of images to and from NERSC.
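
A rough back-of-the-envelope calculation suggests the data volume involved per candidate. The article says only “thousands” of images per object, so the exact count below is an assumed figure for illustration:

```python
# Back-of-the-envelope estimate; 3,000 images per object is an assumption
# standing in for the "thousands" quoted above.
images_per_object = 3_000
image_size_mb = 32

total_gb = images_per_object * image_size_mb / 1_000
print(f"Data per candidate object: ~{total_gb:.0f} GB")   # ~96 GB
```

Multiply that by the many candidates a survey produces each night, and the need for fast, frictionless paths into NERSC becomes clear.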

Salman Habib, a member of Argonne National Laboratory’s High Energy Physics and Mathematics and Computer Science divisions, opened the second day of the workshop, which focused on cosmology simulations and workflows. Habib leads DOE’s Computation-Driven Discovery for the Dark Universe project. He pointed out that large-scale simulations are critical for understanding observational data and that the size and scale of simulation datasets far exceed those of observational data. “To be able to observe accurately, we need to create accurate simulations,” he said. Simulations will soon create 100-petabyte sets of raw data, and the limiting factor for handling these will be the amount of available storage, so smaller “snapshots” of the datasets will need to be created. And while one person can run the simulation itself, analyzing the resulting data will involve the whole community.

Reijo Keskitalo of Berkeley Lab’s Computational Cosmology Center described how computational support for the Planck Telescope has relied on HPC to generate the largest and most complete simulation maps of the cosmic microwave background, or CMB. In 2006, the project was the first to run on all 6,000 CPUs of Seaborg, NERSC’s IBM flagship at the time. It took six hours on the machine to produce one map. Now, running on 32,000 CPUs on Edison, the project can generate 10,000 maps in just one hour.
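
Those figures imply a dramatic gain in map-making throughput, which a quick calculation (using only the numbers quoted above) makes explicit:

```python
# Map-making throughput, using the figures quoted above.
maps_per_hour_2006 = 1 / 6        # one map in six hours on Seaborg (6,000 CPUs)
maps_per_hour_2015 = 10_000 / 1   # 10,000 maps per hour on Edison (32,000 CPUs)

speedup = maps_per_hour_2015 / maps_per_hour_2006
print(f"Throughput gain: {speedup:,.0f}x")   # 60,000x more maps per hour
```

Only a factor of roughly five of that gain comes from the larger CPU count; the rest reflects faster hardware and improved software over the intervening years.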

Mike Norman, head of the San Diego Supercomputer Center, cautioned that high performance computing can become distorted by “chasing the almighty FLOP,” or floating point operations per second. “We need to focus on science outcomes, not TOP500 scores.”

Over the course of the workshop, ESnet Director Greg Bell noted that observation and simulation are no longer separate scientific endeavors.

The workshop drew a stellar group of participants. In addition to the leading lights mentioned above, attendees included Larry Smarr, founder of NCSA and current leader of the California Institute for Telecommunications and Information Technology, a $400 million academic research institution jointly run by the University of California, San Diego and UC Irvine; and Ian Foster, who leads the Computation Institute at the University of Chicago and is a senior scientist at Argonne National Lab. Foster is also recognized as one of the inventors of grid computing.

The next step for the workshop organizers is to publish a report and identify areas for further study and collaboration. Looming over them will be the thoughts of Steven T. Myers of the National Radio Astronomy Observatory after describing the data challenges coming with the Square Kilometer Array radio telescope: “The future is now. And the data is scary. Be afraid. But resistance is futile.”

ESnet’s Tierney, Zurawski to Present at Workshop on perfSONAR Best Practices


ESnet’s Brian Tierney and Jason Zurawski will be the featured speakers at a workshop on “perfSONAR Deployment Best Practices, Architecture, and Moving the Needle.” The Jan. 21-22 workshop, one in a series of Focused Technical Workshops organized by ESnet and Internet2, will be held at the Ohio Supercomputer Center in Columbus. Read more (http://es.net/news-and-publications/esnet-news/2015/esnet-s-tierney-zurawski-to-present-at-workshop-on-perfsonar-best-practices/)

A joint effort between ESnet, Internet2, Indiana University, and GEANT, the pan-European research network, perfSONAR is a tool for end-to-end monitoring and troubleshooting of multi-domain network performance. In January 2014, perfSONAR reached a milestone with 1,000 instances of the diagnostic software installed on networking hosts around the U.S. and in 13 other countries. perfSONAR provides network engineers with the ability to test and measure network performance, as well as to archive data in order to pinpoint and solve service problems that may span multiple networks and international boundaries.
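
Much of that value comes from the archive: regularly scheduled measurements make gradual or sudden performance drops visible so engineers can localize them. The sketch below is not perfSONAR code and uses made-up numbers; it simply illustrates the kind of step-change detection that archived throughput data enables.

```python
# Illustrative only -- made-up archived throughput samples (Gbps), oldest first.
archived_gbps = [9.4, 9.3, 9.5, 9.4, 9.2, 3.1, 3.0, 2.9, 3.2, 3.0]

def find_step_change(samples, window=3, drop_factor=0.5):
    """Return the index where the average throughput of the next `window`
    samples falls below drop_factor times the preceding window's average,
    or None if no such drop exists."""
    for i in range(window, len(samples) - window + 1):
        before = sum(samples[i - window:i]) / window
        after = sum(samples[i:i + window]) / window
        if after < drop_factor * before:
            return i
    return None

idx = find_step_change(archived_gbps)
if idx is not None:
    print(f"Throughput dropped around sample {idx}: "
          f"{archived_gbps[idx - 1]:.1f} -> {archived_gbps[idx]:.1f} Gbps")
```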

At the workshop, Tierney will give an introduction to perfSONAR and present a session on debugging using the software. Zurawski will talk about maintaining a perfSONAR node, describe some user case studies and success stories, discuss “Pulling it All Together – perfSONAR as a Regional Asset” and conclude with “perfSONAR at 10 Years: Cleaning Networks & Disrupting Operation.”

ESnet’s Jason Zurawski and Brian Tierney

Register Now for Cross-Connects Workshop on Managing Cosmology Data


Registration is now open for a workshop on “Improving Data Mobility and Management for International Cosmology” to be held Feb. 10-11 at Lawrence Berkeley National Laboratory in California. The workshop, one in a series of Cross-Connects workshops, is sponsored by the Dept. of Energy’s ESnet and Internet2.

Early registration is encouraged as attendance is limited and the past two workshops were filled and had waiting lists. Registration is $200 including breakfast, lunch and refreshments for both days. Visit the Cross-Connects Workshop website for more information.

Cosmology data sets are already reaching into the petabyte scale and this trend will only continue, if not accelerate. This data is produced from sources ranging from supercomputing centers—where large-scale cosmological modeling and simulations are performed—to telescopes that are producing data daily. The workshop is aimed at helping cosmologists and data managers who struggle with data workflow, especially as the need for real-time analysis of cosmic events increases.

Renowned cosmology experts Peter Nugent and Salman Habib will give keynote speeches at the workshop.

Nugent, Senior Scientist and Division Deputy for Science Engagement in the Computational Research Division at Lawrence Berkeley National Laboratory, will deliver a talk on “The Palomar Transient Factory” and how observational data in astrophysics, integrated with high-performance computing resources, benefits the discovery pipeline for science.

Habib, a member of the High Energy Physics and Mathematics and Computer Science Divisions at Argonne National Laboratory, a Senior Member of the Kavli Institute for Cosmological Physics at the University of Chicago, and a Senior Fellow in the Computation Institute, will give the second keynote on “Cosmological Simulations and the Data Big Crunch.”

Register now.

Popular Science Looks Ahead to ESnet’s Trans-Atlantic Links


As part of its look at what to expect in 2015, Popular Science magazine highlights ESnet’s new trans-Atlantic links, which will have a combined capacity of 340 gigabits per second. The three 100 Gbps connections and one 40 Gbps connection are being tested and are expected to go live at the end of January.

Read the article at: http://www.popsci.com/ultrafast-data-transfer-speeds-science


Read the original announcement.

ESnet Connections Peak at 270 Gbps Flowing In and Out of SC14 Conference


The booths have been dismantled, the routers and switches shipped back home and the SC14 conference in New Orleans officially ended Nov. 21, but many attendees are still reflecting on important connections made during the annual gathering of the high performance computing and networking community.

Among those helping make the right connections were ESnet staff, who used ESnet’s infrastructure to bring a combined network capacity of 400 gigabits per second (Gbps) into the Ernest N. Morial Convention Center. Those links accounted for one third of SC14’s total 1.22 Tbps of connectivity, provided by SCinet, the conference’s network infrastructure designed and built by volunteers. The network links were used for a number of demonstrations between booths on the exhibition floor and sites around the world.

A quick review of the ESnet traffic patterns at https://my.es.net/demos/sc14#/summary shows that traffic peaked at 12:15 p.m. Thursday, Nov. 20, with 79.2 Gbps of inbound data and 190 Gbps flowing out, a combined total of roughly 270 Gbps.

Among the largest single users of ESnet’s bandwidth was a demo by the Naval Research Laboratory, which used ESnet’s 100 Gbps testbed to conduct a 100 Gbps remote I/O demonstration at SC14. Read the details at: http://www.nrl.navy.mil/media/news-releases/2014/nrl-and-collaborators-conduct-100-gigabit-second-remote-io-demonstration

NRL and Collaborators Conduct 100 Gigabit/Second Remote I/O Demonstration


The Naval Research Laboratory (NRL), in collaboration with the DOE’s Energy Sciences Network (ESnet), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Center for Data Intensive Science (CDIS) at the University of Chicago, the Open Cloud Consortium (OCC) and significant industry support, has conducted a 100 gigabits per second (100G) remote I/O demonstration at the SC14 supercomputing conference in New Orleans, LA.

The remote I/O demonstration illustrates a pipelined distributed processing framework and software defined networking (SDN) between distant operating locations. It shows that a production-quality 4K Ultra-High Definition Television (UHDTV) video workflow can be dynamically deployed across a nationally distributed set of storage and computing resources, a capability relevant to emerging Department of Defense data processing challenges.

Visit the My ESnet Portal at https://my.es.net/demos/sc14#/nrl to view real-time network traffic on ESnet.

Read more: http://www.nrl.navy.mil/media/news-releases/2014/nrl-and-collaborators-conduct-100-gigabit-second-remote-io-demonstration