The Risks of Not Deploying IPv6 in the R&E Community


Observations from ESnet’s resident IPv6 Expert Michael Sinatra

When having discussions with CIOs of various colleges, universities, and national laboratories, I often hear about such issues as “risk,” “return on investment,” “up-front costs,” “CAPEX/OPEX,” and the like. When the topic turns to IPv6, costs are cited, as well as potential risks involved with adopting IPv6. However, any good risk assessment should include the risks and costs of not doing something as well as of doing it. Until recently, much of the risk of not deploying IPv6 centered on running out of IPv4 addresses and not much more. Organizations that had a lot of IPv4 addresses (or thought they did) presumably didn’t have to consider such risks. In the discussion below, I note several more risks of not deploying IPv6, advantages of IPv6, and reasons to move forward. This discussion can be combined with the more traditional risks and costs associated with deploying IPv6 to provide the seeds of a more complete risk assessment.

Adoption, not migration

It’s important to understand that adoption, not migration, is of principal concern. It is widely understood that IPv4 will remain active for some time and that a dual-stack environment–where networks, computers, and other devices all run both IPv4 and IPv6 simultaneously–is still the “best” way to achieve an IPv6 transition. My concern is principally with the adoption portion, where we add IPv6 functionality to networks and hosts in order to achieve a dual-stack environment. Indeed, this assumes a reasonable abundance of IPv4 addresses within an organization.
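To make the dual-stack model concrete, here is a minimal sketch (in Python, with a placeholder hostname) of what adoption buys a client application: on a dual-stack host, the standard sockets API returns both IPv6 and IPv4 addresses, and the application simply uses whichever works, typically preferring IPv6.

```python
import socket

def connect_dual_stack(host, port):
    """Try each address DNS returns for the name. On a dual-stack
    host, getaddrinfo() typically sorts IPv6 (AAAA) results ahead
    of IPv4 (A) results, so IPv6 is preferred automatically and
    IPv4 remains a working fallback."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock           # first address that works wins
        except OSError as err:
            last_err = err
    raise last_err

# Placeholder hostname for illustration:
# sock = connect_dual_stack("www.example.edu", 80)
```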

Risk 1: Security

Conventional wisdom has it that adopting IPv6 brings with it a range of security issues, largely owing to the traditionally poor support for IPv6 in security appliances. Although open-source firewalls have long had near-parity between IPv4 and IPv6, many proprietary firewall and IDS devices have lacked sufficient IPv6 features. While it is true that security equipment has in the past turned a blind eye to IPv6, this is changing rapidly as vendors move to support IPv6.

Nevertheless, there are risks to ignoring IPv6 on the campus or the lab site. Because of the widespread and largely “default-on” support of IPv6 tunneling technologies, such as 6to4 and Teredo, IPv6 tunnels can and do easily exist on IPv4-only networks. Security devices that don’t understand IPv6 are unlikely to understand these tunneling technologies, and they will be unable to peel open the tunnel layers to see what’s really going on inside.
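To see why, consider 6to4 (RFC 3056): every public IPv4 address implicitly defines an entire IPv6 /48 under the 2002::/16 prefix, so IPv6 traffic can ride inside ordinary IPv4 packets with no IPv6 routing configured anywhere. A small illustrative sketch (Python; the address shown is a documentation address):

```python
import ipaddress

def sixto4_prefix(ipv4_str):
    """Derive the 6to4 /48 prefix implied by a public IPv4 address:
    the 16-bit 2002::/16 prefix followed by the 32-bit IPv4 address
    (RFC 3056)."""
    v4 = int(ipaddress.IPv4Address(ipv4_str))
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))  # -> 2002:c000:201::/48
```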

Many black-hats and grey-hats understand this and will attempt to use IPv6 to transport illegal peer-to-peer content and malware. If the bad guys are adopting IPv6, the good guys need to adopt it as well so they can see the bad stuff and clean it up. Some IDSes and firewalls do understand IPv6, and some even understand the tunneling protocols. While it’s possible to block some of the tunneling protocols wholesale, it isn’t easy to block them all–especially if there are legitimate users of tunnels on your network. Moreover, blocking protocols at the border or at a router doesn’t block them within your enterprise or within individual LANs.
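For illustration, here is a toy sketch (Python, nowhere near a real IDS) of the kind of check an IPv6-aware monitor has to make before it can even begin inspecting tunneled traffic: 6to4 and other IPv6-in-IPv4 tunnels ride as IPv4 protocol 41, while Teredo hides IPv6 inside UDP, conventionally on port 3544.

```python
def looks_like_v6_tunnel(ipv4_packet):
    """Flag raw IPv4 packets that probably carry tunneled IPv6:
    protocol 41 is IPv6-in-IPv4 (used by 6to4 and 6in4), and UDP
    port 3544 is Teredo's well-known port. Returns a label or None."""
    ihl = (ipv4_packet[0] & 0x0F) * 4          # IPv4 header length, bytes
    proto = ipv4_packet[9]                      # IPv4 protocol field
    if proto == 41:
        return "6in4/6to4"
    if proto == 17:                             # UDP: check Teredo's port
        sport = int.from_bytes(ipv4_packet[ihl:ihl + 2], "big")
        dport = int.from_bytes(ipv4_packet[ihl + 2:ihl + 4], "big")
        if 3544 in (sport, dport):
            return "teredo?"
    return None
```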

If you haven’t been including IPv6 support (and, ideally, feature-parity between IPv4 and IPv6) in your purchasing decisions and RFPs for security equipment, you are exposing yourself to this risk. Ignoring IPv6 simply won’t keep it off your network. However, adopting IPv6 as a natively-routed protocol will bring most of the tunneled traffic out in the open as native IPv6 traffic, where it will be easier to detect anomalies. That, combined with making IPv6 a key consideration in purchasing decisions, will help mitigate the security risk.

Of course, purchasing life-cycles often span three- or five-year periods. That’s why it’s crucial to start thinking about IPv6 now, so that you can get IPv6 requirements embedded into purchasing processes before you really need IPv6. This is true for both network (routers, switches) and security equipment. Realizing you need IPv6 after a purchasing cycle has completed is not a good position to be in.

Risk 2: Your eyeballs

Of course, I am speaking here of “eyeball networks”–networks with client computers that access content and data. For colleges and universities, these include your wireless nets, your residence hall networks, and lab and research networks that consume and process data. That last category is also prevalent at national lab sites. Many of these organizations feel that they have enough IPv4 addresses to satisfy network growth for several years. However, that does not mitigate the risks these organizations face: that eyeball networks–even those with abundant IPv4 address space–will still need access to IPv6 content and data, and that “several years’” worth of IPv4 address space still may not be enough.

As is the case with security, ignoring the need for IPv6 on those eyeball networks also poses risks. While users may be perfectly happy consuming content and data over IPv4, there is no guarantee that that content and data will always be available over IPv4. Indeed, with the recent run-out of IANA’s IPv4 free pool, and of the Asia-Pacific region’s IPv4 address space, IPv4 address space is becoming scarce in the larger Internet. Secondary markets in IPv4 address space are beginning to open, and the prices thus far have been around $10 per IP address–far more than most R&E organizations are used to spending (or, for that matter, can afford). Government and foundation grants are unlikely to support shopping for IPv4 addresses on the secondary market, and funders will tend to view IPv6 as a more viable (and cheaper) alternative.

New colleges and universities will not have access to an abundance of IPv4 addresses. Moreover, as new scientific sites and special instruments come on-line around the world, it is increasingly likely that those (especially in Europe and Asia) will have access to fewer and fewer IPv4 addresses. These research centers will have two options: entirely forgo IPv4 addresses, or get a very small number of IPv4 addresses and run some sort of NAT and/or protocol translation to support IPv4. In the latter scenario, IPv6 can be supported end-to-end, thanks to address abundance, while the limited IPv4 space requires middleboxes.

ESnet’s work with the Science DMZ concept has revealed that middleboxes have a detrimental effect on network performance for data-intensive science applications. The lack of a clean end-to-end path for IPv4 will mean that IPv4 can only be supported as a legacy “slow-path” protocol. For real performance, IPv6 will be necessary. Even legacy support for IPv4 may not be available in certain regions for much longer.

A reasonable scenario that may be encountered within the IT departments of research institutions–universities and national labs–is as follows: Faculty members and research staff will need access to data from a particular instrument or reactor. They will either be unable to get the data they need over IPv4, or they will have to go through a middlebox or bastion to get to the data, with serious performance implications for the researchers. They will approach the IT department and request IPv6. Lead times for such requests often do not extend past the order of hours or days, which raises the question: will you be ready when a researcher comes to you and asks for IPv6 connectivity in her lab “by the end of the week”?

Risk 3: Your Content

As the developing world mobilizes economically, corporations, foundations, entrepreneurs, benefactors, and prospective students will increasingly hail from countries such as China, India, Brazil, and other parts of Asia and Latin America. These prospective donors, collaborators, and scholars will need access to information resources at your university or lab. And increasingly, these people will have better access to IPv6 than to IPv4. The next-generation research network in China, CERNET2, is IPv6-only, for example.

Moreover, this is not solely true in the developing world. As ISPs in the US and Europe become further strapped for IPv4 resources, they will turn to large-scale NAT (LSN, aka “carrier-grade NAT”). LSN threatens to reduce performance and increase troubleshooting headaches. Avoiding these pitfalls requires IPv6 support, which is why large ISPs like Comcast are proceeding aggressively with IPv6 adoption. As the effects of IPv4 run-out become more pronounced, more people will be trying to access your information resources via IPv6. Will they be able to reach you easily?
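One hedged way to start answering that question is simply to check whether your public-facing names publish AAAA (IPv6) records at all; a DNS answer is a necessary, though not sufficient, condition for IPv6 reachability. A quick sketch (Python, with placeholder hostnames):

```python
import socket

def publishes_aaaa(hostname):
    """True if the name resolves to at least one IPv6 address,
    i.e., IPv6-only clients can at least find you in DNS."""
    try:
        return bool(socket.getaddrinfo(hostname, None, socket.AF_INET6))
    except socket.gaierror:
        return False

# Placeholder names for illustration:
for name in ("www.example.edu", "admissions.example.edu"):
    print(name, "has AAAA:", publishes_aaaa(name))
```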

As campus development and public-affairs offices continue to push the outreach envelope, using social media and a variety of Internet-based technologies, they will want to ensure that they are reaching the maximum range of benefactors and prospective students. How will you answer the vice president of development when he asks you if your institution is doing all it can to make information resources available to the maximum range of prospective donors? How will you respond when a researcher at your lab site asks how to improve real-time collaboration experiences with partners in India? How will you assure the director of admissions that prospective students in China, India, and Brazil will have easy access to admissions materials and information about programs of study? IPv6 plays an important role in answering these questions. How well you answer them depends on how well positioned you are for IPv6 adoption.

Risk 4: Even if you have a lot of IPv4 addresses, you don’t have enough

There are a lot of applications for which NAT, or the use of private IPv4 addresses without NAT, is “good enough.” Home networks frequently use NAT. Large high-performance compute clusters (HPCCs) frequently use private IPv4 space to number internal nodes. Small labs at your campus or site may use a consumer-grade NAT device. In many cases, these devices work fine. But often, the network could work much better if each device could be individually addressed.

In the case of HPCCs, I frequently encounter cases where two clusters at disparate sites use the same private IPv4 range (usually the lower end of 10.0.0.0/8). When the cluster owners decide to connect the HPCCs together via a private layer-2 or layer-1 (wave) link, they suddenly have address collisions. I have seen several cases where rounds of iterative negotiation were needed to properly renumber hosts into non-colliding ranges. This is not infrequent in HPCCs, and it certainly has the potential to occur in many other applications.

IPv6 solves this problem in two ways. First, because of its massive address space, a chunk of globally-routable address space can be used to number hosts with little impact. In the HPCC example, a single /64 can number all of the internal nodes in both clusters! Second, even if a similar “private” address space is desired, IPv6 provides a mechanism called Unique Local Addressing (ULA), which allows different sites to “create” their own private address space. The algorithm specified by RFC 4193 allows for a high likelihood of uniqueness, so that if and when clusters are eventually merged, address collisions won’t occur. ULAs aren’t an exact replacement for IPv4 private addresses, but they are useful in certain circumstances, such as this HPCC example.
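To give a flavor of the RFC 4193 algorithm, here is a rough sketch (Python; the MAC shown is a placeholder, and a real implementation should follow the RFC’s exact inputs): hash a timestamp together with an interface identifier, keep the low-order 40 bits as the Global ID, and place it under the fd00::/8 local prefix.

```python
import hashlib
import ipaddress
import time

def ula_prefix(mac="02:16:3e:aa:bb:cc"):
    """Rough sketch of RFC 4193 section 3.2.2: SHA-1 over a 64-bit
    NTP-format timestamp plus an EUI-64-style identifier; the
    least-significant 40 bits of the digest become the Global ID
    of an fd00::/8 ULA /48. (U/L-bit handling is omitted, since
    only the hash's uniqueness matters here.)"""
    octets = bytes.fromhex(mac.replace(":", ""))
    eui64 = octets[:3] + b"\xff\xfe" + octets[3:]
    now = time.time()
    ntp_time = (int(now + 2208988800) << 32) | int((now % 1) * 2**32)
    digest = hashlib.sha1(ntp_time.to_bytes(8, "big") + eui64).digest()
    global_id = int.from_bytes(digest[-5:], "big")   # low-order 40 bits
    return ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))

print(ula_prefix())  # e.g. fd33:9c1a:72b5::/48 -- different every run
```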

In this case, the use of IPv6 lowers the risk of escalating OPEX costs in maintaining a private address space. Moreover, the internal nodes could be numbered using EUI-64-based stateless address autoconfiguration (SLAAC), further lowering costs. Because of the closed nature of the network, SLAAC may be a good candidate for an easy and maintainable configuration. Using EUI-64, which is based on the hardware addresses of the physical interfaces, makes documentation easy (hardware addresses are often well known in these clusters, so hosts files and internal DNS can easily be generated from the known MAC addresses) and greatly reduces the likelihood of number collisions.
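The EUI-64 mapping is mechanical, which is exactly why hosts files and DNS can be generated from an inventory of MACs. A minimal sketch (Python; the prefix and MAC are hypothetical):

```python
import ipaddress

def slaac_address(prefix, mac):
    """EUI-64 SLAAC address (RFC 4291): split the MAC, insert ff:fe
    in the middle, flip the universal/local bit of the first octet,
    and append the 64-bit result to the /64 prefix."""
    octets = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address)
                                 | int.from_bytes(eui64, "big"))

# Hypothetical cluster prefix and node MAC:
print(slaac_address("2001:db8:40::/64", "00:25:90:ab:cd:ef"))
# -> 2001:db8:40:0:225:90ff:feab:cdef
```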

While large-scale HPCCs can benefit from IPv6, it can also help with small-scale NAT installations. Even on my own home network, I find IPv6 to be valuable compared with the existing IPv4 NAT system. I often need to manage individual hosts, and sometimes I need end-to-end transparency for such things as video and voice conferencing. Using IPv6 is much easier and more efficient than trying to poke holes or configure special redirects in my NAT box. I can access individual hosts at home directly over IPv6 without having to go through a NAT box. Now, some people may view this as a security risk–to them, having my hosts “exposed” on the Internet is a big risk that NAT otherwise “solves.”

I view this security issue not as a risk but as a benefit. Instead of my security policy being dictated by the technology, I am able to develop my own security policy for my home network and use stateful firewalls to enforce that policy technically. Moreover, this produces a much cleaner security policy than having to place redirects and other kludges in my NAT configuration to support video conferencing and other needs.

IT readiness: The overarching risk

How many large-scale IT projects in your organization finish on-schedule, let alone can be completed on short notice? When that PR person or researcher comes to you and needs IPv6 “real soon now,” do you really want to be in the position of having never enabled IPv6 on any of your networks, or–worse yet–having no plan for IPv6 adoption?  You certainly don’t want to wait until a prominent member of your scientific staff or faculty is demanding IPv6 on her network before you even start thinking about IPv6, do you?

IT projects are hard. They have a lot of dependencies. They have a lot of unforeseen obstacles. And, of course, there are risks that arise from deploying IPv6, as there are with any large IT project. The big problem for IT managers will occur when the risks of not deploying IPv6 begin to outweigh the risks of deploying IPv6, and there’s suddenly a lot of pressure to move forward quickly. How do you ensure that things aren’t moving too quickly?  How do you mitigate all of the risks, or at least the majority?

There is a simple answer: Start before you really need to. Ideally, you should have already started, and you may even be enabling IPv6 on networks and services right now. But if you haven’t begun yet, now is the time to start. There are a number of things you need to do just to get going, and ESnet has put together a useful checklist to get you started. You’re going to run into problems–it definitely won’t all be smooth sailing. That’s why you need to start adopting IPv6 before one of the risks I have identified comes to pass. Even if you can’t deploy IPv6 on your production network just yet, you can get a feel for how it works and what the pitfalls are by creating a special “IPv6 DMZ.” Better yet, if you plan to build a Science DMZ, make sure that it supports both IPv4 and IPv6 from day one. That will go a long way toward ensuring that you and your colleagues and staff fully understand IPv6, and it will provide improved connectivity options for the Science DMZ itself.

By now it should be clear that none of the risks I have identified are mitigated in any way by how much IPv4 address space you have. Simply put, having lots of IPv4 address space–even your own /8–is not reason enough to delay IPv6 implementation for one second. Your IPv4 address space no longer matters in the IPv6 equation. In this increasingly interconnected world, you need to be able to reach everyone else, and they need to be able to reach you. If you delay adopting IPv6, you make it less likely that your resources will be available to all, and that poses risks to you and your institution.

I.T. in-depth at DUSEL


This guest blog is contributed by Warren Matthews, Cyber-Infrastructure Chief Engineer at the Deep Underground Science and Engineering Lab (DUSEL).
 

Guest Blogger: Warren Matthews, DUSEL

The Deep Underground Science and Engineering Laboratory (DUSEL) is a research lab being constructed in the former Homestake gold mine in Lead, South Dakota, now resurrected to mine data about the earth, new life forms, and the universe itself. When finished, DUSEL will explore fundamental questions in particle physics, nuclear physics and astrophysics. Biologists will study life in extreme environments. Geologists will study the structure of the earth’s crust. Early science programs have already begun to explore some of these questions. In addition, DUSEL education programs are underway to inspire students to pursue careers in science, technology, engineering, and mathematics. This interdisciplinary collaboration of scientists and engineers is led by the University of California at Berkeley and the South Dakota School of Mines and Technology.

 

I am the cyberinfrastructure chief engineer for DUSEL. As such, my concern is the research environment and advanced services that will be needed to accomplish our scientific goals. To enable future discoveries, scientists will need to capture, analyze, and exchange their data. We will have to deploy and perhaps even develop new technologies to provide the scientists with the technical and logistical support for their research. We expect that the unique research opportunities and instrumentation that will be established at DUSEL will draw scientific teams from all over the world to South Dakota, so high-speed national and international network connectivity will also be critical.

National laboratories have made many important contributions to the development of IT and networking technology. I’m very pleased that DUSEL is the newest member of the ESnet community, and I have no doubt that we’ll be leveraging their expertise. In conversations with numerous colleagues at other labs, it has become apparent that although DUSEL is starting with a clean slate and there are no legacy systems to support, we still have common issues and some difficult decisions to consider. All the labs face the challenge of meeting the needs of both large and small scientific collaborations. We all feel the budget crunch and are streamlining our support infrastructure. We are all wondering how we can optimize our use of the Cloud.

Delving into underground research

At DUSEL we have our own particular challenges, starting with an extreme underground environment. On the surface, the Black Hills of South Dakota may be freezing, but the further you go down in the mine, the hotter it gets. Rock temperatures at the 4850′ level, where the mid-level campus is under construction, are around 70°F (21°C) and humidity is around 88%. At the 7400′ level, where the deep-level campus is planned, temperatures hover around 120°F (49°C). These high levels of temperature and humidity have a significant impact on computer equipment. We’ll figure out our challenges as we go, drawing on shared expertise. After all, national labs were created to focus effort and advance knowledge where no one university could marshal the resources required. Our goal is to provide a platform where science, technology, and innovation are able to flourish.

We anticipate technology partnerships with the many experiments that are going underground at DUSEL. Currently we are expanding IPv6 and deploying perfSONAR. We are leveraging HD video conferencing. We are worrying about identity management and cyber security. We are establishing the requirements for dynamic network provisioning. And at the same time we’re wondering what other technologies will emerge in the next 20 or 30 years and what will be required to dig for new discoveries. You can keep track of our progress at the Sanford Laboratory YouTube channel.

–Warren Matthews

Did you get the memo?


Last September, Federal CIO Vivek Kundra issued a memo mandating that government agencies take aggressive steps to adopt IPv6. IPv6 is the next-generation Internet Protocol, which allows for a vastly increased address space: 340 undecillion addresses (340 followed by 36 zeroes), as compared to 4.3 billion addresses for the existing protocol, IPv4. The memo stipulates that agencies are to make all public-facing Internet resources IPv6-accessible by September 30, 2012, and that all internal connectivity within agencies and sites must use IPv6 by September 30, 2014. Kundra’s aggressiveness appeared to be well-placed when, on February 3, 2011, the Internet Assigned Numbers Authority (IANA) allocated the very last chunks of IPv4 address space to the Regional Internet Registries (RIRs). This officially signaled the beginning of the end of IPv4.

Time is running out on IPv4's billions of Internet addresses
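The memo’s numbers are easy to verify for yourself; the scale difference is the whole story. A quick back-of-the-envelope check in Python:

```python
ipv4_space = 2**32    # 32-bit addresses
ipv6_space = 2**128   # 128-bit addresses

print(f"{ipv4_space:,}")                # 4,294,967,296 (~4.3 billion)
print(f"{ipv6_space:.3e}")              # 3.403e+38 (~340 undecillion)
print(f"{ipv6_space // ipv4_space:,}")  # entire IPv4 Internets that fit in IPv6
```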

It’s not about our addresses.  While many organizations, government agencies, universities, and companies feel that they have sufficient IPv4 address space so that they don’t need to implement IPv6, that’s not really the problem. The depletion of IPv4 addresses signals the advent of IPv6-only organizations and networks. New scientific research facilities, community organizations, and educational institutions around the world will soon find it much harder–and more expensive–to obtain IPv4 addresses. For these organizations, their only hope for having a presence on the Internet will be to make extensive–and possibly exclusive–use of IPv6.

Thus, even those who have abundant IPv4 resources may soon need to access IPv6-only resources.  Network staff need to be ready to act when IPv6-only users and collaborators elsewhere in the world need to access resources at their sites, or when their researchers request access to IPv6-only remote facilities.  This means that everyone needs to proceed in earnest with IPv6 adoption, so that the inevitable kinks can be worked out before IPv6 becomes a mission-critical requirement.

Many people wonder if Network Address Translation (NAT), which allows certain IPv4 addresses to be re-used throughout the Internet, can help stave off the need for IPv6. While this was a common argument in the 1990s and early 2000s, the acceleration of IPv4 depletion in the latter part of the last decade calls this assertion into question.  NAT technologies have been known to work–with some difficulty–on a small scale, but the kinds of large-scale NAT installations required to continue with an IPv4-only Internet are expensive and come with their own reliability and security issues.  Some, such as Lorenzo Colitti of Google, believe that large-scale NAT will make the Internet, as a whole, “slower and flakier.”

Currently, IPv6 is our best way forward when it comes to maintaining the reliability of the Internet.  Having adopted IPv6 many years ago, ESnet is well-positioned to provide help to others making the transition. But it’s important to get moving now.  The depletion of IANA IPv4 resources and the Federal mandate should provide good motivation.

–Michael Sinatra

ESnet 2010 Round-up: Part 2


Our take on ANI, OSCARS, perfSONAR, and the state of things to come.

ANI Testbed

In 2010 ESnet led the technology curve in the testbed by putting together a great multi-layer design, deploying specially tuned 10G IO testers, becoming an early investor in the OpenFlow protocol by deploying NEC switches, and building a research breadboard of end-hosts leveraging open-source virtualization and cloud technologies.

The first phase of the ANI testbed is concluding. After 6+ months of operational life, with exciting research projects like ARCHSTONE, Flowbench, HNTES, climate studies, and more leveraging the facilities, we are preparing to move the testbed to its second phase on the dark fiber ring in Long Island. Our call for proposals, which closed October 1st, garnered excellent ideas from researchers and was reviewed by academic and industry stalwarts on the panel. We are tying up loose ends as we light the next phase of testbed research.

OSCARS

This year the OSCARS team has been extremely productive. We added enhancements to create the next version (0.5.3) of the current production OSCARS software; made progress architecting and developing a highly modular and flexible platform for the next-generation OSCARS (0.6), along with a PCE-SDK targeted at network researchers focused on creating complex algorithms for path computation; and developed FENIUS to support the GLIF Automated GOLE demonstrator.

Not only did the ESnet team multitask on various ANI, operational network, and OSCARS deliverables, it also spent significant time supporting our R&E partners like Internet2, SURFnet, NORDUnet, RNP, and others interested in investigating the capabilities of this open-source software. We also appreciate Internet2’s participation in dedicating testing resources for OSCARS 0.6 starting next year, to ensure a thoroughly vetted and stable platform in the April timeframe. This is just one example of the accomplishments possible for the R&E community by committing to partnership and collaboration.

perfSONAR collaboration

perfSONAR kept up its rapid pace of feature additions and new releases in collaboration with Internet2 and others. In addition to rapid progress in software capabilities, ESnet is aggressively rolling out perfSONAR nodes in its 10G and 1G POPs, creating an infrastructure where the network can be tuned to hum. With multiple thorny network problems now solved, perfSONAR has proven to be a great tool for delivering value. This year we focused on making perfSONAR easily deployable and on adding the operational features to transform it into a production service. An excellent workshop in August succinctly captured the challenges and opportunities in leveraging perfSONAR for operational troubleshooting, and for researchers seeking to understand how to further improve networks. Joint research projects continue to stimulate further development, with a focus on solving end-to-end performance issues.

The next networking challenge?

2011

Life in technology tends to be interesting, even though people keep warning about the commoditization of networking gear. The focus area for innovation just shifts; it never goes away. Some areas of interest as we evaluate our longer-term objectives next year:

  • Enabling the end-to-end world: What new enhancements or innovations are needed to deploy performance measurement and control techniques that enable seamless end-to-end application performance?
  • Life in a Terabit digital world: What network innovations are needed to meet the requirement for terabit connectivity between supercomputer centers in the 2015-2018 timeframe?
  • Life in a carbon economy: What are the low-hanging fruit for networks to become more energy-efficient and/or to enable energy efficiency in the IT ecosystem in which they play? Cloud-y or Clear?

We welcome your comments and contributions,

Happy New Year

Inder Monga and the folks at ESnet

ESnet gives Cisco Nerd Lunch talk, learns televangelism is harder than it seems


As science transitions from a lab-oriented activity to a distributed, computational, and data-intensive one, the research and education (R&E) networking community is tracking the growing data needs of scientists. Huge instruments like the Large Hadron Collider are being planned and built. These projects require global-scale collaborations and contributions from thousands of scientists, and as the data deluge from the instruments grows, even more scientists are interested in analyzing it for the next breakthrough discovery. Suffice it to say that even though worldwide video consumption on the Internet is driving a similar increase in commercial bandwidth, the scale, characteristics, and requirements of scientific data traffic are quite different.

And this is why ESnet got invited to Cisco Systems’ headquarters last week to talk about how we handle data, as part of their regular Nerd Lunch talk series. What I found interesting, although not surprising, was that with Cisco being a big evangelist of telepresence, more employees attended the talk from their desks than in person. This was a first for me, and I came away with a new appreciation for the challenges of collaborating across distances.

From a speaker’s perspective, the lesson I learnt was to brush up on my acting skills. My usual preparation is to rehearse the difficult transitions and focus on remembering the few important points to make on every slide. When presenting, the slide-presentation portion of my brain goes on auto-pilot, while my focus turns toward evaluating the impact on the audience. When speaking at a podium, one can observe when someone in the audience opens a notebook to jot down a thought, when their attention drifts to email on the laptop in front of them, or when a puzzled look appears on someone’s face as they try to figure out the impact of the point I’m trying to make. But these visual cues go missing with a largely webcast audience, making it harder to know when to stop driving home a point or when to explain it further. In the future, I’ll have to be better at keeping the talk interesting without the usual cues from my audience.

Maybe the next innovation in virtual-reality telepresence is just waiting to happen?

Notwithstanding the challenges of presenting to a remote audience, enabling remote collaboration is extremely important to ESnet. Audio, video, and web collaboration is a key service we offer to the DOE labs. ESnet employees use video extensively in our day-to-day operations. The “ESnet watercooler,” a 24×7 open video bridge, is used internally by our distributed workforce to discuss technical issues, as well as to hold ad-hoc meetings on topics of interest. As science goes increasingly global, scientists are also using this important ESnet service for their collaborations.

With my brief stint in front of a stage now over, it is back to ESnet and then on to the 100G invited panel/talk at IEEE ANTS conference in Mumbai. Wishing all of you a very Happy New Year!

Inder Monga

Fenius takes another big step forward


As I wrote back in June, ESnet has been promoting global interoperability in virtual circuit provisioning via our work on the Fenius project. Recently this effort took another step forward by enabling four different provisioning systems to cooperate for the Automated GOLE demonstration at the GLIF workshop held at CERN in beautiful Geneva, Switzerland.

For the uninitiated, GOLE stands for GLIF Open Lightpath Exchange, a concept similar to an IP internet exchange but oriented toward interconnecting lightpaths and virtual circuits. Several GOLEs already exist and collaborate in the GLIF forum, but until recently, interconnecting them has been a manual process initiated by the network administrators at each GOLE. Because of the lack of standards, any automation in the process was only accessible through a proprietary interface. This lack of interoperability has hindered the development and use of virtual circuit services that cross more than a few GOLEs at a time.

Our objective in Geneva was to demonstrate that where there’s a will, there’s a way: that we can indeed have automated, dynamic GOLEs that provision virtual circuits with no manual intervention, initiated by the end user through the Fenius common interface.

This project involved several different GOLEs and networks from around the world. In North America, both MANLAN and StarLight participated along with Internet2’s ION service and USLHCNet. The majority of GOLEs and networks were European: NorthernLight, CERNLight, CzechLight, and PSNC, as well as NetherLight and University of Amsterdam. Finally, AIST and JGN2+ participated from Japan, making this a demonstration that spanned sixteen (!) timezones and utilized four transoceanic links.

The demonstration was a complete success and resulted in what is, to my knowledge, a global first: a virtual circuit was set up completely automatically through five different networks and four different provisioning systems. And it was completed in a short amount of time – it only took about five minutes from the initial request until packets were flowing from end to end.

During the weeks leading up to the demonstration, software developers and network engineers from almost every organization mentioned above collaborated closely to develop, test, and deploy the Fenius interface on all the various GOLEs and networks. Several people worked day and night. This level of commitment can only mean good things for the long-term prospects of the Fenius and Automated GOLE efforts.

Our success is also worth noting because the software, hardware, and network infrastructure set up for this demo has been committed to remain available for use and experimentation for the next year. We hope to replicate this success at Supercomputing 2010, extended with even more GOLEs and networks joining. Since Fenius and the Automated GOLE demonstration clearly showed the value of interoperability, the next steps will be to help define and develop an open-source implementation of NSI, a standard protocol that will establish native interoperability between the various provisioning software systems like OSCARS, AutoBAHN, G-Lambda, Open-DRAC, Argia, and others.

These are exciting times, and it’s great to see our efforts finally bearing fruit. I can’t wait to see how the newly interoperable GOLEs can benefit our user community and scientific networking in general.

Poorly attended IPv6 conference belies urgency of Internet address depletion


The other week the Department of Veterans Affairs sponsored the 2010 InterAgency IPv6 Information Exchange. As a pioneer in IPv6, the most fundamental protocol of the Internet, ESnet was invited to present on how it uses and implements IPv6. Over 120 agencies were invited to attend, but only a handful showed, almost all from various parts of the Department of Defense, the National Institute of Standards and Technology, and the Department of Veterans Affairs.

This lacklustre attendance is curious, given that IPv6 is critical to everyone. It is slated to replace IPv4, the current protocol, lock, stock, and barrel. The question is when. What we do know is that address space for existing IP will be exhausted next year. According to Geoff Huston, Adjunct Research Fellow at the Centre for Advanced Internet Architectures, we are literally running out of IPv4 Internet addresses.

Supply of IPv4 Internet Addresses Drying Up, http://www.potaroo.com

The commercial world was in denial of the need for IPv6 until a year ago. Now they are scrambling. But how is the government doing? The level of interest seems to vary by agency.

The presentations at this particular conference included technical discussions of IPv6 implementation from government representatives, commercial IPv6 networking providers, and companies selling IPv6 management tools. The VA is implementing IPv6 to facilitate communications between nurses and patients. While ESnet has been using IPv6 for years to link DOE scientists together, some of the other applied uses of this technology, such as improving medical care, are exciting.

It was very encouraging to see the progress the Department of Defense has made in transitioning to IPv6 while maintaining strict controls for security and reliability. It appears that the DOD is on target to complete the transition by 2013.

The other area of discussion was procurement: the approval of new requirements for more complete IPv6 capabilities in new gear.

On the whole, the agencies present seem to be moving on a well-organized plan to get to IPv6. The low turnout does leave one hoping that it was confidence in their ability to transition in a timely manner that led so many agencies not to participate.

–Kevin Oberman

Scaling up – when computing meets optical transport


While we have been busy working towards a 100G ANI prototype wide area network (WAN), researchers at Intel are making sure that we have plenty to do in the future. Yesterday’s Wall Street Journal article (http://on.wsj.com/dcf5ko) on Intel demonstrating 50Gbps communication between chips with silicon-based lasers is just the tip of the iceberg of competitive research looming in the arena of photon-electron integration.

50G Silicon Photonics Link (image from Intel white paper)

This demonstration from Intel (kudos to them!) is a great reminder of how such innovations can revolutionize the computing model by making it easier to move large amounts of data between the chips on a motherboard or between thousands of multi-core processors, leading the way towards exascale computing. Just imagine the multi-terabit fire hose of capacity ESnet would have to turn on to keep those chips satisfied! This seamless transition from electronics to photonics, without dependence on expensive sets of photonic components, has the potential to transform the entire computing industry and give an additional boost to the “Cloud” industry. Thomas J. Watson has been credited with saying that the world needs only five computers. We look to be collecting the innovations that may just prove him right one day.

While we do get excited about the fantastic future of silicon integration, I would like to point out that the PIC (Photonic Integrated Circuit) has been a great innovation from Infinera, a company just down the road in Silicon Valley. They are actually mass-producing integrated lasers on a chip for a different application–long-distance communication–using a substrate material other than silicon. This technology is for real. You can get to play with the Infinera gear in our ANI testbed–you just need to come up with a cool research problem and write a proposal by October 1st, 2010.

Fire away!

—-

August 4th, 2010

Computing at the Speed of Light – Read MIT Technology Review’s take on the same topic.