Science DMZ Support for Advanced and Emerging Services
The Science DMZ provides the ideal entry point into a research institution for advanced networking services available in the wide area, such as virtual circuits and software-defined networking. If the Science DMZ is built properly with capable equipment and is free of the restrictions that come with the support of general-purpose business connectivity needs, it is typically straightforward to allow local resources in the Science DMZ to take advantage of advanced wide area network services.
Virtual circuit services, such as the ESnet-developed OSCARS platform, can connect to the Science DMZ switch directly, or through a separate switch as needed. The campus or lab’s interdomain controller (IDC) can provision the local switch and initiate multi-domain wide area virtual circuit connectivity, enabling the Data Transfer Nodes or other Science DMZ resources to access science services at remote institutions. An example of this configuration is the NSF-funded Internet2 DYNES project, which is supporting the deployment of this architecture at 60+ university campuses across the U.S.
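At its core, a virtual circuit reservation of this kind is a request for a point-to-point path with a bandwidth guarantee over a time window. The Python sketch below models such a request as an IDC might receive it; the field names, endpoint identifiers, and the validate helper are illustrative assumptions, not the actual OSCARS message format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    """Illustrative model of a virtual circuit reservation (not the real OSCARS schema)."""
    src_endpoint: str      # e.g. a DTN-facing port on the Science DMZ switch
    dst_endpoint: str      # endpoint at the remote institution
    bandwidth_mbps: int    # guaranteed bandwidth for the circuit
    start: datetime
    end: datetime

    def validate(self) -> bool:
        # A real IDC would also check topology, policy, and available capacity.
        return self.bandwidth_mbps > 0 and self.end > self.start

# Hypothetical reservation: a 10 Gbps circuit for a four-hour transfer window.
request = CircuitRequest(
    src_endpoint="campus-dmz:xe-0/0/1",
    dst_endpoint="remote-lab:xe-3/0/0",
    bandwidth_mbps=10000,
    start=datetime(2013, 6, 1, 8, 0),
    end=datetime(2013, 6, 1, 8, 0) + timedelta(hours=4),
)
print(request.validate())  # True for a well-formed request
```

In a multi-domain reservation, each IDC along the path would perform a check of this kind against its own topology and policy before committing resources.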
100 Gigabit Ethernet
100 Gigabit Ethernet (100GE) technology is being deployed by science networks in the U.S. and internationally to support data-intensive science. While 100GE promises the ability to support next-generation instruments and facilities, and to conduct scientific analysis of distributed data sets at unprecedented scale, 100GE technology poses significant challenges for the general-purpose networks at research institutions. For example, the firewalls typically deployed in business environments are simply incapable of effectively supporting 100GE science services. The Science DMZ model provides a scalable, expandable platform for integrating 100GE services into the science mission of a research institution. The 100GE service can be connected directly to the Science DMZ to provide a "fast path" between the science resources deployed in the Science DMZ and the advanced services provided by science networks.
Software-defined networking (SDN) capabilities can be supported by hardware in the Science DMZ – SDN and OpenFlow allow the flexible provisioning of policies to route science flows. Having Science DMZ components at a single location near the site border means there is a single place to install and configure new technologies such as OpenFlow, and to connect to services like the Internet2 Innovation Platform, the GENI project, or other similar resources.
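Routing science flows by policy amounts to installing match/action rules: traffic matching a known science host is forwarded out the high-speed path, while everything else follows the default path. The sketch below models this with plain Python dictionaries rather than any particular controller's API; the port numbers and addresses are hypothetical.

```python
# Hypothetical match/action rules in the spirit of OpenFlow; a real deployment
# would install equivalent rules on the switch via a controller.
SCIENCE_PATH_PORT = 1   # port toward the 100GE / virtual circuit path
DEFAULT_PATH_PORT = 2   # port toward the general-purpose network

flow_table = [
    # Science flows from the Data Transfer Node take the fast path.
    {"match": {"ipv4_src": "192.0.2.10"}, "action": {"output": SCIENCE_PATH_PORT}},
    # Catch-all rule: everything else follows normal routing.
    {"match": {}, "action": {"output": DEFAULT_PATH_PORT}},
]

def forward(packet: dict) -> int:
    """Return the output port for a packet by first-match lookup."""
    for rule in flow_table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]["output"]
    raise LookupError("no matching rule")  # unreachable with a catch-all rule

print(forward({"ipv4_src": "192.0.2.10"}))    # science flow -> port 1
print(forward({"ipv4_src": "198.51.100.7"}))  # other traffic -> port 2
```

The key property for the Science DMZ is that this policy lives in a small, auditable rule set at one location, rather than being scattered across general-purpose campus infrastructure.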
Software-defined networking concepts and production uses of OpenFlow are still in their early stages of adoption by the community. Many innovative approaches are still being investigated to develop best practices for the deployment and integration of these services in production environments. For instance, ESnet and its collaborators at Indiana University and the University of Delaware have demonstrated an OpenFlow-based Science DMZ architecture that interoperates with a virtual circuit service like OSCARS. An example of how these services might be integrated into a production Science DMZ is outlined below.
Service Integration - From Test to Deployment
The Science DMZ model allows new services to be tested, validated, and rolled into production once they are proven operationally sound. Testing and deploying software-defined networking – particularly the use of OpenFlow as a platform – is a timely example of how this model could be used.
Initially, an OpenFlow-capable connection could be brought into the Science DMZ area (e.g. the same physical area of the data center as the production Science DMZ infrastructure), and connected to a stand-alone switch. A separate test host can be connected to the stand-alone switch for prototyping purposes.
Note that several aspects of the Science DMZ model are already at work here: the OpenFlow switch need only permit access to the minimum set of hosts necessary to test the prototype service, so the security of the production infrastructure is not put at risk. By provisioning the prototype in this manner, the service can be tested without the up-front requirement that stateful firewalls or security mechanisms support a cutting-edge service before it is ready for production deployment.
After the service is determined to be production-ready, and the security model for the new service has been vetted, the test host can be removed from the OpenFlow switch, and the OpenFlow switch connected to the production Science DMZ. By doing so, the Science DMZ is effectively expanded to include the OpenFlow-enabled services, while making only minimal changes to the existing production Science DMZ environment. Once the OpenFlow technology is available in equipment that also supports the other production Science DMZ functions, the Science DMZ core hardware can be upgraded on normal budget cycles to fully integrate the new OpenFlow-based services.