University of Oklahoma Cyberinfrastructure Plan

OneNet

OneNet, a division of the Oklahoma State Regents for Higher Education, has operated since 1996 as the research and education network for all of Oklahoma’s public higher education institutions; it also serves nearly every private higher education institution, as well as public and private K-12 schools, libraries, museums and government (state, local, federal, tribal). Over many years of operation, OneNet has continually refined its infrastructure to meet the ever-growing requirements of research institutions. Through public and private partnerships, optical fiber has been built or acquired to span the distances between OU’s campuses in Norman and Oklahoma City (OKC) and OSU’s campuses in Stillwater and Tulsa, at ever-increasing bandwidths, by deploying both evolutionary and revolutionary technology.

OneNet’s strong focus on Oklahoma’s Cyberinfrastructure (CI) needs began with the first Internet2 network in 1998. Drawing on partnerships and experience with fiber construction, Oklahoma’s universities were among the first to connect to the new national network, and have since enjoyed very high levels of bandwidth to both regional and national research backbones. Recently, the NSF EPSCoR RII C2 grant (below) funded upgrades of OneNet’s Dense Wavelength Division Multiplexing (DWDM) network, improving OneNet’s ability to provide additional research bandwidth more cost effectively.

At the same time, Oklahoma was also awarded an NTIA BTOP CCI grant, providing nearly $74M to build fiber for middle-mile initiatives through areas underserved by broadband providers. The Oklahoma Community Anchor Network (OCAN), now completed, has directly served the 33 community anchors in the proposal, as well as citizens in over half the state’s counties. OneNet is responsible for ongoing OCAN operations and maintenance, and has been establishing new partnerships with telecommunications providers to meet the needs of OneNet’s constituents in areas of the state not directly served by OCAN.

In fall 2012, in response to an opportunity with Internet2’s newly unveiled Innovation Platform – the national 100 Gbps (100G) research backbone now known as Advanced Layer 2 Services (AL2S) – OneNet rapidly garnered research institution support and coordinated the appropriate hardware and fiber to become the first official member connection to AL2S. This new 100G Software Defined Networking (SDN) connection is already being leveraged for research and for the diverse needs of Oklahoma’s community anchors.

With OCAN complete, OneNet is moving into the next phase of CI upgrades. Recently, 100G service was extended from Tulsa to Stillwater and 100G service from Tulsa to Norman is expected shortly after this proposal is submitted (only a few power modules are pending). This expansion of 100G paths will continue over the next year. Beyond the traditional research transport ring, OneNet anticipates that the 10x to 100x increases in bandwidth being experienced by over a dozen higher education institutions directly served by OCAN in rural areas will lead to new collaborations and opportunities for partnership.

The University of Oklahoma’s Approach to Advanced Research Networking

OU treats advanced networking as a first-class citizen of research CI, as the glue among other CI systems (see Facilities). Under Oklahoma’s RII C2 grant (below), OU gained both skill and leadership in research networking, strengthening and deepening ties with other Oklahoma institutions, both regarding research CI and in the C2’s workforce development initiative, the Oklahoma Information Technology Mentorship Program (OITMP, now part of the OneOklahoma STEM Mentorship Program). CC*IIE PI Neeman has already been PI on two networking grants, two cluster grants and one CI education grant. Network lead Zane Gray (CC*IIE Senior Personnel) was a crucial contributor to, and one of the most deeply engaged participants in, C2 activities, including (a) consulting onsite with institutions; (b) delivering equipment; (c) helping to deploy at one of the Tribal Colleges; (d) serving on another Tribal College’s Technology Advisory Council; (e) serving as the single most prolific and crucial contributor to the OITMP (31 of 94 events for 22 of 38 institutions so far). At the OU Supercomputing Center for Education & Research (OSCER), a division of OU IT, three OSCER operations team members have each served 3 to 4 times on SCinet, the world’s fastest network for one week a year (part of the SC supercomputing conferences); OSCER Manager of Operations Brandon George (CC*IIE Senior Personnel) is SCinet’s 2014 Vice Chair.

NSF EPSCoR RII C2 (completed Aug 31, 2013)

H. Neeman of OU served as PI, and OneNet Chief Technology Officer J. Deaton served as one of the Co-PIs, on Oklahoma’s recently completed NSF EPSCoR RII C2 grant (“Oklahoma Optical Initiative,” EPS-1006919, $1,176,470, 9/1/2010-8/31/2013), the networking aspects of which were:

Statewide ring upgrade: Overlapping the C2 grant, OneNet increased the statewide research transport ring from 3 sites (Tulsa, Stillwater and OKC, plus a less robust spur to Norman) to 5 sites (including Norman and a second OKC site). The C2 grant funded a conversion of the hardware from routed-only mux/demux components to Reconfigurable Optical Add-Drop Modules (ROADMs), transforming Oklahoma’s existing research ring from routed-only to optical. This conversion leveraged existing infrastructure – chassis and fibers – while advancing the optical switching components, yielding substantial improvements in reliability, robustness and availability, and enabling the provisioning of dedicated lambdas straightforwardly and affordably.

The mux/demuxes displaced by the ROADMs (a) provided the ability to increase bandwidth and capabilities from OKC to Ardmore and (b) were redeployed as resilient infrastructure between OneNet's Tulsa Greenwood facility and the Tulsa Level3 Point of Presence (POP), for improved Internet2 connectivity.

Institutional upgrades (no increase in recurring costs to these institutions):

  • University of Oklahoma (OU): upgraded OU’s High Performance Computing (HPC) cluster connection from Gigabit Ethernet (GigE) to 10 Gbps (10G), a 10x increase, funded by OU Information Technology (IT) as institutional commitment to the C2 grant, and then, with C2 funding, to 20 Gbps (20x) via 2 x 10G from OneNet’s Tulsa Greenwood facility (specifically the OneNet switch that connects to Tulsa’s Internet2 AL2S connection) to OU Norman (specifically the One Partners Place (1PP) colocation space, across the street from Four Partners Place (4PP), where the data center that houses the bulk of OU’s research CI is located). Also deployed a single-institution Science DMZ in the 4PP data center, combining C2-funded components, OU-funded components and Dell seed components (the last two at no cost to the C2).
  • Oklahoma State U (OSU): upgraded OSU’s HPC cluster connection from GigE to 10G (10x), though this was tempered by internal security components that delivered only a fraction of peak bandwidth.
  • Langston U (LU, Oklahoma’s only Historically Black University): upgraded the High Energy Physics equipment and MRI-funded HPC cluster to 10G from 100 Mbps (100x).
  • U Tulsa: upgraded the campus connectivity for research to GigE from 200 Mbps (5x).
  • Samuel Roberts Noble Foundation (Ardmore): rural nonprofit research foundation; upgraded research networking to GigE (22x) and commodity Internet to 100 Mbps (2x), both from 45 Mbps.
  • College of the Muscogee Nation (Tribal College): provided networking components and deployment assistance for a new residence hall.
  • Bacone College (Minority Serving Institution with a Tribal mission): provided an internal campus backbone upgrade to 100 Mbps with a GigE core.
  • Pawnee Nation College (PNC, Tribal College): provided and deployed components for PNC’s first internal campus backbone (GigE) and PNC’s first WiFi network.
  • Comanche Nation College (Tribal College): provided a distance learning system.

NSF CC-NIE grant: OneOklahoma Friction Free Network (OFFN)

Oklahoma’s new NSF Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE) grant (“OneOklahoma Friction Free Network,” ACI-1341028, $499,961, 10/1/13-9/30/15, PI Neeman) will deploy a multi-institutional Science DMZ for friction free data flows and Software Defined Networking (SDN), shared among OU, OSU, LU and the Tandy Supercomputing Center (TSC), part of the Oklahoma Innovation Institute, a Tulsa nonprofit; the OFFN team recently voted unanimously to include U Central Oklahoma (UCO) as an OFFN site (subject to NSF approval of a scope expansion). The initial domain science and engineering projects include numerical weather prediction, high energy physics, bioinformatics and weather radar, with expansion to other research disciplines already underway. OFFN appears to be the first CC-NIE project to provide a Science DMZ across several institutions within a state, especially an EPSCoR state. OFFN will deliver capability and capacity to research data sources at the (assumed) five sites, providing campus-level scalability, encompassing new sources of research data as they are developed, and regional scalability, via deployment of a consistent and cost effective design model. Deployment goals include:

  • provide a proven, commercial off-the-shelf hardware platform backed with vendor support;
  • realize the Science DMZ goals through the use of a truly independent network at each campus site, via dedicated optical pathways to OneNet, as well as to the local campus backbone where desired;
  • deploy a fully virtualized infrastructure, to be used simultaneously by multiple research entities, presented to each entity as a dedicated “slice” of the overall resource;
  • leverage federation to provide oversight and visibility into the operations of the virtualized platform.

Site Design: Each of OFFN’s client site deployments will consist of the following resources, two of each per institution for redundancy and failover (the RII C2 grant deployed these components at OU):

  • SDN switches (10G) provide a virtualized data plane resource, to effectively and efficiently forward Ethernet traffic based on rules configured on the SDN controller.
  • Platform support switches (GigE) provide connectivity for out-of-band management of server lights-out, SDN switch components, and Virtual Machine (VM) hosts.
  • Servers provide multiple virtualized SDN controller resources, plus a virtualized platform for performance toolsets, management and monitoring utilities, and data transfer tools.
  • Software (all open source and/or free): the OS virtualization platform (Xen, VirtualBox or QEMU), Linux host and guest OS (Fedora or CentOS), SDN controller (Beacon or Floodlight; see the sketch after this list), performance testing (iPerf and the perfSONAR pS Performance Toolkit), and monitoring (Cacti or Nagios).
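
To make the controller’s role concrete, the following is a minimal sketch of pushing one forwarding rule to a Floodlight controller through its static flow pusher REST module (a standard Floodlight feature, though not called out in the OFFN design documents). The controller address, switch DPID, port numbers and flow name are placeholder values, not OFFN configuration.

    # Minimal sketch: push one static flow entry to a Floodlight controller,
    # forwarding traffic that arrives on port 1 of a hypothetical OFFN SDN
    # switch out port 2. All addresses, IDs and ports below are placeholders.
    import json
    import urllib.request

    CONTROLLER = "http://10.0.0.10:8080"            # hypothetical Floodlight host
    SWITCH_DPID = "00:00:00:00:00:00:00:01"         # hypothetical switch DPID

    flow = {
        "switch": SWITCH_DPID,
        "name": "offn-demo-flow",
        "priority": "32768",
        "in_port": "1",
        "active": "true",
        "actions": "output=2",
    }

    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowpusher/json",   # Floodlight's static flow pusher module
        data=json.dumps(flow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())                 # controller's acknowledgement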

See Figure 1 for institutional and statewide layouts. NOTE: LU’s and OSU’s deployments will be housed in OneNet sites.

   Figure 1: An institution’s OFFN deployment design.

The host institutions have committed space, power, cooling, physical security, public IP address space allocation, and inter-building fiber connectivity to reach the Internet Service Provider (ISP) when ISP colocation is not feasible. Collectively, these institutional assets provide a hybridized foundation both for production data and for multiple instances of SDN with associated OpenFlow control, allowing OFFN to be both flexible for rapid deployment needs and stable for always-on requirements.

Inter-site Connectivity: OFFN will build an OU-OSU-TSC ring, with an OU-to-(UCO-to-)LU spur dictated by fiber availability (Fig. 2), providing protected traffic flows, access to local campus resources by OFFN members, and low cost. Inter-site connectivity uses either dedicated point-to-point lambdas via OneNet or Layer 2 across 100G. This method provides high flexibility for provisioning network “slices” throughout OFFN, at low cost. Each path provides low-latency, high-bandwidth, unencumbered flow among sites. Overlaid onto the dedicated 10G circuits, the SDN sites have over 4000 Virtual LAN (VLAN) segments that can be programmed across OFFN, allowing any site to create multiple virtual circuits, point-to-point or point-to-multipoint, providing the greatest level of network collaboration among the institutions.
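
As a purely illustrative sketch of the slice bookkeeping described above (not taken from the OFFN design), the following shows how a site might track which 802.1Q VLAN IDs have been assigned to point-to-point and point-to-multipoint virtual circuits; the circuit names and site lists are invented.

    # Illustrative bookkeeping for OFFN-style VLAN "slices": each virtual
    # circuit (point-to-point or point-to-multipoint) consumes one VLAN ID
    # out of the roughly 4000 usable 802.1Q tags. Names and sites are invented.
    USABLE_VLANS = range(2, 4095)        # 802.1Q tags, excluding reserved 0, 1 and 4095

    class SliceRegistry:
        def __init__(self):
            self.allocated = {}          # vlan_id -> (circuit name, participating sites)

        def create_circuit(self, name, sites):
            """Allocate the lowest free VLAN ID for a circuit among the given sites."""
            for vid in USABLE_VLANS:
                if vid not in self.allocated:
                    self.allocated[vid] = (name, tuple(sites))
                    return vid
            raise RuntimeError("no free VLAN IDs on this path")

    registry = SliceRegistry()
    # Hypothetical circuits: one point-to-point slice, one point-to-multipoint slice.
    print(registry.create_circuit("nwp-ou-osu", ["OU", "OSU"]))                   # -> 2
    print(registry.create_circuit("hep-all-sites", ["OU", "OSU", "LU", "TSC"]))   # -> 3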

OFFN Sustainability: See Facilities, Equipment and Other Resources.

Shared Services

OU collaborates with OSU and OneNet on a Shared Services (S2) initiative – primarily a consolidation of data centers, applications, and business processes focused on enterprise and classroom needs, but one that also strongly leverages regional optical pathways, Multiprotocol Label Switching services, statewide routing services, state fiber assets, and colocation facility hosting, for new levels of performance and innovation that provide a competitive edge via collaboration and resource sharing for education and research initiatives. Specifically, S2 allows consolidation of all Internet connections on OU’s Norman, OKC and Tulsa campuses into two aggregation points, which are also carrier hotel sites that host major commodity ISPs, as well as entry points into national research and education networks.

S2 has nineteen 10G circuits, specifically DWDM lambdas: several ride OU-owned DWDM hardware leveraging state fiber assets for transport, and the rest come from OneNet. These circuits provide high speed, low latency transport among campuses and into routing consolidation spaces and their associated ISPs. Overlaid on them are next generation protocols, including pre-standards TRILL and unified storage fabrics, to simplify network deployment while providing higher performance. Future S2 services will leverage the multi-10G research data transport among campus sites and onto AL2S at 100G, paving the way for introducing multi-100G links within S2 and, as the increasing commoditization of 100G drives down prices, for phasing out 10G as the primary means of cost effective, high speed transport.

IPv6

OU provides IPv6 pockets on some subnetworks, including OU IT, Computer Science, NOAA’s Radar Operations Center, and backbone DNS, DHCP, and Active Directory servers. OU’s current address space blocks are: 2620:0:2B20::/48 (OU Norman ARIN allocation: provider independent); 2001:0468:0a02::/48 (Internet2 via OneNet); 2610:01d8:0a02::/48 (OneNet); 2610:20:8600::/40 (NOAA Norman). The long term plan is to consolidate OU’s address space into one block. With the deployment of (a) the new routing backbone (completion planned mid-2014) and (b) a security appliance lifecycle update (planned fall 2014), OU will be able to support IPv6 in hardware networkwide. Once appropriate IPv6 security policies and practices are in place (anticipated spring 2015), OU will provide IPv6 campuswide. OneNet has fully supported IPv6 across state infrastructure for several years, and participates in World IPv6 Day. With a direct allotment (2610:1D8::/32) from ARIN, OneNet can serve all constituents and maintains IPv6 peering relationships with most external network transit providers and peers.
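
As a quick illustration of how these allocations relate to individual hosts, the sketch below uses Python’s standard ipaddress module to report which of the blocks above contains a given address; the two sample host addresses are placeholders, not real OU hosts.

    # Check which of OU's IPv6 allocations (listed above) contains a given address.
    # The sample host addresses below are placeholders, not real OU hosts.
    import ipaddress

    OU_BLOCKS = {
        "OU Norman (ARIN, provider independent)": ipaddress.ip_network("2620:0:2b20::/48"),
        "Internet2 via OneNet":                   ipaddress.ip_network("2001:468:a02::/48"),
        "OneNet":                                 ipaddress.ip_network("2610:1d8:a02::/48"),
        "NOAA Norman":                            ipaddress.ip_network("2610:20:8600::/40"),
    }

    for host in ("2620:0:2b20::53", "2610:20:8601::10"):
        addr = ipaddress.ip_address(host)
        for name, block in OU_BLOCKS.items():
            if addr in block:
                print(f"{host} is within {block} ({name})")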

perfSONAR

As part of the toolset for statistics gathering, performance monitoring, and health checking within OFFN, OneNet has deployed perfSONAR devices at the borders of OFFN institutions (ps.onenet.net). Each instance of the toolset will connect to perfSONAR instances at all OFFN sites, providing automated exchange of data path monitoring and presenting a true end-to-end analysis of network performance, with a holistic platform to simplify troubleshooting, including used bandwidth for all links on a defined path, resource consumption, and topological information. OneNet’s current perfSONAR hardware is: IBM x3250 M4 with Intel Xeon E3-1220 quad core, 8 GB RAM, and Intel X520-DA2 with Juniper DAC or optics.
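
The sketch below illustrates the kind of automated throughput check that sits behind such monitoring, by invoking iperf3 against a far-end measurement host and reading its JSON report; the host name, test duration and stream count are placeholders, and perfSONAR’s own scheduler and measurement archive would normally handle this.

    # Minimal sketch of an automated throughput check between two measurement
    # hosts, in the spirit of the perfSONAR toolset described above. The target
    # host, duration, and stream count are placeholder values.
    import json
    import subprocess

    TARGET = "ps.example-offn-site.edu"   # hypothetical far-end perfSONAR/iperf3 host

    result = subprocess.run(
        ["iperf3", "-c", TARGET, "-t", "20", "-P", "4", "-J"],  # 20 s, 4 streams, JSON output
        capture_output=True, text=True, check=True)

    report = json.loads(result.stdout)
    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"{TARGET}: {bps / 1e9:.2f} Gbps received")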

InCommon

OU has been a member of the InCommon Federation for 2 years, including the InCommon Certificate Service. Both have benefited OU through cost savings and cross-institutional collaboration. OU currently uses InCommon to federate with EDUCAUSE and to secure OSCER resources via InCommon Secure Sockets Layer (SSL) certificates, and will continue to seek opportunities as the service list expands.

Globus Online

Globus Online is deployed at OU on the Oklahoma PetaStore (see Facilities), so data transfers to other Globus Online-enabled resources can be rapid and reliable. In 2014, OU plans an upgrade that includes file sharing, so that researchers can expose their data collections to data consumers at other institutions.
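
For illustration, a scripted PetaStore-to-collaborator transfer could look like the sketch below, which uses the globus-sdk Python package (a present-day interface, not one specified in this plan); the endpoint UUIDs, paths and access token are placeholders, and obtaining the token requires a separate Globus Auth login flow not shown here.

    # Minimal sketch of a scripted Globus transfer from the Oklahoma PetaStore to
    # a collaborator's endpoint, using the globus-sdk Python package. Endpoint
    # UUIDs, paths, and the transfer token are placeholders.
    import globus_sdk

    TRANSFER_TOKEN = "..."                                        # placeholder Globus transfer token
    PETASTORE_ENDPOINT = "00000000-0000-0000-0000-000000000000"   # placeholder endpoint UUID
    REMOTE_ENDPOINT = "11111111-1111-1111-1111-111111111111"      # placeholder endpoint UUID

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN))

    tdata = globus_sdk.TransferData(
        tc, PETASTORE_ENDPOINT, REMOTE_ENDPOINT,
        label="PetaStore dataset to collaborator",
        sync_level="checksum")                                    # only re-send changed files
    tdata.add_item("/petastore/project/dataset/", "/incoming/dataset/", recursive=True)

    task = tc.submit_transfer(tdata)
    print("submitted transfer task:", task["task_id"])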

uRPF (instead of BCP 38)

OU’s Norman campus currently leverages two Internet-facing routers for high speed data transport. To mitigate spoofing attacks on OU’s IP address blocks, as well as Denial of Service and other data traffic attacks, OU IT has configured Unicast Reverse Path Forwarding (uRPF) on these routers, which is very similar to BCP 38: as a packet arrives on an Internet-facing router, uRPF consults the router’s internal IP routing tables and ensures that the packet arrived on the expected interface. If the packet arrives on an appropriate interface, then it is checked against other access policies and passed out of the proper interface toward its destination; if the packet fails its validation, then it is immediately dropped. A key difference between BCP 38 and uRPF is that BCP 38 requires Access Control Lists, whereas uRPF does not. Along with uRPF, OU leverages static route entries to a “Null0” interface (a “bit bucket”), on the same routers, to prevent exploits by known “bad” IP addresses or ranges, as well as to mitigate external complaints.
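
To make the check concrete, the following toy sketch simulates a strict-mode uRPF decision: it looks up a packet’s source address in a small routing table and accepts the packet only if the best route back to that source points out the interface the packet arrived on. The routing table, interfaces and packets are invented for illustration.

    # Toy simulation of a strict-mode uRPF check, as described above: a packet
    # is accepted only if the best route back to its source address points out
    # the interface the packet arrived on. Routes and packets here are invented.
    import ipaddress

    # interface -> prefixes reachable via that interface (toy routing table)
    ROUTES = {
        "internet": [ipaddress.ip_network("0.0.0.0/0")],
        "campus":   [ipaddress.ip_network("10.0.0.0/8")],
    }

    def best_route_interface(src):
        """Return the interface of the longest-prefix match for src."""
        addr = ipaddress.ip_address(src)
        best_iface, best_len = None, -1
        for iface, prefixes in ROUTES.items():
            for prefix in prefixes:
                if addr in prefix and prefix.prefixlen > best_len:
                    best_iface, best_len = iface, prefix.prefixlen
        return best_iface

    def urpf_accept(src, ingress):
        """Strict uRPF: accept only if the reverse path uses the ingress interface."""
        return best_route_interface(src) == ingress

    print(urpf_accept("10.1.2.3", "campus"))     # True: campus source arriving from campus side
    print(urpf_accept("10.1.2.3", "internet"))   # False: spoofed campus source from outside -> drop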

Current and Planned Layer 2 and Layer 3 Research Networking

C2’s enhancements to OneNet's optical network and the expansion of network reach and capacity via BTOP have enabled quick provisioning of 10G circuits among Oklahoma research institutions and from these institutions to endpoints across AL2S and beyond, with modest manual intervention. Future SDN efforts will include extending, enhancing and further automating such provisioning.

Via C2 funds, OU already has Science DMZ equipment: a pair of Dell Force10 S4810 switches with 48 10G ports each, so OU’s Science DMZ has 96 10G ports. Dell’s OpenFlow 1.3 rollout, expected in summer 2014 (personal communication, J. Robinson, Dell), will include Layer 3 and Layer 2 stacking.

OU’s HPC cluster, Boomer, currently connects at 2 x 10G, via C2 funds, to a Cisco 4900M router in 4PP. That router connects both at 2 x 10G Layer 2 to OneNet’s Tulsa Greenwood facility and at 2 x 10G to a Cisco 6509 switch in 4PP, as a pass-through to OU’s campus enterprise backbone Cisco 6509 switches, which then connect, at 10G each, via a pair of Cisco 7604 routers on a longstanding friction free pathway, to a pair of OneNet’s Juniper MX480 routers in Norman, and from there to the statewide transport ring.

Under OFFN, the planned configuration connects each of Boomer’s internal S4810 switches at 2 x 40G to each of OU’s Science DMZ S4810 switches, for resiliency and high bandwidth between Boomer and the Science DMZ. The configuration also retains the extant 2 x 10G from Boomer directly to the Cisco 4900M, and from there over the campus enterprise backbone friction free pathway to OneNet in Norman, serving as both (a) a live tertiary pathway for Layer 2 in the event of a failure of all Science DMZ pathways to Tulsa and (b) the primary pathway for Layer 3.

Via the C2 and BTOP grants, OneNet’s early adoption of AL2S, and the OFFN and Condo of Condos/ACI-REF proposals (for which OneNet provided commitment letters), OneNet has deployed a single 100G connection from Tulsa Greenwood to each of (a) Internet2 AL2S, (b) OU Norman and (c) OSU Stillwater. At OU Norman, in 1PP, OneNet has (i) an Adva FSP 3000R9 SH9HU with 1 x 100G to Tulsa Greenwood and (ii) a Juniper MX480 with 1 x 100G to the Adva FSP3000 and ten 10G ports for connecting to systems in 1PP and 4PP. OneNet and OFFN will fund 10G optics for the Juniper MX480 to connect to (a) the Cisco 4900M router in 4PP, (b) OU’s Science DMZ S4810s, and (c) OSU Stillwater. OFFN and OU bench stock will provide more Cisco 4900M and Juniper MX480 optics.

Currently at Tulsa Greenwood, the C2 2 x 10G connects into a Cisco 15454 DWDM node, which connects at 2 x 10G into a Juniper MX480, which in turn connects at 1 x 100G to an AL2S Brocade MLXe-16. The Adva FSP3000 at each of OU Norman and OSU Stillwater connects at 1 x 100G into an Adva FSP3000 and from there at 1 x 100G into the same Juniper MX480. OFFN will deploy a 1 x 10G Layer 1 connection from OU Norman to OSU Stillwater, completing a 3-site ring; the other paths are the Adva 1 x 100G links between Tulsa Greenwood and each of OU Norman and OSU Stillwater.

All of this was triggered, directly or indirectly, by the C2 grant; each advance either would not have happened, or would have happened much later, without it. This holds not just for the technical side but especially for the development of a coherent core of CI professionals, who matured from a disparate collection of local positions into an informal but focused statewide team by fostering a culture of intra-state CI collaboration.