FIONA - Flash I/O Network Appliance

The University of California, San Diego was the recipient of an NSF CC-NIE award in 2012 to create a researcher-defined 10 and 40Gb/s campus-scale data infrastructure. As part of this effort, PIs Phil Papadopoulos and Larry Smarr developed a modestly priced Science DMZ Data Transfer Node (DTN) for users of this infrastructure. These Flash I/O Network Appliances (FIONAs) are PCs that UCSD constructs out of commodity parts but highly optimizes for data-centric applications, acting as “data super-capacitors” for the science teams. These nodes were originally constructed to debug 40Gb/s networking infrastructure, but are becoming more broadly used for data transfer activities.

In December 2014 Stanford University hosted a one-day workshop which led to a decision for network engineers from a number of CENIC campuses to work intensively for the first 10 weeks of 2015 on a proof-of-principle demonstration of high-performance data transfers between Science DMZs over existing elements of the proposed infrastructure. This required extensive collaboration among the campuses, led by CENIC’s John Hess. The resulting Pacific Research Platform (PRPv0) was presented at CENIC 2015 on March 9, 2015, at UC Irvine. The result involved DTN-to-DTN data transfers from within one campus Science DMZ to another. After iteration and tuning, tests demonstrated 9.6 Gb/s out of a possible 10 Gb/s using GridFTP (e.g., UCB, UCI, UCD, & UCSC to UCSD), with two transfers at 36 Gb/s out of a possible 40 Gb/s using FDT (UCLA & Caltech to UCSD). Most of these DTNs were FIONA PCs, optimized for data-focused networking, built at UCSD by Calit2/QI’s John Graham and Joe Keefe and shipped to the campuses for this experiment.

FIONAs are single or clustered data transfer nodes (DTNs) carefully optimized for storing and forwarding, or directly using, large amounts of data (tens to hundreds of terabytes) on 10/40/100 Gb/s networks. FIONAs were developed as commodity-PC versions of the SDSC Gordon supercomputer nodes. They have been carefully engineered and tested with scalable amounts of flash disk (2-16TB), rotating disk (5-120TB), and 6-20 cores with maximum main memory, hosting two 40G/10G NICs and optional GPUs. They range from $2K for a perfSONAR 40G memory-to-memory network test box, to $7K for one with enough flash disk to serve as a DTN on a fast network, to $15K or more for one with >100TB of disk, which can serve as a substantial data transfer node and local server where warranted. We build them as desk-side units for labs (quiet, big-fan boxes, meant to act as data collectors and servers) and as rack-mounted 2U or 3U systems for network switch rooms. All come with 40G NICs that are easily adapted to 10G and 1G. 48V DC versions have been tested for use in telco-type switch rooms without AC power. FIONAs run perfSONAR, GridFTP, and Java FDT, which provide real-time performance measurement, visualizations of transfers, and a means of archiving these metrics.
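Much of the "optimization" on a DTN is host TCP tuning rather than exotic hardware. As an illustration only, the following /etc/sysctl.conf fragment reflects the kind of settings commonly recommended for 10-40G data transfer hosts (the specific values here are an assumption, not the exact FIONA build configuration, and should be validated with perfSONAR tests on your own network):

```
# Larger socket buffer ceilings so TCP can sustain high
# bandwidth-delay-product flows on long 10-40G paths.
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864

# Let TCP autotuning grow per-connection buffers (min, default, max).
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432

# A congestion control algorithm better suited to high-speed
# long-distance paths than the default.
net.ipv4.tcp_congestion_control = htcp

# Probe path MTU to avoid black holes when jumbo frames are in use.
net.ipv4.tcp_mtu_probing = 1

# Fair-queuing qdisc improves pacing of high-rate flows.
net.core.default_qdisc = fq
```

Apply with `sysctl -p` and re-test; buffer maxima, congestion control choice, and qdisc should be tuned per path and kernel version rather than copied verbatim.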

FIONA Hardware Reference

Previous builds of FIONA hardware can be found on the PRP web resource. The following specifications are provided as build snapshots.

FIONA Hardware Specification (Summer 2017)

  • Gigabyte G25N-G51 Barebones

  • Dual E5-2650 v4 Intel Xeon Processors

  • 64GB DDR4 ECC/REG Server Memory

  • Two 960GB Samsung PM963 2.5" NVMe SSDs

  • Six 480GB Samsung SM863a SATA SSDs

  • Chassis with Two 2.5" Hot-swappable NVMe Bays and Six 2.5" Hot-swappable HDD/SSD Bays

  • Eight NVIDIA GeForce GTX 1080 Ti 11GB GDDR5X Graphics Cards with Eight Gigabyte CPDGDD2 Power Adapter Cards

  • Intel 82599ES Dual Port 10GbE SFP+ LAN

  • One Mellanox ConnectX-4 EN 40/56 GbE Single Port QSFP28 (MCX413A-BCAT)

  • One TPM 2.0 Module

3U DTN Hardware Specification (Summer 2017)

  • Chassis: Supermicro 3U Chassis CSE-836BE1C-R1K03B (black)

  • HDD Module: Supermicro MCP-220-83605-0N Rear window 2x 2.5" HDD module for 836B series chassis

  • Motherboard: SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3

  • CPU: Intel Xeon E5-1680 v4, 8-core, 3.40GHz, 20M cache, FC-LGA14A

  • RAM: 8 x Crucial CT7085508 16GB DDR4 PC4-19200 (128GB total): CL=17, Single Ranked, x8-based, Registered, ECC, DDR4-2400, 1.2V, 1024Meg x 72

  • SSD:
    • Storage: 16 x 240GB Intel 535 SSD (SSDSC2BW240H601), 3.84TB raw total
    • Boot: 2 x 240GB Intel 535 SSD (SSDSC2BW240H601)
  • Drive adapter: 16 x Supermicro - storage bay adapter MCP-220-00118-0B

  • LSI HBA: LSI 9300-8i PCI-Express 3.0 SATA / SAS 8-Port SAS3 12Gb/s HBA - Single

  • CPU Cooler: Supermicro SNK-P0048AP4 CPU Cooling

  • SAS Cables: 4 x Supermicro CBL-SAST-0532 Cable

  • TPM: 1 x SuperMicro AOM-TPM-9655V

  • NIC:

    • 40GbE NIC Mellanox ConnectX-3 Pro EN Single-Port 40/56 Gigabit Ethernet Adapter Card - Part ID: MCX313A-BCCT
    • 40GbE NIC Mellanox ConnectX-3 VPI InfiniBand Adapter Card, Single-Port QSFP QDR IB (40Gb/s) and 10GbE - Part ID: MCX353A-QCBT
    • 2 x 40GbE-to-10GbE SFP+ Quad to Serial Small Form Factor Pluggable (QSA) Adapters

perfSONAR Hardware Specification (October 2015)

  • Chassis: Supermicro CSE-813MTQ-R400CB Black 1U Rackmount Server Case 400W Redundant
  • Motherboard: Supermicro MBD-X10SRL-F Server Motherboard (N.B. the X10SRA-F is also an acceptable choice)
  • Processor: Intel Xeon E5-1620 v3 Haswell-EP 3.5GHz 4 x 256KB L2 Cache 10MB L3 Cache (N.B. the Intel Xeon E5-2643 works as well)
  • RAM: 4 x (32GB Total) Samsung 8GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4-17000) M393A1G40DB0-CPB
  • 40GbE NIC: Mellanox ConnectX-3 Pro EN Single-Port 40/56 Gigabit Ethernet Adapter Card
  • 40GbE to 10GbE SFP+ Adapter: Quad to Serial Small Form Factor Pluggable (QSA) Adapter
  • Storage: 2 x (480GB Total) Intel 535 Series SSDSC2BW240H601 2.5" 240GB SATA III MLC Internal Solid State Drive (SSD) - OEM
  • Storage Adapter: 2 x Supermicro - storage bay adapter MCP-220-00043-0N
  • PCIe Riser: Supermicro RSC RR1U-E16 - riser card
  • CPU Cooler: Supermicro SNK-P0057PS CPU Cooling
  • TPM: SuperMicro AOM-TPM-9655V

Contact us if you have updates or corrections for the information on fasterdata.