Software Defined Storage in hybrid cloud mode at Nebulon
3Par alumni, now departed from HPE, are joining forces again to put forward a bold storage concept based on a hybrid cloud architecture: Cloud Defined Storage, as the Nebulon team calls it.
Founded in 2018 in San Francisco by storage veterans, in particular 3Par alumni (David Scott, Siamak Nazari, Craig Nunes and Sean Etaati), Nebulon relies on Software Defined Storage, but in hybrid cloud mode. The start-up therefore speaks of Cloud Defined Storage, which combines a management layer in the cloud (the control plane) with PCIe cards installed in customers’ storage servers. Each server is equipped with a Nebulon card that offloads storage processing from the host and connects to the other Nebulon cards in the datacenter. Up to 32 cards can link up over a 10/25 Gbps Ethernet connection to form a storage pool (an NPod), which can be split into provisioned volumes via the Nebulon ON cloud control plane. These cards, built around an 8-core 3 GHz ARM chip backed by 32 GB of NVRAM, behave like RAID cards and run their own OS, with compression and encryption features, so as to remain independent of the servers and to work with critical applications accustomed to SAN or HCI mode.
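The pooling model described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names (`SPU`, `NPod`, `provision_volume`) are invented for this example and are not Nebulon's actual API; only the 32-card limit and the idea of carving volumes from a pooled capacity come from the article.

```python
# Hypothetical model of an NPod: up to 32 SPU cards pool their servers'
# local drives, and the cloud control plane carves volumes from the pool.
# All names here are invented for illustration, not Nebulon's real API.

MAX_SPUS_PER_NPOD = 32  # article: up to 32 cards per storage pool


class SPU:
    """One Nebulon card: owns the drives of the server it sits in."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb


class NPod:
    """A pool of SPUs linked over 10/25 GbE, split into volumes from the cloud."""
    def __init__(self, spus):
        if len(spus) > MAX_SPUS_PER_NPOD:
            raise ValueError("an NPod supports at most 32 SPUs")
        self.spus = list(spus)
        self.volumes = {}        # volume name -> size in GB
        self.provisioned_gb = 0

    @property
    def capacity_gb(self):
        # total pooled capacity across all member servers
        return sum(s.capacity_gb for s in self.spus)

    def provision_volume(self, name, size_gb):
        """What a request from the Nebulon ON control plane would trigger."""
        if self.provisioned_gb + size_gb > self.capacity_gb:
            raise ValueError("not enough pooled capacity")
        self.volumes[name] = size_gb
        self.provisioned_gb += size_gb
        return name


pod = NPod([SPU(f"spu-{i}", capacity_gb=8000) for i in range(4)])
pod.provision_volume("db-data", 10_000)      # a volume spanning several servers
print(pod.capacity_gb, pod.provisioned_gb)   # 32000 10000
```

The point of the sketch is that a volume is an attribute of the pool, not of any one server, which is what lets it exceed a single node's capacity.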
The PCIe card thus provides the functionality of a mini storage array in each server or node. Storage within each node is attached through the add-in card, or Services Processing Unit (SPU), which in turn emulates a local storage controller to the host. SPUs appear as standard SAS devices to the installed operating system, which means a single cluster can support bare-metal or hypervisor installations, including mixed clusters if needed. Servers in a cluster, or NPod, are connected via 10/25 Gbps Ethernet between SPUs, which allows mirroring for data protection. If an SPU fails, its replacement is simply pushed the configuration of the previous card and the data is immediately accessible again. A GPU slot is required because the SPU draws 85 watts, slightly more than the 75 W a standard PCIe slot supplies (GPU slots offer separate auxiliary power connectors).
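The failure-and-replacement flow in that paragraph rests on two mechanisms: the cloud holds each card's configuration as metadata, and mirroring between SPUs over Ethernet keeps a second copy of the data. A minimal sketch, with invented names (`CloudControlPlane`, `MirroredPair`) standing in for the real components:

```python
# Illustrative sketch only: the classes below are hypothetical stand-ins for
# Nebulon ON (which retains per-SPU configuration) and for SPU-to-SPU
# mirroring over the 10/25 GbE links. None of this is Nebulon's actual API.

class CloudControlPlane:
    """Stands in for Nebulon ON: keeps each SPU's configuration as metadata."""
    def __init__(self):
        self.configs = {}  # slot id -> configuration dict

    def register(self, slot, config):
        self.configs[slot] = config

    def replace_spu(self, slot):
        """A new card in the same slot is pushed the old card's configuration."""
        return dict(self.configs[slot])


class MirroredPair:
    """Two SPUs mirror every write so data survives a single card failure."""
    def __init__(self):
        self.copies = [{}, {}]  # one dict per SPU's local storage

    def write(self, key, value):
        for copy in self.copies:      # synchronous mirror to both cards
            copy[key] = value

    def read(self, key, failed=None):
        for i, copy in enumerate(self.copies):
            if i != failed:           # read from any surviving copy
                return copy[key]


cloud = CloudControlPlane()
cloud.register("server1/slot3", {"npod": "prod", "volumes": ["db-data"]})
pair = MirroredPair()
pair.write("block-42", b"payload")

# Card in server1/slot3 fails; its replacement inherits the configuration
# and the mirrored data remains accessible from the surviving SPU.
new_config = cloud.replace_spu("server1/slot3")
surviving = pair.read("block-42", failed=0)
```

This separation is why replacement is fast in the described design: the new card needs only configuration from the cloud, not a data rebuild, as long as the mirror copy is intact.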
Nebulon’s solution is interesting because it shifts the work of the storage controllers from an array to the SPU cards, thereby limiting memory and CPU requirements on the host. And because monitoring and provisioning functions that would normally fall to the controller are offloaded to Nebulon ON in the cloud, CPU requirements drop further. Nebulon ON is where the topology is defined, storage is provisioned, telemetry is collected, and management tasks such as updates are performed. The cloud portal manages all metadata and pushes configuration information down to the SPUs. Using the data collected by the SPUs, the company plans to enable AI-based analytics tools through its Nebulon ON control plane. If cloud connectivity is lost, the storage continues to operate as configured, with the local SPUs caching the configuration settings that the control plane has already pushed to them. For now, Nebulon ON runs in the AWS and GCP clouds.
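The loss-of-connectivity behaviour described above amounts to a push-and-cache pattern: the control plane pushes configuration, the SPU caches it locally, and the data path never depends on the cloud being reachable. A hedged sketch, with a hypothetical `SPUAgent` class that is not part of any real Nebulon software:

```python
# Hypothetical illustration of the push-and-cache pattern described in the
# article. The SPUAgent class and its methods are invented for this example.

class SPUAgent:
    """Local agent on an SPU card holding the last configuration pushed."""
    def __init__(self):
        self.cached_config = None

    def receive_push(self, config):
        # normal path: Nebulon ON pushes configuration, the SPU caches it
        self.cached_config = dict(config)

    def active_config(self, cloud_reachable):
        # the data path reads only the local cache, so the answer is the
        # same whether or not the cloud is currently reachable
        if self.cached_config is None:
            raise RuntimeError("SPU was never configured")
        return self.cached_config


agent = SPUAgent()
agent.receive_push({"volumes": ["db-data"], "mirror": True})

cfg_online = agent.active_config(cloud_reachable=True)
cfg_offline = agent.active_config(cloud_reachable=False)
assert cfg_online == cfg_offline  # storage operates as configured, offline too
```

The trade-off of this design is that no new provisioning or topology change can happen while the cloud is unreachable; only the already-pushed configuration keeps running.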
Full-length, full-height, and double-width, Nebulon’s SPU cards plug into a standard server’s GPU slot and are PCIe 3.0 compliant.
Based in Silicon Valley, with offices in Seattle, Belfast and London, Nebulon employs around fifty people worldwide and intends to rely on partners such as HPE and Supermicro to sell servers equipped with its SPUs.