Tuesday 22 July 2014

The Design

I lifted this diagram from the "brasil" directory found at hg.9grid.fr. Now, I'm not going to attest to owning a 3D torus of compute nodes (because I don't), but I will attest to owning a 2D fabric of nodes. Yes, yes, there are only 16 nodes in there, but still...!

Replace:

* Torus, with fabric
* Tree, with e-Link
* WAN, with WAN/LAN

The rest should pretty much fall into line.

A terminal is something like my MacBook Pro, equipped with Mac9P and, of course, FileMaker for the database-y stuff. Realistically, any system running FileMaker, or even self-developed software (say, a new, custom version of QGroundControl), can be a client to this network, since the interface is platform-agnostic XML.
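To make the "agnostic XML" idea concrete, here's a minimal sketch (in Python) of the kind of job request a client might submit. The element names (job, program, arg) are placeholders of my own invention, not a settled schema:

```python
# A minimal sketch of the kind of XML a client might submit.
# The element and attribute names are assumptions, not a real schema.
import xml.etree.ElementTree as ET

def make_job_request(job_id, program, args):
    """Build a job-submission document for the front end."""
    job = ET.Element("job", id=str(job_id))
    ET.SubElement(job, "program").text = program
    for arg in args:
        ET.SubElement(job, "arg").text = str(arg)
    return ET.tostring(job, encoding="unicode")

print(make_job_request(1, "matmul", [64, 64]))
# <job id="1"><program>matmul</program><arg>64</arg><arg>64</arg></job>
```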

A front end is a node coordinator. Its job is to make sense of incoming jobs and assign them to free compute nodes. I see it almost like BotQueue, the software that assigns a 3D printing job to the next available printer in the fleet. In BotQueue, each printer (by way of an intelligent board, say a Raspberry Pi) reports job status back to the queue controller; here, the same job-status facility lives in the /proc filesystem of each compute node.
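Here's a rough sketch of what the front end's assignment loop could look like. It assumes each compute node exports small status and ctl files under a nodes/ tree, in the spirit of Plan 9's /proc, but none of the file names are final:

```python
# A rough sketch of the front end's assignment loop. It assumes each
# compute node's state is exported as a text file (nodes/<n>/status,
# containing "free" or "busy") -- a stand-in for the real /proc files.
import os
import time
from collections import deque

NODE_ROOT = "nodes"          # assumed mount point of the node tree
jobs = deque(["job-1", "job-2", "job-3"])

def node_is_free(node):
    with open(os.path.join(NODE_ROOT, node, "status")) as f:
        return f.read().strip() == "free"

def assign(node, job):
    # Writing the job into a ctl file is an assumption, mirroring the
    # Plan 9 convention of per-resource control files.
    with open(os.path.join(NODE_ROOT, node, "ctl"), "w") as f:
        f.write(job)

while jobs:
    for node in sorted(os.listdir(NODE_ROOT)):
        if jobs and node_is_free(node):
            assign(node, jobs.popleft())
    time.sleep(1)            # poll; a real front end would block on events
```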

An I/O node is a front-end processor that connects the grid fabric to the outside world - in our case, the dual-core ARM half of the Zynq chip, alongside its FPGA. It mediates between gigabit Ethernet and the e-Links, in both directions, and does the needed housekeeping in between.
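As a sketch of that mediation, the loop below shuttles bytes between a TCP socket on the Ethernet side and an e-Link device. Note that /dev/elink0 and the port number are hypothetical stand-ins for however the Zynq ends up exposing the links:

```python
# A bare-bones sketch of the I/O node's bridging role: accept bytes
# from the Ethernet side and push them down to the fabric, and back.
# "/dev/elink0" is a hypothetical device node, not a real driver path.
import socket
import threading

def pump(read, write):
    """Copy bytes from one side to the other until EOF."""
    while True:
        data = read(4096)
        if not data:
            break
        write(data)

srv = socket.socket()
srv.bind(("0.0.0.0", 9999))  # port number is arbitrary
srv.listen(1)
conn, _ = srv.accept()

with open("/dev/elink0", "r+b", buffering=0) as elink:
    # One thread per direction: e-Link -> Ethernet, Ethernet -> e-Link.
    t = threading.Thread(target=pump, args=(elink.read, conn.sendall))
    t.start()
    pump(conn.recv, elink.write)
    t.join()
```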

A C (compute node) is an Epiphany RISC core in the fabric. It'll take jobs from the front end, process them, and come back with a result. These results will be available over the 9P networked filesystem interface, back at the terminals.
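A compute node's life, seen from the filesystem side, might look roughly like this. The ctl/status/result file names match the assumed layout in the front-end sketch above; over 9P, these would appear under the node's directory back at the terminal:

```python
# A sketch of a compute node's job loop. The file names (ctl, status,
# result) are my assumed layout, served to the terminals over 9P.
import time

def process(job):
    """Placeholder for real Epiphany work -- here, just echo the job."""
    return "done: " + job + "\n"

while True:
    with open("ctl") as f:          # the front end writes the job here
        job = f.read().strip()
    if not job:
        time.sleep(0.5)
        continue
    open("status", "w").write("busy")
    open("result", "w").write(process(job))
    open("ctl", "w").write("")      # clear the job slot
    open("status", "w").write("free")
```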

For each CM-5 prototype, there will be eight Parallella boards and two Intel NUCs. Of course, for the prototype of the prototype, it'll be just one Parallella and one NUC. They'll be connected over 100 Mbps Ethernet (final prototypes will use gigabit Ethernet). The basic software will be developed on the Parallella and the NUC to work in synchronisation. At some point, a Stratum-1 NTP server may be needed by the network, and I already have plans for a DIY server. The NTP server will also work well with SyncFS.

I will create an updated graphical diagram in Dia later.
