Abstract
Many computing-related tasks today require substantial hardware infrastructure to fulfill the requirements and expectations of their users. The physical infrastructure used to serve such systems is often organized in several geographically separate computer clusters. In this thesis, we have investigated and developed a working prototype that enables nodes in a PCI Express based computer cluster to connect with, and transfer data to, a node in a remote PCI Express based cluster. Central to our design is the cluster gateway, or proxy node. Each cluster consists of endpoint nodes and one proxy node. The proxy acts as a gateway for incoming and outgoing data traffic to and from nodes in the local cluster. Every data transfer is relayed via the proxies, which carry the responsibility of forwarding outgoing data to the destination remote cluster and forwarding incoming data to the recipient node. The system is implemented on PCI Express based clusters, using Ethernet as the medium connecting the remote clusters together. The cluster interconnect technology enables nodes to connect to memory segments in another node within the cluster and perform read and write operations on them using either programmed I/O or Remote Direct Memory Access. We have implemented functionality, intended to supplement an already existing API, that can be used to execute inter-cluster data transmissions.