The DM is divided into two submodules: the **Dispatcher** and the **Dispatch Daemon**.
- For Worker nodes, modify the `config.json` in `dispatch_daemon` to give each node a unique ID (see the sketch after this list).
- Run the Master and Worker servers with `npm start` or `node index.js`.
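The exact schema of `config.json` is not reproduced here; a minimal sketch, assuming the daemon only needs the node's unique ID (the `id` key name is an assumption, and the UID is currently the node's primary IP address, as noted below):
```javascript
// dispatch_daemon/config.json -- hypothetical minimal contents.
// The "id" key is an assumption; check the sample config shipped with the
// repository for the actual schema.
{
  "id": "10.0.0.12"
}
```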
### Inter-component Communication Interface
The Dispatcher sends a request to the Resource Management Server (Arbiter), detailing what resources it needs, on the Kafka topic `REQUEST_DISPATCHER_2_ARBITER`.
Format:
```javascript
{
"id":"unique-transaction-id",
"memory":1024,// in MiB
...// Any other resources
}
```
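As a concrete illustration, the Dispatcher side of this exchange might look like the sketch below. It assumes the `kafkajs` client, a broker at `localhost:9092`, and a random UUID as the transaction id; none of these are mandated by the project.
```javascript
// Sketch: publish a resource request from the Dispatcher to the Arbiter.
// Assumes the kafkajs client; the project may use a different Kafka library.
const { Kafka } = require('kafkajs');
const { randomUUID } = require('crypto');

const kafka = new Kafka({ clientId: 'dispatcher', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function requestResources(memoryMiB) {
  await producer.connect();
  const request = { id: randomUUID(), memory: memoryMiB }; // add other resources as needed
  await producer.send({
    topic: 'REQUEST_DISPATCHER_2_ARBITER',
    messages: [{ value: JSON.stringify(request) }],
  });
  return request.id; // transaction id used to match the Arbiter's reply
}
```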
The Arbiter finds a list of machines that satisfy those resource demands and returns it to the Dispatcher on the Kafka topic `RESPONSE_ARBITER_2_DISPATCHER`.
Format:
```javascript
{
"id":"unique-transaction-id",
// "port": 2343 --- NOT IMPLEMENTED YET
"grunts":["a","b",...]// List of machine IDs
}
```
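The Dispatcher then has to match each reply to the request it issued using the transaction `id`. A hedged sketch of that consumer, under the same `kafkajs` assumption:
```javascript
// Sketch: Dispatcher-side consumer for the Arbiter's replies.
// "pending" maps transaction ids to resolver callbacks; the request path is
// assumed to register a callback when it publishes a request.
const pending = new Map();

async function listenForArbiterReplies(kafka) {
  const consumer = kafka.consumer({ groupId: 'dispatcher' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'RESPONSE_ARBITER_2_DISPATCHER' });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const reply = JSON.parse(message.value.toString());
      const resolve = pending.get(reply.id);
      if (resolve) {
        pending.delete(reply.id);
        resolve(reply.grunts); // hand the list of machine IDs back to the caller
      }
    },
  });
}
```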
### Internal Communication Interfaces
#### Dispatcher
Internally, the DM uses Apache Kafka for interaction between the Dispatcher and the Dispatch Agents; the messages are in JSON format.
Every Dispatch Agent listens on a topic named after its own UID (currently the primary IP address), while the Dispatcher listens on the topics *"response"* and *"heartbeat"*.
- **Request Message:** When a request is received at the Dispatcher, it directs the Dispatch Agent to start a worker environment. The message is sent on the chosen Worker's ID topic. \
...
Format:
```javascript
{address:'UID of the worker'}
```
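Putting these conventions together, a Dispatch Agent might be wired up as sketched below. This assumes `kafkajs`, and it assumes the `{address: ...}` payload above is what the Agent publishes on the *"heartbeat"* topic; the 5-second interval is arbitrary.
```javascript
// Sketch: a Dispatch Agent subscribing to its own UID topic and sending
// periodic heartbeats to the Dispatcher. Assumes kafkajs and a broker at
// localhost:9092; payload and interval are illustrative assumptions.
const { Kafka } = require('kafkajs');

const UID = '10.0.0.12'; // currently the node's primary IP address
const kafka = new Kafka({ clientId: `agent-${UID}`, brokers: ['localhost:9092'] });

async function startAgent() {
  const producer = kafka.producer();
  const consumer = kafka.consumer({ groupId: `agent-${UID}` });
  await Promise.all([producer.connect(), consumer.connect()]);

  // Requests addressed to this Agent arrive on its UID topic.
  await consumer.subscribe({ topic: UID });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const request = JSON.parse(message.value.toString());
      // ...start the worker environment, then report back on "response"...
    },
  });

  // Tell the Dispatcher this Agent is alive.
  setInterval(() => {
    producer.send({
      topic: 'heartbeat',
      messages: [{ value: JSON.stringify({ address: UID }) }],
    });
  }, 5000);
}
```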
#### Resource Manager
Upon being launched, each Grunt sends a JOIN message to the Arbiter on the Kafka topic `JOIN_GRUNT_2_ARBITER`.
Format:
```javascript
{
"id":"unique-machine-id",
}
```
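Joining can be as simple as publishing that one-field message at startup. A minimal sketch, assuming the `kafkajs` client and a broker at `localhost:9092` (the actual Grunt code may use a different client and configuration):
```javascript
// Sketch: a Grunt announcing itself to the Arbiter at startup.
// Assumes kafkajs; GRUNT_ID and the broker address are placeholders.
const { Kafka } = require('kafkajs');

const GRUNT_ID = '10.0.0.21'; // unique machine id
const kafka = new Kafka({ clientId: `grunt-${GRUNT_ID}`, brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function join() {
  await producer.connect();
  await producer.send({
    topic: 'JOIN_GRUNT_2_ARBITER',
    messages: [{ value: JSON.stringify({ id: GRUNT_ID }) }],
  });
}
```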
After this, each Grunt periodically sends a heartbeat message to the Arbiter on topic `HEARTBEAT_GRUNT_2_ARBITER`. These messages contain the current state of all the resources tracked by the Grunt on that machine; this data is cached by the Arbiter.
Format:
```javascript
{
"id":"unique-machine-id",
"memory":1024,// in MiB
...// Any other resources
}
```
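A heartbeat loop might reuse the producer from the join sketch above and report free memory via Node's `os` module; which resources the real Grunt tracks and how often it reports are assumptions here.
```javascript
// Sketch: periodic resource heartbeat from a Grunt (continuing the snippet above).
// Reports free memory in MiB; the actual set of tracked resources may differ.
const os = require('os');

function startHeartbeat(producer, gruntId, intervalMs = 5000) {
  setInterval(() => {
    const state = {
      id: gruntId,
      memory: Math.floor(os.freemem() / (1024 * 1024)), // MiB
    };
    producer.send({
      topic: 'HEARTBEAT_GRUNT_2_ARBITER',
      messages: [{ value: JSON.stringify(state) }],
    });
  }, intervalMs);
}
```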
The Arbiter, upon receiving the request from the Dispatcher, checks its local cache to find suitable machines. If it finds any, it sends a message back to the Dispatcher on topic `RESPONSE_ARBITER_2_DISPATCHER`.
Format:
```javascript
{
"id":"unique-transaction-id",
// "port": 2343 --- NOT IMPLEMENTED YET
"grunts":["a","b",...]// List of machine IDs
}
```
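The lookup against the cached heartbeats is essentially a filter; a hedged sketch (only memory is checked here, and the real selection policy may differ):
```javascript
// Sketch: Arbiter-side cache lookup. "cache" is a Map from grunt id to the
// most recent heartbeat state; only memory is compared, as an illustration.
function findGrunts(cache, request) {
  const grunts = [];
  for (const [id, state] of cache) {
    if (state.memory >= request.memory) {
      grunts.push(id);
    }
  }
  return { id: request.id, grunts }; // publish on RESPONSE_ARBITER_2_DISPATCHER if non-empty
}
```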
If, on the other hand, the Arbiter cannot find any such machine in its cache, it asks all the Grunts for their current status. This message is posted on the topic `REQUEST_ARBITER_2_GRUNT`.
Format:
```javascript
{
"id":"unique-machine-id",
"memory":1024,// in MiB
...// Any other resources
}
```
The Grunts receive this message and send back their state on topic `RESPONSE_GRUNT_2_ARBITER`.
Format:
```javascript
{
"id":"unique-machine-id",
"memory":1024,// in MiB
...// Any other resources
}
```
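A Grunt's handler for that status request can mirror its heartbeat logic; a minimal sketch, again assuming `kafkajs` (the resource set reported here is illustrative):
```javascript
// Sketch: Grunt-side handler for the Arbiter's on-demand status request
// (continuing the Grunt snippets above). Each Grunt uses its own consumer
// group so that every Grunt sees the broadcast.
const os = require('os');

async function answerStatusRequests(kafka, producer, gruntId) {
  const consumer = kafka.consumer({ groupId: `grunt-status-${gruntId}` });
  await consumer.connect();
  await consumer.subscribe({ topic: 'REQUEST_ARBITER_2_GRUNT' });
  await consumer.run({
    eachMessage: async () => {
      const state = {
        id: gruntId,
        memory: Math.floor(os.freemem() / (1024 * 1024)), // MiB
      };
      await producer.send({
        topic: 'RESPONSE_GRUNT_2_ARBITER',
        messages: [{ value: JSON.stringify(state) }],
      });
    },
  });
}
```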
The Arbiter waits a fixed amount of time for the Grunts to respond; it then sends the list of whichever Grunts replied affirmatively to the Dispatcher on topic `RESPONSE_ARBITER_2_DISPATCHER`, in the format described above.
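That wait-and-collect step is essentially a timed buffer; a sketch of the pattern, with an arbitrary 2-second window:
```javascript
// Sketch: the Arbiter's fallback path. It broadcasts the status request,
// lets the RESPONSE_GRUNT_2_ARBITER consumer push replying grunt ids into
// "replies" while the window is open, then answers the Dispatcher with
// whatever arrived. The 2-second window is an arbitrary illustration.
async function pollGrunts(producer, request, replies, windowMs = 2000) {
  await producer.send({
    topic: 'REQUEST_ARBITER_2_GRUNT',
    messages: [{ value: JSON.stringify(request) }],
  });

  setTimeout(async () => {
    await producer.send({
      topic: 'RESPONSE_ARBITER_2_DISPATCHER',
      messages: [{ value: JSON.stringify({ id: request.id, grunts: [...replies] }) }],
    });
  }, windowMs);
}
```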
### Interaction API
The platform works via an HTTP-API-based interface, which is divided into two parts: