Within Tasks 3.1 and 3.2, the LifeChamps partner Dell Technologies is working on data analytics and processing for modelling and insight extraction regarding cancer survivors’ treatment, monitoring and follow-up, focusing on frailty analysis and QOL improvement. These tasks also include the development and deployment of management tools for the monitoring, migration and on-demand scale-up of Big Data applications on the HPC infrastructure.

In more detail, Dell Technologies is constructing the LifeChamps HPC cloud engine using the containers and libraries developed in Task 3.1. This engine will host the core data analytics and processing for modelling and insight extraction regarding cancer survivors’ treatment, monitoring and follow-up, focusing on frailty analysis and QOL improvement. Dell Technologies will also provide a job submission and management tool with a portal that administers the clusters as a single entity, provisioning the hardware, the operating system and the workload manager from a unified on-demand interface. The tool will dynamically apply load-balancing resource allocations across multiple tenants, track every aspect of every node, and report any problems it detects. It will also enable accounting/billing support as well as remote visualisations for OpenMP/MPI applications.
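To illustrate the kind of load-balancing decision such a management tool makes, the sketch below places an incoming job on the least-loaded node and signals when on-demand scale-up would be needed. This is a minimal, hypothetical illustration of the idea; the class and function names are invented and do not reflect the actual portal's API.

```python
# Hypothetical sketch of multi-tenant, load-balanced job placement.
# Node, allocate and the core counts are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    total_cores: int
    used_cores: int = 0

    @property
    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

def allocate(nodes: List[Node], requested_cores: int) -> Optional[Node]:
    """Place a job on the node with the most free cores.

    Returns None when no node can host the job, which is the point
    at which an on-demand scale-up would be triggered instead.
    """
    candidates = [n for n in nodes if n.free_cores >= requested_cores]
    if not candidates:
        return None
    best = max(candidates, key=lambda n: n.free_cores)
    best.used_cores += requested_cores
    return best

cluster = [Node("hpc-01", 64), Node("hpc-02", 64, used_cores=48)]
placed = allocate(cluster, 32)
print(placed.name)  # the least-loaded node is chosen
```

A real workload manager tracks far more (memory, GPUs, queues, tenant quotas), but the placement-or-scale-up decision follows this shape.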

Dell Technologies is working on encapsulating a Big Data analytics stack into healthcare-focused container formats that offer seamless communication between the interdependent components in an encapsulated, well-defined and portable environment.
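The portability of such an encapsulated stack comes from pinning every component so the same environment can be rebuilt inside any container image. The sketch below renders a stack description as a conda `environment.yml`; the stack contents, version numbers and function names are illustrative assumptions, not the project's actual build files.

```python
# Hypothetical description of an encapsulated analytics stack.
# Package versions are placeholders chosen for illustration.
ANALYTICS_STACK = {
    "base": "python=3.9",
    "components": {
        "tensorflow": "2.8",
        "torch": "1.11",
        "horovod": "0.24",
    },
}

def render_conda_env(stack: dict, name: str = "lifechamps-analytics") -> str:
    """Render the stack as a conda environment.yml so the identical,
    version-pinned environment can be recreated in any container."""
    lines = [f"name: {name}", "dependencies:", f"  - {stack['base']}"]
    lines += [f"  - {pkg}={ver}" for pkg, ver in stack["components"].items()]
    return "\n".join(lines)

print(render_conda_env(ANALYTICS_STACK))
```

Because the environment file fully determines the installed versions, the interdependent components behave identically wherever the container runs.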

Progress Status:
The HPC platform is already fully operational. Dell Technologies is replacing two CPU nodes with GPU nodes while working on the software components of the solution, in particular the integration of Anaconda with Horovod. Horovod is a framework developed by Uber for distributed deep learning; it supports Keras, TensorFlow and PyTorch, the most widely used Python deep-learning libraries, and was chosen because it distributes training across multiple nodes and services. Anaconda, which provides the notion of a virtual environment, will act as the package manager. Each team of data scientists accessing the HPC cluster will therefore have a limited number of environments: Horovod can be installed in one environment, while plain TensorFlow or PyTorch can be installed in another.
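The core idea behind Horovod's multi-node distribution is that each worker computes gradients on its own data shard and the workers then average them with an allreduce operation before updating the model. The sketch below reproduces that averaging step in plain Python as an illustration; a real job on the cluster would use `horovod.tensorflow` or `horovod.torch` inside the appropriate conda environment, and the function name here is invented.

```python
# Illustrative stand-in for the gradient averaging that Horovod's
# allreduce performs across workers during distributed training.
from typing import List

def allreduce_average(worker_gradients: List[List[float]]) -> List[float]:
    """Average per-worker gradient vectors element-wise.

    Each inner list is the gradient one worker computed on its data
    shard; the averaged result is what every worker applies to keep
    the model replicas synchronised.
    """
    n_workers = len(worker_gradients)
    return [sum(vals) / n_workers for vals in zip(*worker_gradients)]

# Three workers, each with a gradient over a two-parameter model:
grads = [[0.2, 0.4], [0.4, 0.8], [0.6, 1.2]]
print(allreduce_average(grads))
```

In practice Horovod performs this with an efficient ring-allreduce over MPI or NCCL rather than gathering all gradients in one place, but the numerical result is the same element-wise average.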

Author LifeChamps Admin

More posts by LifeChamps Admin