The solution is aimed primarily at making it easy to implement an enterprise's analytics needs by focusing on the ability of data scientists to create custom Spark jobs.
Users can fetch data from multiple data sources continuously, incrementally, or fully at a given time interval. They can specify the types of jobs to apply and the sequence in which they run for a selected table, and build custom workflows that drive analytics dashboards.
Configure multiple data sources, both relational (Oracle, MySQL) and NoSQL (MongoDB, CouchDB).
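A source configuration of this kind might look like the sketch below. The field names, connection URLs, and the `validate` helper are all illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical source-configuration sketch: each entry names a connector
# kind and connection details. All field names here are assumptions for
# illustration, not the platform's real configuration format.
SOURCES = {
    "orders_db": {
        "kind": "relational",        # e.g. Oracle or MySQL via JDBC
        "url": "jdbc:mysql://db-host:3306/sales",
        "table": "orders",
        "fetch_mode": "incremental",  # or "full"
        "interval_minutes": 15,
    },
    "events_store": {
        "kind": "nosql",             # e.g. MongoDB or CouchDB
        "url": "mongodb://mongo-host:27017/analytics",
        "collection": "events",
        "fetch_mode": "full",
        "interval_minutes": 60,
    },
}

def validate(cfg):
    """Minimal sanity check of one source entry."""
    required = {"kind", "url", "fetch_mode", "interval_minutes"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if cfg["fetch_mode"] not in ("incremental", "full"):
        raise ValueError("fetch_mode must be 'incremental' or 'full'")
    return True
```

Keeping the per-source fetch mode and interval in the configuration is what lets the platform schedule each table independently.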
Incrementally or fully fetch data at preconfigured intervals into HDFS in a fault-tolerant, distributed manner.
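The core of an incremental fetch is a watermark: each run pulls only rows newer than the last successfully written batch, so a failed run can simply be retried without duplicating data. A minimal sketch, assuming rows are `(timestamp, payload)` pairs (a real run would read from the source over JDBC and write the batch to HDFS):

```python
def incremental_fetch(rows, last_watermark):
    """Return rows newer than `last_watermark` and the advanced watermark.

    The watermark is only advanced by what was actually selected, so a
    retry after a failed write re-selects the same batch (at-least-once).
    `rows` stands in for a source query; this is an illustrative sketch,
    not the platform's actual fetch API.
    """
    new_rows = [r for r in rows if r[0] > last_watermark]
    new_wm = max((r[0] for r in new_rows), default=last_watermark)
    return new_rows, new_wm
```

Persisting `new_wm` only after the HDFS write succeeds is what makes the interval-driven fetch safe to rerun.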
Run MapReduce-style Spark jobs at fixed intervals in a fault-tolerant manner.
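Fault tolerance for a scheduled job usually comes down to retrying failed runs before giving up. A minimal stand-in for that behavior (the function name and retry policy are assumptions, not the platform's scheduler API):

```python
def run_with_retries(job, max_attempts=3):
    """Run `job` (a zero-argument callable), retrying on any exception.

    Re-raises the last exception once `max_attempts` is exhausted. A real
    scheduler would add backoff and logging; this sketch only shows the
    retry skeleton behind "fail-free" interval execution.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise
```

Because the fetch step above is idempotent up to the watermark, retrying a whole job this way does not corrupt the data already landed in HDFS.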
Visualize your data using any visualization software.