ComTec has partnered with Apervi to provide Big Data Integration and Orchestration solutions and services.

Apervi Conflux – the industry’s first web-based big data integration and orchestration platform, providing full support for batch, streaming, and micro-batch data pipelines.

Apervi Conflux is an integration platform that accelerates the design and deployment of data pipelines and IoT applications in the Hadoop ecosystem with zero coding. It enables users to integrate data from various sources and create reusable data pipelines for data at rest (batch processing) and real-time data (stream processing). Engineers can design big data applications in minutes using an HTML5 drag-and-drop UI and deploy them on any execution engine in the Hadoop ecosystem, including Storm, Spark, and Tez. The platform is powered by a highly scalable, optimized data orchestration engine. By packaging these capabilities into the platform, Apervi Conflux drastically reduces the need to write custom code in technologies such as MapReduce, Pig, Scala, and Hive. It is architected to coexist and interact with existing enterprise infrastructure via standardized REST interfaces.
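To illustrate the kind of REST-based interaction described above, the sketch below builds (but does not send) an HTTP request that an external system might use to submit a workflow for execution. The endpoint path, host, and payload fields are hypothetical — they are not Conflux’s documented API and are shown only to convey the integration pattern.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ConfluxClientSketch {
    // Hypothetical base URL; a real deployment would expose its own REST root.
    static final String BASE = "http://conflux.example.com/api/v1";

    // Build a request that would submit a named workflow to a chosen engine.
    static HttpRequest submitWorkflow(String workflowId, String engine) {
        String payload = String.format(
            "{\"workflowId\":\"%s\",\"engine\":\"%s\"}", workflowId, engine);
        return HttpRequest.newBuilder()
            .uri(URI.create(BASE + "/jobs"))          // hypothetical endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = submitWorkflow("daily-etl", "spark");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

In practice, such a request would be sent with `java.net.http.HttpClient`; building the request separately keeps the example self-contained and side-effect free.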

Features of Apervi Conflux
Designer
  • An elegant, modern, and extremely intuitive HTML5-based development studio
  • A workspace with complete availability of source connectors, filters and transformations, processing environments, and data stores
  • Build and configure complex batch and streaming data workflows with simple drag-n-drop operations
  • Redesign, extend or update data workflows, replacing or adding connectors, transformations, processing technologies, or data stores within minutes
  • Eliminate design complexity by combining workflows with out-of-the-box commands
  • Collaborate and share your workflows across the enterprise
Scheduler
  • A flexible scheduler, leveraging the Designer interface, to create jobs that can execute big data workflows on technologies like Hadoop, Storm, Spark or Tez
  • A workspace with complete availability of items to build and configure jobs with drag-n-drop operations
  • More extensive and powerful than Apache Oozie
  • Run jobs on a time basis: immediately, on a recurring schedule, or as a one-time execution
  • Extensible, with easy integration into third-party schedulers
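The three trigger types named above (immediate, recurring, one-time) can be modeled as a small state machine that computes the next fire time from the last one. The class below is an illustrative model of those semantics, not Conflux’s actual scheduler API — the names and structure are assumptions.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative model of the three trigger kinds; not Conflux's actual API.
class Trigger {
    enum Kind { IMMEDIATE, ONE_TIME, RECURRING }

    final Kind kind;
    final Instant startAt;   // ignored for IMMEDIATE
    final Duration period;   // used only for RECURRING

    Trigger(Kind kind, Instant startAt, Duration period) {
        this.kind = kind;
        this.startAt = startAt;
        this.period = period;
    }

    // Next fire time given the previous one; null means the trigger is exhausted.
    Instant nextFire(Instant lastFire) {
        switch (kind) {
            case IMMEDIATE: return lastFire == null ? Instant.now() : null; // fire once, now
            case ONE_TIME:  return lastFire == null ? startAt : null;       // fire once, later
            case RECURRING: return lastFire == null ? startAt : lastFire.plus(period);
        }
        return null;
    }
}

public class TriggerSketch {
    public static void main(String[] args) {
        Trigger nightly = new Trigger(Trigger.Kind.RECURRING,
                Instant.parse("2024-01-01T00:00:00Z"), Duration.ofDays(1));
        System.out.println(nightly.nextFire(nightly.nextFire(null)));
    }
}
```

A third-party scheduler integrating with the platform would evaluate a trigger like this and invoke the job’s execution endpoint at each computed fire time.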
Monitor
  • An integrated module to monitor jobs from virtually any browser
  • Track how many records a job has processed at any point in time
  • Monitor job progress, health, performance, and key workflow statistics in real time
  • Drill down into job results and log files to identify and troubleshoot issues within seconds
  • Access technology-specific operational statistics from this module. For instance, users can access Storm UI from this module.
Dashboard
  • Gain visibility into statistics, metrics and KPIs in a fully customizable screen
  • View alerts and notifications on running jobs and workflows
  • Configure temporal charts and graphs to view job statistics
  • Build custom dashboards with pre-built widgets relevant to user roles
  • Use the extensible framework to integrate custom dashboard widgets into the platform
Connectors
  • Ready-to-use connectors to unlock and extract data from a variety of data sources, including RDBMS, EDW, NoSQL databases, queuing systems, CRM, and similar products
  • Support for most major industry standard data exchange formats
  • Prebuilt custom connectors for Twitter, EDI formats, and other well-known technologies
  • Build and deploy custom connectors through the platform using simple Java-based API extensions
  • Connect to and extract insights from real-time data as easily as from stored batch data
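Since this overview does not show the Java extension API itself, the sketch below is a hypothetical picture of what a custom source connector might look like — the interface name, methods, and lifecycle are illustrative assumptions, not the real Conflux SPI. A toy in-memory implementation demonstrates the open/read/close lifecycle a real connector would follow.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical connector SPI; the real Conflux Java API may differ.
interface SourceConnector {
    void open(Map<String, String> config);     // establish the connection
    Iterator<Map<String, Object>> read();      // stream records as key/value maps
    void close();                              // release resources
}

// A toy in-memory connector illustrating the lifecycle.
class StaticListConnector implements SourceConnector {
    private List<Map<String, Object>> records;

    @Override public void open(Map<String, String> config) {
        records = List.of(
            Map.<String, Object>of("id", 1, "value", "alpha"),
            Map.<String, Object>of("id", 2, "value", "beta"));
    }
    @Override public Iterator<Map<String, Object>> read() { return records.iterator(); }
    @Override public void close() { records = null; }
}

public class ConnectorSketch {
    public static void main(String[] args) {
        SourceConnector c = new StaticListConnector();
        c.open(Map.of());
        c.read().forEachRemaining(r -> System.out.println(r.get("value")));
        c.close();
    }
}
```

A real connector would replace the static list with a JDBC, queue, or API client, and would be packaged and registered through the platform’s deployment mechanism.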