Next Pathway launches an automated cloud migration tool

News Analysis
Dec 02, 2020 | 3 mins
Cloud Computing

Crawler360 will scan your data center to see what can be moved to the cloud.


Next Pathway has announced the next generation of its cloud-migration-planning technology, called Crawler360, which helps enterprises shift legacy data warehouses and data lakes to the cloud by telling them exactly how to cost, size, and start the journey.

Data warehouses, and especially data lakes, can get out of control: poorly managed, siloed data and a mix of structured and unstructured formats turn the warehouse or lake into a swamp.

Crawler360 addresses this problem by scanning data pipelines, database applications, and business-intelligence tools to automatically capture the end-to-end data lineage of the legacy environment. In doing so, Crawler360 maps relationships across siloed applications to understand their interdependencies, identifies redundant data sets that have swelled over time and can be consolidated, and pinpoints “hot and cold spots” to determine which workloads to prioritize for migration.
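
Next Pathway hasn't published how Crawler360 scores usage internally, but the idea of "hot and cold spots" can be illustrated with a small sketch: classify tables by how recently and how often they are read, based on query-log statistics. The table names, dates, counts, and thresholds below are invented for the example and are not Crawler360 output.

    from datetime import datetime

    # Hypothetical per-table access statistics, e.g. derived from warehouse query logs.
    access_stats = {
        "sales_fact":     {"last_read": datetime(2020, 11, 28), "reads_90d": 4200},
        "customer_dim":   {"last_read": datetime(2020, 11, 30), "reads_90d": 1800},
        "legacy_staging": {"last_read": datetime(2019, 3, 14),  "reads_90d": 0},
    }

    def classify(stats, now=datetime(2020, 12, 2), hot_reads=1000, cold_age_days=180):
        """Label each table hot (frequently read), warm, or cold (stale), to help
        decide which workloads to migrate first and which to consolidate or archive."""
        labels = {}
        for table, s in stats.items():
            age_days = (now - s["last_read"]).days
            if age_days >= cold_age_days:
                labels[table] = "cold"
            elif s["reads_90d"] >= hot_reads:
                labels[table] = "hot"
            else:
                labels[table] = "warm"
        return labels

    print(classify(access_stats))
    # {'sales_fact': 'hot', 'customer_dim': 'hot', 'legacy_staging': 'cold'}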

“Enterprise customers realize the cloud will help to address many of their key business imperatives and drivers, but migrating legacy systems efficiently is a complex task,” said Chetan Mathur, CEO of Next Pathway, in a statement. “The reason we developed Crawler360 was to simplify migration planning, by enabling customers to define the most efficient, cost effective and expedient migration path to modern platforms like Snowflake and AWS Redshift.”

Next Pathway cited an Accenture report finding that enterprises have, on average, only 20% to 40% of their workloads in the cloud, most of them low in complexity. That 2020 study of senior IT executives cites legacy infrastructure and application sprawl as a key barrier to fully achieving the promise of the cloud.

Crawler360 analyzes three components of the data infrastructure:

  • ETL pipelines, to understand the end-to-end data flow from ingestion to consumption and to capture lineage and orchestration sequencing
  • Data applications and tables from data-lake and data-warehouse applications to understand object counts and workload dependencies across disparate applications
  • BI and analytics consumers to capture downstream consumption lineage and dependencies between consumer and data source
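
As a rough illustration of what combined lineage across those three layers looks like, the sketch below links hypothetical ETL jobs, warehouse tables, and BI reports into a graph and walks it to find everything downstream of a given object. The node names are made up, and this traversal is not Crawler360's algorithm, just the general shape of a lineage dependency check.

    # Invented lineage graph spanning the three layers described above:
    # ETL pipelines -> warehouse/lake tables -> BI consumers.
    lineage = {
        "etl_load_orders":  ["stg_orders"],
        "stg_orders":       ["dw_orders"],
        "dw_orders":        ["bi_sales_dashboard", "bi_finance_report"],
        "etl_load_clients": ["dw_clients"],
        "dw_clients":       ["bi_sales_dashboard"],
    }

    def downstream(node, graph):
        """Everything that would be affected if `node` were moved to the cloud."""
        seen, stack = set(), [node]
        while stack:
            current = stack.pop()
            for child in graph.get(current, []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    print(downstream("etl_load_orders", lineage))
    # downstream objects: stg_orders, dw_orders, bi_sales_dashboard, bi_finance_report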

Crawler360 supports migration to AWS, Azure, Google Cloud, Snowflake, and Yellowbrick. It is available for download from Next Pathway.

For those new to Next Pathway, migration is its sole focus. Its Shift Analyzer is like Crawler360 but with a focus on Teradata migration to the cloud. It creates an inventory of all code objects, defines complexity, and provides automation rates when moving to the translation phase of the migration.

Shift Translator automates the translation of complex workloads (including SQL, stored procedures, ETL, and other code types) for a range of source and target platforms when executing a migration to the cloud.
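
Next Pathway does not publish Shift Translator's rewrite rules, but the general shape of SQL dialect translation can be sketched with a couple of toy substitutions, here nudging Teradata idioms toward Snowflake-style SQL. A production translator parses the SQL rather than pattern-matching it; these rules and the sample query are illustrative only.

    import re

    # Toy dialect rewrites for illustration; these are not Shift Translator's rules.
    RULES = [
        (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),        # Teradata shorthand for SELECT
        (re.compile(r"\bFORMAT\s+'[^']*'", re.IGNORECASE), ""),   # drop Teradata column FORMAT phrases
    ]

    def translate(sql: str) -> str:
        for pattern, replacement in RULES:
            sql = pattern.sub(replacement, sql)
        return " ".join(sql.split())  # collapse whitespace left behind by removals

    print(translate("SEL order_id, order_date FORMAT 'YYYY-MM-DD' FROM dw_orders"))
    # SELECT order_id, order_date FROM dw_orders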

Finally, there is Shift Tester, which automates data validation via hash-level comparison between the legacy application and the cloud environment. It accelerates testing cycles with test-ready code optimized for the cloud and built-in automation for executing user-defined test cases.
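
Shift Tester's implementation isn't public, but hash-level comparison itself is straightforward to sketch: hash each row on both sides, key the hashes by primary key, and report rows that are missing or differ. The sample rows and helper names below are invented for the example.

    import hashlib

    def row_hash(row):
        # Stable per-row digest; both systems must serialize values the same way.
        return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

    def compare(legacy_rows, cloud_rows, key_index=0):
        """Toy stand-in for a hash-level check: return keys missing from the cloud
        copy and keys whose row contents differ between the two environments."""
        legacy = {row[key_index]: row_hash(row) for row in legacy_rows}
        cloud = {row[key_index]: row_hash(row) for row in cloud_rows}
        missing = set(legacy) - set(cloud)
        mismatched = {key for key in legacy.keys() & cloud.keys() if legacy[key] != cloud[key]}
        return missing, mismatched

    legacy = [(1, "alice", 100.0), (2, "bob", 250.0)]
    cloud = [(1, "alice", 100.0), (2, "bob", 999.0)]
    print(compare(legacy, cloud))
    # (set(), {2})  ->  no missing rows; the row with key 2 differs between environments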