Hi there.
Is there a way to replicate data in batches?
We are looking at a lambda-style approach: small incremental replications, followed by a weekly full reload to catch late-arriving data and physical/hard deletes.
The challenge is that the full-reload data volumes are too large for the server hosting our CData Sync instance, even with Java heap adjustments.
I was wondering if we could run the full reload/replication as a looping process that handles the data in smaller batches.
The source is Snowflake; the target is Parquet files in Azure ADLS Gen2.
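
To make the question more concrete, here is a rough Python sketch of the kind of looping process I have in mind (outside of CData Sync, just to illustrate the idea): slice the full reload by a date column and write each slice to ADLS Gen2 as its own Parquet file, so no single run has to hold the whole table in memory. The table name MY_TABLE, the UPDATED_AT column, and the account/container names are all placeholders.

```python
# Sketch only: loop over date slices of the source table and land each slice
# as a separate Parquet file in ADLS Gen2. Names/credentials are placeholders.
from datetime import date, timedelta

import adlfs                      # pip install adlfs
import pyarrow as pa
import pyarrow.parquet as pq
import snowflake.connector        # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
)

fs = adlfs.AzureBlobFileSystem(account_name="mystorageaccount", account_key="...")

start = date(2024, 1, 1)          # start of the reload window
end = date.today()
step = timedelta(days=7)          # one week of data per loop iteration

cur = conn.cursor()
while start < end:
    stop = min(start + step, end)
    cur.execute(
        "SELECT * FROM MY_TABLE WHERE UPDATED_AT >= %s AND UPDATED_AT < %s",
        (start, stop),
    )
    # fetch_pandas_batches keeps each chunk small instead of loading the slice at once
    for i, df in enumerate(cur.fetch_pandas_batches()):
        path = f"mycontainer/full_reload/my_table/{start:%Y%m%d}_{i}.parquet"
        pq.write_table(pa.Table.from_pandas(df), path, filesystem=fs)
    start = stop

cur.close()
conn.close()
```

Is there a way to get this kind of batched/looping behaviour natively in a CData Sync job, rather than scripting it ourselves like the above?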