
Is it possible for us to configure CData Sync to use more of the VM's RAM? For example, when I do an initial replication of 1mil records from Redshift (RA3.16xlarge) to Snowflake (Medium WH), it takes roughly 3 mins. I noticed CData is using only ¼ of the total 32GB RAM available.

Hi @worldwidehow 

I believe you are looking to utilize more RAM and wish to increase the performance of the job. Is that a correct assumption? If yes, you can increase the Batch Size of the job.

To increase the Batch Size of the job, refer to the Advanced Job Options documentation here: https://cdn.cdata.com/help/ASH/sync/Advanced-Job-Options.html.


Let me know if this helps!


Hi @Ankit Singh, yes, I increased it to 1mil, and it is taking 3 mins. The entire table is 229mil records. Are there any other ways to make CData Sync go faster or utilise more of the VM's RAM?
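One thing worth checking, though the thread doesn't confirm it for this install: CData Sync runs as a Java application, and a JVM will not use memory beyond its configured maximum heap regardless of how much RAM the VM has. The `-Xms`/`-Xmx` flags below are standard JVM options; whether Sync picks them up via `JAVA_OPTS` (or via a different service/startup config file) is an assumption to verify against your own deployment before applying.

```shell
# Sketch only, not official CData guidance. The JVM's default max heap is
# often a fraction of physical RAM, which would match "only 1/4 of 32GB used".
# -Xms / -Xmx are standard JVM flags; the JAVA_OPTS pickup is an assumption.

# Example: let the JVM grow to 24 GB of the VM's 32 GB, pre-allocating 8 GB.
JAVA_OPTS="-Xms8g -Xmx24g"
export JAVA_OPTS

# Verify what heap cap the JVM actually resolves to on this machine:
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize
```

If the reported `MaxHeapSize` is around 8 GB on a 32 GB box, that heap cap, not the VM, is likely what's limiting memory use.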


Just to confirm… where is Sync deployed in your scenario? In a VM on AWS, or on your company's on-premises network? The latter would introduce some notable delays in a cloud-to-cloud scenario.


Hi, Sync is deployed in an Azure VM. My data source is AWS Redshift (RA3.16xlarge) and my destination is Snowflake on Azure (Medium WH).

Do you have any notes or links showing how CData Sync and its VM have been set up to replicate very large tables (e.g. 50GB, 100GB) for initial data loading, where there is no attribute available for incremental loading?

