
I’m having an error where Sync times out while my Databricks cluster is starting up. I’ve tried modifying the job timeout and the connection timeout, but I haven’t been able to solve it.


I’m also getting “Failed to check the directory status” 500 errors. What’s going on?

“Failed to check the directory status” 500 errors usually occur when the Destination Schema is not set for your Databricks job. Below are the steps to solve this, followed by an optional sketch for verifying the schema:

  1. Open CData Sync
  2. In the web application, go to the job that is failing
  3. Open its ‘Advanced’ settings
  4. Type ‘sfdc’ in the Destination Schema option (or, if you need a specific schema instead, use that)
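If you want to double-check that the schema you set in step 4 actually exists on the Databricks side, a quick query like the minimal sketch below can help. This assumes you have the databricks-sql-connector Python package installed; the hostname, HTTP path, token, and schema name are placeholders for your own workspace details, and none of this is part of CData Sync itself.

```python
# Minimal sketch (not CData Sync code): check whether the Destination Schema
# exists in your Databricks workspace. All connection values are placeholders.
from databricks import sql

SCHEMA = "sfdc"  # whatever you typed into the Destination Schema option

with sql.connect(
    server_hostname="adb-0000000000000000.0.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/your_warehouse_id",             # placeholder
    access_token="dapi-your-token",                                # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SHOW SCHEMAS")
        schemas = [row[0] for row in cursor.fetchall()]
        print(f"'{SCHEMA}' exists:", SCHEMA in schemas)
```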


A separate issue can occur when your job fails at startup because your Databricks cluster has automatically terminated and is still starting back up for this job.

The connection timeout handles runtime timeouts, but in this case the cluster has not finished starting, so the runtime phase has not begun yet.

 

This is specific to Databricks because cluster startup can take as long as five minutes. Below are the steps to solve it:

  1. Open CData Sync
  2. Go to the ‘Connections’ tab
  3. Click into your Databricks connection
  4. Go to its Advanced settings
  5. Scroll down to the ‘Other (Optional)’ entry
  6. Set the ‘ConnectRetryWaitTime’ option (by default, Sync will attempt the connection 4 times)

For example, if you set this option to 60 seconds, Sync will attempt the connection 4 times, for a total of 240 seconds (4 minutes) before giving up.
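As a quick sanity check on that math, here is a tiny sketch in plain Python; it is not anything built into Sync, and the 4-attempt default comes from the note above.

```python
# Illustrative arithmetic only: the worst-case time Sync will spend retrying,
# given a ConnectRetryWaitTime value and the default of 4 attempts.
def total_retry_window(wait_seconds: int, attempts: int = 4) -> int:
    return wait_seconds * attempts

print(total_retry_window(60))  # 240 seconds, i.e. 4 minutes
```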

If you run into other Databricks issues and are unsure what is going on, most errors in a Databricks job on Sync will require the Driver logs from your Databricks cluster.

Below are steps to do so:
 

  1. Log in to your Microsoft Azure account at portal.azure.com
  2. Navigate to your Databricks workspaces and launch the workspace
  3. Go to Compute > Your Cluster > Driver Logs
  4. Download the top 3 logs

 

These logs will tell you more about your error.
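Once you have the logs downloaded, something like the sketch below can help you skim them for the relevant lines. This is just a generic Python sketch; the folder name and keyword list are assumptions you can adjust, not anything CData- or Databricks-specific.

```python
# Rough sketch: scan downloaded driver logs for likely error lines.
# "driver_logs" and KEYWORDS are placeholders, not fixed names.
from pathlib import Path

KEYWORDS = ("ERROR", "Exception", "Caused by")

for log_file in sorted(Path("driver_logs").glob("*")):
    if not log_file.is_file():
        continue
    with open(log_file, errors="replace") as fh:
        for line_no, line in enumerate(fh, start=1):
            if any(keyword in line for keyword in KEYWORDS):
                print(f"{log_file.name}:{line_no}: {line.rstrip()}")
```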

 

If you’re ever unsure about your logs or need help with an error, make sure to grab Verbose logs and the Databricks logs, then ask here or email us at [email protected]!

