
noebrown (Employee)

Sage Intacct Job Configuration Best Practices

The following is intended for jobs that use Sage Intacct as a source connection.

Prerequisites
Before configuring any Sage Intacct replication job, verify the following for the tables/tasks in your job. You can check each prerequisite in the UI by navigating to the Overview tab of the task and reviewing the Source Information section, which shows the Incremental Check Column (ICC), Key/Index Column, and Capture Deletes fields:
- Incremental replication support: Determine whether the source table supports incremental replication. By default, Sync detects whether a table has a suitable incremental check column. If the Incremental Check Column field is blank, no ICC was detected. If no ICC is detected and the settings are editable, you must determine whether a viable column exists. A good ICC is a datetime or integer column that can be used to identify new or modified records; the Whenmodified column that Sync typically selects is a great example.
- Primary key availability: Confirm whether a primary key (PK) exists for the table. If the Key/Index Column field is blank, there is no primary key.
- Capture Deletes support: Confirm whether Capture Deletes is true or false for the table. If it is false, capturing deletes is not supported, and records deleted in the source will not be deleted in the destination.

Reference(s)
CData Sync - Features | 23.4.8843
CData Sync - Tasks | 23.4.8843

When to Use Drop Table
Enabling drop mode for a task/table deletes the entire table before proceeding with the normal replication process, so a new table is created on each job run. Enable drop table in the following scenarios.

Example Scenarios
- Tables without an ICC: When no Incremental Check Column exists and no substitute column can be found, incremental replication is not possible and enabling drop table is required.
- Tables with an ICC but no primary key: When a table has an Incremental Check Column but lacks a primary key, Sync can detect changed records but cannot perform merge operations. Without a primary key to uniquely identify rows, updates from the source appear as duplicate rows in the destination rather than updating existing records. Enabling drop table is necessary to prevent data duplication.
- Tables that do not Capture Deletes: When Capture Deletes is false for a table, no records are removed from the destination table. Enable drop table on each run to ensure records deleted in the source are removed. Exceptions are the Gldetail and Glentry tables; both have their own logic to ensure that deletes are captured.

Schema Modifications
Enable drop table when adding or removing columns from the table schema.

When to Avoid Drop Table Mode
Do not enable drop table for large tables that support incremental replication. Repeatedly dropping and recreating large tables increases the risk of HTTP 502 errors from Sage Intacct due to the increased request volume.

Configuration for Large, Frequently Updated Tables
For large tables with frequent updates:
- Enable incremental replication: Reduces load on the Sage Intacct API and prevents 502 errors.
- Set a replication interval: The recommended interval is 15 days, though this can vary based on the date range of the data you are replicating (a quick way to inspect a table's Whenmodified range is sketched below).
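Before committing a large table to incremental replication, it can help to confirm that its ICC behaves as expected and to see the date range you will be replicating over in intervals. The following is a minimal, hypothetical SQL sketch; Gldetail and Whenmodified are the example table and ICC discussed above, and you would run it through any client that can query your Sage Intacct connection.

-- Hypothetical sanity check for the incremental check column (ICC).
-- Gldetail and Whenmodified are the example table/column discussed above.
SELECT
  COUNT(*)          AS TotalRows,     -- rough table size
  MIN(Whenmodified) AS OldestChange,  -- start of the data range (helps size the replication interval)
  MAX(Whenmodified) AS NewestChange   -- confirms the ICC advances as records are added or modified
FROM Gldetail;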
Handling API Latency for High-Volume Tables
The following recommendations apply specifically to large tables with frequent updates, such as Gldetail.

MinLastModTimeOffset Property
Sage Intacct treats certain field updates as record deletions rather than modifications. When the majority of changes are processed as deletions, we have seen cases of extreme API latency. Setting the MinLastModTimeOffset property improves replication reliability.

Configuration
Specify the offset in seconds using the advanced job options. For example, setting MinLastModTimeOffset=21600 applies a 6-hour offset (21,600 seconds = 6 hours).

Recommended Schedule
- Execute jobs at least twice daily for tables that are modified often.
- Set a minimum offset (MinLastModTimeOffset) of 3–4 hours.
- Avoid scheduling jobs between 2:00 PM and 3:00 PM EST: Sage Intacct experiences high request volume during this period.

Reference(s)
CData Sync - Advanced Job Options | 25.3.9396
Controlling Min and Max LastModTime Values in Incremental Jobs (CData Sync) | Community

CHECKCACHE Validation Jobs
A CHECKCACHE job validates and repairs destination tables by identifying and correcting missing or extraneous records, providing an additional layer of data accuracy. For example, if you update a record and then run the replication job on its own, the record may not be picked up because of API latency. A CHECKCACHE job compares the source and destination and repairs the destination table to include the newly added record.

Configuration
Create a new job and use one of the following CHECKCACHE queries:

CHECKCACHE DestinationTable AGAINST SourceTable WITH REPAIR;

or

CHECKCACHE DestinationTable AGAINST SourceTable WITH REPAIR START '2024-01-01' END '2025-01-01';

Both queries compare the source and destination tables, then insert any missing records, update outdated records in the destination, and remove any records that are not in the source. The only difference is that the second query narrows the repair to a specific date range; in this example, it only covers the year 2024.

Recommended Implementation
Configure a post-job event to trigger the CHECKCACHE job immediately after the original replication job completes. Running these jobs in tandem ensures that repairs happen right after the data is replicated. Ideal candidates are large tables such as Gldetail and Glentry. Your post-job event should be similar to the following template:

<!-- Start executing a different job -->
<api:set attr="job.JobName" value="Job2"/>
<api:set attr="job.WorkspaceId" value="default"/>
<api:set attr="job.ExecutionType" value="Run"/>
<api:set attr="job.WaitForResults" value="true"/> <!-- Setting to true waits for the job to complete -->
<api:call op="api.rsc/executeJob" httpmethod="post" authtoken="<YourAPIToken>" in="job"/>

Reference(s)
CData Sync - Custom Querying: CHECKCACHE Command | 25.3.9396
CData Sync - Events | 25.3.9396
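For a concrete instance of the commands above, a standalone validation job for the general-ledger detail table might contain a query like the following. This assumes the destination table kept the default name Gldetail; adjust the table names and date window for your environment.

-- Repair the replicated Gldetail table, limited to records dated in 2024.
CHECKCACHE Gldetail AGAINST Gldetail WITH REPAIR START '2024-01-01' END '2025-01-01';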
Incremental Replication Start Date Format
Both datetime and integer time (Unix timestamp) formats are functionally equivalent:
- Integer time: 1639149390000 (Unix timestamp for 2021-12-10 15:16:30 GMT)
- DateTime: 2021-12-10 (begins at 00:00:00 on the specified date)

Start Integer vs. Start Date in Incremental Replication
Use the datetime format for Sage Intacct jobs, because the Whenmodified column uses datetime values. Whenmodified is usually the column detected and used as the ICC in Sage Intacct.

Expected Format
The Whenmodified column uses the datetime format '2010-08-12 08:11:58.000000', which includes date, time, and microsecond precision.

Reference(s)
CData Sync - Configuring Your First Replication Job | 25.3.9396
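If you want to confirm what an integer start value corresponds to, the conversion is simple arithmetic. Below is a minimal T-SQL sketch (assuming a SQL Server destination; any date tool gives the same result) for the example value above.

-- 1639149390000 ms = 1639149390 s after the Unix epoch (1970-01-01 00:00:00 UTC).
SELECT DATEADD(SECOND, 1639149390, CAST('1970-01-01' AS datetime)) AS StartAsDateTime;
-- Returns 2021-12-10 15:16:30, matching the integer-time example above.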

Jaclyn (Employee)

🚨 IF YOU USE API Access Control: Salesforce change may stop DBAmp on Nov 4, 2025 🚨

TL;DR
Salesforce announced on Wednesday, Oct 15, 2025 that it is deprecating the "Use Any API Client" permission within API Access Control. If your Salesforce org has API Access Control enabled, only allow-listed Connected Apps will be able to use the API going forward. This may stop DBAmp connections as early as Nov 4, 2025 unless you take action.

Who is Affected?
Organizations that:
- Use DBAmp to connect to Salesforce, and
- Have API Access Control enabled in Salesforce (this setting restricts API access to an allow-list of Connected Apps).
Not sure whether API Access Control is enabled? Ask your Salesforce admin or internal support to confirm.

Timeline
- Oct 15, 2025: Salesforce announces deprecation of "Use Any API Client."
- Nov 4, 2025 (as early as): Impact may begin. If your org relies on "Use Any API Client," DBAmp connections can fail without an approved Connected App in place.

What to Do Now (Action Items)
1. Confirm your org state: Ask your Salesforce admin whether API Access Control is enabled.
2. Tell us you're impacted: Email [email protected] with:
   - Your org name(s) and whether each is Prod or Sandbox
   - Whether API Access Control is enabled
   - The critical DBAmp workloads at risk (backups, syncs, reporting, etc.)
   - Your preferred maintenance window(s) / timeline
3. Prepare for Connected App allow-listing: We'll guide you through using an approved Connected App and share early-access builds as needed so your DBAmp jobs continue uninterrupted.

The DBAmp team will send brief reminders weekly through Nov 4. If you're impacted, please reply today so we can prioritize you.

The DBAmp Team

IrdiD (Employee)

Controlling Min and Max LastModTime Values in Incremental Jobs (CData Sync)

Contents: 🔄 How Incremental Replication Works (in simple terms) · ⚙️ What These Offsets Do · 📊 Examples · 🗂️ TL;DR Logic Flow

When running incremental replication jobs in CData Sync, Sync saves a timestamp (LastModTime) that determines where the next run should pick up. You can fine-tune how this value is calculated using two advanced options:
- MaxLastModTimeOffset
- MinLastModTimeOffset
These settings affect what gets saved to the internal status table, and therefore what data gets pulled on the next run.

🔄 How Incremental Replication Works (in simple terms)
- Sync uses a column like LastModifiedDate or UpdatedAt to check what changed.
- Each run saves the latest known value from this column as LastModTime.
- On the next run, Sync only fetches records newer than that value.
If your data source returns inconsistent timestamps or has delays (clock skew, API caching, etc.), you can use offsets to shift LastModTime forward or backward slightly to avoid missing records or reprocessing too many.
📘 Learn more: 👉 How Incremental Replication Works in CData Sync

⚙️ What These Offsets Do
MaxLastModTimeOffset
- Default: 300 seconds (5 mins)
- Used to move LastModTime forward
- Avoids pulling too much old data when the source returns outdated timestamps
MinLastModTimeOffset
- Default: 0 seconds
- Used to move LastModTime backward
- Ensures slight overlap in case of clock skew or API delays
📘 Full option details: 👉 Advanced Job Options – Min/MaxLastModTimeOffset

📊 Examples

Max (LastModTime)   | LastRunTime         | Saved LastModTime   | Offset Config
2024-01-01 00:00:00 | 2024-01-01 01:00:00 | 2024-01-01 00:55:00 | Max=300, Min=0
None (Pseudocolumn) | 2024-01-01 01:00:00 | 2024-01-01 00:55:00 | Max=300, Min=0
2024-01-01 00:58:00 | 2024-01-01 01:00:00 | 2024-01-01 00:58:00 | Max=300, Min=0
2024-01-01 00:00:00 | 2024-01-01 01:00:00 | 2024-01-01 00:54:00 | Max=300, Min=360
2024-01-01 00:58:00 | 2024-01-01 01:00:00 | 2024-01-01 00:54:00 | Max=300, Min=360

🧠 Note: If MaxLastModTimeOffset < MinLastModTimeOffset, Sync automatically corrects the Max value to match Min.

🗂️ TL;DR Logic Flow
- MaxLastModTimeOffset: Moves LastModTime forward to avoid old data or empty sets.
- MinLastModTimeOffset: Ensures overlap by pulling records slightly earlier than LastRunTime.
- Sync saves the resulting value into CDATA_SYNC_STATUS to track what records should be pulled next.

📣 Have you tuned these offsets in your jobs? Share how adjusting them helped reduce duplicates, late record misses, or processing errors. We'd love to hear your tips!
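To make the examples table easier to reason about, here is a minimal T-SQL sketch of the save logic as inferred from those rows. It is not the actual Sync implementation, just a model: the saved value is the source's max LastModTime clamped between (LastRunTime - MaxOffset) and (LastRunTime - MinOffset).

-- Inputs taken from the last row of the examples table.
DECLARE @MaxLastMod   datetime = '2024-01-01 00:58:00';  -- max(ICC) returned by the source; NULL if pseudocolumn
DECLARE @LastRunTime  datetime = '2024-01-01 01:00:00';
DECLARE @MaxOffsetSec int = 300, @MinOffsetSec int = 360;

-- If Max < Min, Sync corrects the Max value to match Min.
IF @MaxOffsetSec < @MinOffsetSec SET @MaxOffsetSec = @MinOffsetSec;

DECLARE @Lower datetime = DATEADD(SECOND, -@MaxOffsetSec, @LastRunTime);  -- floor: never save anything older than this
DECLARE @Upper datetime = DATEADD(SECOND, -@MinOffsetSec, @LastRunTime);  -- ceiling: always keep at least this much overlap

SELECT CASE
         WHEN @MaxLastMod IS NULL OR @MaxLastMod < @Lower THEN @Lower
         WHEN @MaxLastMod > @Upper THEN @Upper
         ELSE @MaxLastMod
       END AS SavedLastModTime;  -- 2024-01-01 00:54:00 for these inputs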

Marijan (Employee)

Connecting TIBCO Spotfire to CData Virtuality using JDBC

Connecting TIBCO Spotfire Information Services to a CData Virtuality server requires a custom JDBC data source template. Below we outline the driver details and provide a correctly structured XML template (rooted at <jdbc-type-settings>) for Spotfire's latest version (Spotfire 12+). This template defines the JDBC driver class, connection URL pattern, and authentication properties needed to connect to a CData Virtuality server on localhost with default credentials.

Driver and Connection Details for CData Virtuality
Before creating the template, ensure the JDBC driver JAR for CData Virtuality is installed on the Spotfire Server. Copy the datavirtuality-jdbc.jar into the Spotfire Server's classpath (for example, the tomcat/lib directory). The CData Virtuality JDBC driver class name and URL format are as follows:
- Driver class: `com.datavirtuality.dv.jdbc.Driver`
- JDBC URL format: `jdbc:cdatavirtuality:<vdb-name>@mm[s]://<host>:<port>`
- <vdb-name> is the name of the Virtual Database (VDB) to connect to. The primary VDB is typically named "datavirtuality".
- mm vs. mms: Use @mm for an unencrypted connection or @mms for SSL. The default ports are 31000 for mm and 31001 for mms.
For example, to connect to the primary VDB on a local server over the default port, the JDBC URL would be jdbc:cdatavirtuality:datavirtuality@mm://localhost:31000 with username admin and password admin. We will use these defaults in the template (placeholders can be adjusted for your environment).

Template Structure and Configuration
Spotfire data source templates define a unique <type-name>, the JDBC driver, URL pattern, and various settings/capabilities for the data source. The structure follows the same pattern as the CData MongoDB example, adjusted for the Virtuality driver. Key elements include:
- <type-name>: A label for this data source type (e.g. "DataVirtuality").
- <driver>: The fully qualified JDBC driver class (as given above).
- <connection-url-pattern>: The JDBC URL prefix including the protocol and any static parts. We include the jdbc:cdatavirtuality: prefix (and default VDB and protocol) here, then supply host/port as separate properties.
- <ping-command>: A simple test query Spotfire can use to verify the connection. We use SELECT 1 (which the Virtuality engine executes as a trivial query).
- <connection-properties>: Individual connection parameters (key/value pairs). These properties are combined with the URL pattern to form the complete JDBC connection string at runtime. (The CData driver accepts Server and Port as properties, similarly to other CData connectors.) For the Virtuality driver we provide:
  - Server – the hostname (e.g. localhost).
  - Port – the port (e.g. 31000 for the default non-SSL connection).
  - User – the username for the Virtuality server (e.g. admin).
  - Password – the password (e.g. admin).

JDBC Capabilities and SQL Settings
Additional tags define how Spotfire should interact with this source. We include typical settings similar to other templates, for example:
- Fetch and batch sizes for data retrieval (<fetch-size>10000</fetch-size>, <batch-size>100</batch-size>).
- Supported metadata features (<supports-catalogs>true</supports-catalogs>, <supports-schemas>true</supports-schemas>, etc.). Data Virtuality's virtual DB supports schemas (for underlying data sources and INFORMATION_SCHEMA) but operates within a single catalog (the VDB), so it's safe to leave catalogs enabled (Spotfire will see one catalog corresponding to the VDB). Procedures can be marked unsupported (false) since we typically query tables/views.
- Quoting and naming patterns for identifiers (using $$name$$ placeholders wrapped in quotes, as in other CData templates) to ensure Spotfire generates correct SQL.
- Temp table patterns and creation/drop commands for Spotfire's internal use (same conventions as other JDBC templates).
- <data-source-authentication>false</data-source-authentication> to indicate that the template supplies its own credentials (via the User/Password properties) rather than using Spotfire's stored credentials.
All of these elements are wrapped in the <jdbc-type-settings> root tag. Below is the complete XML template incorporating the above settings.

XML Template for CData Virtuality Data Source
Use the following XML as the data source template in the Spotfire Server Configuration (under Configuration > Data Source Templates > New). This template is designed for the CData Virtuality JDBC driver on a local server with default admin/admin credentials – adjust the host/port or credential placeholders as needed for your environment. The template is compatible with Spotfire Server 12 (and later):

<jdbc-type-settings>
  <type-name>DataVirtuality</type-name>
  <driver>com.datavirtuality.dv.jdbc.Driver</driver>
  <connection-url-pattern>jdbc:cdatavirtuality:datavirtuality@mm://</connection-url-pattern>
  <ping-command>SELECT 1</ping-command>
  <connection-properties>
    <connection-property>
      <key>Server</key>
      <value>localhost</value>
    </connection-property>
    <connection-property>
      <key>Port</key>
      <value>31000</value>
    </connection-property>
    <connection-property>
      <key>User</key>
      <value>admin</value>
    </connection-property>
    <connection-property>
      <key>Password</key>
      <value>admin</value>
    </connection-property>
  </connection-properties>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <max-column-name-length>32</max-column-name-length>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <supports-distinct>true</supports-distinct>
  <supports-order-by>true</supports-order-by>
  <column-name-pattern>"$$name$$"</column-name-pattern>
  <table-name-pattern>"$$name$$"</table-name-pattern>
  <schema-name-pattern>"$$name$$"</schema-name-pattern>
  <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
  <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
  <column-alias-pattern>"$$name$$"</column-alias-pattern>
  <string-literal-quote>'</string-literal-quote>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
  <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
  <procedure-table-type-name></procedure-table-type-name>
  <date-format-expression>$$value$$</date-format-expression>
  <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
  <time-format-expression>$$value$$</time-format-expression>
  <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
  <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
  <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
  <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
  <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
  <data-source-authentication>false</data-source-authentication>
  <lob-threshold>-1</lob-threshold>
  <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
  <credentials-timeout>86400</credentials-timeout>
</jdbc-type-settings>

This XML can be saved and uploaded as a new data source template in the Spotfire Server's configuration. After adding the template, restart the Spotfire Server, then create an Information Services data source using this template. You should be able to connect to the CData Virtuality server and build information links against the virtual databases.
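Once the template is in place and a data source has been created, a quick smoke test is to run the template's ping query and then list what the VDB exposes. This is a hypothetical check: the second statement assumes the Data Virtuality system schema SYS and its Tables view are available in your VDB. Run both from an information link test or any SQL client connected through the driver.

-- Mirrors the <ping-command> in the template above.
SELECT 1;
-- Lists the schemas and tables Spotfire should be able to browse
-- (SYS.Tables is a Data Virtuality system view; assumed available in your VDB).
SELECT SchemaName, Name FROM SYS.Tables ORDER BY SchemaName, Name;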

ninken (Apprentice)

CData Sync - 10 Suggested Features

Greetings! As I use more and more of CData Sync in our environment, I've jotted down features I would like to see in future editions of CData Sync.

Jobs
1. Feature: A new tab alongside Overview, Task, Job History... called [Notes].
Use case: Admins have a dedicated section to store notes, instructions, or command-related information about the overall job. This can be particularly useful for maintaining documentation, troubleshooting, or sharing important details with other team members working on the same job.
2. Feature: Under Task, a new button called [Advanced Task Editor/Viewer]. This feature works like the "Add Task" functionality, allowing users to enter multiple task statements separated by semicolons.
Use case: Enabling bulk editing or viewing of all task commands in a single interface provides users with a comprehensive overview of the tasks within a job. This can be beneficial for identifying potential issues, making coordinated changes across multiple tasks, or reviewing the overall logic and flow of the job's tasks.
3. Feature: Under Task, a new button called [Row Count Check]. This feature counts the number of rows in the Source and Destination systems and compares the difference in count (a rough manual equivalent is sketched after this list).
Use case: Allowing users to quickly verify and compare the current row counts between the Source and Destination systems can aid in identifying potential data discrepancies or issues with the data transfer process. This feature can be particularly useful for validating data integrity and ensuring that the job is operating as expected.
4. Feature: In the Last Run status section, allow admins to set a [Cleared] status within the UI [...] ellipsis menu.
Use case: For non-scheduled jobs that may have encountered a task failure, admins can mark the status as "Cleared" to indicate that the issue has been addressed or acknowledged. This can help prevent other admins from unnecessarily investigating the same issue and ensure better communication and coordination within the team.
5. Feature: Add the ability to sort and filter jobs based on their Enabled or Disabled schedule status.
Use case: Users may want to quickly locate and focus on either enabled or disabled jobs, depending on their specific needs or priorities. Providing sorting and filtering options based on the schedule status can streamline the job management process and improve overall workflow efficiency.

Jobs >> Task Edit
6. Feature: A new tab alongside Overview, Columns, Advanced... called [Add/Alter Columns].
Use case: When using operations like "Drop Table" or "Truncate Table," the final table may not be created correctly, potentially leading to issues with column configurations (e.g., collations, computed columns). The [Add/Alter Columns] feature would give users more granular control over the destination table creation process, enabling them to modify column properties or add new columns as needed. This can help prevent problems that may arise when the table needs to be created or loaded, ensuring better data integrity and compatibility.

Connections
7. Feature: A new tab in Settings, Advanced called [Dependencies], which shows what Jobs are dependent on the current Source/Destination connection, with hyperlinks to the respective Jobs for easy navigation.
Use case: By providing visibility into the Jobs that depend on a particular Source or Destination connection, users can better understand the impact of any changes or maintenance activities related to that connection.
The hyperlinks to the dependent Jobs can further facilitate quick access and management of those Jobs.

Settings
8. Feature: A new tab, [Sessions], that lets you set the session logoff timeout and displays a list of currently logged-in users.
Use case: The default session timeout may not always align with user preferences or organizational requirements. Providing the ability to adjust the session logoff timeout can enhance the user experience and ensure that users' sessions remain active for the desired duration. Additionally, displaying a list of currently logged-in users can aid in monitoring and managing active sessions, promoting better security and resource management.

Transformations
9. Feature: Under Queries, a new button called [Advanced Query Editor/Viewer]. This feature works like the "Add Query" functionality, allowing users to enter multiple query statements separated by semicolons.
Use case: Enabling bulk editing or viewing of all query commands in a single interface provides users with a comprehensive overview of the queries within a transformation. This can be beneficial for identifying potential issues, making coordinated changes across multiple queries, or reviewing the overall logic and flow of the transformation's queries. Additionally, users can easily copy and paste the queries into an SQL editor for testing and validation purposes.
10. Feature: Add the ability to sort and filter transformations based on their Enabled or Disabled status.
Use case: Users may want to quickly locate and focus on either enabled or disabled transformations, depending on their specific needs or priorities. Providing sorting and filtering options based on the transformation status can streamline the management process and improve overall workflow efficiency.
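As a rough manual stand-in for the proposed [Row Count Check] button (feature 3), you can already compare counts by hand. The statements below are a hypothetical sketch; Orders is a placeholder table name, and each statement is run against its own system.

-- Run against the source system:
SELECT COUNT(*) AS SourceRowCount FROM Orders;
-- Run against the destination database:
SELECT COUNT(*) AS DestinationRowCount FROM Orders;
-- A built-in [Row Count Check] would surface the difference between these two numbers directly in the UI.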

ninken (Apprentice)

Enhancing Data Sync with Validation and Reconciliation Features

I am reaching out to propose an enhancement to our data sync product, specifically the introduction of validation and reconciliation functionality. Because the product caters to a wide range of data synchronization needs, it is important to offer users the tools necessary for ensuring data integrity and peace of mind during synchronization processes.

Run Validation: Introduce a validation feature that allows users to compare the data between the source and destination systems. Provide a simple comparison of the number of records in both the source and destination, enabling users to quickly identify any discrepancies. This basic validation functionality serves as a valuable troubleshooting tool and gives users immediate insight into the synchronization status.

Advanced Validation Mode: Offer an advanced validation mode that provides detailed insight into differences between the source and destination data. Highlight discrepancies in terms of Inserts, Deletes, and Updates, enabling users to pinpoint specific data changes (a rough sketch of such a comparison appears at the end of this post). Provide users with the option to initiate reconciliation controls for Inserts, Deletes, and Updates directly from the administration screen.

Validation Job Reporting: Enable users to schedule validation tasks as jobs, allowing for automated validation at specified intervals. Automatically generate and send comprehensive validation reports to users, detailing the differences between the source and destination data. These reports serve as valuable documentation and facilitate proactive identification and resolution of data synchronization issues.

Thank you for considering this idea.
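To make the Advanced Validation Mode idea concrete, the discrepancy detection could reduce to set comparisons like the hypothetical T-SQL sketch below. It assumes both tables are reachable from one SQL Server instance and share the same column list; SourceDb, DestDb, and Orders are placeholder names.

-- Candidate Inserts: rows in the source that are missing from the destination.
SELECT * FROM SourceDb.dbo.Orders
EXCEPT
SELECT * FROM DestDb.dbo.Orders;

-- Candidate Deletes: rows still in the destination that no longer exist in the source.
SELECT * FROM DestDb.dbo.Orders
EXCEPT
SELECT * FROM SourceDb.dbo.Orders;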

Jaclyn (Employee)

🚀 Exciting News from CData! 🚀 Data Sync Q1 2024 Release Highlights

We're thrilled to announce the latest enhancements to CData Sync, now with even more powerful features to supercharge your data integration and management:

🌐 Sync Cloud - Our renowned Sync product, previously only available on-premises, has now ascended to the cloud! Managed directly by us and with unlimited volume, you can experience the full power and scalability of cloud hosting.

💾 S3: Enhanced Incremental Replication - We've significantly enhanced our S3 Incremental Replication Load from Folder feature by fully supporting directory paths in S3. This update allows for more precise and flexible data management, enabling you to efficiently manage and synchronize your data across even the most complex folder structures.

🔄 CDC from Dynamics 365 - Keep your data fresh with our new Change Data Capture (CDC) support for Dynamics 365. Capture and sync changes in real time, ensuring your data remains current and accurate.

📚 Full Feature Documentation - For a comprehensive understanding of all new features, visit our Current Release Documentation.

Ready to Try These Features?
🆓 Start with a Free Trial - Experience the new features firsthand and see the difference they make.
🔝 Already a CData user? Upgrade now to enhance your data integration and management capabilities!

Here's to a transformative 2024 with continuous innovation! 🚀
The CData Sync Team