<?xml version="1.0"?>
<rss version="2.0">
    
                    <channel>
        <title>Join the conversation</title>
        <link>https://community.cdata.com</link>
        <description>On the Forum you can ask questions or take part in discussions.</description>
                <item>
            <title>Update a Google Calendar with a Microsoft Access Linked Table</title>
            <link>https://community.cdata.com/how-to-with-cdata-72/update-a-google-calendar-with-a-microsoft-access-linked-table-146</link>
            <description>CData ODBC drivers connect your data to any database management tool that supports Open Database Connectivity (ODBC). This includes many of the most popular productivity tools, adding new capabilities for document sharing and collaboration. Using the CData ODBC driver for Google Calendar, you can update live Google data in Microsoft Access; for example, you can make updates to a calendar that can instantly be seen by other users. Create a Google DSN: If you have not already, you will need to save Google connection properties in a data source name (DSN). Microsoft Access will use the DSN to connect. You can create and configure a DSN in the Microsoft ODBC Data Source Administrator. For a guide to setting the required properties in the ODBC Data Source Administrator, see the &quot;Getting Started&quot; section in the help documentation. Create a Linked Table to Google Calendar Events: Follow the steps below to create a linked table to Google Calendar Events. 1. In Access, click New Data Source &amp;gt; From Other Sources &amp;gt; ODBC Database. 2. Select the option to link to the data source. A linked table will enable you to read and write data to the calendar. 3. Select the CData Google Apps data source from the Machine Data Source tab. 4. Select the table that corresponds to the calendar you wish to view (e.g. user@domain.com). 5. Double-click the linked table to make edits. The linked table will always have up-to-date data, and any changes made to the table will be reflected back to the underlying calendar.</description>
            <category>How-to with CData</category>
            <pubDate>Fri, 03 Apr 2026 22:31:56 +0200</pubDate>
        </item>
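        <!--
A minimal sketch of exercising the same DSN outside of Access, e.g. to verify it before creating
the linked table. The DSN name, calendar table name, and query are assumptions for illustration;
use the values configured in the ODBC Data Source Administrator.

import pyodbc

# Connect through the saved DSN (hypothetical DSN name shown here)
conn = pyodbc.connect("DSN=CData Google Calendar Source")
cur = conn.cursor()

# The driver exposes one table per calendar; the post uses user@domain.com as the example
cur.execute("SELECT * FROM [user@domain.com]")
for row in cur.fetchmany(5):
    print(row)

# Changes written through this connection, like edits in the Access linked table,
# are pushed back to the live calendar.
conn.close()
        -->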
                <item>
            <title>How to Keep Microsoft Access Working After an ERP Cloud Migration</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-keep-microsoft-access-working-after-an-erp-cloud-migration-1817</link>
            <description>Migrating to cloud ERPs like Acumatica, NetSuite, or Business Central often removes direct SQL access, disrupting Microsoft Access-based workflows many finance teams rely on.Since SaaS ERPs use API-only architectures, traditional database connections no longer work. CData ODBC Drivers bridge this gap by enabling seamless connectivity from Access to cloud ERP data without major changes to your existing files or queries.In this article, you’ll learn how to:Navigate the shift from SQL to API-based ERP systems	Maintain Access workflows after cloud migration	Connect Access to cloud ERP data using CData ODBC Drivers	Continue using existing queries with minimal modificationsTo learn more, read:How to Keep Microsoft Access Working After an ERP Cloud Migration</description>
            <category>CData Drivers</category>
            <pubDate>Mon, 30 Mar 2026 12:42:07 +0200</pubDate>
        </item>
                <item>
            <title>New in Connect AI: Custom MCP Tools</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/new-in-connect-ai-custom-mcp-tools-1816</link>
            <description>You can now build custom MCP tools directly in Connect AI for agentic workflows.With Custom Tools, each operation is pre-defined with scoped inputs, validated logic, and structured output. This means your AI takes action on your terms — consistently, across every team that uses it. </description>
            <category>CData Connect AI</category>
            <pubDate>Fri, 27 Mar 2026 20:08:36 +0100</pubDate>
        </item>
                <item>
            <title>Consultant work</title>
            <link>https://community.cdata.com/developers-52/consultant-work-1812</link>
            <description>I am looking for a person/company to help me setup the ODBC for Rest driver for a specific site.  Any recommendations?</description>
            <category>Developers</category>
            <pubDate>Fri, 27 Mar 2026 15:37:19 +0100</pubDate>
        </item>
                <item>
            <title>Connect Supabase to AI Agents with CData Connect AI</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/connect-supabase-to-ai-agents-with-cdata-connect-ai-1815</link>
            <description>Connect Supabase to CData Connect AI to give AI tools secure, live access to your data—no SQL, no pipelines.What you can do:Query Supabase using natural language (ChatGPT, Claude, Copilot, MCP agents)	Write back to your database without direct SQL access	Query across 350+ connected data sources in a single request	Maintain governed, real-time access to your dataRead the full guide: https://www.cdata.com/kb/articles/cloud-supabase.rst</description>
            <category>CData Connect AI</category>
            <pubDate>Thu, 26 Mar 2026 06:27:27 +0100</pubDate>
        </item>
                <item>
            <title>ServiceNow → SQL Sync: Deletes Not Always Detected on Small, Stable Table</title>
            <link>https://community.cdata.com/cdata-sync-47/servicenow-sql-sync-deletes-not-always-detected-on-small-stable-table-1811</link>
            <description>We are experiencing inconsistent delete replication behavior when syncing a very small and stable ServiceNow table (≈10 rows) into SQL Server using CData Sync v26.Despite following all recommended configurations and upgrading both Sync and the ServiceNow driver, record deletions in ServiceNow are occasionally not detected by CData Sync, resulting in extra rows remaining in the SQL destination.This has caused incorrect data to appear in executive-level dashboards, impacting trust in reporting. Observed BehaviorRecords are deleted in ServiceNow	No incremental sync runs successfully with no errors	The deleted records remain in SQL Server	Dashboard shows more rows than exist in ServiceNow	Manual SQL cleanup is required to restore accuracyExample:Source (ServiceNow): 8 rows	Target (SQL): 10 rows	The extra 2 records no longer exist in ServiceNowExpected BehaviorDeleted records in ServiceNow should be reliably removed in SQL	Sync behavior should be consistent regardless of table sizeWhat We Have Already Done Upgraded to the latest CData Sync v26 build Upgraded to the latest ServiceNow connector version Verified schema stability (no column or structure changes) Tested multiple GetColumnsMetadata settings (OnUse) Reviewed replication logs (no delete-related errors reported) We are not using incremental replication because it is a small table Confirmed issue occurs even on a small table (~10 rows)Questions for the Community / CData TeamAre there known limitations with delete detection in the ServiceNow connector?	Does CData Sync rely on:	sys_updated_on		soft-delete behavior		tombstone records		or another ServiceNow mechanism for deletes?		Are there additional settings required to reliably detect deletions?	Is this a known issue that requires an engineering fix?	What is the recommended and supported way to guarantee delete consistency?</description>
            <category>CData Sync</category>
            <pubDate>Wed, 25 Mar 2026 14:34:34 +0100</pubDate>
        </item>
                <item>
            <title>DBAmp Massive API Calls after CPQ upgrade</title>
            <link>https://community.cdata.com/cdata-dbamp-49/dbamp-massive-api-calls-after-cpq-upgrade-1813</link>
            <description>After upgrading to the latest version of CPQ in Salesforce, the DBAmp software started sending tens of thousands of API calls to the platform. There are no sync jobs running, and there are no daemon sessions in Task Manager. I can’t figure out what is consuming all of these API calls.</description>
            <category>CData DBAmp</category>
            <pubDate>Tue, 24 Mar 2026 02:28:42 +0100</pubDate>
        </item>
                <item>
            <title>Can I use key created using PuTTYgen in the SFTP features in CData Arc?</title>
            <link>https://community.cdata.com/editions-90/can-i-use-key-created-using-puttygen-in-the-sftp-features-in-cdata-arc-1810</link>
            <description>A common form of authentication used in SFTP transfers is Public Key authentication. In this form of authentication, the SFTP client creates a public and private key pair and shares the public key with the SFTP Server admin. When authenticating, the SFTP client sends a small message signed with its private key that is verified by the server. This form of authentication is generally considered stronger than password authentication. One of the most common tools for creating the key pair is a free utility called PuTTYgen, which allows the user to create a new key pair. Solution: Yes, key pairs generated via PuTTYgen can be used both for public key authentication from the client perspective in the SFTP connector and to authenticate inbound clients using the SFTP Server connector in CData Arc. Public Key Authentication in client-side connections (SFTP connector): To use a private key generated in PuTTYgen as your Client Certificate for public key authentication in an SFTP client, simply select the ppk file that was exported from PuTTYgen using the Save Private Key button. NOTE: It is recommended, but not required, that you set a passphrase on your private key when exporting it, for additional security. The same ppk file and passphrase can be selected in the SFTP connector when Public Key authentication is chosen. Public Key Authentication in server-side connections (SFTP Server): To use a public key generated via PuTTYgen to authenticate a connecting client with the SFTP Server connector, simply save the certificate provided to you by the client to a file with a .pub extension. You can either save the certificate exported using the Save Public Key button to a file with a .pub extension, or, if you are given the cut/paste contents of the key, similarly save that into a file with a .pub extension. Either format can be configured as the Client Certificate in the SFTP Server’s user configuration. NOTE: The warning “Selected certificate file only contains key but no certificate info.” is to be expected for SSH public keys. For other public key formats such as X.509 (.cer files), you would see additional details about the certificate subject that are not pertinent to the SSH keys generated via PuTTYgen.</description>
            <category>Editions</category>
            <pubDate>Fri, 20 Mar 2026 15:41:04 +0100</pubDate>
        </item>
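        <!--
A minimal sketch of the same public-key handshake from a scriptable client, useful for verifying a
key pair before configuring the Arc SFTP connector. paramiko reads OpenSSH-format keys rather than
.ppk, so the PuTTYgen key would first be exported via Conversions > Export OpenSSH key. The host,
username, and file paths below are placeholders.

import paramiko

# Load the private key (OpenSSH-format export of the PuTTYgen key); the passphrase is optional
key = paramiko.RSAKey.from_private_key_file("client_key_openssh", password="my passphrase")

# The client proves possession of the private key during authentication,
# and the server checks it against the stored .pub public key.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="arcuser", pkey=key)

sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))
sftp.close()
transport.close()
        -->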
                <item>
            <title>Resolving NetSuite Schema Replication Failures in Microsoft Fabric using CData</title>
            <link>https://community.cdata.com/cdata-drivers-45/resolving-netsuite-schema-replication-failures-in-microsoft-fabric-using-cdata-1809</link>
            <description>Tired of broken NetSuite schema replication in Microsoft Fabric?Learn how the CData JDBC Driver solves connector limitations and ensures accurate, complete data replication.Read the full guide: Resolving NetSuite Schema Replication Failures in Microsoft Fabric using CData</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 20 Mar 2026 05:31:36 +0100</pubDate>
        </item>
                <item>
            <title>Soumitra Dutta Oxford: How do I connect my database to CData?</title>
            <link>https://community.cdata.com/cdata-partners-35/soumitra-dutta-oxford-how-do-i-connect-my-database-to-cdata-1808</link>
            <description>Hi Everyone, I&#039;m Soumitra Dutta – Entrepreneur &amp;amp; Photographer Based in Oxford. I’m trying to connect my database to CData, but I’m not sure of the best approach. Could anyone share their tips, recommended connectors, or best practices for setting up a connection efficiently? I’d really appreciate your advice! RegardsSoumitra Dutta </description>
            <category>CData Partners</category>
            <pubDate>Wed, 18 Mar 2026 08:37:31 +0100</pubDate>
        </item>
                <item>
            <title>Migrate legacy flows to Trading Partner Console</title>
            <link>https://community.cdata.com/cdata-arc-48/migrate-legacy-flows-to-trading-partner-console-1802</link>
            <description>Hello,I just recently migrated my Arc instance to a new server.  I had been running v2023 with the old UI and I’m trying to figure out how I can ‘move’ my EDI connections (which are currently spread out across multiple workspaces) to the new Trading Partner Console.  Any help is appreciated.Thanks!</description>
            <category>CData Arc</category>
            <pubDate>Mon, 09 Mar 2026 06:28:27 +0100</pubDate>
        </item>
                <item>
            <title>Dynamic bearer token refresh for CData Sync API connector with bearer token authentication</title>
            <link>https://community.cdata.com/cdata-sync-47/dynamic-bearer-token-refresh-for-cdata-sync-api-connector-with-bearer-token-authentication-1798</link>
            <description>In the CData Sync API connector with Bearer token authentication, it&#039;s currently not possible to provide and configure an endpoint from which a new token can be acquired. What are the other possible ways to update the token dynamically? This thread discusses a solution in CData Arc, but I am not sure if that will be applicable here in CData Sync. Another thread discusses adding a manual bearer token in a custom header for CData Sync, but how can the token value be updated dynamically? Is it possible to acquire the token in the Pre-Job event and then set it as an environment variable? Are environment variables (set up in the Pre-Job event) accessible in the Connection configuration?</description>
            <category>CData Sync</category>
            <pubDate>Fri, 20 Feb 2026 18:39:46 +0100</pubDate>
        </item>
                <item>
            <title>Access live WooCommerce data in LibreOffice via JDBC</title>
            <link>https://community.cdata.com/data-sources-91/access-live-woocommerce-data-in-libreoffice-via-jdbc-1797</link>
            <description>Connecting LibreOffice to a CData JDBC connector is a relatively smooth process that can be done in just a few steps. In this walkthrough I will be using the WooCommerce JDBC connector. First, launch LibreOffice and, in the top left, navigate from Tools -&amp;gt; Options -&amp;gt; LibreOffice -&amp;gt; Advanced, and ensure LibreOffice has access to the 64-bit Java Runtime Environment installed on your machine. Click “Add” in the top right and navigate to the JRE’s installation directory, which defaults to C:\Program Files\Java\jre&amp;lt;jre version here&amp;gt;. Next, we must let LibreOffice know where the CData JDBC driver is by adding a Class Path on that same Advanced settings page. Click “Add Archive…” and navigate to the driver’s jar file. The driver installer defaults to something like C:\Program Files\CData\CData JDBC Driver for WooCommerce 2025\lib\cdata.jdbc.woocommerce.jar. Click “Ok” on the Class Path window, apply the Advanced Settings changes we made, and then click “Ok” again. The next step is defining the JDBC database, which can be reached by clicking “Base Database” near the bottom left of LibreOffice. Then we specify that we are connecting to an existing JDBC database and click Next. To set up the connection, we must define the JDBC connection string. This can easily be done by using the same cdata.jdbc.woocommerce.jar file we pointed to earlier for an interactive string-building experience. Be sure to copy the string after “jdbc:”, as LibreOffice automatically adds this to the string. Because LibreOffice appends a connection property of its own named “type” to the connection string, as a workaround we can define the JDBC URL in an alternative way to skip the validation of the connection properties. The JDBC URL should look like the following: jdbc:woocommerce:novalidation:URL=&quot;&amp;lt;url_here&amp;gt;&quot;;ConsumerKey=&quot;&amp;lt;ck_here&amp;gt;&quot;;AuthScheme=Basic;ConsumerSecret=&quot;&amp;lt;cs_here&amp;gt;&quot;; Next, we specify which driver to use by setting Class Name to the specific name of the JDBC driver. In this example, it would be: cdata.jdbc.woocommerce.WoocommerceDriver. The “Set up user authentication” page can be skipped, since authentication is generally handled by our driver’s connection string, but we can test to make sure the connection string is correct and LibreOffice can connect. On the “Save and proceed” page, use the default settings and click “Finish”. LibreOffice will open a file explorer window, allowing you to name your new database and specify a directory to save it. The last step is a quick configuration change. LibreOffice defaults to trying to read data in both directions, while CData drivers only support forward reading. To change this, right-click “CData” in the Tables window and go to Database -&amp;gt; Advanced Settings. Finally, check the last box, “Respect the result set type from the database driver”, and click “Ok”. You can now query information from the tables and views exposed by the connection within LibreOffice.</description>
            <category>Data Sources</category>
            <pubDate>Thu, 19 Feb 2026 17:19:33 +0100</pubDate>
        </item>
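        <!--
A minimal sketch of testing the same jar and novalidation JDBC URL outside of LibreOffice, which
can help confirm the connection string before configuring Base. The jar path and the example table
name are assumptions; the class name is the one given in the walkthrough, so verify it against the
driver's documentation.

import jaydebeapi

jar = r"C:\Program Files\CData\CData JDBC Driver for WooCommerce 2025\lib\cdata.jdbc.woocommerce.jar"
url = ('jdbc:woocommerce:novalidation:URL="<url_here>";'
       'ConsumerKey="<ck_here>";AuthScheme=Basic;ConsumerSecret="<cs_here>";')

# Open a connection through the CData driver class named in the walkthrough
conn = jaydebeapi.connect("cdata.jdbc.woocommerce.WoocommerceDriver", url, jars=[jar])
cur = conn.cursor()
cur.execute("SELECT * FROM Products")   # "Products" is an assumed example table
print(cur.fetchmany(5))
conn.close()
        -->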
                <item>
            <title>Establishing Azure AD SSO Authentication in CData API Server</title>
            <link>https://community.cdata.com/data-sources-91/establishing-azure-ad-sso-authentication-in-cdata-api-server-1792</link>
            <description>Single Sign-On (SSO) with Azure Active Directory (Azure AD) improves security, simplifies access management, and enhances the login experience for CData API Server users. By integrating Azure AD using the OpenID Connect (OIDC) standard, users can authenticate using corporate credentials while API Server securely validates identity tokens issued by Azure AD.  Overview  CData API Server supports Single Sign-On (SSO) via the OpenID Connect (OIDC) standard. Identity providers that implement OpenID, such as Azure Active Directory, can be configured as the SSO platform for API Server.  Once SSO is configured: - Users are redirected to Azure AD for authentication - Azure AD issues a signed JWT token - API Server validates the token signature and issuer - Users are authenticated using a Federation ID mapping  Note: Currently, API Server supports only individual users, not groups of users. If an SSO platform provides access for a group of users, each individual user within that group must be added as a user on the API Server Settings page in order to log in. Each user should reference the Federation ID from the identity provider.  Configuration Overview  The configuration process consists of three main sections:  1. Configuring Azure Active Directory 2. Configuring SSO in CData API Server 3. Configuring Users in CData API Server  Section 1: Configuring Azure Active Directory  Step 1: Register an Application  1. Log in to the Azure Portal 2. Navigate to Azure Active Directory 3. Select App registrations and click New registration 4. Enter a name (for example, CData API Server) 5. Choose the appropriate supported account type 6. Under Redirect URI, configure:     https://&amp;lt;your_apiserver_host&amp;gt;:&amp;lt;port&amp;gt;/src/ssoCallback.rst     Example: http://localhost:8080/src/ssoCallback.rst    Note: Please check the localhost port as per your instance.    7. Click Register  Step 2: Copy Application (Client) ID  After registration, copy the Application (Client) ID. This value is used in CData API Server as: - Audience URI - OAuth Client ID   Step 3: Generate a Client Secret  1. Navigate to Certificates &amp;amp; secrets 2. Click New client secret 3. Provide a description and expiration 4. Copy and securely store the client secret value  Important: This value is displayed only once and is required for OAuth configuration.  Step 4: Retrieve OpenID Metadata Document  1. Navigate to Endpoints in the Azure AD application 2. Copy the OpenID Connect metadata document URL 3. Replace &#039;common&#039; with your Tenant ID  This URL will be used as the Import Settings URL in CData API Server.  Step 5: Copy User Object ID  1. Navigate to Owners in the left pane of the window 2. Select the user who will access API Server 3. Click the owner, then copy and save its Object ID.  This value will be used as the Federation ID in API Server. Section 2: Configuring SSO in CData API Server  1. Navigate to Settings → SSO 2. Enable Single Sign On Settings 3. Click Configure  SSO Settings for Azure AD  Audience URIs: - Azure AD Application (Client) ID  Key Claim: - oid  OAuth Client ID: - Azure AD Application (Client) ID  OAuth Client Secret: - Azure AD Client Secret  Import Settings URL: - Azure AD OpenID Metadata Document URL    (After setting the Import Settings URL, click the Import button. The system will automatically create the certificate and use it in the next setting, Issuer Certificate.)  
Issuer Certificate: - Automatically generated when you click the Import button (for example, SSOIssuerCertificate.cer) and used for SSO authentication.  Authorization URL: - https://login.microsoftonline.com/&amp;lt;tenant-id&amp;gt;/oauth2/v2.0/authorize  Default Scopes: - openid profile email offline_access  Token Issuer Identifier: - https://login.microsoftonline.com/&amp;lt;tenant-id&amp;gt;/v2.0  Token Signature Algorithm: - RS256  Token URL: - https://login.microsoftonline.com/&amp;lt;tenant-id&amp;gt;/oauth2/v2.0/token  Logoff URL: - https://login.microsoftonline.com/&amp;lt;tenant-id&amp;gt;/oauth2/v2.0/logout  Callback URL: - https://&amp;lt;your_apiserver_host&amp;gt;:&amp;lt;port&amp;gt;/src/ssoCallback.rst  Save the configuration to complete the Azure AD SSO setup.  Section 3: Configuring Users in CData API Server  1. Open CData API Server 2. Navigate to Users 3. Click Add or use an Admin user. 4. Enter:    - Username: Azure AD user name    - Password: Any value (not used for SSO authentication)    - Role: As required    - Federation ID: Azure AD Object ID 5. Click Save and refresh the page  Verification  After configuration: - The login page displays the SSO option - Users are redirected to Azure AD - Successful authentication redirects users back to API Server   Free Trial and Support  CData API Server is a lightweight web application that enables you to create and publish data APIs quickly, without the need for custom development. With the application’s intuitive point-and-click interface, you can easily configure access for popular clients such as Microsoft Power BI, Salesforce Lightning Connect, SharePoint External Lists, Microsoft Excel, Microsoft PowerPivot, and more. Available to install on-premises or in the cloud, the easy-to-use interface means that you can build and publish enterprise-ready REST APIs in minutes!  Start a free 30-day trial of CData API Server. If you have questions, the CData Support Team is available to assist. </description>
            <category>Data Sources</category>
            <pubDate>Tue, 03 Feb 2026 15:22:02 +0100</pubDate>
        </item>
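        <!--
A minimal sketch of the checks described above (signature, issuer, audience) applied to an Azure AD
token with PyJWT, which can help troubleshoot a failing SSO login. The tenant ID, client ID, and
token value are placeholders you supply.

import jwt
from jwt import PyJWKClient

tenant_id = "<tenant-id>"
client_id = "<application-client-id>"            # also used as the Audience URI
token = "<token issued by Azure AD>"

# Azure AD publishes its signing keys at the JWKS endpoint referenced by the OpenID metadata document
jwks = PyJWKClient(f"https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys")
signing_key = jwks.get_signing_key_from_jwt(token)

claims = jwt.decode(
    token,
    signing_key.key,
    algorithms=["RS256"],                                           # Token Signature Algorithm
    audience=client_id,                                             # Audience URI
    issuer=f"https://login.microsoftonline.com/{tenant_id}/v2.0",   # Token Issuer Identifier
)
print(claims["oid"])   # the Key Claim that maps to the user's Federation ID
        -->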
                <item>
            <title>Grouping together records in a flat format (CSV) based on a shared key</title>
            <link>https://community.cdata.com/cdata-arc-48/grouping-together-records-in-a-flat-format-csv-based-on-a-shared-key-109</link>
            <description>A common challenge that can be encountered in mapping projects occurs when a data source contains element that can be grouped into multiple transactions, but the format of the data is in a flatten data model. A simple example of this can be seen in many CSV files: OrderNumber			Customer			Date			Item			Qty		12345			James Blasingame			3/17/23			Corned Beef			1		12345			James Blasingame			3/17/23			Colcannon			1		12346			Teddy Blasingame			3/17/23			Peanut Butter			1		12346			Teddy Blasingame			3/17/23			Apples			2		  With the human eye, it is easy to tell that this table represents two separate transactions, each of which contains 2 line items (Orders 12345 and 12346) - but there is nothing in the CSV itself that indicates that this relationship is in place, and this information is understood by the user that is managing the file.  The CSV Connector in CData Arc can convert this into an XML structure for mapping. When doing this, it is best to configure the CSV connector so that the Connector Setting match whether or not the CSV file includes headers, and so that the Record Name is a representation of each row on the CSV. In this example, while the whole CSV represents orders, each row is representative of a line on the order, so a good representation of this data would look like:  If you pass this through the CSV connector, you will see output like the following:&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;&amp;lt;Items&amp;gt;	&amp;lt;OrderLines&amp;gt;		&amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;		&amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;		&amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;		&amp;lt;Item&amp;gt;Corned Beef&amp;lt;/Item&amp;gt;		&amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;	&amp;lt;/OrderLines&amp;gt;	&amp;lt;OrderLines&amp;gt;		&amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;		&amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;		&amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;		&amp;lt;Item&amp;gt;Colcannon&amp;lt;/Item&amp;gt;		&amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;	&amp;lt;/OrderLines&amp;gt;	&amp;lt;OrderLines&amp;gt;		&amp;lt;OrderNumber&amp;gt;12346&amp;lt;/OrderNumber&amp;gt;		&amp;lt;Customer&amp;gt;Teddy Blasingame&amp;lt;/Customer&amp;gt;		&amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;		&amp;lt;Item&amp;gt;Peanut Butter&amp;lt;/Item&amp;gt;		&amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;	&amp;lt;/OrderLines&amp;gt;	&amp;lt;OrderLines&amp;gt;		&amp;lt;OrderNumber&amp;gt;12346&amp;lt;/OrderNumber&amp;gt;		&amp;lt;Customer&amp;gt;Teddy Blasingame&amp;lt;/Customer&amp;gt;		&amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;		&amp;lt;Item&amp;gt;Apples&amp;lt;/Item&amp;gt;		&amp;lt;Qty&amp;gt;2&amp;lt;/Qty&amp;gt;	&amp;lt;/OrderLines&amp;gt;&amp;lt;/Items&amp;gt; How do you map data in this format so that it is grouped into Orders? NOTE: This article will proceed on the assumption that this record is generated from CSV, but there are several applicable situations that will generate XML in this structure, such as output from a stored procedure or conversion from another flat structure. Attached to this article is a utility Script that approaches this problem by looping over each record and storing the values in the key columns in a collection in memory. It begins with two lines that are meant to be overridden based on the data used.  
&amp;lt;!-- Hardcode this to the column to group by --&amp;gt;&amp;lt;arc:set attr=&quot;data.keycolumn&quot; value=&quot;OrderNumber&quot; /&amp;gt;&amp;lt;!-- Hardcode this to the name of an element to place around each group --&amp;gt;&amp;lt;arc:set attr=&quot;data.recordName&quot; value=&quot;Order&quot; /&amp;gt; In this script, the data.keycolumn is going to contain the element in the XML record that will be used as the key to the group (in this case, the OrderNumber column clearly identifies the order, but any column that is guaranteed to be unique across the group can be used).  The data.recordName is an element name that will be created around each grouping when outputting the resulting record into a single file. If using this script to output a single file with multiple groups, this should be set to the name of a group. In this example, the grouping is an Order, so that will be used.   There is a middle section where the document is traversed and each row is added to a collection (this section of the code is recursive and not meant to be readable):&amp;lt;arc:set item=&quot;storage&quot; /&amp;gt;&amp;lt;arc:call op=&quot;xmlDOMSearch?uri=[FilePath]&amp;amp;xpath=/Items/&quot;&amp;gt;  &amp;lt;arc:set attr=&quot;data.rowname&quot; value=&quot;[xname]&quot; /&amp;gt;  &amp;lt;arc:set attr=&quot;data.key&quot; value=&quot;[xpath([data.keycolumn])]&quot; /&amp;gt;  &amp;lt;arc:check attr=&quot;data.key&quot;&amp;gt;    &amp;lt;arc:set attr=&quot;order_number&quot; value=&quot;[data.key]&quot; /&amp;gt;    &amp;lt;arc:set attr=&quot;current_row&quot;&amp;gt;      &amp;lt;[data.rowname]&amp;gt;        &amp;lt;arc:call op=&quot;xmlDOMSearch?xpath=*&quot;&amp;gt;          &amp;lt;[xname]&amp;gt;[xpath(&#039;.&#039;) | xmlencode]&amp;lt;/[xname]&amp;gt;        &amp;lt;/arc:call&amp;gt;         &amp;lt;/[data.rowname]&amp;gt;    &amp;lt;/arc:set&amp;gt;    &amp;lt;arc:set attr=&quot;storage.[order_number]&quot; value=&quot;[storage.[order_number] | def(&#039;&#039;)]\r\n[current_row]&quot; /&amp;gt;  &amp;lt;/arc:check&amp;gt;&amp;lt;/arc:call&amp;gt;  But following this, there are two code blocks that will handle the group output differently. In the first block, the input document is split into individual files based on the unique keys. 
This section is commented out, but if uncommented:&amp;lt;!-- Split into one file per unique key --&amp;gt; &amp;lt;arc:set attr=&quot;output.fileprefix&quot; value=&quot;[Filename | split(&#039;.&#039;, 1)]&quot; /&amp;gt;&amp;lt;arc:enum item=&quot;storage&quot;&amp;gt;  &amp;lt;arc:set attr=&quot;output.filename&quot; value=&quot;[output.fileprefix]_[_attr].xml&quot; /&amp;gt;  &amp;lt;arc:set attr=&quot;output.data&quot;&amp;gt;    &amp;lt;Items&amp;gt;        [_value]    &amp;lt;/Items&amp;gt;  &amp;lt;/arc:set&amp;gt;  &amp;lt;arc:push item=&quot;output&quot; /&amp;gt;&amp;lt;/arc:enum&amp;gt;  Then the Script will split each group into a separate output file, named based on the key, for example, like Order_12345.xml: &amp;lt;Items&amp;gt;  &amp;lt;OrderLines&amp;gt;    &amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;    &amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;    &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;    &amp;lt;Item&amp;gt;Corned Beef&amp;lt;/Item&amp;gt;    &amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;  &amp;lt;/OrderLines&amp;gt;  &amp;lt;OrderLines&amp;gt;    &amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;    &amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;    &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;    &amp;lt;Item&amp;gt;Colcannon&amp;lt;/Item&amp;gt;    &amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;  &amp;lt;/OrderLines&amp;gt;&amp;lt;/Items&amp;gt; The default behavior, however, is at the bottom of the script, and if left in place: &amp;lt;!-- Keep one file with all records grouped into children --&amp;gt;&amp;lt;arc:set attr=&quot;output.fileprefix&quot; value=&quot;[Filename | split(&#039;.&#039;, 1)]&quot; /&amp;gt;&amp;lt;arc:set attr=&quot;output.data&quot;&amp;gt;  &amp;lt;Items&amp;gt;    &amp;lt;arc:enum item=&quot;storage&quot;&amp;gt;      &amp;lt;[data.RecordName]&amp;gt;        [_value]        &amp;lt;/[data.RecordName]&amp;gt;    &amp;lt;/arc:enum&amp;gt;  &amp;lt;/Items&amp;gt;&amp;lt;/arc:set&amp;gt;&amp;lt;arc:set attr=&quot;output.filename&quot; value=&quot;[output.fileprefix].xml&quot; /&amp;gt;&amp;lt;arc:push item=&quot;output&quot; /&amp;gt; This keeps all of the records in the same file, but uses the data.recordName as a grouping element in the output: &amp;lt;Items&amp;gt;  &amp;lt;Order&amp;gt;    &amp;lt;OrderLines&amp;gt;      &amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;      &amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;      &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;      &amp;lt;Item&amp;gt;Corned Beef&amp;lt;/Item&amp;gt;      &amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;    &amp;lt;/OrderLines&amp;gt;    &amp;lt;OrderLines&amp;gt;      &amp;lt;OrderNumber&amp;gt;12345&amp;lt;/OrderNumber&amp;gt;      &amp;lt;Customer&amp;gt;James Blasingame&amp;lt;/Customer&amp;gt;      &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;      &amp;lt;Item&amp;gt;Colcannon&amp;lt;/Item&amp;gt;      &amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;    &amp;lt;/OrderLines&amp;gt;  &amp;lt;/Order&amp;gt;  &amp;lt;Order&amp;gt;    &amp;lt;OrderLines&amp;gt;      &amp;lt;OrderNumber&amp;gt;12346&amp;lt;/OrderNumber&amp;gt;      &amp;lt;Customer&amp;gt;Teddy Blasingame&amp;lt;/Customer&amp;gt;      &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;      &amp;lt;Item&amp;gt;Peanut Butter&amp;lt;/Item&amp;gt;      &amp;lt;Qty&amp;gt;1&amp;lt;/Qty&amp;gt;    &amp;lt;/OrderLines&amp;gt;    &amp;lt;OrderLines&amp;gt;      
&amp;lt;OrderNumber&amp;gt;12346&amp;lt;/OrderNumber&amp;gt;      &amp;lt;Customer&amp;gt;Teddy Blasingame&amp;lt;/Customer&amp;gt;      &amp;lt;Date&amp;gt;3/17/2023&amp;lt;/Date&amp;gt;      &amp;lt;Item&amp;gt;Apples&amp;lt;/Item&amp;gt;      &amp;lt;Qty&amp;gt;2&amp;lt;/Qty&amp;gt;    &amp;lt;/OrderLines&amp;gt;  &amp;lt;/Order&amp;gt;&amp;lt;/Items&amp;gt; In this manner, this utility script can restore the order grouping to the XML produced from the output file (either because each file is an order, or a hierarchy is restored to the structure), and subsequent XML Map tools can process this data by order. </description>
            <category>CData Arc</category>
            <pubDate>Mon, 02 Feb 2026 18:16:55 +0100</pubDate>
        </item>
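        <!--
The grouping logic above, restated as a short standalone sketch in Python rather than ArcScript,
for readers who want to prototype it outside Arc: rows are grouped by the key column and one
element per group wraps its OrderLines, matching the single-file output shown in the post.

import csv, io
from xml.etree import ElementTree as ET

csv_text = "\n".join([
    "OrderNumber,Customer,Date,Item,Qty",
    "12345,James Blasingame,3/17/23,Corned Beef,1",
    "12345,James Blasingame,3/17/23,Colcannon,1",
    "12346,Teddy Blasingame,3/17/23,Peanut Butter,1",
    "12346,Teddy Blasingame,3/17/23,Apples,2",
])

key_column = "OrderNumber"   # plays the role of data.keycolumn
record_name = "Order"        # plays the role of data.recordName

# Collect the flat rows into groups keyed by the shared column
groups = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    groups.setdefault(row[key_column], []).append(row)

# Emit one grouping element per key, each wrapping its OrderLines
items = ET.Element("Items")
for rows in groups.values():
    group = ET.SubElement(items, record_name)
    for row in rows:
        line = ET.SubElement(group, "OrderLines")
        for column, value in row.items():
            ET.SubElement(line, column).text = value

print(ET.tostring(items, encoding="unicode"))
        -->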
                <item>
            <title>CData API Server Changelog and Upgrade</title>
            <link>https://community.cdata.com/developers-52/cdata-api-server-changelog-and-upgrade-1791</link>
            <description>Hello,My department uses the CData API Server for OData connections. We’re currently on software version  23.0.8844.0, and it appears the current release is 25.3.9411. Is there a changelog available to show the differences, so we can assess when to upgrade?Additionally, I am unable to find the installation package for the API Server on the website. https://www.cdata.com/download/ does not appear to list it.Ideally it would be in a location our deployment scripting could pull from with wget.Thanks!-John Horton</description>
            <category>Developers</category>
            <pubDate>Fri, 30 Jan 2026 17:49:34 +0100</pubDate>
        </item>
                <item>
            <title>Know Your LLM (Series): ChatGPT</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/know-your-llm-series-chatgpt-1790</link>
            <description>ChatGPT encompasses multiple OpenAI model families including GPT-5, GPT-4.1, and o-series reasoning models designed to support natural language queries and tool-based data access. Integrated with CData Connect AI, ChatGPT enables secure, real-time enterprise data connectivity while maintaining compliance and governance standards.In this article, you&#039;ll learn how to:Understand ChatGPT&#039;s model families and enterprise capabilities	Leverage native tool calling for seamless data access workflows	Integrate ChatGPT with CData Connect AI for secure connectivity	Navigate authentication, rate limits, and compliance requirements	Implement multi-step analytics workflows with automated SQL generationTo learn more, read:Know Your LLM (Series): ChatGPT</description>
            <category>CData Connect AI</category>
            <pubDate>Tue, 27 Jan 2026 11:13:45 +0100</pubDate>
        </item>
                <item>
            <title>❄️ Greetings from CData! ❄️ Sync Q1 2026 Release Highlights </title>
            <link>https://community.cdata.com/cdata-sync-47/greetings-from-cdata-sync-q1-2026-release-highlights-1789</link>
            <description>A new year brings new capabilities! We’re excited to kick off 2026 with one of our largest CData Sync releases yet; this Q1 rollout is packed with enhancements designed to give you more flexibility, control, and power across your data pipelines. Here’s what’s new in Sync Q1 2026: Apache Iceberg Destination Support CData Sync now supports the Iceberg file format and catalog, enabling modern lakehouse architectures with open, reliable table formats. Iceberg destinations are now available for: Amazon S3, Azure Blob Storage, Azure Data Lake Storage, and Google Cloud Storage. Visual Pipelines in the UI We’re excited to preview Pipelines, an upcoming point-and-click UI experience for managing job and event dependencies in Sync. Pipelines will make it easier than ever to orchestrate and visualize Sync workflows. CDC Catalog Enhancements: SAP HANA We’ve expanded our CDC catalog with Enhanced CDC (ECDC) support for SAP HANA, making it easier to capture and replicate changes from one of the most widely used enterprise systems. 🧩 Transformation Library Enhancements The Transformation Library now includes hashbytes support for primary keys, allowing for improved data consistency and transformation workflows, all through a simple point-and-click experience. Sync API 2.0 Sync now has a redesigned API built for consistency, extensibility, and automation. As Sync environments grow, teams increasingly want programmatic ways to manage configuration, trigger execution, and integrate Sync into broader platform workflows. ␡ Deletes in Reverse ETL Workflows Sync now supports delete operations in reverse ETL workflows, allowing teams to remove records from downstream systems when source data is deleted or no longer meets defined criteria. Coming Soon: Multi-Threaded Tables We know large, cumbersome tables are some of our customers’ biggest pain points. They are almost impossible to query, transform, and move. Our multi-threaded tables feature will now break up your largest and most important tables and load them into the destination of your choosing. Coming Soon: Python for Sync Events Sync events will soon be powered by Python, giving you greater flexibility to customize logic, automate workflows, and integrate Sync more deeply into your data operations. Full Feature Documentation For a complete breakdown of everything included in the Q1 2026 release, be sure to check out our latest release documentation. Ready to Try These Features? Start with a Free Trial: Experience the new features firsthand and see the difference they make. Already a CData user? Upgrade now to enhance your data integration and management capabilities! Don’t forget, if you download the latest version of CData Sync, you can easily obtain a new license key through our self-service CData Portal. As always, our support team is here to help if you have questions or need guidance. Here’s to a strong start to 2026 and even stronger data pipelines! The CData Sync Team</description>
            <category>CData Sync</category>
            <pubDate>Tue, 20 Jan 2026 20:34:59 +0100</pubDate>
        </item>
                <item>
            <title>My query stops working from 2025/1/6.</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/my-query-stops-works-from-2025-1-6-1787</link>
            <description>[char\] FROM [SQL_KrewTestDB].[krewtestdb].[bug2149] LIMIT 500 My table is as follows; the data source is MySQL. You can see that the column name is [char]: it contains the special characters “�” and “]”. It worked well before, but from 2025/1/26 it stopped working. </description>
            <category>CData Connect AI</category>
            <pubDate>Tue, 20 Jan 2026 04:20:53 +0100</pubDate>
        </item>
                <item>
            <title>Know Your LLMs: Enterprise AI Model Evaluations for CData Connect AI</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/know-your-llms-enterprise-ai-model-evaluations-for-cdata-connect-ai-1788</link>
            <description>Choosing the right LLM for your enterprise data workflows just got easier.We&#039;ve published a new Know Your LLMs series in our Knowledge Base—comprehensive technical evaluations of the leading AI models for integration with CData Connect AI. Each article covers architecture, API specifications, function calling capabilities, security posture, and deployment guidance.The series includes:Mistral AI — EU-hosted, open-weight models with native Le Chat MCP support	ChatGPT (OpenAI) — GPT-4o and the industry&#039;s largest ecosystem	Claude (Anthropic) — Extended context windows and strong reasoning	Gemini (Google) — Deep Google Workspace integration	Grok (xAI) — Real-time data and unique training approachEach evaluation includes verified benchmark data, integration patterns, compliance frameworks, and model selection guidance to help you match the right LLM to your use case.Have questions or want to share your experience with a specific model? Drop a comment below!</description>
            <category>CData Connect AI</category>
            <pubDate>Fri, 16 Jan 2026 10:28:55 +0100</pubDate>
        </item>
                <item>
            <title>How do I cache data with Embedded Cloud?</title>
            <link>https://community.cdata.com/cdata-connect-ai-embed-104/how-do-i-cache-data-with-embedded-cloud-1786</link>
            <description>Do you have some step-by-step instructions for creating &amp;amp; using a data cache with Embedded Cloud’s Job API?  </description>
            <category>CData Connect AI Embed</category>
            <pubDate>Wed, 14 Jan 2026 23:29:02 +0100</pubDate>
        </item>
                <item>
            <title>Embedded Cloud - Guidance for Caching</title>
            <link>https://community.cdata.com/cdata-connect-ai-embed-104/embedded-cloud-guidance-for-caching-1785</link>
            <description>I want to improve the performance of some queries I am running to the Query API. What are some suggestions on when to use the Job API to cache data in CData Embedded Cloud?</description>
            <category>CData Connect AI Embed</category>
            <pubDate>Wed, 14 Jan 2026 21:41:59 +0100</pubDate>
        </item>
                <item>
            <title>Sage Intacct Job Configuration Best Practices</title>
            <link>https://community.cdata.com/ideas/sage-intacct-job-configuration-best-practices-1784</link>
            <description>The following is intended for jobs that use Sage Intacct as a source connection.PrerequisitesBefore configuring any Sage Intacct replication job, verify the following in the tables/tasks of your job. You can verify each of the following prerequisites in the UI by navigating to the Overview tab of the task and reviewing the Source Information section. You will see the Incremental Check Column (ICC), Key/Index Column, and Capture Deletes fields:Incremental replication support: Determine whether the source table supports incremental replication. By default, Sync detects if a table has a good incremental check column. If the field below &#039;Incremental Check Column&#039; is blank, there is no detected ICC. If an ICC is not detected and settings are editable, you must determine if there is a viable column. A good ICC is either a datetime or integer-based column in the table that can be used to identify new or modified records. The &#039;Whenmodified&#039; column that is typically used is a great example of a good ICC.	Primary key availability: Confirm whether a primary key (PK) exists for the table.  If the field below &#039;Key/Index Column&#039; is blank, there is no primary key.	Capture Deletes supported: Confirm that Capture Deletes is either true or false for the table. If false capturing deletes are not supported, no deletes in the source will be deleted in the destination.Reference(s)CData Sync - Features | 23.4.8843	CData Sync - Tasks | 23.4.8843 When to Use Drop Table Enabling drop mode for a task/table deletes the entire table before proceeding with the normal replication process. This requires a new table to be formed for each job run. Enable drop table in the following scenarios:Example ScenariosTables without ICC: When no Incremental Check Column exists and no substitute column can be found, incremental replication is not possible. Enabling drop table is required. 	Tables with ICC but no primary key: When a table has an Incremental Check Column but lacks a primary key, Sync can detect changed records but cannot perform merge operations. Without a primary key to uniquely identify rows, updates from the source appear as duplicate rows in the destination rather than updating existing records. Enabling drop table is necessary to prevent data duplication.	Tables that do not &#039;Capture Deletes&#039;: When capture deletes is false for a table, no records will be removed from the destination table. It is necessary to enable drop table each run to ensure records deleted in the source are removed. Exceptions to this are the Gldetail and Glentry tables; both of these tables have their own logic implemented to ensure that deletes are captured.Schema ModificationsEnable drop table when adding or removing columns from the table schema.When to Avoid Drop Table ModeDo not enable drop table for large tables that support incremental replication. Repeatedly dropping and recreating large tables increases the risk of HTTP 502 errors from Sage Intacct due to increasing request. Configuration for Large, Frequently Updated TablesFor large tables with frequent updates:Enable incremental replication: Reduces load on Sage Intacct API and prevents 502 errors.	Set replication interval: The recommended interval is a 15-day interval. This interval can vary based on the date range of the data you are replicating. 
Handling API Latency for High-Volume TablesThe following recommendations apply specifically to large tables with frequent updates, such as Gldetail.MinLastModTimeOffset PropertySage Intacct treats certain field updates as record deletions rather than modifications. When the majority of changes are processed as deletions, we have noticed cases of extreme API latency. Implementing the MinLastModTimeOffset property improves replication reliability.ConfigurationSpecify the offset in seconds using the advanced job options. For example, you can set the property &#039;MinLastModTimeOffset=21600&#039;. This example sets a 6-hour offset (21,600 seconds = 6 hours).Recommended ScheduleExecute jobs at least twice daily for tables that are being modified often.	Set minimum offset (MinLastModTimeOffset) of 3–4 hours.	Avoid scheduling jobs between 2:00 PM–3:00 PM EST: Sage Intacct experiences high request volume during this period.Reference(s)CData Sync - Advanced Job Options | 25.3.9396	Controlling Min and Max LastModTime Values in Incremental Jobs (CData Sync) | Community CHECKCACHE Validation JobsA CHECKCACHE job validates and repairs destination tables by identifying and correcting missing or extraneous records. This provides an additional data accuracy layer. For example, if you just updated a record and ran the job by itself, the record may not be picked up due to the latency of the API. A CHECKCACHE job will compare the source and destination and repair the destination tab to include the newly added record.ConfigurationCreate a new job and use one of the following CHECKCACHE queries:CHECKCACHE DestinationTable AGAINST SourceTable WITH REPAIR;OrCHECKCACHE DestinationTable AGAINST SourceTable WITH REPAIR START &#039;2024-01-01&#039; END &#039;2025-01-01&#039;Both CHECKCACHE queries look at both the source and destination tables. Then, insert any missing records, update outdated records in the destination, and remove any records that are not in the source. The only difference is that the second query narrows down the repair to a specific date range. In the second query, we are only looking at the year 2024.Recommended ImplementationConfigure a post-job event to trigger the CHECKCACHE job immediately after the original replication job is completed. Running these jobs in tandem ensures that repairs are done right after the data is replicated.  Ideal tables for this are large tables such as Gldetail and Glentry. Your post-job event should be similar to the following template:&amp;lt;!-- Start Executing different Job --&amp;gt;&amp;lt;api:set attr=&quot;job.JobName&quot;        value=&quot;Job2&quot;/&amp;gt;&amp;lt;api:set attr=&quot;job.WorkspaceId&quot;        value=&quot;default&quot;/&amp;gt;&amp;lt;api:set attr=&quot;job.ExecutionType&quot;  value=&quot;Run&quot;/&amp;gt;&amp;lt;api:set attr=&quot;job.WaitForResults&quot; value=&quot;true&quot;/&amp;gt; &amp;lt;!--Setting to true will wait for Job to complete --&amp;gt;&amp;lt;api:call op=&quot;api.rsc/executeJob&quot; httpmethod=&quot;post&quot; authtoken=&quot;&amp;lt;YourAPIToken&amp;gt;&quot; in=&quot;job&quot;/&amp;gt;Reference(s)CData Sync - Custom Querying: CHECKCACHE Command | 25.3.9396	CData Sync - Events | 25.3.9396 Incremental Replication Start Date FormatBoth datetime and integer time (Unix timestamp) formats are functionally equivalent:Integer time: 1639149390000 (Unix timestamp for 2021-12-10 15:16:30 GMT)	DateTime: 2021-12-10 (begins at 00:00:00 on the specified date)Start Integer vs. 
Start Date in Incremental ReplicationUse datetime format for Sage Intacct jobs, as the Whenmodified column uses datetime values. Whenmodified is often the column detected and used as the ICC in Sage Intacct.Expected FormatThe Whenmodified column uses the following datetime format &#039;2010-08-12 08:11:58.000000&#039;. This format includes date, time, and microsecond precision.Reference(s)CData Sync - Configuring Your First Replication Job | 25.3.9396	 </description>
            <category></category>
            <pubDate>Mon, 12 Jan 2026 17:17:30 +0100</pubDate>
        </item>
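        <!--
A quick illustration of the start-value equivalence described above: the integer form is a Unix
timestamp in milliseconds, and the datetime form starts replication at midnight of the given date.

from datetime import datetime, timezone

start_integer = 1639149390000   # Unix timestamp in milliseconds, from the example above
print(datetime.fromtimestamp(start_integer / 1000, tz=timezone.utc))
# prints 2021-12-10 15:16:30+00:00, matching the GMT value quoted in the post

start_date = datetime(2021, 12, 10)   # the DateTime form; replication begins at 00:00:00
print(start_date)
        -->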
                <item>
            <title>Sync incremental update error</title>
            <link>https://community.cdata.com/cdata-sync-47/sync-incremental-update-error-1783</link>
            <description>Hi, I’m trying to use Sync to replicate data from Salesforce into a Postgres database. I have a full load working, and every night it truncates then reloads the tables. I’m having a lot of problems setting up another job that then runs an incremental update periodically. It seems straightforward to set up. My job’s incremental settings are: Start Date: 2026-01-06; Replication Interval: 1 day. When I run the job, I consistently get the same error, regardless of which Salesforce object I try to incrementally update. In this case my source is Building__c, and the destination is the Postgres table salesforce.building. The error I get is: [0] Updating Row is failed. ERROR: Server error [SQL state: 42P01]: relation &quot;salesforce_building__r_1329269084&quot; does not exist (Character 13), AffectedCount: 0. I can’t find any mention of this type of problem in the documentation, and have followed all the steps in the docs to set this up. Has anyone else encountered something like this, or perhaps have an idea on what I might try changing? Thanks!</description>
            <category>CData Sync</category>
            <pubDate>Fri, 09 Jan 2026 21:36:32 +0100</pubDate>
        </item>
                <item>
            <title>Creating Linked Servers in AWS RDS for SQL Server (Without the SSMS UI)</title>
            <link>https://community.cdata.com/cdata-drivers-45/creating-linked-servers-in-aws-rds-for-sql-server-without-the-ssms-ui-1782</link>
            <description>AWS RDS for SQL Server simplifies backups, patching, and high availability, but its managed model restricts certain administrative actions — including creating linked servers through the SSMS UI.If you see errors like “The requested dialog cannot be displayed” or “You must be a member of the sysadmin role”, this is expected. AWS RDS does not provide full sysadmin access, so linked servers must be created using supported system stored procedures.This article shows how to create linked servers in AWS RDS, query and update remote data, and extend linked servers to SaaS and cloud sources using CData. Read the full article:https://www.cdata.com/kb/articles/aws-rds-linked-server.rst</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 09 Jan 2026 12:30:25 +0100</pubDate>
        </item>
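        <!--
A minimal sketch of the stored-procedure route mentioned above, run from Python with pyodbc since
the SSMS dialog is unavailable on RDS. The connection string, provider, and remote names are
placeholders, and the linked article remains the authoritative walkthrough.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.abc123.us-east-1.rds.amazonaws.com;"
    "DATABASE=master;UID=admin;PWD=secret",
    autocommit=True,
)
cur = conn.cursor()

# Create the linked server definition with the supported system stored procedures
cur.execute(
    "EXEC master.dbo.sp_addlinkedserver "
    "@server = N'REMOTE_SRV', @srvproduct = N'', "
    "@provider = N'MSOLEDBSQL', @datasrc = N'remote-host.example.com'"
)
cur.execute(
    "EXEC master.dbo.sp_addlinkedsrvlogin "
    "@rmtsrvname = N'REMOTE_SRV', @useself = N'False', "
    "@rmtuser = N'remoteuser', @rmtpassword = N'remotepass'"
)

# Query the remote data through four-part naming
for row in cur.execute("SELECT TOP 5 * FROM REMOTE_SRV.remote_db.dbo.some_table"):
    print(row)
        -->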
                <item>
            <title>CData Virtuality 25.4: FIPS support for SSL connections, Git Integration improvements, and more</title>
            <link>https://community.cdata.com/cdata-virtuality-archived-94/cdata-virtuality-25-4-fips-support-for-ssl-connections-git-integration-improvements-and-more-1778</link>
            <description>In this release, we’ve implemented Federal Information Processing Standards (FIPS) support for SSL connections, ensuring that all cryptographic operations used by the CData Virtuality Server and JDBC driver can be performed using a FIPS-approved provider and FIPS-compatible keystores (you’ll need to enable it as described in our documentation).On the Server side, we’ve upgraded the embedded PostgreSQL used for the configuration database to v17 and significantly improved the structure of the Git repository (which means that it’s incompatible with the old version). Here’s what changed: we’ve improved the storage of recommended indexes, schedules and jobs, foreign functions, users and their roles and changed some other storage locations. Job owner, executor, and &quot;enabled&quot; flag are now saved in the same file as the job itself, materialized tables are stored together with their recommended optimizations (which are now stored in separate files), materialization jobs are located in the &quot;jobs&quot; folder, and last but not least, we’ve added storage of permissions for LDAP roles and improved the related Git folder structure.We’ve also fixed two bugs affecting our Git Integration implementation: one where strings were not properly escaped on storing to the repository, and another where the &quot;SYSADMIN.purgeObjects&quot; procedure threw an error with &#039;jobs&#039; item listed multiple times if an incorrect filter value is passed.Our LDAP Authentication implementation now supports pagination - we’ve described how to to enable it in our documentation.Two other major improvements concern PostgreSQL and H2. For PostgreSQL, we’ve removed system templates from the &quot;SYSADMIN.getCatalogs&quot; procedure results, and for H2, we’ve updated the JDBC driver to version 2.4.240. Please note that now only databases of version 2.x or later are supported. If your database is currently using version 1.x, please migrate it to a newer version before upgrading.We’ve also added support for UTF-8-BOM encoding both on the Server and in Studio wizards. For the Studio, we’ve added the &quot;Old password&quot; field to the password change wizard. This entailed one important change to the SYSADMIN.changeUserPwd procedure: it nowrequires the oldPwd parameter when users change their own password. Administrative users don’t need to specify oldPwd when changing passwords for other users, but still must provide oldPwd when changing their own password.As for bug fixes, we’ve fixed a bug affecting Clustering where the Infinispan cache expiration lifespan configuration caused duplicate materialized tables and undesirable job queue cleanup. For our REST API, we’ve resolved a bug causing the error message in XML response to be empty or have incorrect format and two bugs with the &quot;array=true&quot; parameter: one where setting it for the source endpoint had no effect and another where it was ignored for the &quot;query&quot; endpoint when &quot;Accept&quot; header is set to &quot;application/json&quot;.We’ve also resolved the issue with arguments of &quot;AES_ENCRYPT&quot; and &quot;AES_DECRYPT&quot; functions not being masked in logs and another issue preventing changes in LDAP and SSO role mappings from being applied to the existing sessions after running the &quot;SYSADMIN.refreshLdapUserCache&quot; procedure.For the Exporter, we’ve fixed the bug preventing ODBC settings from being exported. 
Now all is well here.As for the Connectors, we’ve created a new TikTok Shop connector and identified duplicate data with automation for several connectors: Bing Ads, Google Search Console, Google Sheets, Google Ads API, Adjust, and Impact. In addition, our Walmart connector now supports authentication with RefreshToken. Some of the connectors also received bug fixes: we’ve fixed the bug with the Google Ads API connector where the Performance_Campaign procedure returned Internal error, another bug with the Adjust connector which caused duplicate rows to be created when calling the Reports procedure (as part of the solution, we’ve removed all rejected_* dimensions from this procedure), and yet another with the Collibra connector where relationships were not imported. Here are all issues in this release:Server	DVCORE-9038 (New Feature): Implement FIPS mode support for SSL connections			DVCORE-9218 (Improvement): PostgreSQL: remove system templates from the &quot;SYSADMIN.getCatalogs&quot; procedure results			DVCORE-9095 (Improvement): Git Integration: improve storage of recommended indexes			DVCORE-9091 (Improvement): Git Integration: improve storage of schedules and jobs			DVCORE-9085 (Improvement): Git Integration: improve storage of foreign functions			DVCORE-9083 (Improvement): Git Integration: improve storage of users and their roles			DVCORE-9082 (Improvement): Git Integration: store job owner, executor, and &quot;enabled&quot; flag in the same file as the job			DVCORE-9081 (Improvement): Git Integration: store materialized tables together with corresponding recommended optimizations			DVCORE-9080 (Improvement): Git Integration: store recommended optimizations in separate files			DVCORE-9079 (Improvement): Git Integration: store materialization jobs in the &quot;jobs&quot; folder			DVCORE-8834 (Improvement): Git Integration: add storage of permissions for LDAP roles and improve the related Git folder structure			DVCORE-9062 (Improvement): H2: update the JDBC driver to version 2.4.240			DVCORE-8725 (Improvement): Add support for UTF-8-BOM encoding both on the server and in Studio wizards			DVCORE-7216 (Improvement): LDAP Authentication: add support for pagination			DVCORE-8215 (Improvement): Add UUID field to the procedure arguments of &quot;SYSADMIN.getJobProperty&quot; and &quot;SYSADMIN.setJobProperty&quot;			DVCORE-9251 (Bug Fix): Snowflake: DROP TABLE is slow on a source schema with an	underscore in the schema name			DVCORE-9248 (Bug Fix): Invalid view definition stored in the configuration database blocks the server from starting			DVCORE-9244 (Bug Fix): The fix implemented in DVCORE-8881 causes significant performance issues and should be rolled back			DVCORE-9234 (Bug Fix): Permissions are loaded twice during the &quot;SYSADMIN.refreshTables&quot; procedure call			DVCORE-9191 (Bug Fix): Clustering: Infinispan cache expiration lifespan configuration causes duplicate materialized tables and undesirable job queue cleanup			DVCORE-9187 (Bug Fix): CData Virtuality REST API: setting the “array=true” parameter for the source endpoint has no effect			DVCORE-9174 (Bug Fix): CData Virtuality REST API: the &quot;array=true&quot; parameter is ignored for the &quot;query&quot; endpoint when &quot;Accept&quot; header is set to &quot;application/json&quot;			DVCORE-9172 (Bug Fix): Incorrect results are returned by the LEFT JOIN when multiple data sources are used in the query			DVCORE-9163 (Bug Fix): Procedure with NULL in the projection of a UNION query causes a type conversion error and 
prevents server startup			DVCORE-9154 (Bug Fix): Arguments of &quot;AES_ENCRYPT&quot; and &quot;AES_DECRYPT&quot; functions are not masked in logs			DVCORE-9145 (Bug Fix): &quot;maxRetries&quot; and &quot;responseTimeout&quot; values are not logged to	&quot;SYSLOG.JobEmailNotificationHistory&quot; system table			DVCORE-9111 (Bug Fix): Google BigQuery: base64 marker is absent for password property in SYSADMIN.CliTemplates			DVCORE-9066 (Bug Fix): Changes in LDAP and SSO role mappings are not applied to the	existing sessions after running the &quot;SYSADMIN.refreshLdapUserCache&quot; procedure			DVCORE-9055 (Bug Fix): VIEW with a column named as a reserved word cannot be created			DVCORE-9033 (Bug Fix): SELECT INTO statement works incorrectly inside anonymous	procedure blocks			DVCORE-9022 (Bug Fix): Temporary table creation with the same name incorrectly succeeds in both outer and inner blocks if using SELECT INTO statement			DVCORE-9014 (Bug Fix): SFTP: &quot;saveFile&quot; procedure calls hang in some cases			DVCORE-8981 (Bug Fix): DATE_TRUNC function returns the wrong data type			DVCORE-8913 (Bug Fix): Snowflake: &quot;historyUpdate&quot; procedure does not update pseudo-	infinite date of deleted rows			DVCORE-8798 (Bug Fix): A temporary table name cannot be resolved in a SELECT INTO	statement with a CTE			DVCORE-8745 (Bug Fix): CData Virtuality REST API: error message in XML response is empty or has incorrect format			DVCORE-8701 (Bug Fix): Git Integration: strings are not properly escaped on storing to the	repository			DVCORE-7594 (Bug Fix): Error Handler: Multi-line errors displayed wrongly			DVCORE-8297 (Bug Fix): OData: reading &quot;users&quot; table for &quot;microsoft.graph&quot; namespace fails			DVCORE-8062 (Bug Fix): Git Integration: &quot;SYSADMIN.purgeObjects&quot; procedure throws an error with &#039;jobs&#039; item listed multiple times if an incorrect filter value is passed	Studio	 DVCORE-8990 (Improvement): Add &quot;Old password&quot; field to wizard to change the password	Exporter	DVCORE-8666 (Bug Fix): CData Virtuality Exporter: ODBC settings are not exported	Connectors	SQL-1123 (New Feature): TikTok Shop: create a connector			SQL-1138 (Improvement): Bing Ads: identify duplicate rows with automation			SQL-1129 (Improvement): Google Search Console: identify duplicate data with automation			SQL-1128 (Improvement): Google Sheets: identify duplicate data with automation			SQL-1126 (Improvement): Google Ads API : identify duplicate rows with automation			SQL-1117 (Improvement): Adjust : identify duplicate rows from procedures with automation			SQL-1115 (Improvement): Target Plus: orders proc does not update data properly removing the initial_date parameter			SQL-1112 (Improvement): Impact: identify duplicate data of some procedures with automation			SQL-1109 (Improvement): Walmart: support authentication with RefreshToken			SQL-1119 (Bug Fix): Google Ads API: Performance_Campaign procedure returns Internal	error			SQL-1086 (Bug Fix): Adjust connector: creates duplicate rows when calling the Reports	procedure removing all rejected_* dimensions from Report procedure			SQL-930 (Bug Fix): Collibra: relationships are not imported</description>
            <category>CData Virtuality [Archived]</category>
            <pubDate>Wed, 24 Dec 2025 19:27:20 +0100</pubDate>
        </item>
                <item>
            <title>dbQuery with multiple statements</title>
            <link>https://community.cdata.com/cdata-arc-48/dbquery-with-multiple-statements-1768</link>
            <description>How can I get this SQL Server dbQuery to work? The INSERT is completing fine, but I need to get the inserted identity PK and Arc is not picking it up.&amp;lt;arc:set attr=&quot;db.query&quot;&amp;gt;	insert into table (...) values (...);  	select SCOPE_IDENTITY();&amp;lt;/arc:set&amp;gt;&amp;lt;arc:call op=&quot;dbQuery&quot; in=&quot;db&quot; out=&quot;results&quot;&amp;gt;  ...Is there a better way to get the identity? (I have to use ArcScript; the upsert connector is not an option.)</description>
            <category>CData Arc</category>
            <pubDate>Wed, 24 Dec 2025 13:16:48 +0100</pubDate>
        </item>
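For readers who land on this question: the behavior described above is ordinary SQL Server semantics rather than anything Arc-specific. Below is a minimal sketch outside of Arc, using pyodbc against a hypothetical dbo.Orders table, showing the same multi-statement batch and why adding SET NOCOUNT ON usually makes the SCOPE_IDENTITY() row the first result the client sees; whether dbQuery surfaces later result sets is a separate question for the Arc documentation.

```python
# Minimal pyodbc sketch (outside Arc) showing how SCOPE_IDENTITY() comes back
# from a multi-statement batch. Server, database, table, and column names are
# hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=demo;Trusted_Connection=yes;"
)
cur = conn.cursor()

# SET NOCOUNT ON suppresses the INSERT's "rows affected" message so the
# SELECT SCOPE_IDENTITY() result set is the first thing the client reads.
batch = """
SET NOCOUNT ON;
INSERT INTO dbo.Orders (CustomerName) VALUES (?);
SELECT CAST(SCOPE_IDENTITY() AS bigint) AS NewId;
"""
cur.execute(batch, "Acme GmbH")
new_id = cur.fetchone().NewId   # identity generated by the INSERT above
conn.commit()
print(new_id)
```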
                <item>
            <title>How Derived Views in Connect AI Reduce LLM Token Usage</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/how-derived-views-in-connect-ai-reduce-llm-token-usage-1777</link>
            <description>AI-driven analytics can be slow and costly when LLMs repeatedly rebuild schemas and relationships across multiple data sources. Derived Views with CData Connect AI eliminate that overhead, reducing token usage while making multi-source analytics faster and more reliable at scale.Read the full article here: https://www.cdata.com/kb/articles/connect-ai-derived-views-performance.rst</description>
            <category>CData Connect AI</category>
            <pubDate>Tue, 23 Dec 2025 20:24:19 +0100</pubDate>
        </item>
                <item>
            <title>Sync: Configure SAML based SSO</title>
            <link>https://community.cdata.com/editions-90/sync-configure-saml-based-sso-1776</link>
            <description>Configuring SAML-Based Single Sign-On (SSO) with Microsoft Entra ID for CData Sync With SAML-based Single Sign-On (SSO), users can seamlessly authenticate through their organization’s identity provider (IdP), enabling secure, centralized access and just-in-time user provisioning at the time of login. This guide explains how to configure Microsoft Entra ID (formerly Azure Active Directory) as the SAML 2.0 identity provider for CData Sync, providing a unified, enterprise-grade authentication experience. Configuring Microsoft Entra ID SSO Authentication in CData Sync: Step 1: Create an Enterprise Application in Microsoft Entra ID 	Log in to the Azure Portal: https://portal.azure.com 		Navigate to: Microsoft Entra ID &amp;gt; Enterprise applications &amp;gt; New application 	In the Create your own application window: 		Name: Sync SAML (or a name of your choice) 		Option: Integrate any other application you don’t find in the gallery (non-gallery) 		Click Create 	 Step 2: Configure SAML as the Sign-on Method 	Once the application is created, go to: Enterprise Applications &amp;gt; [Your Application] &amp;gt; Overview 	Click Single sign-on in the left menu. 		Select the SAML method. 	 Step 3: Retrieve SAML Configuration Details from CData Sync 	Log into CData Sync as an administrator. 		Navigate to: Settings &amp;gt; SSO &amp;gt; Configure 		Choose SAML 2.0 as the protocol. 		Copy the following values: 		ACS URL (Assertion Consumer Service URL) 		Audience URI (Entity ID) 	 Step 4: Configure Basic SAML Settings in Entra ID In the Azure portal under Basic SAML Configuration: 	Enter the values gathered from CData Sync: 		Identifier (Entity ID): Paste the Audience URI 		Reply URL (ACS URL): Paste the ACS URL 		Click Save to apply your changes. 	 Step 5: Import Microsoft Entra Metadata into CData Sync 	In CData Sync, go to Settings &amp;gt; SSO 		Paste the App Federation Metadata URL from Azure into the Discovery URL field. 		Click Import to automatically populate the SAML configuration fields. 		Click Save and Test. 		You’ll be redirected to the Microsoft Entra ID login page. 		Sign in using your organizational credentials. 		Upon success, a confirmation message will appear: “Test SSO Successful.” 	 Step 6 (Optional): Add a Federation ID to a CData Sync User 	Copy the Federation ID from the SSO configuration page in CData Sync. 		Navigate to Settings → Users. 		Click Edit on the target user account. 		Paste the Federation ID and click Save. By integrating CData Sync with Microsoft Entra ID via SAML 2.0 SSO, organizations gain a secure, streamlined authentication model that simplifies user management and enhances compliance. </description>
            <category>Editions</category>
            <pubDate>Fri, 19 Dec 2025 22:36:22 +0100</pubDate>
        </item>
                <item>
            <title>Tips on replacing a self-signed private key certificate</title>
            <link>https://community.cdata.com/editions-90/tips-on-replacing-a-self-signed-private-key-certificate-1775</link>
            <description>Private key certificates are used in AS2 and other MFT protocols to allow for secure elements of MFT transfers – notably, data encrypted with the matching public key certificate can be decrypted with the private key certificate, and messages signed with the private key certificate can be verified with the matching public key.   In any protocol that uses private and public key pairs, both parties to the transaction will need to be updated if a certificate is replaced, because trading partners that exchange messages with the owner of the private key will need to obtain and install the matching public key from the partner.   One thing that you will notice with certificates issued this way is that they have a built-in expiration date:   While, in theory, this validity period can be any validity period chosen by the issuer, in practice it is common to issue a certificate in the range of 1 to 5 years, depending on the security policy of the issuer.   When it comes time to replace your private key certificate, you will need to provide your partners with the matching public key certificate when you begin using your new private key certificate, so that operations that you perform with the private key certificate or operations that your partner performs with your public certificate match. This article will provide tips on how to approach this to update your partners in advance and make this process as seamless as possible:  	Create your new certificate in advance 		Share the new public key with your trading partners and announce a cutover date 		Keep your old certificate as a rollover certificate when cutting over 	 Creating a new certificate without configuring it:  In the Settings (cogwheel)-&amp;gt;Certificates section of the application, you can click on the +Add Certificate button to create a new certificate:   You will then see a new dialog box prompting you for the fields for the new certificate to create:    At a minimum, you need the Common Name and password for the new certificate, but the following tips are recommended:   	Most commonly, the Organization is a match for your identity in the AS2 profile. The Common Name is often used for SSL/TLS certificates and matches the domain name you’re hosting on, but in most cases, it is not important that this match your domain. 		It is recommended that you choose a filename that makes it easy to identify the owner of the certificate and its validity period, so it’s easy to tell apart from earlier iterations. Plugging the expiration or issue date into the name is a good approach. 		Note that the default Serial Number is coded to the hexidecimal equivalent of the 2 digit month and 2 digit year of the current time  (the date the above image was captured is November 2025). It’s recommended that you stick with this or come up with your own unique convention for the serial number. If you’re replacing a certificate that was previously issued, it is not recommended that you use the exact same combination of subject and serial number, as some parties may have trouble distinguishing your new certificate from your old one if both identifiers are the same.  		You can choose any validity period that works with your security policy. The only way to compromise a private key that is securely protected is through brute force attack, but the longer that interval you use that certificate, the more likely that is that a dedicated attacker is able to compromise your key. 
Certificates are intended to be replaced regularly as part of the protocol, so use your judgement here.  		Remember the password that you provide. It will be needed later when you configure that certificate.  	 When you’re done, you’ll notice that two certificates were created, one private key (pfx) and one public key (cer):   You can download the .cer file and begin sharing it with your partners.  Share the new public key with your trading partners and announce a cutover date  It is recommended that you choose a date in advance and use that date to begin switching to the new certificate. If you are using this certificate to replace one that is about to expire, you will be notified in advance of that expiration date so you can plan ahead.   You can share the new .cer file with your partner directly as an email attachment, or if you are publishing your profile in CData Arc via the Publish AS2 Profile Settings setting in your AS2 profile:   You can simply update your Public Certificate:   And that will be then available for download when you refresh the Public page:   NOTE: In normal cases, the private key certificate that is used in each connection is the Personal Certificate in the Profiles-&amp;gt;AS2 Profile, but it is possible to override this on a per connection basis in an individual AS2 connection in the Advanced-&amp;gt;Alternate Local Profile for that connector:   If you have any certificate present in this selection, note that if you update your Personal Certificate in the AS2 Profile, the application will continue to use your Alternate Local Profile until you clear it. If you have a partner that you have an Alternate Local Profile certificate configured that you wish to sync up to your new certificate, be sure to clear the Alternate Local Profile -&amp;gt; Private Certificate section when cutting over to the new certificate.  In general, Alternate Local Profile should only be used in multitenant setups (where you maintain multiple local identities with different private keys for each), or in cases where you need to revert a partner to an earlier certificate (we will touch on this again in the troubleshooting section below) because it can be easy to lose track of certificates overridden in this section. If you have an Alternate Local Profile-&amp;gt;Personal Certificate that already matches your AS2 Profile-&amp;gt;Private Certificate, that field can be cleared now, since there is no difference.  At the cutover date, begin using the new certificate, and retain your previous certificate as a Rollover Certificate  At the time of the changeover, begin using the new certificate by configuring it in the AS2 Profile-&amp;gt;Private Certificate:  At the same time, take the certificate that you were using, and reconfigure it as the Rollover Certificate:   How the Rollover Certificate field is used  CData Arc uses private key certificates for two operations in AS2 (as well as other protocols):  	To sign outgoing messages and returned MDN receipts  		To decrypt incoming messages. 	 For signed messages, there is only one certificate that can be used to make an outgoing signature, so now that you switch the Personal Certificate to a new certificate, new signatures will begin using that certificate.  Therefore, it is necessary to provide your certificate in advance for your partner, but signature verification on many systems involves installing the public key amongst many trusted certificates on the machine. 
Typically, if your partner installs your public key on their system, they’ll be ready for it by the time you begin signing with it.  Decryption, on the other hand, can be tried more than once, and the Rollover Certificate acts as a failover decryption certificate. CData Arc will first attempt to decrypt incoming transmissions with the Personal Certificate, but if that fails, the Rollover Certificate will also be tried. This will allow exchanges to keep working even if the partner is using the old certificate.   Help! I’ve updated my certificate and one of my partners says that they cannot verify my signatures! Is there any way to roll this back for them?   In this case, it looks like this partner isn’t ready to accept the new certificate yet, but you can configure an individual AS2 connector to continue using a previous certificate by setting it in the Advanced-&amp;gt;Alternate Local Profile-&amp;gt;Private Certificate field.   This certificate will be used in place of the Profile-&amp;gt;AS2 Profile certificate for this partner, so it will be as if the change never happened.   Because it is easy to lose track of certificates configured in the advanced tab, it is recommended that you check in again in the future to see if the partner can use the new certificate, at which point this field can be cleared.  </description>
            <category>Editions</category>
            <pubDate>Tue, 16 Dec 2025 23:44:19 +0100</pubDate>
        </item>
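As a side note, the certificate fields discussed in the post above (common name, organization, serial number, validity period) are generic X.509 concepts. The sketch below is not how Arc itself generates its .pfx/.cer pair; it uses Python's cryptography package to build a self-signed certificate so you can see where each of those values lands. The names, serial number, password, and validity window are made-up examples.

```python
# Generic illustration of the certificate fields discussed above. Arc produces a
# .pfx/.cer pair through its UI; this sketch just writes PEM files for clarity.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = issuer = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "as2.example.com"),     # hypothetical CN
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example GmbH"),  # match your AS2 identity
])
not_before = datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(0x1125)                    # e.g. hex of month/year, or any unique value
    .not_valid_before(not_before)
    .not_valid_after(not_before + timedelta(days=365 * 2))  # ~2-year validity
    .sign(key, hashes.SHA256())
)

# Public certificate to share with partners, private key to keep (and password-protect).
open("example_2025.cer", "wb").write(cert.public_bytes(serialization.Encoding.PEM))
open("example_2025.key", "wb").write(
    key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.BestAvailableEncryption(b"change-me"),
    )
)
```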
                <item>
            <title>Version control / git integration</title>
            <link>https://community.cdata.com/ideas/version-control-git-integration-313</link>
            <description>As we make changes to workspaces, it is important that we use version control. We also need to control deployment between test and production CData Arc instances. It would be good if a Workspace could have an associated git repo (HTTPS or SSH) and implement at least `git commit` and `git push`. Ideally `git clone` and `git pull` would be useful too.</description>
            <category></category>
            <pubDate>Tue, 16 Dec 2025 14:53:30 +0100</pubDate>
        </item>
                <item>
            <title>Xero Trial balance query into Excel</title>
            <link>https://community.cdata.com/accounting-finance-55/xero-trial-balance-query-into-excel-1774</link>
            <description>Can you pull a trial balance into Excel which will include historic month-end balances? The standard trial balance query only seems to pull in the current trial balance as reflected in Xero.</description>
            <category>Accounting &amp; Finance</category>
            <pubDate>Mon, 15 Dec 2025 16:41:25 +0100</pubDate>
        </item>
                <item>
            <title>CData Product Updates Q4 2025</title>
            <link>https://community.cdata.com/product-updates/cdata-product-updates-q4-2025-1755</link>
            <description>CData Product Updates NewsletterWe’re excited to share the latest from CData.CData Research200+ AI leaders reveal how data readiness drives AI maturityWe’re excited to release our latest research report, The State of AI Data Connectivity Report: 2026 Outlook - the first study to examine how data infrastructure is shaping AI success. The report shows that despite record AI investment, only 6% of enterprises are fully satisfied with their data infrastructure for AI. Featuring insights from experts at Microsoft, AWS, Google, and more, the report offers a practical blueprint for accelerating the business impact of AI.  Download the full report. Managed MCP Platform: CData Connect AITurn AI assistants like ChatGPT and Claude into enterprise experts in minutes, not monthsTransform the AI assistants you already use today from generic responders into domain experts with multi-source analysis out-of-the-box to answer the questions that move your business forward. Learn more and start trying it in minutes from your favorite AI assistant.  Work with documents like data through AINew Google Drive MCP Server capabilities let you treat documents like data. Add document contents directly to the LLM&#039;s context—no download required—then summarize, analyze, create, and update documents without leaving your AI workflow. See it in action.   For Software Providers: CData EmbeddedSix moves to rewire software for the AI ageAI doesn’t require a full rebuild of your software stack. However, it does require rethinking the data architecture beneath it. In this e-book, former CPO, Mark Palmer, outlines six practical architectural pivots that help software teams evolve from BI-optimized systems to AI-optimized platforms. Read the full e-book.  ETL/ELT/Reverse ETL: CData SyncOpen Delta tables for Fabric and Databricks with CData SyncOrganizations are accelerating analytics and AI on lakehouse platforms, but hybrid data still needs to land in Delta format for reliable updates and efficient performance. CData Sync delivers a direct path to open Delta tables so teams can hydrate Fabric and Databricks from on-premises and cloud sources. That means faster ingestion, consistent governance, and a stronger foundation for AI initiatives. Learn more.  Live Connectivity: CData DriversNew drivers for Klaviyo, Lakebase, and Oracle Eloqua ReportingCData Drivers now offer integration with Klaviyo, Lakebase, and Oracle Eloqua Reporting, adding more critical business applications to the suite of solutions CData provides for enabling analytics, reporting, and AI-powered workflows on enterprise data. Learn more.   EDI &amp;amp; MFT: CData ArcCData Arc 2026 release: Bringing EDI and file transfer into the AI eraArc’s upcoming 2026 release brings faster, more accurate AI-driven mapping and a new AI Connector that embeds an LLM directly into your flows for intelligent routing and anomaly detection. It adds full version control, human-readable EDI documents, and tamper-evident logging for stronger governance. The result is faster build cycles, lower maintenance, and secure scaling for modern supply-chain automations. Enterprise Semantic Layer: CData VirtualityStreamlined data discovery and requests with the Business Data ShopFaster, governed access to the right data: The enhanced Business Data Shop in CData Virtuality expands on the semantic layer to give business users a simpler, more efficient way to find and request curated datasets. 
With this release, users can now move from discovery to direct data requests within the same intuitive, self-service interface for a smoother data flow across the organization, ensuring teams get trusted data when they need it. See how it works.</description>
            <category></category>
            <pubDate>Mon, 15 Dec 2025 15:14:34 +0100</pubDate>
        </item>
                <item>
            <title>How to Use an External ID to Perform an UPSERT Operation against Salesforce</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-use-an-external-id-to-perform-an-upsert-operation-against-salesforce-1773</link>
            <description>The CData Salesforce connectivity solutions make it easy to synchronize data with Salesforce by enabling seamless UPSERT operations using External IDs. An External ID allows you to match records from your source system with corresponding Salesforce records ensuring accurate updates, preventing duplicates, and simplifying data integration workflows.In this article, you’ll learn how to:	Create and configure an External ID field in Salesforce			Connect to Salesforce using the CData Salesforce Driver			Perform an UPSERT operation using the External ID for record matching	To learn more, read: How to Use an External ID to Perform an UPSERT Operation against Salesforce</description>
            <category>CData Drivers</category>
            <pubDate>Thu, 11 Dec 2025 05:57:09 +0100</pubDate>
        </item>
                <item>
            <title>Transform Your Accounting with Custom, Next-Gen QuickBooks Sync</title>
            <link>https://community.cdata.com/ideas/transform-your-accounting-with-custom-next-gen-quickbooks-sync-1772</link>
            <description>Next-Gen Custom QuickBooks Integration SolutionsBuild tailor-made QuickBooks integrations for any app, ERP, CRM, or POS. QBIS delivers custom sync solutions for QuickBooks Online &amp;amp; Desktop.Next-Gen Custom QuickBooks Integration Solutions transform accounting from a reactive, manual function into an automated, strategic backbone of the business. By combining real-time synchronization, intelligent transformations, secure architecture, and extensible hooks for custom logic, these solutions not only cut costs and errors but also unlock timely financial insight that drives better decisions. For businesses aiming to scale, streamline operations, or modernize finance, investing in a thoughtfully architected QuickBooks integration is no longer optional — it’s essential. </description>
            <category></category>
            <pubDate>Mon, 08 Dec 2025 14:42:35 +0100</pubDate>
        </item>
                <item>
            <title>How to enable CData SQL Server Connections as Destination Connections for Jobs within CData Sync</title>
            <link>https://community.cdata.com/cdata-sync-47/how-to-enable-cdata-sql-server-connections-as-destination-connections-for-jobs-within-cdata-sync-1770</link>
            <description>When configuring a replication Job after creating a Connection to your SQL Server database, you may initially find that you are unable to utilize this Connection as a Destination.The Connection appears as intended when selecting a Source for the Job:However, the same Connection does not appear when selecting a Destination:What can be done to resolve this?</description>
            <category>CData Sync</category>
            <pubDate>Thu, 04 Dec 2025 21:53:22 +0100</pubDate>
        </item>
                <item>
            <title>Using Claude Projects to Work Effectively with Dynamic Data Models with CData Connect AI</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/using-claude-projects-to-work-effectively-with-dynamic-data-models-with-cdata-connect-ai-1769</link>
            <description>Hey CData Community!Tired of Claude guessing your schema wrong?Give it a brain upgrade with Claude Projects + CData Connect AI.We put together a quick guide on teaching Claude your database layout so it stops hallucinating columns and starts writing clean, accurate queries. It’s simple, it’s powerful, and honestly… it feels like giving Claude glasses. Check it out here: https://www.cdata.com/kb/articles/claude-projects.rst</description>
            <category>CData Connect AI</category>
            <pubDate>Thu, 04 Dec 2025 11:38:06 +0100</pubDate>
        </item>
                <item>
            <title>SAP ERP JDBC: “sapjco3.dll already loaded in another classloader” when used from Jaspersoft Studio</title>
            <link>https://community.cdata.com/data-sources-91/sap-erp-jdbc-sapjco3-dll-already-loaded-in-another-classloader-when-used-from-jaspersoft-studio-1767</link>
            <description>The CData SAP ERP driver uses the sapjco3.jar dependency, which must be referenced in the classpath. This JAR, in turn, depends on sapjco3.dll, whose path must be added to the PATH system environment variable. When using the SAP ERP driver from Jaspersoft Studio, you may encounter an issue where the driver works for the first connection but fails on subsequent attempts. Depending on where the operation is executed, you may see the following error: Error getting the version of the native layer: java.lang.UnsatisfiedLinkError: Native Library C:&amp;lt;path-to-library&amp;gt;\sapjco3.dll already loaded in another classloader This happens because Jaspersoft Studio closes the driver after completing an operation and later tries to reopen it. When the driver is reloaded, it attempts to use sapjco3, which in turn tries to load its sapjco3.dll dependency again. However, since sapjco3.dll is already loaded in the Jaspersoft Studio JVM, it cannot be loaded a second time resulting in the error mentioned above. To mitigate this issue, instead of adding sapjco3.jar to the classpath of the individual data adapter, you can configure it globally within Jaspersoft Studio. This ensures that sapjco3.dll is loaded only once. Steps to Configure 	Place sapjco3.jar in the Jaspersoft Studio directory, as shown below: 	 	 		Edit the Jaspersoft Studio.ini file: 	Add the following switch after the -vmargs tag: 	-Xbootclasspath/a:sapjco3.jar Save the file after making this change.  	Add the path of the directory containing sapjco3.dll to the PATH system environment variable. 	 	Start Jaspersoft Studio and, when creating a Data Adapter, reference only the cdata.jdbc.saperp.jar file. 	 After completing these steps, you can successfully use the Data Adapter when creating reports in Jaspersoft Studio without encountering the “already loaded in another classloader” error.  If the issue persists, please contact support@cdata.com. </description>
            <category>Data Sources</category>
            <pubDate>Tue, 02 Dec 2025 17:36:04 +0100</pubDate>
        </item>
                <item>
            <title>HRESULT: 0xC0047043 Exception when using SSIS components</title>
            <link>https://community.cdata.com/editions-90/hresult-0xc0047043-exception-when-using-ssis-components-1766</link>
            <description>While working with our SSIS components in Visual Studio, you may encounter an Exception from HRESULT: 0xC0047043 error:   This is a known issue in Visual Studio SSIS project. It occurs because Visual Studio UI attempts to overlap the loading process. This usually happens when you try to load a new component view while another component view is already open, since only one component view can exist at a time. This behavior is also described in Microsoft’s documentation: Integration Services error and message reference The solution for this issue is to simply click OK in the error window and wait a few seconds (the exact time depends on the source but usually waiting ~20 seconds should suffice). This will allow the first process to finish before the second one begins. Please reach out to support@cdata.com if this does not resolve the issue you’re encountering. </description>
            <category>Editions</category>
            <pubDate>Tue, 02 Dec 2025 15:58:20 +0100</pubDate>
        </item>
                <item>
            <title>Understanding Dynamics 365 &amp; Dataverse API Limits</title>
            <link>https://community.cdata.com/cdata-drivers-45/understanding-dynamics-365-dataverse-api-limits-1765</link>
            <description>API limits in Microsoft Dynamics 365 and Dataverse directly affect integration performance, throughput, and reliability. To help teams plan and optimize usage, we published a clear breakdown of daily request allocations, service-protection throttling, and how these limits impact the CData Dynamics 365 / Dataverse Driver. Highlights:	Daily request allocations for licensed and application users			Per-request limits: 5,000-record pages and 1,000-operation batch caps			Service Protection Limits (request count, execution time, concurrency)			How the CData Driver translates SQL queries into multiple Web API calls			How to monitor API usage and avoid 429 throttling	 Read the full article here:https://www.cdata.com/kb/articles/dynamics365-dataverse-api-limits.rst</description>
            <category>CData Drivers</category>
            <pubDate>Tue, 02 Dec 2025 15:22:44 +0100</pubDate>
        </item>
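To make the throttling point concrete: a client that hits the Dataverse service-protection limits receives HTTP 429 with a Retry-After header. The hedged sketch below shows a generic retry loop against the Dataverse Web API; the org URL, token, and entity are placeholders, and this is not a description of how the CData driver retries internally.

```python
# Generic sketch of handling Dataverse service-protection throttling (HTTP 429)
# by honoring the Retry-After header. URL and token are placeholders.
import time
import requests

def get_with_retry(url: str, token: str, max_attempts: int = 5) -> requests.Response:
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, headers=headers, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Service-protection limit hit: wait as long as the server asks, then retry.
        wait = int(resp.headers.get("Retry-After", "5"))
        print(f"429 received (attempt {attempt}), sleeping {wait}s")
        time.sleep(wait)
    raise RuntimeError("Still throttled after retries")

# Example: request up to 5,000 accounts (the per-request page-size cap noted above).
page = get_with_retry(
    "https://yourorg.crm.dynamics.com/api/data/v9.2/accounts?$top=5000",
    token="<access-token>",
)
print(len(page.json()["value"]))
```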
                <item>
            <title>Configuring Clustering for CData Sync</title>
            <link>https://community.cdata.com/cdata-sync-47/configuring-clustering-for-cdata-sync-1764</link>
            <description>The CData Sync clustering feature allows multiple Sync installations to operate together as a single, scalable, and highly available environment. By distributing workloads across multiple nodes, clustering ensures continuous operation, automatic failover, and consistent performance even as data movement needs grow.In this article, you’ll learn how to:	Set up Sync nodes to share a unified application directory			Configure a shared application database for consistent metadata and job management			Enable and activate Cluster Mode across all Sync installations			Ensure reliable, distributed job execution with automatic node balancing	To learn more, read: Configuring Clustering for CData Sync</description>
            <category>CData Sync</category>
            <pubDate>Mon, 01 Dec 2025 15:10:20 +0100</pubDate>
        </item>
                <item>
            <title>How to Handle PostgreSQL Infinity Values Using CData Sync</title>
            <link>https://community.cdata.com/cdata-sync-47/how-to-handle-postgresql-infinity-values-using-cdata-sync-1763</link>
            <description>Working with PostgreSQL data that includes “Infinity” values? Learn how to manage and sync these special values effortlessly using CData Sync. Our newest guide walks you through the necessary steps and best practices to keep your data clean and reliable.Full guide: https://www.cdata.com/kb/articles/sync-postgresql-infinity-values.rst</description>
            <category>CData Sync</category>
            <pubDate>Thu, 27 Nov 2025 07:26:56 +0100</pubDate>
        </item>
                <item>
            <title>How to Retrieve More Than 5000 Records in CData SharePoint Drivers</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-retrieve-more-than-5000-records-in-cdata-sharepoint-drivers-1762</link>
            <description>Hey CData Community!Struggling with SharePoint’s 5,000-item limit? Our latest article shows how to easily bypass this threshold using CData SharePoint Drivers. Discover simple steps to access larger datasets smoothly and efficiently.Explore the full guide: www.cdata.com/kb/articles/sharepoint-5000-limit.rst</description>
            <category>CData Drivers</category>
            <pubDate>Thu, 27 Nov 2025 06:59:10 +0100</pubDate>
        </item>
                <item>
            <title>XML Map connector removes escaping special characters</title>
            <link>https://community.cdata.com/cdata-arc-48/xml-map-connector-removes-escaping-special-characters-1759</link>
            <description>In the input there is a value “&amp;lt;N201&amp;gt;Germany GmbH &amp;amp; Co. KG&amp;lt;/N201&amp;gt;”. After it is mapped it is “&amp;lt;Address&amp;gt;Germany GmbH &amp;amp; Co. KG Hahnstraße 70&amp;lt;/Address&amp;gt;”. How can I set it up so that special characters are still escaped after the mapping? The next JSON connector is now failing because of the &amp;amp; character.</description>
            <category>CData Arc</category>
            <pubDate>Mon, 24 Nov 2025 13:43:41 +0100</pubDate>
        </item>
                <item>
            <title>Archiving the Imported File</title>
            <link>https://community.cdata.com/developers-52/archiving-the-imported-file-1761</link>
            <description>Hello, I would like to archive the file which is imported in the first step of a flow after all connectors complete successfully. I attached a Script connector to the Success Path of the last connector, and within the script: 1. Retrieve the name of the file from the message header. 2. Use the name to rename the file and copy it to the archive folder. But I cannot retrieve the name from the message header. Could anyone advise me how to do it? Thank you.</description>
            <category>Developers</category>
            <pubDate>Mon, 24 Nov 2025 09:40:21 +0100</pubDate>
        </item>
                <item>
            <title>SQLServerV2: This connector type has been removed and will no longer function</title>
            <link>https://community.cdata.com/editions-90/sqlserverv2-this-connector-type-has-been-removed-and-will-no-longer-function-1760</link>
            <description>  Why: Different versions of the SQL Server connector were available in different releases of CData Arc, and the V2 release was one that made use of central database connection configurations but referenced the native System.Data.SQLClient driver for communications.  This version of the connector was deprecated during the lifetime of the 2021 release of ArcESB:  https://cdn.cdata.com/help/AZM/mft/2022-01.html  At which time, the connector would still function, but display the following warning:   Support for the SQLServerV2 connector was removed in the 2024 release of CData Arc.  https://cdn.cdata.com/help/AZM/mft/2024-04.html#removed  Is it possible to upgrade the connector?   There is not an upgrade wizard that can convert the SQLServerV2 connector to one that is supported in the current release, as the driver that is used for communications with SQL Server is different (current releases use the System.Data.CData.SQL driver), and in addition to differences in how the drivers handle communication internally, there are different connection strings for the two drivers. With that said, it may be possible to manually upgrade the connector by first converting the version of the connector in the connector configuration and by creating a new connection to the database that uses the new SQL Server connection type.   Possible solution: To attempt this, first locate the path for the SQLServer Connector V2 connector on disk for the deprecated connector and locate the port.cfg for this connector inside of the application directory (check the help documentation for your version of CData Arc to locate the path of your application directory if you are not sure).  Every connector will have a port.cfg file resource located in it and the settings of the connector are in ini format like this:   Changing the value of the ConnectorType from SQLServerV2 to SQLServerV3 will allow you to see the settings of the connector in the administration console, but you will encounter a new error because the named connection referenced in that connector is of a different type:   You can recreate the connection to SQL Server database for the V3 connector version with the +Create button:  Once you create that connection, save your changes and refresh the page - you should see your previous mappings:   This is not guaranteed to work but this process has been shown to succeed for many mappings that were created in previous versions. Note that if you have multiple SQL Server connectors that use the same connection, you will need only recreate the connection once – every other connector will be able to select the newly created Connection name from the dropdown. </description>
            <category>Editions</category>
            <pubDate>Fri, 21 Nov 2025 21:40:04 +0100</pubDate>
        </item>
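If you have many connectors to convert, the manual port.cfg edit described above can be scripted. The sketch below is a cautious illustration only: it assumes port.cfg is plain key=value text (as the post describes), the path is hypothetical, and you should keep the backup it writes and test against a copy of the application directory first.

```python
# Hedged helper sketch: back up a connector's port.cfg and flip ConnectorType
# from SQLServerV2 to SQLServerV3, as the manual procedure above describes.
import re
import shutil
from pathlib import Path

port_cfg = Path(r"C:\arc\data\SQLServerV2_Connector\port.cfg")  # adjust to your app directory
shutil.copy2(port_cfg, port_cfg.with_name(port_cfg.name + ".bak"))  # keep a backup

text = port_cfg.read_text(encoding="utf-8")
new_text, count = re.subn(
    r"(?im)^(ConnectorType\s*=\s*)SQLServerV2\b",
    r"\g<1>SQLServerV3",
    text,
)
if count != 1:
    raise SystemExit(f"Expected one ConnectorType entry, found {count}; file left unchanged")
port_cfg.write_text(new_text, encoding="utf-8")
print("ConnectorType updated; recreate the SQL Server connection in the admin console next.")
```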
                <item>
            <title>Build AI Agents in Minutes with Your Live Data using IBM watsonx Orchestrate + CData Connect AI</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/build-ai-agents-in-minutes-with-your-live-data-using-ibm-watsonx-orchestrate-cdata-connect-ai-1754</link>
            <description>Want to build intelligent agents that can query live enterprise data — no replication, no ETL, just governed, real-time access?CData Connect AI and IBM watsonx Orchestrate make it possible in minutes.What you can do	Connect watsonx Orchestrate directly to 350+ data sources through CData’s MCP server.			Use secure, governed SQL access — your data never leaves its source.			Build agents that query, summarize, and act on real-time data.	How it works	Add your data connection in CData Connect AI.			Link it in IBM watsonx Orchestrate using a secure Key-Value connection.			Import CData MCP toolkits via the ADK CLI and start building workflows.	 	Try the tutorial for your platform	 Salesforce Tutorial			 Snowflake Tutorial			 Jira Tutorial			 SAP Tutorial	Get startedStart building your own AI agents today with a free Connect AI trial here.</description>
            <category>CData Connect AI</category>
            <pubDate>Wed, 12 Nov 2025 13:43:10 +0100</pubDate>
        </item>
                <item>
            <title>Pagination within the REST Connector</title>
            <link>https://community.cdata.com/cdata-arc-48/pagination-within-the-rest-connector-166</link>
            <description>OverviewSometimes APIs return data to the connecting client in a page based format. This can happen when the requested dataset to be returned is too large for the API to return in one response, or it could be the way that the API administrator has built the API to return responses.For example, if an API request is sent to GET the total number of items in an order, and the total number of items is 500, the API might return the results in a series of 10 pages where there are 50 items per page. The client requesting the information would then have to issue 10 separate API calls to get each page. This behavior is often coined “pagination”.The REST Connector and the FlowPagination isn’t something that is natively supported within the REST connector however, it is typically possible for Arc users to develop a solution to accomplish this by use of some custom script within a script connector and the utilization of custom headers to hold the page numbers (or page URLs) and build the URL within the REST connector dynamically, based off of those headers.At a high, flow-based level, a pagination flow in Arc might look something like this: Where each connector is going to have a specific role, in order to accomplish this, specifically the script connectors that will need to contain the logic. A description of what each connector should do is below:Script_FirstPage - This is a script connector that would basically be the connector that starts off at page 1. It would be responsible for adding a header onto the message that is passed to the REST connector that can be used dynamically in the URL to call the API for the first page.REST_Pagination - The REST connector that will issue all of the requests to the API endpoint.Copy_Output - This connector will take a copy of the output JSON data from the API response and send it down the rest of the flow, and another copy will flow into the Script_NextPage connector, described below.Script_NextPage - This connector is where a majority of the logic will need to be written. This connector will need to be responsible for parsing out the page values from the JSON response - you can do this via the jsonDOMGet operation (https://cdn.arcesb.com/help/AZG/mft/op_jsonDOMGet.html). You will want to parse out the page data, whether it is a number or a part of the URL that needs to be used, or the whole URL and add it as a header on the message and then push that message as output.The header here should be named the same as the header that is created by the Script_FirstPage connector and referenced in the URL field of the REST connector. This connector will also need logic written within it so that it knows when the last page of the API is.This connector will then pass that message and header with the page/URL info to the REST connector where the REST connector will reference the header on the message within the URL field and issue a request to the API to get the next page available.The way you go about creating the parsing logic is going to be determined based on what page data comes back in each response from the API - how does it tell you what the next page is? Is it a number that is used as a query string parameter? 
An entirely new URL? For example, if the API response contains a full URL for the next page, you can store it in a custom header named &quot;nextPageUrl&quot; and then just reference that header within the URL field of the REST connector, like this: NOTE: In order to evaluate ArcScript/headers within the URL of the REST connector, &quot;Allow ArcScript in URL&quot; must be enabled within the Advanced tab: The “boilerplate” arcflow that contains these connectors is attached below. Please note, however, that due to the variability of the necessary scripts based on any given API response structure and your specific use case, the Script Connectors within the flow are empty and do not contain any code. You will need to write the scripts in accordance with your requirements. Please feel free to take this flow and adapt it to your use case. </description>
            <category>CData Arc</category>
            <pubDate>Mon, 10 Nov 2025 22:56:51 +0100</pubDate>
        </item>
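Because the Script connectors in the attached flow are intentionally empty, it may help to see the loop they implement spelled out in ordinary code. The sketch below is plain Python, not ArcScript, and the response field names ("items", "nextPageUrl") and URL are assumptions; it only illustrates the fetch-page, parse-next-page, repeat-until-done logic that Script_FirstPage and Script_NextPage have to provide.

```python
# Conceptual sketch of the pagination loop described above, in plain Python.
# Adapt the JSON parsing to whatever your API actually returns.
import requests

def fetch_all_pages(first_url: str, token: str) -> list[dict]:
    items: list[dict] = []
    next_url = first_url                     # Script_FirstPage: start at page 1
    while next_url:                          # stop once the API reports no further page
        resp = requests.get(next_url, headers={"Authorization": f"Bearer {token}"}, timeout=60)
        resp.raise_for_status()
        body = resp.json()
        items.extend(body.get("items", []))  # Copy_Output: pass this page downstream
        next_url = body.get("nextPageUrl")   # Script_NextPage: parse the next-page value
    return items

rows = fetch_all_pages("https://api.example.com/orders?page=1", token="<token>")
print(f"fetched {len(rows)} items across all pages")
```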
                <item>
            <title>Make API Requests with a Custom Shopify Admin API Token Using the CData Shopify JDBC Driver</title>
            <link>https://community.cdata.com/cdata-drivers-45/make-api-requests-with-a-custom-shopify-admin-api-token-using-the-cdata-shopify-jdbc-driver-1751</link>
            <description>Hey CData Community!Accessing your Shopify data just got simpler!Our latest guide walks you through generating a custom Admin API token and connecting seamlessly with CData Shopify JDBC Driver, enabling smooth, SQL-style access to your Shopify store data.Explore the steps here: Shopify API Access Token Guide</description>
            <category>CData Drivers</category>
            <pubDate>Sun, 09 Nov 2025 05:03:06 +0100</pubDate>
        </item>
                <item>
            <title>TSQL Order by with latest version of DbAmp</title>
            <link>https://community.cdata.com/cdata-dbamp-49/tsql-order-by-with-latest-version-of-dbamp-1748</link>
            <description>Greetings, I sent a detailed example of this to support and I’ll put it here in case it’s something simple. I’m running into a lot of issues with DBAmp and ordering by the account name (both directly and also in the joins). For example, the 2 queries below should produce the same results, as both return the same values. They do not, because the order returned is different even when the values are the same. select top(5) account.id from [salesforce].cdata.salesforce.account order by account.id COLLATE SQL_Latin1_General_CP1_CI_AS asc; select top(5) cast(account.id as nchar(18)) from [salesforce].cdata.salesforce.account order by cast(account.id as nchar(18)) COLLATE SQL_Latin1_General_CP1_CI_AS asc; If I change the query to do a case-sensitive ordering like so, then the results match. select top(5) account.id from [salesforce].cdata.salesforce.account order by account.id COLLATE SQL_Latin1_General_CP1_CS_AS asc; select top(5) cast(account.id as nchar(18)) from [salesforce].cdata.salesforce.account order by cast(account.id as nchar(18)) COLLATE SQL_Latin1_General_CP1_CS_AS asc; This also appears to be causing issues in left joins, as internally SQL Server wants to order the result set from the local table, order the result set from the Salesforce table, and merge join the 2. The left join has rows not being found. Change to an inner join, which doesn’t do an order by according to the execution plan, and now the same rows have matching elements on Salesforce. Example of issue with the join. The question would be: is this a known issue, and why does it function this way? Also, this started after updating DBAmp, and there has been an uptick in API calls; we are not sure if that is related. </description>
            <category>CData DBAmp</category>
            <pubDate>Fri, 07 Nov 2025 22:06:58 +0100</pubDate>
        </item>
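One plausible reading of the symptoms above is that a case-insensitive collation is being applied to Salesforce record IDs, which are case-sensitive in their 15-character form: IDs that differ only in case compare as equal, so their relative order (and therefore merge-join matching) is not guaranteed. The toy Python sketch below, using made-up IDs, reproduces that effect; it illustrates the ordering issue only and is not a statement about what DBAmp does internally.

```python
# Why case-insensitive ordering of case-sensitive IDs can come back in different
# orders: IDs that differ only in case tie, and ties have no guaranteed order.
ids_a = ["001A0000012abcd", "001a0000012ABCD", "001B0000099zzzz"]
ids_b = ["001a0000012ABCD", "001A0000012abcd", "001B0000099zzzz"]

# Case-insensitive ordering (loosely analogous to a _CI_AS collation): the first
# two IDs compare equal, so the two inputs legitimately sort into different orders.
print(sorted(ids_a, key=str.lower))
print(sorted(ids_b, key=str.lower))

# Case-sensitive ordering (here: plain code-point comparison): no ties, so both
# inputs sort identically, mirroring the _CS_AS behavior observed in the post.
print(sorted(ids_a))
print(sorted(ids_b))
```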
                <item>
            <title>How to Update Salesforce Records from Excel with CDATAUPDATE: Excel Add-In for Salesforce</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-update-salesforce-records-from-excel-with-cdataupdate-excel-add-in-for-salesforce-1747</link>
            <description>Hey CData community,If you’re working with live data in Salesforce and want to harness the full power of Excel to update records — you’re going to like this. With the CData Excel Add-In for Salesforce, the article shows how to use the CDATAUPDATE function to push changes from Excel back into Salesforce. (See the KB article here: How to Update Salesforce Records from Excel with CDATAUPDATE.)</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 07 Nov 2025 13:57:31 +0100</pubDate>
        </item>
                <item>
            <title>Connect AI is a Databricks Agent Bricks Launch Partner</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/connect-ai-is-a-databricks-agent-bricks-launch-partner-1746</link>
            <description> We’re proud to announce CData’s inclusion as an MCP launch partner for Databricks Marketplace!CData Connect AI connects Agent Bricks agents directly to live, semantic-rich business context from 350+ enterprise systems — including Salesforce, SAP, NetSuite, and ServiceNow.“The launch of CData’s Managed MCP Platform on the Databricks Marketplace simplifies the path to production AI for the enterprise. It allows our Agent Bricks users to provision production-grade connectivity in minutes to hundreds of external data sources with a single click,” added Ariel Amster, Director, Technology Partner Management. “This capability significantly accelerates their ability to build context-aware AI apps and agents that deliver real business impact.”Now, enterprises can build and deploy AI agents that go beyond analyzing data in Databricks — extending Agent Bricks’ reach into the live, operational present. Learn more: CData: New MCP Launch Partner in Databricks Marketplace</description>
            <category>CData Connect AI</category>
            <pubDate>Thu, 06 Nov 2025 20:23:14 +0100</pubDate>
        </item>
                <item>
            <title>How to Manage Multiple Attachments in Kintone Using the CData Kintone JDBC Driver</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-manage-multiple-attachments-in-kintone-using-the-cdata-kintone-jdbc-driver-1745</link>
            <description>Hey CData Community!Ever found yourself wrestling with attachments in Kintone and thought, “there’s gotta be an easier way”?Good news: we just dropped a guide on how to bulk upload/delete/manage multiple files in Kintone using the CData Kintone JDBC driver.Check it out here: https://www.cdata.com/kb/articles/kintone-uploadfile.rst</description>
            <category>CData Drivers</category>
            <pubDate>Thu, 06 Nov 2025 06:01:40 +0100</pubDate>
        </item>
                <item>
            <title>How to Retrieve Salesforce Formula Fields in Tableau</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-retrieve-salesforce-formula-fields-in-tableau-1744</link>
            <description>The CData Tableau Connector for Salesforce enables you to access, visualize, and analyze live Salesforce data directly within Tableau without complex setup or manual exports. Using the connector’s dynamic capabilities, you can bring in not just standard Salesforce objects, but also computed Formula Fields, empowering deeper insights and more accurate reporting.In this article, you’ll learn how to:	Connect Tableau to Salesforce using the CData Connector			Retrieve and display Salesforce Formula Fields alongside standard data			Build visualizations that reflect live, calculated Salesforce metrics			Refresh data in real time without reloading or reconfiguring your dashboards	To learn more, read:How to Retrieve Salesforce Formula Fields in Tableau</description>
            <category>CData Drivers</category>
            <pubDate>Wed, 05 Nov 2025 07:30:04 +0100</pubDate>
        </item>
                <item>
            <title>Bulk Import Tasks from Excel to MS Planner: Excel Add-In for MS Planner</title>
            <link>https://community.cdata.com/cdata-drivers-45/bulk-import-tasks-from-excel-to-ms-planner-excel-add-in-for-ms-planner-1743</link>
            <description>Hey CData Community If you are managing lots of tasks in Microsoft Planner and find the manual entry process cumbersome, you will definitely want to check out this workflow. With the CData Excel Add-In for Microsoft Planner, you can bulk-import, update and manage Planner tasks right from Excel — no exports, no manual copy/paste, no complex flows. Check the link below to get a quick overview of what’s possible and how to get started.Link: https://www.cdata.com/kb/articles/planner-exceladdin-bulk-import.rst</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 31 Oct 2025 12:29:06 +0100</pubDate>
        </item>
                <item>
            <title>CData Virtuality 25.3: Clustering support on Azure, a new function, and much more</title>
            <link>https://community.cdata.com/cdata-virtuality-archived-94/cdata-virtuality-25-3-clustering-support-on-azure-a-new-function-and-much-more-1742</link>
            <description>On the Server side, we’ve added a new REGCLASS function and integrated two new CData JDBC drivers: SharePoint and Email. We’ve also updated the CData JDBC Driver for Jira to v25.0.9389.0 and the Neo4J JDBC driver to version 6.6.1.The Server also now has a system option for user session timeout and email sending retries and response timeouts. Two other improvements extend model properties: importer.schemaPattern now supports fully-qualified schema names, and importer.tableNamePattern, fully-qualified table names.For Git Integration, we’ve added HTTPS support for remote operations and fixed a bug preventing user limits from being tracked in Git.We’ve also worked on our clustering implementation: clustering is now supported on Azure, including deployment on Azure Kubernetes Service, and we’ve fixed three bugs: one causing new nodes to fail to start and join the cluster due to serialization issues with jobs in state RUNNING, another where virtual view and procedure definitions were not distributed on ALTER and REPLACE, and yet another where Google BigQuery data source creation was not distributed properly due to missing key file on other nodes.For Snowflake, we’ve fixed four bugs: one causing multi-catalog data source creation to fail if importer.loadMetadataWithJdbc=false and DB were omitted in the connection settings, another with missing column size for binary types and missing fractional seconds scale for time and timestamp columns, a third one resulting in inability to recreate data source with Microsoft Entra IDauthentication, and a fourth one causing the TRIM function to produce incorrect results.We’ve also resolved a bug where failures in sending email notifications for a single job could block the execution of other jobs, a bug where a stopped refresh query operation (refreshTables, refreshDataSource, copyOver with refreshTarget=true) blocked subsequent refreshes of affected data source, and a bug where UNION of a query with a GROUP BY and a query with an ORDER BY parts resulted in IndexOutOfBoundsException error.As for the Studio, we’ve enabled the &quot;Test connection&quot; button and test connection step in the data source and Analytical Storage wizards for Oracle ADWC, updated the user key file path field in data source and Analytical Storage creation and editing wizards to support Base64 encoding of files, and updated the Snowflake data source wizards to support key-pair authentication. Also, we’ve implemented a solution to handle a HAProxy idle connection issue and a related issue with maxIdleTime configuration causing performance issues on the server.For the Exporter, we’ve created an interface to return server export to the caller as a script.As for the Connectors, we’ve worked extensively on our Walmart connector. We’ve added functionality to get WFS inventory, added extra fields to the Report_Recon_JSON procedure, and resolved a bug causing this procedure to crash with HTTP Error 520, and we’ve added columns for three reports: orders, inboundshipment, and inboundshipmentitems.For orders, the following changes have been introduced:	field shippingInfo_carrierMethodName was added into Orders procedure;			fields originalCarrierMethod, item_condition were added into additional table _Lines;			field subSellerId was added into additional table &#039;_Lines_status.	
For inboundshipment and inboundshipmentitems, the changes are as follows:	fields shipmentStatus, shipmentType, itemsSubmitted, receivedUnitsAtFC, poType, shipmentCarrierType, isExceptionOccurred, isPOBoxEnabled, carrierName, receiptStartDate,	receiptEndDate were added to the Inboundshipment procedure;			fields fillRateAtFc, chargeDetails_chargeType, chargeDetails_netChargeAmount, receivedUnitsAtFc, damagedUnitsAtFc were added to the InboundshipmentItems	procedure.	We&#039;ve also fixed two bugs affecting the Walmart connector: one causing the InboundShipmentItemsprocedure to failt with error &quot;arraycopy: length -1 is negative&quot; and another causing theInboundShipments procedure to failt due to incorrect date format in request.For Facebook, we’ve updated the connector to v23.For Amazon Ads, we’ve made the Amazon Brand Stores data retrievable through the Amazon Ads connector, and for Amazon Selling Partner, we’ve updated the report_FBALongTermStorageFeeCharges to the latest changes, which involved removing some columns, adding one new column, and changing the data type of some other columns. Here’s the list of the changes:The following columns were removed:	long_time_range_long_term_storage_fee			qty_charged_long_time_range_long_term_storage_fee			qty_charged_short_time_range_long_term_storage_fee			short_time_range_long_term_storage_fee	New columns were added:	qty_charged of type decimal	The following columns changed data type:	asin - from string to string(10)			sku - from string to string(40)			condition - from string to string(100)			country - from string to string(2)			currency - from string to string(3)			fnsku - from string to string(40)			surcharge_age_tier - from string to string(50)	Last but not least, for Google Analytics Data, we’ve resolved the issue with incorrect parsing for custom events. Now all is well. 
Here are all issues in this release: Server	DVCORE-9013 (New Feature): Add REGCLASS function			DVCORE-9098 (Improvement): Update CData JDBC Driver for Jira to v25.0.9389.0			DVCORE-9089 (Improvement): Integrate CData SharePoint JDBC driver			DVCORE-9070 (Improvement): Clustering: add support for setting up a cluster on Azure			DVCORE-9061 (Improvement): Integrate CData Email JDBC driver			DVCORE-9015 (Improvement): Add HTTPS support for Git Integration remote operations			DVCORE-9011 (Improvement): Neo4J: update JDBC driver to version 6.6.1			DVCORE-8991 (Improvement): Add system option for user session timeout			DVCORE-8984 (Improvement): Introduce email sending retries and response timeout			DVCORE-8916 (Improvement): Extend &quot;importer.schemaPattern&quot; model property to support fully-qualified schema names			DVCORE-8915 (Improvement): Extend &quot;importer.tableNamePattern&quot; model property to support fully-qualified table names			DVCORE-9108 (Bug Fix): Clustering: new nodes fail to start and join the cluster due to serialization issues with jobs in state RUNNING			DVCORE-9101 (Bug Fix): Clustering: virtual view and procedure definitions are not distributed on ALTER and REPLACE			DVCORE-8835 (Bug Fix): Clustering: Google BigQuery data source creation is not distributed properly due to missing key file on other nodes			DVCORE-9058 (Bug Fix): Failures in sending email notifications for a single job can block the execution of other jobs			DVCORE-9051 (Bug Fix): Snowflake: multi-catalog data source creation fails if importer.loadMetadataWithJdbc=false and DB is omitted in the connection settings			DVCORE-9048 (Bug Fix): Snowflake: missing column size for binary types and missing fractional seconds scale for time and timestamp columns			DVCORE-9025 (Bug Fix): Snowflake: unable to recreate data source with Microsoft Entra ID authentication			DVCORE-8799 (Bug Fix): Snowflake: TRIM function produces incorrect results			DVCORE-9045 (Bug Fix): Web Business Data Shop: items published in MAINTENANCE mode get incorrect names and states in &quot;SYSADMIN.WebBusinessDataShopPublished&quot; table after disabling it			DVCORE-9044 (Bug Fix): Web Business Data Shop: it is possible to publish the same item a second time after recreating it with the same name, but different casing			DVCORE-9021 (Bug Fix): Checking role permissions on the configuration database causes a delay on procedure execution			DVCORE-8912 (Bug Fix): Stopped refresh query blocks subsequent refreshes of affected data source			DVCORE-8779 (Bug Fix): Problems connecting to the configuration database cause jobs to hang			DVCORE-8665 (Bug Fix): LDAP Authentication: gathering permissions from the configuration database causes a delay at server startup			DVCORE-6434 (Bug Fix): Some numeric data types are mapped incorrectly in CData Virtuality connector			DVCORE-8139 (Bug Fix): ONCE schedules use the server restart time instead of the schedule creation time to calculate the delay after the server restart			DVCORE-8113 (Bug Fix): Oracle ADWC: &quot;SYSADMIN.testConnection&quot; procedure fails for Oracle ADWC connections			DVCORE-8559 (Bug Fix): Git Integration: user limits are not tracked in Git			DVCORE-8251 (Bug Fix): ROW DELIMITER is escaped incorrectly in TEXTTABLE function			DVCORE-8093 (Bug Fix): UNION of a query with a GROUP BY and a query with an ORDER BY parts results in IndexOutOfBoundsException error			DVCORE-8532 (Bug Fix): History update job requires an Analytical Storage even if the target is a regular data source	 Studio	
DVCORE-9072 (Improvement): Enable the &quot;Test connection&quot; button and test connection step in the data source and Analytical Storage wizards for Oracle ADWC			DVCORE-9064 (Improvement): Update the user key file path field in data source and Analytical Storage creation and editing wizards to support Base64 encoding of files			DVCORE-9023 (Improvement): Update Snowflake data source wizards to support key-pair authentication			DVCORE-9093 (Bug Fix): Handle HAProxy idle connection issue without introducing maxIdleTime re-login problem			DVCORE-9092 (Bug Fix): maxIdleTime configuration causes performance issues on the server	 Exporter	DVCORE-8897 (Improvement): Create an interface to return server export to the caller	 Connectors	SQL-1110 (Improvement): Walmart: add columns for orders report			SQL-1107 (Improvement): Walmart: Report_Recon_JSON procedure crashes with HTTP Error 520			SQL-1104 (Improvement): Walmart: add extra fields to the Report_Recon_JSON procedure			SQL-1101 (Improvement): Walmart: add missing columns for inboundshipment and inboundshipmentitems			SQL-1084 (Improvement): Walmart: add functionality to get WFS inventory (new)			SQL-1087 (Improvement): Facebook: update connector to v23			SQL-1083 (Improvement): Amazon Ads: make the Amazon Brand Stores data retrievable through the Amazon Ads connector			SQL-1078 (Improvement): Amazon Selling Partner: update the Report_FBALongTermStorageFeeCharges to the latest changes			SQL-1057 (Improvement): Braze connector: add POST functionality to update customer attributes			SQL-1105 (Bug Fix): Walmart: InboundShipmentItems proc fails with error &quot;arraycopy: length -1 is negative&quot;			SQL-1099 (Bug Fix): Walmart: InboundShipments procedure fails due to incorrect date format in request			SQL-1085 (Bug Fix): Google Analytics Data: incorrect parsing for custom events			SQL-1080 (Bug Fix): Awin: trackedCurrencyAmount variable in the Awin internal_Transactions.sql is type decimal but returns a JSON			SQL-982 (Bug Fix): Amazon SP: wrong job status when report_fbamanageinventory is FATAL</description>
            <category>CData Virtuality [Archived]</category>
            <pubDate>Thu, 30 Oct 2025 19:36:07 +0100</pubDate>
        </item>
                <item>
            <title>Updated XML Map Source File Update Docs</title>
            <link>https://community.cdata.com/cdata-arc-48/updated-xml-map-source-file-update-docs-1734</link>
            <description>How do I delete unused template files?</description>
            <category>CData Arc</category>
            <pubDate>Thu, 30 Oct 2025 08:36:05 +0100</pubDate>
        </item>
                <item>
            <title>Extend Microsoft Copilot Studio Agents to Access Live Data in Teams</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/extend-microsoft-copilot-studio-agents-to-access-live-data-in-teams-1741</link>
            <description>Learn how to bring live, conversational data insights directly into Microsoft Teams using Microsoft Copilot Studio and CData Connect AI. This guide walks through deploying Copilot Studio agents to Teams and connecting them securely to enterprise data via the Connect AI Remote MCP Server. With CData Connect AI, you can query and act on live data for smarter business interactions; no replication required.Read the full article here: https://www.cdata.com/kb/articles/cloud-copilot-teams.rst</description>
            <category>CData Connect AI</category>
            <pubDate>Wed, 29 Oct 2025 17:19:43 +0100</pubDate>
        </item>
                <item>
            <title>Unable to request body in rest connector</title>
            <link>https://community.cdata.com/cdata-arc-48/unable-to-request-body-in-rest-connector-1694</link>
            <description>Let me explain the overall flow I am trying to achieve. The flow is exposed as a REST service to the client. The client calls the REST service to send a file to a specific location. When the client triggers the flow, there is a requirement to call another REST service inside the flow. The client passes the required data in headers, and I need to set the same header values when calling that REST service. I tried adding a REST connector, but the body is disabled, so I am not able to send a custom request body to a service through the REST connector. Can you please guide me on how to do this? </description>
            <category>CData Arc</category>
            <pubDate>Mon, 27 Oct 2025 15:21:15 +0100</pubDate>
        </item>
                <item>
            <title>dbamp linked service failure pulling data from salesforce to sql</title>
            <link>https://community.cdata.com/cdata-dbamp-49/dbamp-linked-service-failure-pulling-data-from-salesforce-to-sql-1710</link>
            <description>version: 4.1.6.0 Description: All of my SQL jobs pulling variations of sf_refresh are failing as of 10/11/2025. Currently not bringing any data down in our production environment. ERROR:     10/11/2025 5:02:02 AM Log        Job History (SF_Refresh Salesforce - ChargentBase__Gateway__c) Step ID        1 Server        AZSQL02 Job Name        SF_Refresh Salesforce - ChargentBase__Gateway__c Step Name        1 Duration        00:00:00 Sql Severity    16 Sql Message ID    0 Operator Emailed    Operator Net sent    Operator Paged    Retries Attempted    1 Message Executed as user: NT SERVICE\SQLSERVERAGENT. --- Ending SF_Refresh. Operation FAILED. [SQLSTATE 42000] (Error 50000)  --- Starting SF_Refresh for ChargentBase__Gateway__c V4.1.4 [SQLSTATE 01000] (Error 0)  05:02:02: Using Schema Error Action of yes [SQLSTATE 01000] (Error 0)  OLE DB provider &quot;DBAmp.DBAmp&quot; for linked server &quot;Salesforce&quot; returned message &quot;Error 5103 : Error: Zipped response from salesforce was missing valid zip header.&quot;. [SQLSTATE 01000] (Error 7412)  05:02:02: Error: Unable to validate object: Cannot initialize the data source object of OLE DB provider &quot;DBAmp.DBAmp&quot; for linked server &quot;Salesforce&quot;. [SQLSTATE 01000] (Error 0)  --- Ending SF_Refresh. Operation FAILED. [SQLSTATE 01000] (Error 0) </description>
            <category>CData DBAmp</category>
            <pubDate>Fri, 24 Oct 2025 21:47:36 +0200</pubDate>
        </item>
                <item>
            <title>couple issue. just upgraded to latest version dbamp. Getting unspecified error when trying to bring dow 6k records from application object in salesforce using the linked server directly.</title>
            <link>https://community.cdata.com/cdata-dbamp-49/couple-issue-just-upgraded-to-latest-version-dbamp-getting-unspecified-error-when-trying-to-bring-dow-6k-records-from-application-object-in-salesforce-using-the-linked-server-directly-1721</link>
            <description>I get an error when trying to pull down application data that changed in the last 4 days. I’m querying the linked server directly like this. It seems like I’m hitting some threshold, and when it’s over a certain amount it gives me this error. This didn’t happen in my old DBAmp version. Ideas on a fix?                select *                --  select count(*) FROM [MATCHFORCE].[CData].[Salesforce].[Application__c_QueryAll] WHERE [LastModifiedDate] &amp;lt;   dateadd(second,-1,(dateadd(day, datediff(day, 2, getdate()),0)))                AND [LastModifiedDate] &amp;gt; dateadd(second,-1,(dateadd(day, datediff(day, 3, getdate()),0))) error: OLE DB provider &quot;MSOLEDBSQL&quot; for linked server &quot;MATCHFORCE&quot; returned message &quot;Unspecified error&quot;. Msg 7330, Level 16, State 2, Line 162 Cannot fetch a row from OLE DB provider &quot;MSOLEDBSQL&quot; for linked server &quot;MATCHFORCE&quot;.</description>
            <category>CData DBAmp</category>
            <pubDate>Fri, 24 Oct 2025 21:44:31 +0200</pubDate>
        </item>
                <item>
            <title>cData DBAmp product support</title>
            <link>https://community.cdata.com/cdata-dbamp-49/cdata-dbamp-product-support-1709</link>
            <description> </description>
            <category>CData DBAmp</category>
            <pubDate>Fri, 24 Oct 2025 21:42:22 +0200</pubDate>
        </item>
                <item>
            <title>Where to get the product download to upgrade to the latest release</title>
            <link>https://community.cdata.com/developers-52/where-to-get-the-product-download-to-upgrade-to-the-latest-release-1733</link>
            <description>I struggle every quarter finding where on the website to download the most recent release install media.</description>
            <category>Developers</category>
            <pubDate>Thu, 23 Oct 2025 17:25:33 +0200</pubDate>
        </item>
                <item>
            <title>ADO.NET Provider for Snowflake does not show tables/views in Analysis Services Data Source View dialog</title>
            <link>https://community.cdata.com/cdata-drivers-45/ado-net-provider-for-snowflake-does-not-show-tables-views-in-analysis-services-data-source-view-dialog-1713</link>
            <description>I am test driving the ADO.NET provider for Snowflake to connect to data in Snowflake from Analysis Services in Multidimensional Mode. I follow the descriptions in the article: https://www.cdata.com/kb/tech/snowflake-ado-ssas.rst Connection to Snowflake succeeds, but when I try to add tables to the Data Source View there are no tables/views shown. The dialog is completely empty. Compare section “Creating a Data Source View” point 4 in the article where tables are visible. (Remark: I am also sure that tables are available in my Snowflake database.) I have set logging level to 5 and inspected the logs. The connection requests seem to work fine. But when calling ADOConnectionImpl.GetDataSourceInformation() it seems not to find the system table sys_sqlinfo. What am I doing wrong? What is the root cause for not seeing any tables/views? 
2025-10-15T17:50:11.184+02:00 5 [   29|    0|    1] [META|Schema] ADOConnectionImpl.GetDataSourceInformation()
2025-10-15T17:50:11.184+02:00 2 [   29| Q-Id|    1] [EXEC|Parsed] Executing query: [SELECT VALUE FROM sys_sqlinfo WHERE NAME = &#039;IDENTIFIER_QUOTE_OPEN_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29|    0|    1] [EXEC|Normlz] Normalized query: [SELECT [VALUE] FROM [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_OPEN_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29|    0|    1] [EXEC|Normlz] Normalized query: [SELECT [sys_sqlinfo].[VALUE], [sys_sqlinfo].[NAME] FROM [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_OPEN_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 5 [   29|    0|    1] [META|Schema] Engine Invalid object name &#039;sys_sqlinfo&#039;
2025-10-15T17:50:11.184+02:00 3 [   29|   38|    1] [EXEC|Normlz] Executing query: [SELECT [sys_sqlinfo].[VALUE], [sys_sqlinfo].[NAME] FROM [sys_sqlinfo] AS [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_OPEN_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29| Q-Id|    1] [EXEC|Messag] Executed query: [SELECT VALUE FROM sys_sqlinfo WHERE NAME = &#039;IDENTIFIER_QUOTE_OPEN_CHAR&#039;] Success: (0 ms)
2025-10-15T17:50:11.184+02:00 2 [   29| Q-Id|    1] [EXEC|Parsed] Executing query: [SELECT VALUE FROM sys_sqlinfo WHERE NAME = &#039;IDENTIFIER_QUOTE_CLOSE_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29|    0|    1] [EXEC|Normlz] Normalized query: [SELECT [VALUE] FROM [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_CLOSE_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29|    0|    1] [EXEC|Normlz] Normalized query: [SELECT [sys_sqlinfo].[VALUE], [sys_sqlinfo].[NAME] FROM [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_CLOSE_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 5 [   29|    0|    1] [META|Schema] Engine Invalid object name &#039;sys_sqlinfo&#039;
2025-10-15T17:50:11.184+02:00 3 [   29|   39|    1] [EXEC|Normlz] Executing query: [SELECT [sys_sqlinfo].[VALUE], [sys_sqlinfo].[NAME] FROM [sys_sqlinfo] AS [sys_sqlinfo] WHERE [NAME] = &#039;IDENTIFIER_QUOTE_CLOSE_CHAR&#039;]
2025-10-15T17:50:11.184+02:00 2 [   29| Q-Id|    1] [EXEC|Messag] Executed query: [SELECT VALUE FROM sys_sqlinfo WHERE NAME = &#039;IDENTIFIER_QUOTE_CLOSE_CHAR&#039;] Success: (0 ms)
2025-10-15T17:50:14.684+02:00 5 [   29|    0|    1] [META|Schema] Engine Invalid object name &#039;sys_disconnect&#039;
2025-10-15T17:50:14.684+02:00 4 [   29| Q-Id|    1] [INFO|Connec] Executed sys_disconnect: Success: (0 ms)
2025-10-15T17:50:14.684+02:00 1 [   29| Q-Id|    1] [INFO|Connec] Closed Snowflake connection
2025-10-15T18:07:35.367+02:00 5 [   21|    0|    2] [META|Schema] Engine Invalid object name &#039;sys_disconnect&#039;
2025-10-15T18:07:35.367+02:00 4 [   21| Q-Id|    2] [INFO|Connec] Executed sys_disconnect: Success: (0 ms)
2025-10-15T18:07:35.367+02:00 1 [   21| Q-Id|    2] [INFO|Connec] Closed Snowflake connection</description>
            <category>CData Drivers</category>
            <pubDate>Tue, 21 Oct 2025 15:45:40 +0200</pubDate>
        </item>
                <item>
            <title>🚨IF YOU USE API Access Control: Salesforce change may stop DBAmp on Nov 4, 2025 🚨</title>
            <link>https://community.cdata.com/ideas/if-you-use-api-access-control-salesforce-change-may-stop-dbamp-on-nov-4-2025-1722</link>
            <description>TL&amp;amp;DR: Salesforce announced on Wednesday, Oct 15, 2025 that it’s deprecating the “Use Any API Client” permission within API Access Control. If your Salesforce org has API Access Control enabled, only allow-listed Connected Apps will be able to use the API going forward. This may stop DBAmp connections as early as Nov 4, 2025 unless you take action. Who is Affected? Organizations that:	Use DBAmp to connect to Salesforce, and			Have API Access Control enabled in Salesforce (this setting restricts API access to an allow-list of Connected Apps).	Not sure whether API Access Control is enabled? Ask your Salesforce admin or internal support to confirm. Timeline	Oct 15, 2025: Salesforce announces deprecation of “Use Any API Client.”			Nov 4, 2025 (as early as): Impact may begin. If your org relies on “Use Any API Client,” DBAmp connections can fail without an approved Connected App in place.	 What to do now (Action Items)	Confirm your org state			Ask your Salesforce admin: Is API Access Control enabled?					Tell us you’re impacted			Email dbampsupport@cdata.com with:					Your org name(s) and whether each is Prod or Sandbox									Whether API Access Control is enabled									The critical DBAmp workloads at risk (backups, syncs, reporting, etc.)									Your preferred maintenance window(s) / timeline								Prepare for Connected App allow-listing			We’ll guide you through using an approved Connected App and share early-access builds as needed so your DBAmp jobs continue uninterrupted.			The DBAmp team will be sending brief reminders weekly through Nov 4. If you’re impacted, please reply today so we can prioritize you. The DBAmp Team</description>
            <category></category>
            <pubDate>Tue, 21 Oct 2025 15:26:32 +0200</pubDate>
        </item>
                <item>
            <title>Dynamically Retrieve Data Based on Cell Values with the CData Excel Add-in: CData Excel Formulas</title>
            <link>https://community.cdata.com/cdata-drivers-45/dynamically-retrieve-data-based-on-cell-values-with-the-cdata-excel-add-in-cdata-excel-formulas-1720</link>
            <description>The CData Excel Add-in empowers you to access, query, and manipulate live data directly from Excel without complex scripting or manual imports. Using CData Excel Functions, you can dynamically retrieve data from any supported source based on cell values, enabling real-time analytics and data-driven decisions right within your spreadsheets.In this article, you’ll learn how to:Use CData Excel Functions to query live data dynamically	Reference cell values to filter and update your results in real time	Combine multiple data sources for unified analysis	Automate data refreshes and streamline reporting with no-code configurationsTo learn more, read:Dynamically Retrieve Data Based on Cell Values with the CData Excel Add-in: CData Excel Formulas</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 17 Oct 2025 15:04:45 +0200</pubDate>
        </item>
                <item>
            <title>Connecting to Cloud Storage Platforms Using the CData Access Driver</title>
            <link>https://community.cdata.com/cdata-drivers-45/connecting-to-cloud-storage-platforms-using-the-cdata-access-driver-1717</link>
            <description>Hey CData Community,You can now stop juggling “.accdb” files between desktops, emails, and USB drives! The CData Access Drivers &amp;amp; Connectors (v24.3+) now let you connect directly to Access databases stored in Google Drive, S3, OneDrive, Box, SharePoint, and more.Query and analyze Access data straight from the cloud with secure authentication and zero manual downloads. It’s fast, easy, and collaboration-friendly.Learn more: https://www.cdata.com/kb/articles/microsoft-access-cloud-storage.rst</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 17 Oct 2025 03:17:22 +0200</pubDate>
        </item>
                <item>
            <title>CData Connect AI Hackathon: Build the Future of Connected AI</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/cdata-connect-ai-hackathon-build-the-future-of-connected-ai-1690</link>
            <description>We’re excited to announce our first-ever CData Connect AI Hackathon!  This is your chance to show off the coolest, most impactful AI use cases using CData Connect AI and MCP-enabled AI tools to connect live enterprise data to your agents, assistants, and copilots. Anyone can enter and win – Connect AI only takes minutes to set up. 	Kickoff: September 24, 2025 		Submission Deadline: October 31, 2025 	How it works 	Build your solution 			Use CData Connect AI to connect at least one live enterprise data source. 						Integrate with any MCP-enabled AI agent, assistant, or copilot. 						Create something that’s innovative, practical, or just plain fun! 				Submit your entry below 			Post in this thread with: 					Title of your solution 									Description – what your solution does 									Source systems connected 									AI tools or agents used 									Business impact – what problem it solves or opportunity it unlocks 									(Optional) Demo – screenshots, videos, or links to show it in action 								Get feedback and iterate 			You can submit unlimited entries!						Share progress, ask for advice, and engage with other participants. 			 Prizes Our panel of judges will select winners based on innovation, impact, and timeliness: 	1st place: $500 gift card  		2nd place: $250 gift card 		3rd place: $100 gift card 	Judging criteria Entries will be scored on: 	Innovation – Is the idea creative or unique? 		Impact – Does it solve a real business problem or deliver meaningful value? 		Execution – Is the solution well-implemented, functional, and demonstrable? 		Presentation – Is the submission clear, and does it include helpful visuals or demos? 	Need inspiration? Here are a few ideas to get your gears turning: 	A real-time answer assistant in Claude or ChatGPT that responds to live data queries from Salesforce, NetSuite, or Snowflake. 		Custom workflow automations in n8n or Make that sync AI-driven insights into Slack or email. 		A finance copilot that generates instant expense or revenue reports from QuickBooks or Sage Intacct. 		A marketing insights agent that pulls live HubSpot and Google Ads data to recommend next-best actions. 		Anything else you can imagine that combines live enterprise data with intelligent agents! 	The competition is live now! Submit as many entries as you like below. But remember, judges will be looking for the most interesting, cutting-edge AI solutions. Whether you’re building something for your business, a side project, or just to experiment, this is your chance to show the world what’s possible with live data in connected AI. Terms and Conditions Contest Terms and Conditions General.  CData Software, Inc. (“CData”) invites you to submit your document, idea, company, use case, application, or other requested submission information for consideration as part of the above contest that CData is administering (the “Contest”). Each such submission is referred to as a “Submission.”  These terms and conditions (“Terms”) govern your Submission and your participation in the Contest.  All Submissions must meet the following criteria:	Submissions (including your personal contact details) must be true, accurate, original, non-infringing and not in violation of any third-party rights (including, by way of example and not limitation, the rights of privacy and publicity), exclusively created, and owned by you, and you must have all rights necessary to submit the Submission.		
Submissions may not contain information of any individuals who have not provided their explicit, legally enforceable authorization or any other personal or sensitive information.		All Submission materials must be in English or, if not in English, you must provide an English translation of the Submission.		CData reserves the right to reject any applicant or Submission for any or no reason in its sole discretion and without notice.		Eligibility. To participate in a Contest, you must be 18 years of age or older and reside in the United States. You may not participate in the Contest if you are a resident of a state or territory where the laws of the United States or local law prohibits participating or receiving a prize in the Contest.	Selection Process.  Selection of Submissions is at CData’s sole discretion. CData’s decisions are final in all matters related to the Contest.  CData reserves the right to disqualify Submissions for any reason.  If you are selected as a winner or to otherwise participate in the Contest, you may be required by CData to enter into a separate agreement with CData regarding your participation in the Contest. Selected applicants will be required to provide additional information (such as taxpayer, KYC, and payment details), be subject to additional review (such as a background check or identity verification) and accept additional terms regarding the Contest, including additional Contest participation requirements.  You acknowledge that CData has no obligation to select, post, or use any Submission or to respond to you in any way.	Application Period.  The application period for the Contest commences and ends on the time and date set forth in the applicable Contest description above, unless extended by CData in its sole discretion. CData reserves the right, in its sole discretion, to modify, suspend, or cancel the Contest at any time for any reason.	Intellectual Property. By applying to join the Contest, you hereby represent and warrant that your Submissions are your original works of authorship, do not contain information that you or any third party considers to be confidential, do not violate or infringe upon the copyrights, trademarks, rights of privacy, publicity, moral rights or other intellectual property or other rights of any person or entity, and do not violate any rules or regulations.  If the Submission contains any material or elements that you do not own or that are subject to the rights of third parties, then you represent that you have obtained, prior to submission of the Submissions, any and all releases and consents necessary to permit CData and its affiliates to use the Submission without additional compensation to evaluate your application and as otherwise described herein.	Publishing Commitment.  In the event that the Contest involves CData’s selection of Submissions that are intended to be displayed publicly, your selection and any applicable prize is contingent upon your Submission remaining publicly displayed.  You acknowledge that the Contest is designed by CData for the purpose of highlighting and promoting Submissions, and you agree that by participating in a Contest and accepting any award or other consideration, CData and/or you will keep the applicable Submission displayed for a reasonable period of time following the conclusion of the Contest.  	Publicity Rights.  
By making a Submission, you hereby grant to CData a royalty free, irrevocable, perpetual and worldwide license to use and display your name, your company, and your company logo on CData’s websites, apps, and/or social media pages and accounts for purposes of marketing and promoting your submission, the Contest, or CData’s products and services, without compensation of any kind or further notice to or approval from you or any third party. Notwithstanding the foregoing, CData has the right, but no obligation to promote, market, highlight, or display you, your company, or your Submission.  	Prizes. The following shall apply with respect to cash or other prizes that may be awarded to you in connection with the Contest (“Prizes”). A monetary Prize will be mailed to your address, or sent electronically to your bank account, only after receipt of any required forms (collectively the “Required Forms”), if applicable, or by such other means as CData may determine, in each case, at CData’s sole discretion. Failure to provide correct information on the Required Forms, or other correct information required for the delivery of a Prize, may result in delayed Prize delivery, disqualification, or forfeiture of a Prize. Prizes will be delivered within 60 days of CData’s receipt of the completed Required Forms.	Fees &amp;amp; Taxes. You are responsible for any fees associated with receiving or using a Prize, including but not limited to, wiring fees or currency exchange fees.  You are responsible for reporting and paying all applicable taxes in your jurisdiction of residence (federal, state/territorial and local). You may be required to provide certain information to facilitate receipt of the award, including completing and submitting any tax or other forms necessary for compliance with applicable withholding and reporting requirements. You may be required to provide a completed form W-9. You are also responsible for complying with any foreign exchange and banking regulations in your respective jurisdiction and reporting the receipt of the Prize to relevant government departments/agencies, if necessary. CData reserves the right to withhold a portion of the Prize to comply with the tax laws of the United States, or those of your jurisdiction.	Disclaimer. CDATA MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, REGARDING THE APPLICATION PROCESS OR PARTICIPATION IN THE CONTEST.	General. If for any reason your Submission is confirmed to have been erroneously deleted, lost, or otherwise destroyed or corrupted, your sole remedy is to make another Submission.  CData may modify these Terms in its sole discretion.  	Privacy and Use of Personal Information. CData collects personal information from you when you apply for the Contest for the purposes of selecting participants for the Contest, administering the Contest and contacting you about the Contest and CData products and services in accordance with CData’s Privacy Statement.	Limitation on Liability. CDATA SHALL NOT BE LIABLE FOR ANY INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL, OR EXEMPLARY DAMAGES ARISING OUT OF OR RELATED TO THIS AGREEMENT, REGARDLESS OF WHETHER SUCH LIABILITY IS BASED UPON CONTRACT, NEGLIGENCE, TORT, STRICT LIABILITY, WARRANTY, OR OTHERWISE.  THE MAXIMUM LIABILITY OF CDATA FOR ANY CLAIMS ARISING IN CONNECTION WITH THESE TERMS OR THE CONTEST WILL NOT EXCEED TWENTY-FIVE DOLLARS ($25.00).	Relationship of Parties. 
Nothing set forth herein shall be construed as constituting you and CData as partners, joint venturers, or as creating the relationships of employer/employee, franchiser/franchisee, or principal/agent between the parties.	Governing Law and Disputes. These terms and any disputes arising out of or related hereto, will be governed exclusively by the internal laws of the State of North Carolina, without regard to its conflicts of laws rules or the United Nations Convention on the International Sale of Goods. The parties acknowledge that these terms evidence a transaction involving interstate commerce. The state and federal courts located in Orange County, North Carolina will have exclusive jurisdiction to adjudicate any dispute arising out of or relating to these terms or their formation, interpretation or enforcement, including any appeal of an arbitration award or for trial court proceedings if the arbitration provision below is found to be unenforceable. Each party hereby consents and submits to the exclusive jurisdiction of such courts. Each party also hereby waives any right to jury trial in connection with any action or litigation in any way arising out of or related to these Terms or the Contest. In any action or proceeding to enforce rights under these terms, the prevailing party will be entitled to recover its reasonable costs and attorney’s fees. </description>
            <category>CData Connect AI</category>
            <pubDate>Thu, 16 Oct 2025 20:46:41 +0200</pubDate>
        </item>
                <item>
            <title>How to Secure CData Arc (.NET / Windows Edition)</title>
            <link>https://community.cdata.com/cdata-arc-48/how-to-secure-cdata-arc-net-windows-edition-1714</link>
            <description>Hey CData Community! Looking to strengthen data protection and compliance in your integration environment?Our latest guide, “How to Secure CData Arc (.NET / Windows Edition)”, walks through key security configurations to help safeguard your data and automate workflows confidently.Check out the full article here: How to Secure CData Arc (.NET / Windows Edition)</description>
            <category>CData Arc</category>
            <pubDate>Thu, 16 Oct 2025 16:15:37 +0200</pubDate>
        </item>
                <item>
            <title>Using ThreadID in CData Driver Logs to Debug Multi-Threaded Operations</title>
            <link>https://community.cdata.com/cdata-drivers-45/using-threadid-in-cdata-driver-logs-to-debug-multi-threaded-operations-1712</link>
            <description>CData Drivers now include a ThreadID in their log output, making it easier to trace operations across multiple threads in complex or concurrent workloads. Each log line includes a fixed-width ThreadID, helping you identify which thread generated each event for clearer, faster debugging. Learn how to read and interpret ThreadID values to isolate issues, track concurrent activity, and gain deeper insight into driver behavior in our latest KB article: Using ThreadID in CData Driver Logs to Debug Multi-Threaded Operations</description>
            <category>CData Drivers</category>
            <pubDate>Tue, 14 Oct 2025 16:34:10 +0200</pubDate>
        </item>
                <item>
            <title>Inquiry Regarding Username and Password Length Support in AS2 System</title>
            <link>https://community.cdata.com/cdata-arc-48/inquiry-regarding-username-and-password-length-support-in-as2-system-1711</link>
            <description>Dear Support ArcESB,We are using Arc 2023 Professional on Windows Server 2022 Standard. Our customer exchanges data with us via AS2.The customer has recently migrated their AS2 system to a new platform which uses HTTP Authentication requiring a username and password. The username length is 78 characters, and the password length is 81 characters, as shown in the attached image.Could you please verify if your program supports usernames and passwords of this length? We would appreciate your confirmation and any recommendations you may have.Thank you for your assistance.Best regards,Parinya  P.</description>
            <category>CData Arc</category>
            <pubDate>Tue, 14 Oct 2025 12:59:41 +0200</pubDate>
        </item>
                <item>
            <title>Build AI-Ready Workflows with SQL Server 2025 and CData Sync</title>
            <link>https://community.cdata.com/cdata-sync-47/build-ai-ready-workflows-with-sql-server-2025-and-cdata-sync-1708</link>
            <description>Hey CData Community!Microsoft’s SQL Server 2025 brings AI into the heart of the database with new native vector support, enabling you to store, search, and analyze embeddings directly inside SQL Server. When paired with CData Sync, you can easily connect and replicate data from 250+ sources to power AI workflows like semantic search, RAG pipelines, and anomaly detection — all within SQL Server.Check out the full guide here: https://www.cdata.com/kb/articles/sync-sqlserver25-ai-workflows.rst</description>
            <category>CData Sync</category>
            <pubDate>Thu, 09 Oct 2025 19:47:03 +0200</pubDate>
        </item>
                <item>
            <title>Error 500  Internal Server Error</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/error-500-internal-server-error-1683</link>
            <description>https://www.cdata.com/cloud/   Try it free, but it gives Error 500, please take a look.</description>
            <category>CData Connect AI</category>
            <pubDate>Thu, 09 Oct 2025 11:53:13 +0200</pubDate>
        </item>
                <item>
            <title>DBAmp linked server Error 5103 in Prod Org</title>
            <link>https://community.cdata.com/cdata-dbamp-49/dbamp-linked-server-error-5103-in-prod-org-1703</link>
            <description>Getting the below error in Prod: OLE DB provider &quot;DBAmp.DBAmp&quot; for linked server &quot;SALESFORCE_LS&quot; returned message &quot;Error 5103 : Error: Zipped response from salesforce was missing valid zip header.&quot;. [SQLSTATE 01000]</description>
            <category>CData DBAmp</category>
            <pubDate>Wed, 08 Oct 2025 14:17:31 +0200</pubDate>
        </item>
                <item>
            <title>DBAmp linked server Error 5103 after Salesforce Winter 2026 release applied to our test org</title>
            <link>https://community.cdata.com/cdata-dbamp-49/dbamp-linked-server-error-5103-after-salesforce-winter-2026-release-applied-to-our-test-org-1667</link>
            <description> We are on DBAmp version 4.1.8. We have a SQL Server Agent job running daily on our test org. After Salesforce Winter ’26 was applied to the org, we started getting the following failure message. How can we resolve this? OLE DB provider &quot;DBAmp.DBAmp&quot; for linked server &quot;SALESFORCE&quot; returned message &quot;Error 5103 : Error: Zipped response from salesforce was missing valid zip header.&quot;. Msg 7303, Level 16, State 1, Line 1 Cannot initialize the data source object of OLE DB provider &quot;DBAmp.DBAmp&quot; for linked server &quot;SALESFORCE&quot;. </description>
            <category>CData DBAmp</category>
            <pubDate>Tue, 07 Oct 2025 18:00:57 +0200</pubDate>
        </item>
                <item>
            <title>Migrate from Microsoft Access’ Retired Salesforce Connector with CData</title>
            <link>https://community.cdata.com/cdata-drivers-45/migrate-from-microsoft-access-retired-salesforce-connector-with-cdata-1700</link>
            <description>Hey CData Community! Microsoft Access has announced the retirement of its built-in Salesforce ODBC driver in October 2025 (https://techcommunity.microsoft.com/blog/AccessBlog/access-announces-removal-of-salesforce-odbc-driver-in-october-2025/4408815), but organizations can maintain seamless access to Salesforce data with the CData Salesforce ODBC Driver. This driver enables users to connect Access with Salesforce securely and reliably, ensuring business processes and reporting workflows remain uninterrupted. It’s a simple way to keep your Access applications connected without disruption. Read the full article &amp;amp; learn how to migrate to the CData Salesforce ODBC driver to ensure a seamless transition here: https://www.cdata.com/kb/articles/odbc-access-salesforce.rst</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 03 Oct 2025 17:18:37 +0200</pubDate>
        </item>
                <item>
            <title>CData Virtuality 25.2: Many improvements and something more</title>
            <link>https://community.cdata.com/cdata-virtuality-archived-94/cdata-virtuality-25-2-many-improvements-and-something-more-1699</link>
            <description>This release, which went out on August 5, is probably our most improvement-filled release, and, of course, it also includes several new features and a number of bug fixes. Speaking of improvements to the Server, most notably, we’ve implemented some security enhancements: we’ve disabled weak TLS protocols and cipher suites (but if you absolutely need to keep using the previous configuration, we’ve prepared rollback instructions) and enforced a strong password policy. User passwords must now comply with password complexity and management standards (for detailed information, see our documentation). This applies to creating new users, importing users, and changing passwords. However, users with admin privileges can set passwords that do not comply with these rules, though we do not recommend using weak passwords. This change does not impact previously created user records, regardless of password complexity. Our Snowflake connector has also benefitted from the security and authentication improvement work: it now supports OAuth authentication via Microsoft Entra ID and key-pair authentication. We’ve also adjusted the role mapping behavior to the changes in DEFAULT_SECONDARY_ROLES on the Snowflake side. We’ve also tweaked permissions to view job and query logs: non-admin users can now view logs only for jobs or schedules they own or for queries they executed. For our clustering solution, we’ve added a setting to force all jobs to run on the primary node and the ability to distribute the uploaded license - meaning that now it’s enough to upload the license file to just one node, and it will be automatically distributed to all the other nodes. We’ve updated the Google Ads API to v20, which means some critical changes: the &quot;ad_group_extension_setting&quot;, &quot;ad_group_feed&quot;, &quot;campaign_extension_setting&quot;, &quot;campaign_feed&quot;, &quot;customer_extension_setting&quot;, &quot;customer_feed&quot;, &quot;extension_feed_item&quot;, &quot;feed&quot;, &quot;feed_item&quot;, &quot;feed_item_set&quot;, &quot;feed_item_set_link&quot;, &quot;feed_item_target&quot;, &quot;feed_mapping&quot;, &quot;feed_placeholder_view&quot; tables and some columns have been removed, and some new columns have been added. If you’d like the full list of removed and added objects, please contact us. Other improvements implemented in this release include a new SIMILARITY string function and a new SYSADMIN.sendEmailToAdmins utility procedure for contacting server administrators via email. We’ve also worked on some of the system tables: the SYSLOG.QueryLogs and SYSLOG.JobLogs system tables have been moved to views and deprecated due to performance degradation. Here is where to find the information now:	For information about running queries: SYSLOG.ActiveQueries table;			For query execution history: SYSLOG.QueryExecutionHistory table;			For information about running jobs: SYSLOG.ActiveJobs table;			For job execution history: SYSLOG.JobExecutionHistory table.	
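For example, a minimal sketch of what monitoring queries look like against the new objects (assuming you have permission to read them; the available columns are not listed here, so SELECT * is used purely for illustration):
-- queries that are currently running
SELECT * FROM SYSLOG.ActiveQueries;
-- completed query history (previously read from SYSLOG.QueryLogs)
SELECT * FROM SYSLOG.QueryExecutionHistory;
-- running jobs and completed job history (previously read from SYSLOG.JobLogs)
SELECT * FROM SYSLOG.ActiveJobs;
SELECT * FROM SYSLOG.JobExecutionHistory;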
Accordingly, we’ve adapted the Studio to the new structure of query and job logs. The SYSADMIN.ScheduleJobs system table now includes information about the start and end times of the last job execution - they’re in the newly added columns &quot;lastStartTime&quot;, &quot;lastStartExecTime&quot;, and &quot;lastEndTime&quot;. As for bug fixes, we’ve fixed several bugs affecting Snowflake: one where wrong results would be returned when selecting strings with backslashes; another causing data source creation to fail if tables contained the VECTOR data type; and a third one preventing loading more than 10,000 tables or views. To resolve the latter, we’ve added the importer.loadMetadataWithJdbc model property to the Snowflake data source. When set to false (as it is by default), all tables in the Snowflake data source will be loaded during metadata retrieval, which may cause delays if the number of tables exceeds 10,000. If you wish to prioritize performance, set this property to true - however, only 10,000 tables will be loaded in this case. We’ve also fixed a bug where SQL jobs did not terminate sessions properly upon completion, leading to memory leaks. Also, we’ve resolved the bug preventing the INITCAP function from capitalizing words starting after a special character, and fixed the bug where dropping and recreating a table in the same procedure resulted in an error on the subsequent insert statement in the code block. Now everything works as it should. As for the connectors, we’ve added a new Shopify GraphQL connector to the family. Of our existing connectors, AWIN got new startdate and endate procedure parameters, Amazon Advertising DSP now supports the DSP API, and we’ve worked on the Google Ads API connector to ensure compatibility with v20. To do this, we’ve removed several procedures (&quot;Performance_ExtensionFeedItem&quot;, &quot;Performance_FeedItem&quot;, &quot;Performance_FeedPlaceholder&quot;) and several fields (&quot;campaign_dynamic_search_ads_setting_feeds&quot; field in all procedures and &quot;campaign_criterion_location_group_feed&quot; in the &quot;Performance_Location&quot; procedure). As for bug fixes, we’ve resolved an issue with BingAds where data source creation failed with an authentication error, another issue with Freshdesk where pagination failed to retrieve more than one page, and yet another issue with Walmart where the WFS inventory report inserted duplicates. Now all is well. 
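As a small illustration of the INITCAP fix (the input string is hypothetical, and the expected output is inferred from the bug description above), a word that starts right after a special character is now capitalized as well:
-- previously the word after the hyphen stayed lowercase; with the fix it is capitalized too
SELECT INITCAP(&#039;cdata-virtuality release notes&#039;);
-- expected result: &#039;Cdata-Virtuality Release Notes&#039;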
Here are all issues in this release: Server	DVCORE-9017 (New Feature): Add &quot;SYSADMIN.sendEmailToAdmins&quot; procedure to allow users to contact server administrators via email			DVCORE-8780 (New Feature): Add SIMILARITY string function			DVCORE-8997 (Improvement): Integrate Salesforce CData JDBC driver			DVCORE-8985 (Improvement): Google Ads API: update to v20			DVCORE-8979 (Improvement): Improve security by disabling weak TLS protocols and cipher suites			DVCORE-8977 (Improvement): Enforce complex password policy			DVCORE-7177 (Improvement): Snowflake: add OAuth authentication via Microsoft Entra ID			DVCORE-8743 (Improvement): Snowflake: add support for key-pair authentication			DVCORE-8976 (Improvement): Snowflake: adjust role mapping behavior to the changes in DEFAULT_SECONDARY_ROLES on the Snowflake side			DVCORE-8911 (Improvement): Improve the memory consumption of the procedure instruction permission check cache			DVCORE-8895 (Improvement): Clustering: add a setting to force all jobs to run on the primary node			DVCORE-8770 (Improvement): Clustering: distribute the uploaded license			DVCORE-8894 (Improvement): Apply user permissions to job logs			DVCORE-8893 (Improvement): Apply permissions to query logs			DVCORE-8528 (Improvement): Prevent updates on the &quot;SYSLOG.QueryLogs&quot; system table and switch logging behavior to purely appending			DVCORE-8527 (Improvement): Prevent updates on the &quot;SYSLOG.JobLogs&quot; system table and switch logging behavior to purely appending			DVCORE-8526 (Improvement): Extend the &quot;SYSADMIN.ScheduleJobs&quot; system table to include information about the start and end times of the last job execution			DVCORE-9006 (Bug Fix): SQL jobs do not terminate sessions properly upon completion, causing memory leaks			DVCORE-8993 (Bug Fix): The driver details for CData JDBC drivers cannot be collected if one of the drivers lacks a logo			DVCORE-8932 (Bug Fix): Snowflake: wrong results when selecting strings with backslashes			DVCORE-8924 (Bug Fix): Recreation of the temporary table in the inner procedural block after dropping it fails			DVCORE-8923 (Bug Fix): INITCAP function does not capitalize words after a special character			DVCORE-8917 (Bug Fix): EXECUTE IMMEDIATE raises a permission error when the same query run as a non-dynamic one does not fail			DVCORE-8907 (Bug Fix): Snowflake: cannot load more than 10000 tables or views			DVCORE-8899 (Bug Fix): Snowflake: data source creation fails if tables contain VECTOR data type			DVCORE-8890 (Bug Fix): Procedure block cannot define a temporary table if its name is used in any called procedure			DVCORE-8881 (Bug Fix): Creation and deletion of tables does not update the dependency graph and metadata of dependent views			DVCORE-8864 (Bug Fix): Dropping and recreating a table using the same set of columns in the same procedure results in an error on the subsequent select statement in the code block			DVCORE-8673 (Bug Fix): Clustering: descriptions of views differ between nodes after running concurrent queries			DVCORE-8562 (Bug Fix): Error message is incorrect for calling a non-existent function			DVCORE-8550 (Bug Fix): Unclear error message is returned when creating a table with a duplicate name			DVCORE-8360 (Bug Fix): Virtual Databases: no data in &quot;SYSADMIN.VirtualSchemas&quot; and &quot;SYSADMIN.DataSources&quot; system tables for custom virtual databases			DVCORE-8201 (Bug Fix): BigQuery: RECORD data type with multiple nested repeated records does not get flattened			DVCORE-7939 (Bug Fix): 
Drop index fails with NullPointerException if the table has three or more columns			DVCORE-7809 (Bug Fix): Materialized table cannot be used when original source is not available after the server restart	 Studio	DVCORE-8823 (Improvement): Adapt the Studio to the new structure of query logs			DVCORE-8822 (Improvement): Adapt the Studio to the new structure of job logs	 Connectors	SQL-980 (New Feature): Shopify GraphQL: create connector			SQL-1064 (Improvement): AWIN: add procedure startdate and endate parameters			SQL-1061 (Improvement): Google Ads API: ensure compatibility with v20			SQL-375 (Improvement): Amazon Advertising DSP: add support for DSP API			SQL-1067 (Bug Fix): BingAds: data source creation fails with authentication error			SQL-1059 (Bug Fix): Freshdesk: pagination fails to retrieve more than one page			SQL-1056 (Bug Fix): Walmart: WFS inventory report inserts duplicates</description>
            <category>CData Virtuality [Archived]</category>
            <pubDate>Thu, 02 Oct 2025 18:04:14 +0200</pubDate>
        </item>
                <item>
            <title>Securely Connect to PostgreSQL via SSH with the CData ODBC Driver (No PuTTY Needed)</title>
            <link>https://community.cdata.com/cdata-drivers-45/securely-connect-to-postgresql-via-ssh-with-the-cdata-odbc-driver-no-putty-needed-1697</link>
            <description>Accessing PostgreSQL databases hosted in the cloud (like Amazon RDS or Azure) often requires routing through an SSH server, which can be tricky with manual port forwarding and tools like PuTTY.With the CData PostgreSQL ODBC Driver, you can set up a secure connection in just minutes — no extra SSH tools required. Key Highlights:	 Built-in SSH tunneling — no PuTTY, no manual port configs.			 Quick setup — configure a DSN once and connect instantly.			 Works with Amazon RDS for PostgreSQL + AWS EC2 jump servers (and other cloud setups).			 Query live PostgreSQL data from your favorite BI, analytics, or ETL tools.	 	 Full step-by-step guide with screenshots: Read the KB Article</description>
            <category>CData Drivers</category>
            <pubDate>Wed, 01 Oct 2025 14:37:53 +0200</pubDate>
        </item>
                <item>
            <title>Connect Go Applications to Web APIs Using net/http and CData API Server</title>
            <link>https://community.cdata.com/cdata-drivers-45/connect-go-applications-to-web-apis-using-net-http-and-cdata-api-server-1696</link>
            <description>Hey CData Community!If you&#039;re a fan of Go (Golang) and you’ve been wishing your apps could tap into real-time data without building complex connectors or wrestling with custom drivers, your wish just came true.We&#039;ve just released a guide on how to use Go’s net/http and encoding/json packages to connect to a data source table exposed via CData API Server - but that’s just the beginning. This pattern works for any of the 300+ sources we support.Refer to learn more: https://www.cdata.com/kb/articles/golang-http-apiserver.rst</description>
            <category>CData Drivers</category>
            <pubDate>Wed, 01 Oct 2025 13:10:45 +0200</pubDate>
        </item>
                <item>
            <title>CData Virtuality Cloud Agent: Simplifying Hybrid Data Connectivity</title>
            <link>https://community.cdata.com/cdata-virtuality-archived-94/cdata-virtuality-cloud-agent-simplifying-hybrid-data-connectivity-1695</link>
            <description>Hey CData Community!Got cloud apps that need to talk to on-prem data?CData Virtuality’s Cloud Agent is your secure, no-firewall-hacks-needed bridge between cloud deployments and private networks. Whether it’s SQL Server hiding behind a VPN or a legacy system in your basement server rack — Cloud Agent connects it all. Seamlessly. Securely.Check out the guide and start unifying your hybrid data: https://www.cdata.com/kb/articles/cdata-virtuality-cloud-agent.rst</description>
            <category>CData Virtuality [Archived]</category>
            <pubDate>Wed, 01 Oct 2025 13:06:29 +0200</pubDate>
        </item>
                <item>
            <title>Using CMDEXEC with SF_Mirror when xp_cmdshell is restricted</title>
            <link>https://community.cdata.com/cdata-dbamp-49/using-cmdexec-with-sf-mirror-when-xp-cmdshell-is-restricted-1692</link>
            <description>We are upgrading from DBAMP v.4.1.8 to v.22.  Because we are restricted from using xp_cmdshell, most of our SQL Server Agent jobs run sf_replicate.  If I’m understanding the documentation correctly, we won’t be able to call sf_replicate directly on v.22.  The CMDEXEC looks like this on 4.1.8:  &#039;&quot;C:\Program Files\DBAmp\DBAmpNet2.exe&quot; replicate pkchunk,batchsize(25000) Account MyServerName  MyDatabase SALESFORCE&#039;Will the call have to look something like this on v.22? &quot;C:\Program Files\CData\CData DBAmp\bin\DBAmpAZ.exe&quot; MirrorCopy Account MyServerName MyDataBase SALESFORCE pkchunk,batchsize(25000)  </description>
            <category>CData DBAmp</category>
            <pubDate>Fri, 26 Sep 2025 18:42:42 +0200</pubDate>
        </item>
                <item>
            <title>Custom ArcScript Operations</title>
            <link>https://community.cdata.com/cdata-arc-48/custom-arcscript-operations-1682</link>
            <description>Is it possible to create my own Operations that I can call via &amp;lt;arc:call&amp;gt;? Or simple reusable methods/functions using &amp;lt;arc:include&amp;gt; perhaps that can return data? Or?</description>
            <category>CData Arc</category>
            <pubDate>Thu, 25 Sep 2025 15:34:44 +0200</pubDate>
        </item>
                <item>
            <title>ArcScript to determine file type</title>
            <link>https://community.cdata.com/cdata-arc-48/arcscript-to-determine-file-type-1691</link>
            <description>I have a trading partner who’s sending EDI with a file extension (.xml) and a Content-Type: text/plain and they’re also sending EPCIS xml files which are also of Content-Type: text/plain.  I need to determine which files are legit xml/edi and route them accordingly. I use a script on my EDI files to add additional headers, but I don’t think this will work on the xml EPCIS files because they are not EDI. &amp;lt;!-- code I use to add additional headers to edi files --&amp;gt;&amp;lt;arc:set attr=&quot;edi.File&quot; value=&quot;[FilePath]&quot; /&amp;gt;&amp;lt;arc:call op=&quot;x12Scan&quot; in=&quot;edi&quot; out=&quot;edi&quot;&amp;gt;  &amp;lt;arc:enum item=&quot;edi&quot;&amp;gt;    &amp;lt;arc:set attr=&quot;output.header:[_attr]&quot; value=&quot;[_value]&quot; /&amp;gt;  &amp;lt;/arc:enum&amp;gt;&amp;lt;/arc:call&amp;gt;&amp;lt;arc:set attr=&quot;output.filepath&quot; value=&quot;[filepath]&quot; /&amp;gt;&amp;lt;arc:push item=&quot;output&quot; /&amp;gt;Since the EPCIS files are true xml, I feel like this will bomb on the op=”x12Scan”.  Anyone have a suggestion for adding a script to check the actual content-type of inbound files? Thanks</description>
            <category>CData Arc</category>
            <pubDate>Wed, 24 Sep 2025 23:27:03 +0200</pubDate>
        </item>
                <item>
            <title>🚀 CData Connect Cloud is Now CData Connect AI </title>
            <link>https://community.cdata.com/cdata-connect-ai-98/cdata-connect-cloud-is-now-cdata-connect-ai-1689</link>
            <description>Hey CData Community! We’re excited to share that CData Connect Cloud is now CData Connect AI . This update brings a new name and fresh AI capabilities, while keeping everything you already rely on exactly the same. Here’s the quick breakdown:  What’s New:  AI-ready out of the box  Support for Model Context Protocol (MCP)  Seamless integration with tools like ChatGPT, Claude, Copilot &amp;amp; Crew AI  What Stays the Same:  Your connectors, credentials &amp;amp; datasets  Your configurations &amp;amp; pricing ️ The same trusted platform experience So it’s business as usual — only now smarter, faster, and ready for the AI era. Check out the full announcement here: https://www.cdata.com/blog/connect-cloud-is-connect-ai  </description>
            <category>CData Connect AI</category>
            <pubDate>Wed, 24 Sep 2025 16:03:40 +0200</pubDate>
        </item>
                <item>
            <title>Connect AI has arrived!</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/connect-ai-has-arrived-1688</link>
            <description>We’re extremely excited at CData to announce the launch of our newest product today, Connect AI! As a data connector company, it has always been our mission to provide the best available connectivity wherever it’s needed. Over the past year, the introduction of Model Context Protocol (MCP) for AI has created a wave of demand for AI data connectivity we are excited to meet. Connect AI is a first-of-its-kind managed Model Context Protocol (MCP) platform to build, manage, and scale data connections for AI. Generate responses from AI that: 	Are more accurate 		Grounded in real business data 		Turn AI from cool demo to critical business copilot. 	The platform provides all the governance and security controls that businesses need to feel safe connecting AI to actual data. Getting set up is easy – most sources only need a login. Querying real data from AI can take a matter of minutes. Learn more and try it out yourself with a 14-day free trial: www.cdata.com/ai  </description>
            <category>CData Connect AI</category>
            <pubDate>Wed, 24 Sep 2025 16:02:16 +0200</pubDate>
        </item>
                <item>
            <title>How to send messages to a specific channel in MS Teams using SQL via CData Drivers.</title>
            <link>https://community.cdata.com/cdata-drivers-45/how-to-send-messages-to-a-specific-channel-in-ms-teams-using-sql-via-cdata-drivers-1687</link>
            <description>CData Microsoft Teams Drivers make it easy to integrate Microsoft Teams data into your applications without heavy coding. By exposing Teams APIs through familiar SQL interfaces (ODBC, JDBC, ADO.NET), you can query Teams data, send messages, and even automate channel interactions directly from your existing tools.In this article, you’ll learn how to:Use the SendMessage stored procedure to post messages to a specific Teams channel	Retrieve Team IDs and Channel IDs from Teams or driver queries	Send rich content including HTML messages, high-importance alerts, and @mentions	Leverage the driver to manage Teams data for deeper automationTo learn more, read: Send Messages to a Specified Microsoft Teams Channel with CData Drivers</description>
            <category>CData Drivers</category>
            <pubDate>Wed, 24 Sep 2025 13:17:39 +0200</pubDate>
        </item>
                <item>
            <title>🍂Greetings from CData!🍂 Sync Q3 2025 Release Highlights</title>
            <link>https://community.cdata.com/cdata-sync-47/greetings-from-cdata-sync-q3-2025-release-highlights-1686</link>
            <description>Fall is here and so is CData Sync’s Q3 2025 release! Our latest updates are designed to help you unlock more value from your data, faster and more efficiently.  Enhanced CDC Catalog Expansion: You asked, we delivered Sync now supports DB2 protocol AS400/iSeries and MySQL for even broader data replication capabilities!  Reverse ETL Catalog Expansion: New destinations now supported, including Salesforce Pardot, Salesforce Marketing Cloud, Sage Intacct, and Veeva Vault to enhance your reverse ETL capabilities  Open Delta Table Support: Now available for all your favorite data lake destinations such as: Amazon S3, Azure Blob Storage, Google Cloud Storage, and Azure Data Lake Storage powering your delta integrations across Databricks, Fabric, and more!  Faster File Replication: CSV, Avro, Parquet, and One Lake Mirroring are now 90% faster for smoother large-workload handling.  Generic JDBC Connector: Unlock source-only connectivity and schema discovery for databases beyond officially supported connectors.  Full Feature Documentation: For a comprehensive understanding of all new features, visit our Current Release Documentation. Ready to Try These Features? 	Start with a Free Trial: Experience the new features firsthand and see the difference they make. 		Already a CData user? Upgrade now to enhance your data integration and management capabilities! 	Don’t forget, if you download the latest version of CData Sync, you can easily obtain a new license key through our self-service CData Portal. Our support team is here to assist with any questions or guidance you might need. CData is dedicated to bringing smooth data integration, fresh features, and endless possibilities!</description>
            <category>CData Sync</category>
            <pubDate>Tue, 23 Sep 2025 17:27:35 +0200</pubDate>
        </item>
                <item>
            <title>CData Arc Q3 2025 Release: AI Mapping, Stronger Security, Redesigned UX</title>
            <link>https://community.cdata.com/product-updates/cdata-arc-q3-2025-release-ai-mapping-stronger-security-redesigned-ux-1685</link>
            <description>CData Arc v25.3 delivers game-changing enhancements: AI-assisted EDI mapping, ICAP-based content scanning, enterprise-grade security, and a redesigned admin UX.Highlights	 AI-Assisted XML Mapping – Smart mapping suggestions with OpenAI or Ollama LLMs for faster onboarding.			️ ICAP Connector – Real-time antivirus, filtering, and data-loss prevention built into workflows.			 Enterprise Security Upgrades – Unified SSO, simplified FIPS compliance, and HMAC-authenticated webhooks.			 Redesigned UI – Streamlined Settings, cleaner XML mapping, and improved message traceability.	Arc v25.3 isn’t just an update—it’s a leap forward in secure, intelligent, scalable data exchange. Read the full blog here: CData Arc Release: Q3 2025 </description>
            <category></category>
            <pubDate>Mon, 22 Sep 2025 20:01:21 +0200</pubDate>
        </item>
                <item>
            <title>Download -&gt; Send Binary File</title>
            <link>https://community.cdata.com/cdata-arc-48/download-send-binary-file-1679</link>
            <description>I am trying to use httpGet to retrieve binary files, then push them through via a Script connector: &amp;lt;arc:set item=&quot;httpGetInput&quot; attr=&quot;url&quot; value=&quot;=params.pdfUrl]&quot; /&amp;gt;&amp;lt;arc:call op=&quot;httpGet&quot; in=&quot;httpGetInput&quot; out=&quot;response&quot;&amp;gt;  &amp;lt;arc:set attr=&quot;output.data&quot; value=&quot;=response.http:content]&quot; /&amp;gt;  &amp;lt;arc:set attr=&quot;output.filename&quot; value=&quot;test.pdf&quot; /&amp;gt;  &amp;lt;arc:push item=&quot;output&quot; /&amp;gt;&amp;lt;/arc:call&amp;gt;It downloads the file, and sends it through, however the file is garbled, I assuming this is a encoding problem somewhere.Downloading the files directly from the source URL, the files render fine.Said connector is inside an API Request, with Response content type set to “application/pdf”:There must be something I am missing. </description>
            <category>CData Arc</category>
            <pubDate>Fri, 19 Sep 2025 18:07:30 +0200</pubDate>
        </item>
                <item>
            <title>Unable to map json response from Rest connector</title>
            <link>https://community.cdata.com/cdata-arc-48/unable-to-map-json-response-from-rest-connector-1680</link>
            <description>I am new to arcESB and we have a basic requirement where we need to call a rest service, get the data in json format and same data i need to use to later while using smtp connector (filename need to be modified). I am unable to map the data, i tried using arcScript in event section of rest connector but it seems to be not working. Any suggestion will be really appreciated. thanks in advance. </description>
            <category>CData Arc</category>
            <pubDate>Fri, 19 Sep 2025 15:37:11 +0200</pubDate>
        </item>
                <item>
            <title>CData Foundations 2025 - Day 2 Featured Speakers</title>
            <link>https://community.cdata.com/foundations-95/cdata-foundations-2025-day-2-featured-speakers-1681</link>
            <description> AI without trusted data is just expensive guesswork.September 24th is when we cut through the AI hype and get to what actually matters—the real strategies, proven implementations, and hard-won insights that separate AI success stories from expensive failures.These industry leaders aren&#039;t here to pitch you on AI&#039;s potential. They&#039;re here to show you exactly how they&#039;re making it work in the trenches, democratizing data modeling, and building the trusted foundations that turn AI investments into competitive advantages. Meet some of your Day 2 AI powerhouse lineup: HARSHIT KOHLI SAmazon Web Services (AWS)] - Enterprise AI implementation reality checks from the front lines Sarab Narang (ServiceNow) - AI-powered automation that actually delivers ROI Sami Hero (Ellie.ai) - Bringing data modeling to the masses with embedded AI Jessie Smith (Ataccama) - Why AI readiness starts with trusted data foundations Philip Stephens (Google) - EXCLUSIVE keynote on Agent-to-Agent interoperability reshaping enterprise AI Benjamin Lehrer (First Water Finance) - Scaling financial analytics without scaling headcountThese sessions will transform how you think about AI implementation. No fluff. No theory. Just results that matter.Grab your spot: https://bit.ly/4kG6cJU</description>
            <category>Foundations</category>
            <pubDate>Fri, 19 Sep 2025 15:29:32 +0200</pubDate>
        </item>
                <item>
            <title>XML map connector - implode/join</title>
            <link>https://community.cdata.com/cdata-arc-48/xml-map-connector-implode-join-1677</link>
            <description>We are trying the map multiple fields in a XML (EDI) into one field in another XML in a XML Map connector. We use the implode formatter for this. But it isn’t working, we only get the first item. But we want all the NTE02 fields joined with a newline as separator. This is the script we use: xpath(&quot;FunctionalGroup/TransactionSet/TX-00401-940/NTE/NTE02&quot;) | implode] Source XML:&amp;lt;FunctionalGroup&amp;gt;            &amp;lt;TransactionSet&amp;gt;        &amp;lt;TX-00401-940 type=&quot;TransactionSet&quot;&amp;gt;            &amp;lt;NTE type=&quot;Segment&quot;&amp;gt;                &amp;lt;!--Note                Reference Code--&amp;gt;                &amp;lt;NTE01&amp;gt;&amp;lt;!--Warehouse                    Instruction--&amp;gt;WHI&amp;lt;/NTE01&amp;gt;                &amp;lt;!--Description--&amp;gt;                &amp;lt;NTE02&amp;gt;PALLETS COVERED WITH STRETCH HOOD&amp;lt;/NTE02&amp;gt;            &amp;lt;/NTE&amp;gt;            &amp;lt;!--Note/Special            Instruction--&amp;gt;            &amp;lt;NTE type=&quot;Segment&quot;&amp;gt;                &amp;lt;!--Note                Reference Code--&amp;gt;                &amp;lt;NTE01&amp;gt;&amp;lt;!--Warehouse                    Instruction--&amp;gt;WHI&amp;lt;/NTE01&amp;gt;                &amp;lt;!--Description--&amp;gt;                &amp;lt;NTE02&amp;gt;ONLY CP7 PALLETS&amp;lt;/NTE02&amp;gt;            &amp;lt;/NTE&amp;gt;            &amp;lt;!--Note/Special            Instruction--&amp;gt;            &amp;lt;NTE type=&quot;Segment&quot;&amp;gt;                &amp;lt;!--Note                Reference Code--&amp;gt;                &amp;lt;NTE01&amp;gt;&amp;lt;!--Warehouse                    Instruction--&amp;gt;WHI&amp;lt;/NTE01&amp;gt;                &amp;lt;!--Description--&amp;gt;                &amp;lt;NTE02&amp;gt;PLEASE LOAD MAX 2 BATCHES&amp;lt;/NTE02&amp;gt;            &amp;lt;/NTE&amp;gt;        &amp;lt;/TX-00401-940&amp;gt;    &amp;lt;/TransactionSet&amp;gt;&amp;lt;/FunctionalGroup&amp;gt; Destination XML:&amp;lt;Remarks&amp;gt;PALLETS COVERED WITH STRETCH HOOD&amp;lt;/Remarks&amp;gt;</description>
            <category>CData Arc</category>
            <pubDate>Fri, 19 Sep 2025 09:05:52 +0200</pubDate>
        </item>
                <item>
            <title>LDAP Authentication Setup in CData Sync with ApacheDS</title>
            <link>https://community.cdata.com/editions-90/ldap-authentication-setup-in-cdata-sync-with-apacheds-1678</link>
            <description>This article explains LDAP authentication and how to set it up with an ApacheDS on-premises server.   What is LDAP? LDAP (Lightweight Directory Access Protocol) is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an IP network. It&#039;s most commonly used for authentication, authorization, and storing user-related information in enterprise systems. At its core, LDAP is a protocol: a set of rules that client and server systems follow to communicate with a directory service. A directory is a specialized database optimized for read-heavy access, structured in a hierarchical, tree-like form (like folders and subfolders). The protocol enables applications and services to query, modify, and authenticate against directory-based data (like usernames, passwords, email addresses, user groups, devices, etc.).  What is a Directory Service? A directory service is a central repository that stores and organizes data to allow for easy lookup. Think of it like a phonebook for computers and users. Examples: Microsoft Active Directory (AD) 	OpenLDAP (open-source LDAP implementation) 	ApacheDS (Java-based LDAP server) 	Novell eDirectory  How Does LDAP Work? LDAP follows a client-server model: The LDAP client sends a request (e.g., to authenticate a user). 	The LDAP server processes the request and responds with results.  Common Use Cases of LDAP: Authentication and Authorization: Centralized login system for users across systems (e.g., enterprise logins). Directory Services: Storing and retrieving hierarchical data (e.g., employee contact information). Single Sign-On (SSO): Integrating with identity providers and SSO platforms. Access Control: Granting role-based access based on LDAP group membership.  LDAP Structure (Key Concepts): LDAP stores data in a hierarchical tree structure: Entries: Basic unit of information, identified by a Distinguished Name (DN). Attributes: Key-value pairs describing the entry (e.g., uid, cn, sn, mail). Object Classes: Define the schema for entries, including required and optional attributes.  Example Structure:  LDAP and Authentication: LDAP is often used to centralize user authentication, enabling single sign-on (SSO) or unified credential systems. Applications authenticate against the LDAP server instead of managing their own user stores. Example Authentication Flow: 1) A user logs into an application. 2) The app connects to the LDAP server and sends a Bind request with the user&#039;s DN and password. 3) If the credentials are valid, the server responds with success; otherwise, failure.  Now I will explain the steps to configure LDAP authentication within CData Sync using ApacheDS, a freely available LDAP server.  Set Up ApacheDS Server  Step 1: Download and Install ApacheDS 1) Go to the official site: https://directory.apache.org/apacheds/ 2) Download the latest installer for your OS (Windows, macOS, or Linux). 3) Run the installer and follow the prompts to complete the installation.  Step 2: Start ApacheDS Service Once installed, start the ApacheDS server using the Manage ApacheDS (management GUI) app or by launching the service.   Step 3: Install Apache Directory Studio (Management Tool) 1) Download from: https://directory.apache.org/studio/ 3) Install and open the Studio. 3) Create a new connection: 3.1 Hostname: localhost 3.2 Port: 10389 (default)   3.4 Click on Check Network Parameter. If success, then Click on Next. 
3.5 Bind DN or user: uid=admin,ou=system 3.6 Password: secret (default unless changed)   You’ll now be connected to the LDAP directory.  Step 4: Create LDAP Entries (Users/Groups) Create a Base DN 1) Right-click on the LDAP browser → New Context Entry 2) Choose &quot;Create entry from scratch&quot;   3) Use object classes: 3.1 top 3.2 person 3.3 inetOrgPerson 3.4 organizationalPerson   4) Provide domain components: 4.1) Set Parent: ou=system  4.2) RDN: uid = root     5) On the final step (attributes), enter manually: 5.1 cn = Test User 5.2 sn = User 5.3 userPassword = root   Then click Finish.  Add Organizational Units: 1) Create ou=users and ou=groups under the base DN: 2) Right-click dc=example,dc=com → New → New Entry 3) Use object class: organizationalUnit   4) Add ou=users and ou=groups 5) Click on finish.  Add a Test User: 1) Right-click ou=users → New → New Entry 2) Choose object classes: inetOrgPerson, organizationalPerson, person, top 3) Attributes example: 3.1 uid: jdoe 3.2 cn: John Doe 3.3 sn: Doe 3.4 mail: jdoe@example.com 3.5 userPassword: password123 (make sure to encode as plain or MD5)  Set up CData Sync to use LDAP authentication. Step 1: Enable Authentication in CData Sync 1) Open CData Sync 2) The following steps configure sync to use LDAP to authenticate users if you use the embedded jetty server.  Step 2: Configure LDAP Authentication via sync.properties 3) If the sync.properties file does not already exist in the Sync installation directory, generate it using the following command: java -jar sync.jar -GenerateProperties 4) The following mandatory setting instructs the embedded Jetty server to use LDAP for authentication: ;; LDAP cdata.loginService.ldap.enabled=true  This setting instructs the embedded Jetty server to use LDAP for user authentication. 5) Include the minimum required properties based on your LDAP server setup. At a minimum, Sync requires:  cdata.loginService.ldap.hostname cdata.loginService.ldap.bindDn cdata.loginService.ldap.bindPassword 6) You may also configure additional optional properties as needed. 
Below is a full list of available settings: cdata.loginService.ldap.userIdAttribute cdata.loginService.ldap.debug cdata.loginService.ldap.forceBindingLogin cdata.loginService.ldap.bindPassword cdata.loginService.ldap.roleMemberAttribute cdata.loginService.ldap.useLdaps cdata.loginService.ldap.roleBaseDn cdata.loginService.ldap.bindDn cdata.loginService.ldap.userPasswordAttribute cdata.loginService.ldap.hostname cdata.loginService.ldap.userRdnAttribute cdata.loginService.ldap.roleObjectClass cdata.loginService.ldap.port cdata.loginService.ldap.authenticationMethod cdata.loginService.ldap.userBaseDn cdata.loginService.ldap.contextFactory cdata.loginService.ldap.userObjectClass cdata.loginService.ldap.roleNameAttribute  7) Here’s an example configuration section for a local ApacheDS LDAP server that I have configured at my end:  ;; LDAP cdata.loginService.ldap.enabled=true cdata.loginService.ldap.bindDn=uid=root,ou=system cdata.loginService.ldap.hostname=127.0.0.1 cdata.loginService.ldap.bindPassword=root cdata.loginService.ldap.port=10389 cdata.loginService.ldap.debug=true cdata.loginService.ldap.authenticationMethod=Simple cdata.loginService.ldap.userBaseDn=ou=system cdata.loginService.ldap.userObjectClass=organizationalPerson cdata.loginService.ldap.userRdnAttribute=uid cdata.loginService.ldap.userIdAttribute=uid cdata.loginService.ldap.userPasswordAttribute=userPassword cdata.loginService.ldap.forceBindingLogin=true cdata.loginService.ldap.roleBaseDn=ou=system cdata.loginService.ldap.roleNameAttribute=cn cdata.loginService.ldap.roleMemberAttribute=member cdata.loginService.ldap.roleObjectClass=groupOfNames cdata.loginService.ldap.useLdaps=false   Step 3: Add LDAP Users in CData Sync To allow LDAP-authenticated users to log in, their usernames must also exist in Sync&#039;s internal user list. To add them: 1) Log in to Sync as an administrator. 2) Go to the gear icon () → Users. 3) Add each LDAP user with the exact same username as in your LDAP directory.  For example, if your LDAP server has users user01 and user02, ensure these same users are added in Sync. Once done, you can stop the application.  Step 4: Test the Configuration Once you have created the sync.properties file with the necessary LDAP settings, added your LDAP users to Sync, and confirmed that their configuration is correct based on your LDAP server’s requirements, you are ready to test the functionality. Start Sync using java -jar sync.jar or as a service. When the login screen appears, enter a username and password for an LDAP user you&#039;ve also added in Sync. Sync will: 1) Match the input username against its internal user list. 2) Authenticate the credentials against your LDAP server. If everything is configured correctly, the user will be logged into the application. If login is successful, you are now using ApacheDS for authentication.  </description>
            <category>Editions</category>
            <pubDate>Thu, 18 Sep 2025 17:43:36 +0200</pubDate>
        </item>
                <item>
            <title>dbQuery check for null results</title>
            <link>https://community.cdata.com/cdata-arc-48/dbquery-check-for-null-results-1676</link>
            <description>Is there a better way to check for null results from a dbQuery, I feel dirty for coming up with this: &amp;lt;arc:set attr=&quot;db.query&quot;&amp;gt;select null as something;&amp;lt;/arc:set&amp;gt;&amp;lt;arc:call op=&quot;dbQuery&quot; in=&quot;db&quot; out=&quot;results&quot;&amp;gt;    &amp;lt;arc:if exp=&quot;&#039;&#039;results.*]&#039; == &#039;&amp;lt;table&amp;gt;&amp;lt;/table&amp;gt;&#039;&quot;&amp;gt;        &amp;lt;!-- handle null --&amp;gt;    &amp;lt;/arc:if&amp;gt;     ...&amp;lt;/arc:call&amp;gt; </description>
            <category>CData Arc</category>
            <pubDate>Tue, 16 Sep 2025 11:28:59 +0200</pubDate>
        </item>
                <item>
            <title>CData Foundations 2025 - Day 1 Featured Speakers</title>
            <link>https://community.cdata.com/foundations-95/cdata-foundations-2025-day-1-featured-speakers-1675</link>
            <description> Fragmented data architecture is the silent saboteur of enterprise innovation. September 17th is when the real talk begins. We&#039;ve assembled a powerhouse lineup of data leaders who aren&#039;t here to sugarcoat the challenges—they&#039;re here to share the proven foundation strategies that actually work.These industry veterans are taking the stage to reveal how they&#039;re rethinking data strategy, modernizing legacy architectures, and building AI-ready foundations that deliver measurable results.Their stories will leave you equipped to turn data chaos into competitive advantage. Meet some of your Day 1 data rockstar speakers: Elise Georis (Databricks) - Hydrating the lakehouse without breaking existing systems Eric Tome (Databricks) - Converging platforms for next-generation AI architecture Andrew Chabot (FinThrive) - Real-world semantic layer deployment at scale Michael Docteroff (Argano) - Strategic frameworks for AI-era data decisions Sven Wilbert (BearingPoint) - Democratizing data across enterprise silos Dean Sund (NCCO) - 100+ year brand&#039;s digital transformation blueprintThese sessions will transform how you think about enterprise data strategy.Why are you waiting? Register now: https://bit.ly/475YPZ6 </description>
            <category>Foundations</category>
            <pubDate>Sun, 14 Sep 2025 18:05:09 +0200</pubDate>
        </item>
                <item>
            <title>How to Cancel Your Subscription for CData Connect Spreadsheets (Free Plan)</title>
            <link>https://community.cdata.com/cdata-connect-ai-98/how-to-cancel-your-subscription-for-cdata-connect-spreadsheets-free-plan-1674</link>
            <description>CData Connect Cloud makes it simple to manage your subscription through self-service directly in the platform. If you have signed up for a trial and wish to cancel, you can complete the process from your own account. Follow the steps below to complete the cancellation process. Step 1: Log in to your account 	Go to https://cloud.cdata.com/. 		Enter your registered email and password to sign in. 	  Step 2: Open the settings page 	From the left-hand navigation panel, click on Settings. 		Under Settings, select Billing. 	 Step 3: On the Billing page, you will see details about your current plan. If no billing information was added, you should see that you are on the Connect Spreadsheets FREE plan. Step 4: Click the Manage Subscription button to open the subscription management options. Step 5: Cancel your subscription 	Select Cancel Subscription and follow the prompts to confirm the cancellation. 	 If you encounter any issues during the process, please contact the Connect Cloud support team at cloudsupport@cdata.com. </description>
            <category>CData Connect AI</category>
            <pubDate>Fri, 12 Sep 2025 23:31:07 +0200</pubDate>
        </item>
                <item>
            <title>Write Operations on Sage Intacct with Embedded Web Services Credentials</title>
            <link>https://community.cdata.com/cdata-drivers-45/write-operations-on-sage-intacct-with-embedded-web-services-credentials-1672</link>
            <description>Hey CData Community!CData Sage Intacct drivers support write operations using embedded Web Services credentials with Basic authentication. That means you can create, update, or delete Intacct objects—like vendors, engagements, or custom fields—without managing separate SenderID credentials.• Simplifies setup: no need to procure or configure additional credentials• Enables full read/write access with familiar authentication flow• Works across CData Intacct drivers and integration tools that support embedded credentialsCheck out the full article here: https://www.cdata.com/kb/articles/intacct-write-embedded.rst</description>
            <category>CData Drivers</category>
            <pubDate>Fri, 12 Sep 2025 00:20:10 +0200</pubDate>
        </item>
                <item>
            <title>How to access the Derby Database in CData Sync AMI Edition</title>
            <link>https://community.cdata.com/editions-90/how-to-access-the-derby-database-in-cdata-sync-ami-edition-1671</link>
            <description>In this article, we will demonstrate how to install the Derby database engine and how to connect to the Sync application database in the AMI edition.  Warning: Modifying the application database directly may cause unexpected errors or data corruption if done incorrectly. Always back-up your database folder before making any changes, and if you need assistance, please contact support@cdata.com  Start the instance and then access the CLI:  1. Download Derby binary using the following command in the opt directory: wget https://dlcdn.apache.org//db/derby/db-derby-10.17.1.0/db-derby-10.17.1.0-bin.tar.gz This will download the Apache Derby database engine (version 10.17.1.0).  2. Extract the archive: tar -xzf db-derby-10.17.1.0-bin.tar.gz Unpacks the Derby installation files.  3. Create installation directory sudo mkdir /opt/ApacheDB Makes a directory `/opt/ApacheDB` to store the Derby binaries.  4. Move Derby into place sudo mv db-derby-10.17.1.0-bin /opt/ApacheDB Places Derby under `/opt/ApacheDB`.   5. Set environment variable: export DERBY_HOME=/opt/ApacheDB/db-derby-10.17.1.0-bin  Points the environment to where Derby is installed, so you can run Derby tools.  6. Launch Derby interactive SQL tool (ij) $DERBY_HOME/bin/ij Starts Derby’s SQL shell (`ij`) to run SQL commands against a Derby database. The ij is a command-line tool that comes with Apache Derby.  Note: This version of Derby requires Java 19 or later. Please upgrade Java if you encounter an UnsupportedClassVersionError using the following command:  sudo apt update sudo apt install openjdk-21-jdk-headless -y  7. Connect to CData Sync’s Derby DB connect &#039;jdbc:derby:/opt/sync/db/cdata_sync&#039;; This will connect to the Derby database, allowing you to run queries against it.   8. Example query: Select * from sync_job_history;   </description>
            <category>Editions</category>
            <pubDate>Thu, 11 Sep 2025 23:05:49 +0200</pubDate>
        </item>
                <item>
            <title>JDBC Driver for Google Analytics GA4 data model</title>
            <link>https://community.cdata.com/cdata-drivers-45/jdbc-driver-for-google-analytics-ga4-data-model-1663</link>
            <description>Hi all, I kindly need help on this one. While checking the documentation CData JDBC Driver for Google Analytics - GoogleAnalytics4 Data Model , I noticed that there are no metrics or dimension related to TAX, DISCOUNT, SHIPPING ,REFUND. These informations were present in the universal model.From the documentation say this in the EcommPurchaseItemIdReport (see boxed part of screenshot) , but the Column names and the description do not match. and after querying the data, it proved that they indeed do not match .   </description>
            <category>CData Drivers</category>
            <pubDate>Wed, 10 Sep 2025 20:16:45 +0200</pubDate>
        </item>
                <item>
            <title>Access GCP VMs via SFTP with the CData JDBC SFTP Driver</title>
            <link>https://community.cdata.com/cdata-drivers-45/access-gcp-vms-via-sftp-with-the-cdata-jdbc-sftp-driver-1665</link>
            <description>Hey CData Community!Need to transfer files between a Google Cloud Platform VM and your SQL tools or ETL workflows? Check out one of our latest articles, which walks you through how to securely connect to GCP VMs via SFTP - whether you&#039;re using WinSCP, FileZilla, or the CData SFTP JDBC Driver.Check out the full article here: https://www.cdata.com/kb/articles/jdbc-sftp-gcp.rst</description>
            <category>CData Drivers</category>
            <pubDate>Wed, 10 Sep 2025 16:26:24 +0200</pubDate>
        </item>
            </channel>
</rss>
