As we make changes to Workspaces, it is important that we use version control. We also need to control deployment between test and production CData Arc instances. It would be good if a Workspace could have an associated Git repo (HTTPS or SSH) and implement at least `git commit` and `git push`. Ideally `git clone` and `git pull` would be useful too.
TL;DR: Salesforce announced on Wednesday, Oct 15, 2025 that it is deprecating the "Use Any API Client" permission within API Access Control. If your Salesforce org has API Access Control enabled, only allow-listed Connected Apps will be able to use the API going forward. This may stop DBAmp connections as early as Nov 4, 2025 unless you take action.

Who is affected? Organizations that:

- Use DBAmp to connect to Salesforce, and
- Have API Access Control enabled in Salesforce (this setting restricts API access to an allow-list of Connected Apps).

Not sure whether API Access Control is enabled? Ask your Salesforce admin or internal support to confirm.

Timeline:

- Oct 15, 2025: Salesforce announces deprecation of "Use Any API Client."
- Nov 4, 2025 (as early as): Impact may begin. If your org relies on "Use Any API Client," DBAmp connections can fail without an approved Connected App in place.

What to do now (action items):

1. Confirm your org state. Ask your Salesforce admin: is API Access Control enabled?
2. Tell us you're impacted. Email [email protected] with:
   - Your org name(s) and whether each is Prod or Sandbox
   - Whether API Access Control is enabled
   - The critical DBAmp workloads at risk (backups, syncs, reporting, etc.)
   - Your preferred maintenance window(s) / timeline
3. Prepare for Connected App allow-listing. We'll guide you through using an approved Connected App and share early-access builds as needed so your DBAmp jobs continue uninterrupted.

The DBAmp team will be sending brief reminders weekly through Nov 4. If you're impacted, please reply today so we can prioritize you.

The DBAmp Team
In the programming world, regex is a very useful tool to have in your toolbelt. However, the only regex tool available within CData Virtuality is ARRAY_LIKE_REGEX, which functions solely as a boolean validator: it checks whether a regex expression matches within the array. It would be great to have a regex tool for pure string matching, with the ability to extract group results from the match.
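A minimal sketch of the kind of function being requested. REGEXP_EXTRACT is a hypothetical name and signature here, not an existing CData Virtuality function, and "customers" is an example table:

    -- Hypothetical: extract the domain (capture group 1) from an email column.
    -- REGEXP_EXTRACT(input, pattern, group_index) is illustrative only; it
    -- does not exist in CData Virtuality today.
    SELECT
        email,
        REGEXP_EXTRACT(email, '@(.+)$', 1) AS domain
    FROM customers;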
Connecting TIBCO Spotfire Information Services to a CData Virtuality server requires a custom JDBC data source template. Below we outline the driver details and provide a correctly structured XML template (rooted at <jdbc-type-settings>) for Spotfire's latest version (Spotfire 12+). This template defines the JDBC driver class, connection URL pattern, and authentication properties needed to connect to a CData Virtuality server on localhost with default credentials.

Driver and Connection Details for CData Virtuality

Before creating the template, ensure the JDBC driver JAR for CData Virtuality is installed on the Spotfire Server. Copy datavirtuality-jdbc.jar into the Spotfire Server's classpath (for example, the tomcat/lib directory). The CData Virtuality JDBC driver class name and URL format are as follows:

- Driver class: `com.datavirtuality.dv.jdbc.Driver`
- JDBC URL format: `jdbc:cdatavirtuality:<vdb-name>@mm[s]://<host>:<port>`

Here, <vdb-name> is the name of the Virtual Database (VDB) to connect to; the primary VDB is typically named "datavirtuality". Use @mm for an unencrypted connection or @mms for SSL. The default ports are 31000 for mm and 31001 for mms.

For example, to connect to the primary VDB on a local server over the default port, the JDBC URL would be `jdbc:cdatavirtuality:datavirtuality@mm://localhost:31000`, with username admin and password admin. We will use these defaults in the template (placeholders can be adjusted for your environment).

Template Structure and Configuration

Spotfire data source templates define a unique <type-name>, the JDBC driver, URL pattern, and various settings/capabilities for the data source. The structure follows the same pattern as the CData MongoDB example, adjusted for the Virtuality driver. Key elements include:

- <type-name>: a label for this data source type (e.g. "DataVirtuality").
- <driver>: the fully qualified JDBC driver class (as given above).
- <connection-url-pattern>: the JDBC URL prefix including protocol and any static parts. We include the jdbc:cdatavirtuality: prefix (and default VDB and protocol) here, then supply host/port as separate properties.
- <ping-command>: a simple test query Spotfire can use to verify the connection. We use SELECT 1 (which the Virtuality engine will execute as a trivial query).
- <connection-properties>: individual connection parameters (key/value pairs). For the Virtuality driver we provide:
  - Server - the hostname (e.g. localhost).
  - Port - the port (e.g. 31000 for default non-SSL).
  - User - the username for the Virtuality server (e.g. admin).
  - Password - the password (e.g. admin).
  These properties will be combined with the URL pattern to form the complete JDBC connection string at runtime. (The CData driver accepts Server and Port as properties similarly to other CData connectors.)

JDBC Capabilities and SQL Settings

Additional tags define how Spotfire should interact with this source. We include typical settings similar to other templates, for example:

- Fetch and batch sizes for data retrieval (<fetch-size>10000</fetch-size>, <batch-size>100</batch-size>).
- Supported metadata features (<supports-catalogs>true</supports-catalogs>, <supports-schemas>true</supports-schemas>, etc.). Data Virtuality's virtual DB supports schemas (for underlying data sources and INFORMATION_SCHEMA) but operates within a single catalog (the VDB), so it's safe to leave catalogs enabled (Spotfire will see one catalog corresponding to the VDB). Procedures can be marked unsupported (false) since we typically query tables/views.
- Quoting and naming patterns for identifiers (using $$name$$ placeholders wrapped in quotes, as in other CData templates) to ensure Spotfire generates correct SQL.
- Temp table patterns and creation/drop commands for Spotfire's internal use (same conventions as other JDBC templates).
- <data-source-authentication>false</data-source-authentication> to indicate that the template will supply its own credentials (via the User/Password properties) rather than using Spotfire's stored credentials.

All these elements are wrapped in the <jdbc-type-settings> root tag. Below is the complete XML template incorporating the above settings.

XML Template for CData Virtuality Data Source

Use the following XML as the data source template in the Spotfire Server Configuration (under Configuration > Data Source Templates > New). This template is designed for the CData Virtuality JDBC driver on a local server with default admin/admin credentials; adjust the host/port or credential placeholders as needed for your environment. The template is compatible with Spotfire Server 12 (and later):

    <jdbc-type-settings>
      <type-name>DataVirtuality</type-name>
      <driver>com.datavirtuality.dv.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:cdatavirtuality:datavirtuality@mm://</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <connection-properties>
        <connection-property>
          <key>Server</key>
          <value>localhost</value>
        </connection-property>
        <connection-property>
          <key>Port</key>
          <value>31000</value>
        </connection-property>
        <connection-property>
          <key>User</key>
          <value>admin</value>
        </connection-property>
        <connection-property>
          <key>Password</key>
          <value>admin</value>
        </connection-property>
      </connection-properties>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <max-column-name-length>32</max-column-name-length>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <supports-distinct>true</supports-distinct>
      <supports-order-by>true</supports-order-by>
      <column-name-pattern>"$$name$$"</column-name-pattern>
      <table-name-pattern>"$$name$$"</table-name-pattern>
      <schema-name-pattern>"$$name$$"</schema-name-pattern>
      <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
      <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
      <column-alias-pattern>"$$name$$"</column-alias-pattern>
      <string-literal-quote>'</string-literal-quote>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
      <procedure-table-type-name></procedure-table-type-name>
      <date-format-expression>$$value$$</date-format-expression>
      <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
      <time-format-expression>$$value$$</time-format-expression>
      <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
      <java-to-sql-type-conversions>
        VARCHAR($$value$$)
        VARCHAR(255)
        INTEGER
        BIGINT
        REAL
        DOUBLE PRECISION
        DATE
        TIME
        TIMESTAMP
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
      <data-source-authentication>false</data-source-authentication>
      <lob-threshold>-1</lob-threshold>
      <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
      <credentials-timeout>86400</credentials-timeout>
    </jdbc-type-settings>

This XML can be saved and uploaded as a new data source template in Spotfire Server's configuration. After adding the template, restart the Spotfire Server, then create an Information Services data source using this template. You should be able to connect to the CData Virtuality server and build information links against the virtual databases.
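As a quick sanity check once the template is in place, a simple metadata query run against the VDB (for example from a SQL client using the same driver and URL as above) should return results. This relies on the INFORMATION_SCHEMA exposure the article mentions; the exact column names below follow the standard layout and may vary by Virtuality version:

    -- Should return one row per table/view exposed by the VDB, confirming
    -- that schemas are visible before you build information links.
    SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
    FROM INFORMATION_SCHEMA.TABLES
    ORDER BY TABLE_SCHEMA, TABLE_NAME;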
To streamline access to information and enhance your experience, we're fully transitioning to the CData Community as our central hub for discussions and knowledge sharing. As part of this move, the Data Virtuality Community will soon be sunset. Going forward, all new posts and updates will be shared exclusively in the CData Community. Most older posts were migrated last year, and the latest release announcements are now also available in our Knowledge Base, so you'll continue to have access even after the sunset:

- Virtuality 4.8 Release Notes
- Virtuality 4.9 Release Notes
- Virtuality 4.10 Release Notes

Not a member yet? Join the CData Community today to connect with our experts, access product updates, and be part of a growing community.
Hi all, hopefully this is okay to post here. I'm an experienced CData Arc developer looking to use my skills to help other organisations get the most out of CData Arc. If you have a project or challenge you wish to discuss, please feel free to get in touch. Search @russell-jerseypost for examples of my answers here. https://www.linkedin.com/in/russellhutson
While it’s great that I can query my data across different connections, sometimes it takes a while for the query results to populate. It would be nice if I could:

- Schedule specific datasets to query on set days/times
- Be notified when the query has completed processing
- View the results at my leisure
While I am currently able to access my data across all the connections I have set up inside of Connect Cloud, I spend a lot of time navigating to different datasets manually versus having them organized in one place. Having the ability to organize my data across all my connections and their originating data sources would be a huge time saver. For example, placing different forms of datasets inside folders would streamline my ability to find what I’m looking for faster.
I’m a newbie when it comes to writing SQL, and sometimes my manager wants me to create queries that include things like filtering and joins, which I am unable to do. It would be great if there were a wizard or easy step-by-step approach for beginners like myself to help fast-track the output of the query I am looking to execute.
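For anyone in the same position, here is a small generic example of the two things mentioned above, a join and a filter; the "orders" and "customers" tables are hypothetical, not specific to any CData product:

    -- Join two tables and filter the result: list 2024 orders together
    -- with the name of the customer who placed each one.
    SELECT o.order_id,
           c.customer_name,
           o.order_total
    FROM orders AS o
    INNER JOIN customers AS c
        ON o.customer_id = c.customer_id   -- the join condition
    WHERE o.order_date >= '2024-01-01'     -- the filter
    ORDER BY o.order_date;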
When viewing a Connection, it would be nice to see the dependencies on that Connection somewhere (i.e. the jobs that have tasks which reference that Connection).
Is there an area in the community for posting jobs or projects which the CData developer community may be interested in? I may have a number of projects which require resourcing on a short-term contract/freelance basis.
I would like to be able to do two things with Transformations:

- Execute a transformation at the start of a job, rather than just at the end (i.e. a "Before job" trigger).
- Execute a transformation against a Source, not just a Target.

I know I could do this by rigging up something with Job Events, but using the Transformation feature is so much cleaner.
Maybe it’s bad practice, but in my flow I use Script connectors to create data which I pass through the flow in Message Headers. Sometimes it’s useful to download the message, adjust it, and re-upload it. However, this only works for the XML body of a message; headers can’t be downloaded and re-uploaded. It would be useful if we could download the full message with headers, in a format which could be edited (XML), and be able to re-upload it into the connector.
While on the main Jobs view in Sync (/jobs.rst), the free-text Search bar only searches the names of the jobs (whereas the Job view, /job-details.rst, will search the Task names etc.). Sometimes you can’t remember which job a particular table’s sync lives in. If, from the Jobs view, you could search both the jobs and the tasks, that would be convenient.
We're thrilled to announce the latest enhancements to CData Sync, now with even more powerful features to supercharge your data integration and management:

🌐 Sync Cloud - Our renowned Sync product, previously available only on-premises, has now ascended to the cloud! Managed directly by us with unlimited volume, so you can experience the full power and scalability of cloud hosting.

💾 S3: Enhanced Incremental Replication - We've significantly enhanced our S3 Incremental Replication Load from Folder feature by fully supporting directory paths in S3. This update allows for more precise and flexible data management, enabling you to efficiently manage and synchronize your data across even the most complex folder structures.

🔄 CDC from Dynamics 365 - Keep your data fresh with our new Change Data Capture (CDC) support for Dynamics 365. Capture and sync changes in real time, ensuring your data remains current and accurate.

📚 Full Feature Documentation - For a comprehensive understanding of all new features, visit our Current Release Documentation.

Ready to try these features?

🆓 Start with a Free Trial - Experience the new features firsthand and see the difference they make.

🔝 Already a CData user? Upgrade now to enhance your data integration and management capabilities!

Here’s to a transformative 2024 with continuous innovation! 🚀

The CData Sync Team
Sometimes when deploying flows which handle a large number of messages, it’s easy to consume more disk space than expected. A way to identify the Connectors/Workspaces which are consuming the most disk space would help us find connectors which may have the wrong settings for keeping Sent messages or keeping Logs. Also, being able to control automatic file cleanup at the Connector or Workspace level, rather than just at the Settings (/settings.rst#advancedTab) level, would be great.
I would like to be able to add a short comment to some Jobs, or to Tasks within a Job: something to guide other developers about any pitfalls, or why a particular ingestion approach was taken. It would be great to have a "comment bar" that is editable on Jobs and Tasks, right below the tab strip.