Hi @quangdutran809
Based on your description, it looks like you are using the Airtable JDBC driver with the Metadata API disabled. By default, the EnableMetadataAPI property is set to true in the latest version of the driver; however, there are a few reasons I can think of why you might not be using the Metadata API:
1) You might be using an older version of the driver (this functionality was fully supported in v23, so any older version will not have it). If so, I would recommend you download and upgrade to the v23 version of the JDBC driver for Airtable.
2) You are using an up-to-date version of the driver, but you might not have the Metadata API enabled in your Airtable instance. You can read more about that, and how to enable the functionality, here: https://cdn.cdata.com/help/JAJ/jdbc/RSBAirtable_p_EnableMetadataAPI.htm
However, if you do not want to use the Metadata API, or are unable to due to restrictions on your Airtable account, you can still use the Airtable driver. When EnableMetadataAPI is disabled, the driver fetches metadata via a column scan, which is exactly what is happening here.
There are a couple of options for correcting the metadata found by the column scan.
1) Raise the RowScanDepth property value: https://cdn.cdata.com/help/JAJ/jdbc/RSBAirtable_p_RowScanDepth.htm . This property controls the number of rows the driver scans to determine column metadata.
2) Manually define the schema. You can set "Other=GenerateSchemaFiles=OnStart" to generate static schema files in the Location folder. These schema files can be customized to add any missing columns or desired types.
Let me know if you have any questions about this.
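To illustrate the first option, here is a minimal Java sketch of how a connection URL with these properties could be assembled. The token value and the RowScanDepth number are placeholders, and the property names are taken from the driver documentation linked above; a real connection would then be opened with DriverManager.getConnection once the driver jar is on the classpath:

```java
// Sketch: assembling a CData Airtable JDBC URL with the properties
// discussed above. Token and depth values are placeholders.
public class AirtableUrlSketch {
    static String buildUrl(String token, int rowScanDepth, boolean enableMetadataApi) {
        return "jdbc:airtable:"
            + "PersonalAccessToken=" + token + ";"
            + "EnableMetadataAPI=" + enableMetadataApi + ";"
            + "RowScanDepth=" + rowScanDepth + ";";
    }

    public static void main(String[] args) {
        // With the Metadata API disabled, raise RowScanDepth so the
        // column scan sees enough rows to infer the column types.
        String url = buildUrl("xxxx", 500, false);
        System.out.println(url);
        // Next step (requires the driver on the classpath):
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(url);
    }
}
```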
Hi @Ethem Q
Thanks for sharing.
The driver is freshly downloaded from the CData website, so I believe EnableMetadataAPI is turned on by default. Even if I set it explicitly to true, the metadata still returns FLOAT for the small-number column.
My table is for testing, so it contains only a few rows; setting RowScanDepth would not really solve the problem.
About defining the schema, could you please be more specific? Also, I did not expect getting the correct type for a numeric column to be so difficult. Is this because Airtable has only a single numeric type?
Hi @quangdutran809
While it is true that EnableMetadataAPI defaults to true for the driver, that does not necessarily mean the driver will use it automatically. From my understanding, you can only use it if it is enabled in your account (it might require an Enterprise subscription). If you are authenticating via a PersonalAccessToken, you will need to make sure it has the 'schema.bases:read' scope. Can you confirm this?
Also, did you try specifying the BaseId and the name of the Table in question just in case?
https://cdn.cdata.com/help/JAJ/jdbc/RSBAirtable_p_EnableMetadataAPI.htm
However, if using the Metadata API is not an option, I think you can first disable EnableMetadataAPI in the driver and consider the other options mentioned earlier.
Regarding the row scan, did you actually try adjusting the RowScanDepth value? If so, what happens?
As for the last option, this simply requires you to add the following in the connection string:
jdbc:airtable:PersonalAccessToken=xxxx;......;Other=GenerateSchemaFiles=OnStart;
After testing the connection and trying to load the tables, you will notice that schema files have been generated for all available tables in the following location: %APPDATA%\CData\Airtable Data Provider\Schema. Navigate to this location and open the TableName.rsd file in a text editor; there you can change the type to integer, for example. Save the changes and try loading the table again.
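For reference, the generated TableName.rsd is an XML file, and the column definitions appear as attribute entries inside it. A fragment might look roughly like the following (the column name here is illustrative, and the exact namespaces and surrounding elements will match whatever your generated file contains; you only need to edit the xs:type value for the affected column):

```xml
<!-- Illustrative fragment of a generated .rsd schema file.
     Change xs:type for the affected column, e.g. from "double" to "integer": -->
<attr name="SmallNumber" xs:type="integer" />
```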
Let me know if this works.
I am using the free trial plan of Airtable, so if the Metadata API requires an Enterprise subscription, it is not for me then.
Editing the .rsd file locally seems to be working: after changing the column type to integer, the metadata object returns INT.
Is this like a local cache? What if a type is changed in Airtable, or a new column is added? Does the file get overwritten, and do I need to update it again?
Many thanks @Ethem Q
Glad to hear that worked. This mechanism is designed to work with some of our dynamic drivers to handle scenarios like this one, where the driver is unable to detect the data type of a given column heuristically. Since most of the tools the drivers are loaded into need some kind of type validation, this solution meets that requirement.
However, keep in mind that if there are changes to the table structure, or if a custom field's name or data type changes, you will need to regenerate the schema file for the changes to be reflected.