Databricks

Source and Destination Databases
Important
This connector can only be used as a source database. The generated data can be written to Local Filesystem, Azure Data Lake Storage (ADLS) or Amazon Simple Storage Service (S3) as Parquet files.
Before you begin
Before you begin, gather this connection information:
Name of the server that hosts the database you want to connect to and port number
The name of the database that you want to connect to
HTTP path to the data source
Personal Access Token
In Databricks, find your cluster server hostname and HTTP path using the instructions in Construct the JDBC URL on the Databricks website.
If you first need to load data into Databricks, see Importing Data into Databricks.
Connect and set up the workspace
Launch Syntho and select Connect to a database (or Create workspace).
Under Connection details, choose Databricks from the Type dropdown.
Fill in the required fields:
Server hostname → e.g. adb-1111111111111111.0.azuredatabricks.net
Catalog name → e.g. demo_catalog
Database name → e.g. marketing_db
HTTP Path → e.g. sql/protocolv1/o/1234567890123456/0000-111111-demo123
Port number → default is 443
Personal Access Token → see Personal Access Tokens on the Databricks website for information on access tokens.
Warehouse ID (optional) → the SQL Warehouse to query through. Using a warehouse is recommended where possible, as it is usually faster and more stable for large databases. If omitted, Syntho falls back to JDBC retrieval via the Spark driver.
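For readers who want to sanity-check their values before creating the workspace, the same details can be assembled into a Databricks JDBC URL of the form described in Construct the JDBC URL on the Databricks website. The sketch below is illustrative only: the hostname, HTTP path, and token are placeholders, and Syntho builds the connection for you from the form fields.

```python
# Illustrative sketch: assemble a Databricks JDBC URL from the same
# connection details entered in the form. All values are placeholders.
def build_jdbc_url(server_hostname: str, http_path: str, token: str, port: int = 443) -> str:
    # Databricks JDBC URLs combine the hostname, port, HTTP path, and a
    # personal access token (UID=token, PWD=<token>) into one string.
    return (
        f"jdbc:databricks://{server_hostname}:{port}/default;"
        f"transportMode=http;ssl=1;AuthMech=3;"
        f"httpPath={http_path};UID=token;PWD={token}"
    )

url = build_jdbc_url(
    server_hostname="adb-1111111111111111.0.azuredatabricks.net",
    http_path="sql/protocolv1/o/1234567890123456/0000-111111-demo123",
    token="dapiXXXXXXXXXXXXXXXX",  # placeholder personal access token
)
print(url)
```

If any field is wrong, the resulting URL makes it easy to spot which part (hostname, port, or HTTP path) does not match what Databricks reports for your cluster or warehouse.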
Click Create Workspace to complete the setup. If Syntho can't make the connection, verify that your credentials are correct. If issues persist, your computer may not be able to locate the server. Contact your network administrator or database administrator for support.
Supported Databricks versions
The table below provides an overview of the supported Databricks versions and their corresponding Apache Spark versions.

| Databricks version | Apache Spark version |
| --- | --- |
| 16.2 | 3.5.0 |
| 15.4 LTS | 3.5.0 |
| 14.3 LTS | 3.5.0 |
Note: Version 13 is no longer supported.
Supported data types
The following table summarizes the current support limitations for various data types when using connectors with Databricks. It indicates what's supported per generator type.

| Data type | Generator support (per generator type) |
| --- | --- |
| BINARY | False / True* / True* / True* |
| ARRAY | False / True* / True* / True* |
| MAP | False / True* / True* / True* |
| VARIANT | False / True* / True* / True* |
| OBJECT | False / True* / True* / True* |
| ENUM | False / False / False / False |
* These data types are not actively supported, yet some generators may still show True for them. This means you can apply the generator even though the type is not actively supported. Duplication is fully supported for these data types.
Limitations
Enter catalog, database, and schema names in lowercase. Names that contain capital letters in Databricks must still be entered in lowercase to ensure a proper connection.
Schema, table, and column names containing single quotes (') or backticks (`) are not supported.