Databricks

Source and Destination Databases

Important

This connector can only be used as a source database. The generated data can be written to the Local Filesystem, Azure Data Lake Storage (ADLS), or Amazon Simple Storage Service (S3) as Parquet files.

Before you begin

Before you begin, gather this connection information:

  • Name and port number of the server that hosts the database you want to connect to

  • The name of the database that you want to connect to

  • HTTP path to the data source

  • Personal Access Token

  • In Databricks, find your cluster server hostname and HTTP path using the instructions in Construct the JDBC URL on the Databricks website.

If you first need to load data into Databricks, see Importing Data into Databricks.

Connect and set up the workspace

  1. Launch Syntho and select Connect to a database (or Create workspace).

  2. Under Connection details, choose Databricks from the Type dropdown.

  3. Fill in the required fields:

    • Server hostname → e.g. adb-1111111111111111.0.azuredatabricks.net

    • Catalog name → e.g. demo_catalog

    • Database name → e.g. marketing_db

    • HTTP Path → e.g. sql/protocolv1/o/1234567890123456/0000-111111-demo123

    • Port number → default is 443

    • Personal Access Token → (See Personal Access Tokens on the Databricks website for information on access tokens.)

    • Warehouse ID (optional) → the SQL Warehouse to query through (recommended). Use this when possible. It is usually faster and more stable for large databases. If omitted, Syntho falls back to JDBC retrieval via the Spark driver.

  4. Click Create Workspace to complete the setup. If Syntho can't make the connection, verify that your credentials are correct. If issues persist, your computer may not be able to locate the server. Contact your network administrator or database administrator for support.
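For reference, the fields from step 3 map onto a standard Databricks JDBC connection URL. The sketch below is illustrative only (the `databricks_jdbc_url` helper is hypothetical and not part of Syntho; the URL shape follows the Databricks JDBC driver's documented format, and the values are the placeholder examples from step 3):

```python
def databricks_jdbc_url(hostname: str, http_path: str, token: str, port: int = 443) -> str:
    # Standard Databricks JDBC URL shape: HTTP transport over SSL;
    # AuthMech=3 means username/password auth, where the username is the
    # literal string "token" and the password is your personal access token.
    return (
        f"jdbc:databricks://{hostname}:{port}/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

url = databricks_jdbc_url(
    "adb-1111111111111111.0.azuredatabricks.net",
    "sql/protocolv1/o/1234567890123456/0000-111111-demo123",
    "<personal-access-token>",
)
```

If the connection fails, printing and inspecting this URL (with the token redacted) is a quick way to spot a mistyped hostname or HTTP path.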

Supported Databricks versions

The table below provides an overview of the supported Databricks versions and their corresponding Apache Spark versions.

| Databricks Version | Spark Version |
| ------------------ | ------------- |
| 16.2               | 3.5.0         |
| 15.4 LTS           | 3.5.0         |
| 14.3 LTS           | 3.5.0         |

Note: Version 13 is no longer supported.

Supported data types

The following table summarizes the current support limitations for various data types when using connectors with Databricks. It indicates what's supported per generator type.

| Data Type     | AI-powered Generation | Mockers | Mask  | Calculated Columns |
| ------------- | --------------------- | ------- | ----- | ------------------ |
| BINARY        | False                 | True*   | True* | True*              |
| BOOLEAN       | False                 | False   | True* |                    |
| DATE          | False                 |         |       |                    |
| TIMESTAMP     | False                 |         |       |                    |
| TIMESTAMP_NTZ | False                 |         |       |                    |
| ARRAY         | False                 | True*   | True* | True*              |
| STRUCT        | False                 | False   | False |                    |
| MAP           | False                 | True*   | True* | True*              |
| VARIANT       | False                 | True*   | True* | True*              |
| OBJECT        | False                 | True*   | True* | True*              |
| ENUM          | False                 | False   | False | False              |

* Some data types are not actively supported, yet some generators may still show True for them. This means you can apply the generator even though the type is not actively supported. Duplication is fully supported for these data types.

Limitations

  • When entering catalog, database, or schema names, use lowercase letters only. Names that contain capital letters in Databricks must still be entered in lowercase to ensure a proper connection.

  • Schema, table, and column names containing single quotes (') or backticks (`) are not supported.
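The two limitations above can be checked up front before creating a workspace. A minimal sketch (the `check_identifier` helper is hypothetical, for illustration only, and not part of Syntho):

```python
def check_identifier(name: str) -> str:
    """Validate a catalog/database/schema/table/column name against the
    limitations listed above. Hypothetical helper, not part of Syntho."""
    # Names containing single quotes or backticks are not supported.
    if "'" in name or "`" in name:
        raise ValueError(f"unsupported character in name: {name!r}")
    # Catalog/database/schema names must be entered in lowercase.
    if name != name.lower():
        raise ValueError(f"enter the name in lowercase: {name!r}")
    return name

check_identifier("marketing_db")   # accepted
```

Calling it with a name such as `Demo_Catalog` or one containing a backtick raises a `ValueError` before any connection attempt is made.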
