Databricks

Source and Destination Databases

Important

This connector can be used only as a source database. The generated data can be written to Azure Data Lake Storage (ADLS) or Amazon Simple Storage Service (S3) as Parquet files.

Before you begin

Before you begin, gather this connection information:

  • Name and port number of the server that hosts the database you want to connect to

  • Name of the database that you want to connect to

  • HTTP path to the data source

  • Personal access token

In Databricks, you can find your cluster's server hostname and HTTP path by following the instructions in Construct the JDBC URL on the Databricks website.
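The pieces of connection information gathered above are the same ones that make up a Databricks JDBC URL. The following is a minimal sketch of how they fit together; the hostname, HTTP path, and token below are placeholders, and the authoritative parameter list is in Construct the JDBC URL on the Databricks website.

```python
# Sketch: assembling a Databricks JDBC URL from the connection details above.
# All values here are placeholders, not real credentials.
def build_jdbc_url(hostname: str, http_path: str, token: str, port: int = 443) -> str:
    """Build a Databricks JDBC URL (AuthMech=3 selects personal access token auth)."""
    return (
        f"jdbc:databricks://{hostname}:{port}/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

url = build_jdbc_url(
    "adb-1234567890123456.7.azuredatabricks.net",  # server hostname (placeholder)
    "/sql/1.0/warehouses/abc123",                  # HTTP path (placeholder)
    "dapiXXXX",                                    # personal access token (placeholder)
)
```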

Connect and set up the workspace

Launch Syntho and select Connect to a database, or under Create workspace, select Databricks. For a complete list of data connections, select More under From database. Then do the following:

  1. Enter the server hostname.

  2. Enter the catalog name.

  3. Enter the database name.

  4. Enter the HTTP Path to the data source.

  5. Enter your personal access token. (See Personal Access Tokens on the Databricks website for information on access tokens.)

  6. Select Create Workspace.

    If Syntho can't make the connection, verify that your credentials are correct. If you still can't connect, your computer is having trouble locating the server. Contact your network administrator or database administrator.
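If the credentials check out but the connection still fails, a quick network-level check can tell you whether the server is reachable at all before you involve an administrator. This is a hypothetical troubleshooting helper, not part of Syntho; it uses only the Python standard library.

```python
import socket

# Hypothetical helper: check whether the Databricks server hostname is
# reachable on the given port (443 by default) before retrying the connection.
def server_reachable(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False
```

If this returns False, the problem is name resolution or network routing rather than your credentials.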

Supported data types

The following table summarizes current data type support for the Databricks connector, indicating what is supported per generator type.

| Data Type | AI-powered generation | Duplicate, Mask |
| --------- | --------------------- | --------------- |
| Byte      | ☑️ | ☑️ |
| Short     | ☑️ | ☑️ |
| Integer   | ☑️ | ☑️ |
| Long      | ☑️ | ☑️ |
| Float     | ☑️ | ☑️ |
| Double    | ☑️ | ☑️ |
| Decimal   | ☑️ | ☑️ |
| String    | ☑️ | ☑️ |
| Binary    | ☐️ | ☑️ |
| Boolean   | ☑️ | ☑️ |
| Date      | ☑️ | ☑️ |
| Timestamp | ☑️ | ☑️ |
| Array     | ☐️ | ☑️ |
| Map       | ☐️ | ☑️ |
| Struct    | ☐️ | ☑️ |
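If you need to check a schema against this support matrix programmatically, the table can be expressed as a simple lookup. This is an illustrative sketch only; the names below are not part of the Syntho product.

```python
# Support matrix from the table above:
# data type -> (AI-powered generation, Duplicate/Mask)
SUPPORT = {
    "byte": (True, True), "short": (True, True), "integer": (True, True),
    "long": (True, True), "float": (True, True), "double": (True, True),
    "decimal": (True, True), "string": (True, True), "binary": (False, True),
    "boolean": (True, True), "date": (True, True), "timestamp": (True, True),
    "array": (False, True), "map": (False, True), "struct": (False, True),
}

def supports_ai_generation(data_type: str) -> bool:
    """Return True if the given Databricks data type works with AI-powered generation."""
    return SUPPORT.get(data_type.lower(), (False, False))[0]
```

For example, a String column qualifies for AI-powered generation, while Map and Struct columns fall back to Duplicate or Mask.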
