# Importing Data into Databricks

Once your synthetic data is written as Parquet files to a storage location (Local Filesystem, Azure Data Lake Storage (ADLS), or Amazon S3), follow these steps to import it back into Databricks:

1. **Access your Databricks workspace**: Go to your Databricks workspace and navigate to the Data tab.
2. **Select your data source**:
   * For ADLS, select Azure Data Lake.
   * For Amazon S3, choose Amazon S3.
   * If using the Local Filesystem, upload your files to a cloud storage service such as ADLS or S3 first.
3. **Mount the storage**: Mount your cloud storage to Databricks by following the Databricks mounting documentation for [ADLS](https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html) or [Amazon S3](https://docs.databricks.com/data/data-sources/aws/amazon-s3.html).
4. **Read the Parquet files**: Use the Databricks Data tab or a notebook to load the Parquet files into a DataFrame. For details, see the [Databricks guide on reading Parquet files](https://docs.databricks.com/data/data-sources/read-parquet.html).
5. **Create or register a table**: Use Databricks SQL commands or the user interface to create a temporary or permanent table from the loaded data; Parquet files generated by Syntho work with standard Databricks SQL commands. For example:

```sql
CREATE TABLE example_table
USING PARQUET
LOCATION 'dbfs:/FileStore/tables/example.parquet';
```

This command creates an external table over the Parquet data at the specified path. Refer to the [Databricks documentation](https://docs.databricks.com/data/tables.html) for more information on managing tables.
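Steps 3 to 5 can also be performed entirely from a notebook. The sketch below shows one way to do this for ADLS, assuming it runs inside a Databricks notebook (where `spark` and `dbutils` are predefined); the storage account, container, secret scope, and paths are placeholder names for illustration, and the account-key mount shown here is one of several authentication options covered in the mounting documentation linked above:

```python
# Runs inside a Databricks notebook: `spark` and `dbutils` are provided by the runtime.
# All names and paths below are illustrative placeholders.

# Step 3: mount the ADLS Gen2 container (skip if it is already mounted).
dbutils.fs.mount(
    source="abfss://my-container@mystorageaccount.dfs.core.windows.net/",
    mount_point="/mnt/synthetic-data",
    extra_configs={
        "fs.azure.account.key.mystorageaccount.dfs.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage-account-key")
    },
)

# Step 4: read the Parquet files into a DataFrame.
df = spark.read.parquet("/mnt/synthetic-data/example.parquet")

# Step 5: register a temporary view so the data can be queried with SQL.
df.createOrReplaceTempView("example_table")
spark.sql("SELECT COUNT(*) FROM example_table").show()
```

A temporary view exists only for the current session; to make the data permanently available, use the `CREATE TABLE` statement shown above instead.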
