View and adjust generation settings
The final step in the data generation wizard allows you to adjust the Generation settings.
Data generation capabilities
In the second step of the data generation wizard, you can configure additional options that control how Syntho prepares your database and optimizes writing performance. These settings help ensure smooth data generation and can significantly improve performance when working with large datasets.

Destination database preparation
Ensures the destination database is reset to a clean state before data is written. This reduces the risk of conflicts or errors during the write process.
Data truncation for included tables
When enabled, all tables included in your workspace will be truncated (emptied) before new data is written. This guarantees that old or inconsistent data will not interfere with the generation process.
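The truncation step can be sketched as follows. This is an illustrative example using Python's built-in sqlite3 module with a hypothetical `customers` table; Syntho's actual implementation and SQL dialect may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'old row')")

# Empty every included table before new data is written.
included_tables = ["customers"]
for table in included_tables:
    # SQLite has no TRUNCATE statement; an unqualified DELETE empties the table.
    conn.execute(f"DELETE FROM {table}")
conn.commit()

assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 0
```

After this step, each included table is empty and ready to receive freshly generated rows.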
High performance writing
Provides options to speed up the data generation process by temporarily bypassing constraints and indexes.
Bypass foreign key constraints
Drops foreign key (FK) constraints during the write operation to avoid errors and improve performance. After writing, Syntho will attempt to re-add the FK constraints. This can greatly reduce write time when used in combination with a higher maximum number of connections.
⚠️ Note: This applies to all tables, including excluded ones.
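The idea behind this option can be illustrated with a small sqlite3 sketch. Note that SQLite suspends FK enforcement via a PRAGMA rather than dropping constraints, so this is only an analogy for what Syntho does on the destination database; the table names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parents (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE children (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER REFERENCES parents(id))"
)

conn.execute("PRAGMA foreign_keys = OFF")            # bypass checks for the bulk write
conn.execute("INSERT INTO children VALUES (1, 42)")  # parent row not written yet
conn.execute("INSERT INTO parents VALUES (42)")      # parent arrives later
conn.commit()
conn.execute("PRAGMA foreign_keys = ON")             # restore enforcement afterwards

# No orphaned rows remain once all tables have been written.
assert conn.execute("PRAGMA foreign_key_check").fetchall() == []
```

Because FK checks are suspended, tables can be written in any order and in parallel, which is why this pairs well with a higher maximum number of connections.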
Bypass indexes
Drops all indexes before writing and attempts to recreate them afterward. This can greatly reduce write time for large tables.
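The drop-write-recreate pattern looks roughly like this (an sqlite3 sketch with a hypothetical table and index; Syntho's actual statements depend on your database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (value INTEGER)")
conn.execute("CREATE INDEX idx_value ON measurements (value)")

conn.execute("DROP INDEX idx_value")  # drop before the bulk write
conn.executemany("INSERT INTO measurements VALUES (?)",
                 [(i,) for i in range(10_000)])
conn.execute("CREATE INDEX idx_value ON measurements (value)")  # recreate afterwards
conn.commit()

assert conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0] == 10_000
```

Rebuilding an index once over the full table is typically much cheaper than updating it incrementally on every inserted row.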
Identity writing
Temporarily disables identity/auto-increment settings to allow direct writes into those columns. After writing, identity behavior is restored.
Advanced generation settings
In the third step of the data generation wizard, the following advanced settings can be adjusted for your generation job:
Read batch size: The maximum number of data points (rows * columns) per batch to read from each table in the source database. The allowed value is an integer. The default batch size is 1M. You can increase the value to improve reading speed at the cost of higher memory usage, and vice versa.
Write batch size: The maximum number of data points (rows * columns) per batch to insert into each table in the destination database. The allowed value is an integer. The default batch size is 1M. You can increase the value to improve writing speed at the cost of higher memory usage, and vice versa.
Maximum number of connections: The maximum number of connections that can be made to the database during writing. A higher number can speed up the process, as it allows for more parallel operations. However, if you are reading from a production database, a higher number may not always be desirable.
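Because batch sizes are counted in data points (rows * columns), the number of rows moved per batch depends on the table's width. A small worked example, with hypothetical column counts:

```python
# Batch sizes are measured in data points (rows * columns), so wider tables
# move fewer rows per batch at the same setting.
def rows_per_batch(batch_size_datapoints: int, num_columns: int) -> int:
    return batch_size_datapoints // num_columns

# With the default 1M data points, a 20-column table moves 50,000 rows per batch.
assert rows_per_batch(1_000_000, 20) == 50_000
# A wider, 100-column table moves only 10,000 rows per batch.
assert rows_per_batch(1_000_000, 100) == 10_000
```

This is useful to keep in mind when tuning the settings for tables of very different widths.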
Known Issues
Parameter Limit Error
When data is inserted into the destination database, you may encounter an error indicating that the SQL query parameter limit has been exceeded. This occurs because large write batch sizes can exceed the driver's maximum allowable parameters for a single query. For instance, you might see an error like:
'The SQL contains 16970 parameter markers, but 1000010 parameters were supplied'
If you experience this error, reducing the "Write batch size" is recommended to stay within the driver's parameter limits. As a starting point, you can reduce the batch size from 1 million data points (1M) to 500k data points (0.5M), and halve the write batch size again if the errors persist. The exact limit can vary per environment.