9. Large workloads
Working with large databases can significantly impact the performance and success of your synthetic data generation jobs. These tips will help you configure your workspace for large workloads by minimizing memory consumption and optimizing execution speed.
To reduce memory usage and avoid potential timeouts or job failures, consider these strategies:
Reduce parallel connections: Lowering the number of concurrent connections reduces memory usage.
Decrease the batch size: Smaller batches consume less memory per operation.
Only enable resource-intensive options when necessary: This kind of processing consumes significant resources; turn it on only when absolutely necessary.
Limit the training data size (AI synthesis only): A smaller training sample speeds up processing and conserves resources.
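These settings live in the Syntho workspace configuration, but the batching principle behind them is general. The Python sketch below is only an illustration of that principle, not Syntho's API; the connection string, table name, and batch size are placeholder assumptions. It streams a large table in fixed-size batches so that only one batch sits in memory at a time:

```python
# A minimal sketch (not Syntho's own API): stream a large table in
# fixed-size batches so only one batch is held in memory at a time.
import pandas as pd
from sqlalchemy import create_engine

SOURCE_URI = "postgresql://user:password@host:5432/source_db"  # placeholder
BATCH_SIZE = 50_000  # smaller batches -> lower peak memory per operation

engine = create_engine(SOURCE_URI)
total_rows = 0

with engine.connect() as conn:
    # chunksize turns read_sql into an iterator of DataFrames instead of
    # materializing the entire table in memory at once.
    for batch in pd.read_sql("SELECT * FROM transactions", conn, chunksize=BATCH_SIZE):
        total_rows += len(batch)  # stand-in for the per-batch work

print(f"Processed {total_rows} rows in batches of {BATCH_SIZE}")
```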
To accelerate data generation for large-scale datasets, apply the following optimizations:
Increase parallel connections: More connections can speed up data reading and writing through parallel execution.
Enable schema-independent scheduling: By removing constraints in the destination schema, Syntho can parallelize processing based on the number of records instead of schema dependencies.
Write to Parquet instead of a database: Writing directly to a database is often slower. When dealing with very large datasets, consider exporting to efficient columnar file formats like Parquet.
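To illustrate the Parquet approach, here is a rough Python sketch (not Syntho's implementation; the batch generator and file name are placeholder assumptions) that appends batches to a single Parquet file with pyarrow rather than inserting rows into a database:

```python
# A minimal sketch: append synthetic batches to one Parquet file with pyarrow
# instead of inserting them row by row into a database.
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def generate_batches(n_batches=10, rows_per_batch=1_000):
    """Placeholder generator standing in for batches of synthetic rows."""
    for i in range(n_batches):
        start = i * rows_per_batch
        yield pd.DataFrame({
            "id": range(start, start + rows_per_batch),
            "amount": [1.0] * rows_per_batch,
        })

writer = None
for batch in generate_batches():
    table = pa.Table.from_pandas(batch, preserve_index=False)
    if writer is None:
        # Open the output file once, reusing the schema of the first batch.
        writer = pq.ParquetWriter("synthetic_transactions.parquet", table.schema)
    writer.write_table(table)  # each call appends one row group
if writer is not None:
    writer.close()
```

Because each batch becomes its own row group, memory stays bounded by the batch size, and the resulting file can still be bulk-loaded into a database later if required.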
Always aim to use the minimal viable dataset to validate your configurations before executing large jobs. Scaling up becomes much easier and more stable when you're confident in your setup.
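One way to assemble such a minimal dataset, sketched below under assumptions (this is generic SQL tooling, not a Syntho feature; the schema, table names, and sample size are placeholders), is to copy a small sample of each source table into a staging schema and run the trial configuration against that schema:

```python
# A minimal sketch (not a Syntho feature): copy a small sample of each source
# table into a staging schema and point the workspace at that schema first.
from sqlalchemy import create_engine, text

SOURCE_URI = "postgresql://user:password@host:5432/source_db"  # placeholder
SAMPLE_ROWS = 1_000                                             # placeholder
TABLES = ["customers", "accounts", "transactions"]              # placeholders

engine = create_engine(SOURCE_URI)
with engine.begin() as conn:
    conn.execute(text("CREATE SCHEMA IF NOT EXISTS staging"))
    for table in TABLES:
        # Note: a naive LIMIT ignores foreign-key consistency; for related
        # tables, sample parent rows first and filter children accordingly.
        conn.execute(text(
            f"CREATE TABLE IF NOT EXISTS staging.{table} AS "
            f"SELECT * FROM {table} LIMIT {SAMPLE_ROWS}"
        ))
```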