Go-live requirements
Use this page as the go-live requirements checklist for Syntho. Items labelled with a ⏰ icon usually take the most time; start them as early as possible. Appoint a single accountable implementation lead to oversee and coordinate all implementation activities end-to-end, including onboarding, architecture, infrastructure, security, DevOps, database access, compliance, approval processes, and use-case prioritization.
The Implementation Statement of Work Status Tracker contains the full list and latest status.
A - Scoping & success
B - Architecture & data flow
Architecture defined, documented and approved ⏰
Network zone placement decided
Intended data flow documented (production → source → Syntho → destination)
Security requirements agreed
Architecture diagram available
Data governance plan defined (if needed)
Data access model and approvals documented
Audit/compliance requirements documented
Data retention policy documented
C - Deployment, databases & users
C-1 - Deployment
Deployment method selected: Docker Compose or Kubernetes (Helm)
Deployment prerequisites met
Docker Compose: Prerequisites
Kubernetes (Helm): Prerequisites
Hardware sizing confirmed: Deployment overview
Registry access works (client can authenticate and pull images)
If required, outbound access to syntho.azurecr.io is whitelisted (or images are mirrored to a local registry)
License key received and available for deployment
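The registry checks above can be run from the deployment host ahead of time. A minimal sketch, assuming Docker is installed; the image name, tag, and credentials are placeholders for the ones Syntho provides:

```shell
# Sketch: verify registry access before deployment day.
# The registry host is from this checklist; <image-name>, <tag>,
# <username>, and <password> are placeholders.
REGISTRY=syntho.azurecr.io

# 1. Outbound reachability (TLS handshake on port 443; an HTTP 401
#    here is expected before login and still proves connectivity)
curl -sS -o /dev/null -w '%{http_code}\n' "https://$REGISTRY/v2/"

# 2. Authenticate with the credentials you received
docker login "$REGISTRY" -u "<username>" -p "<password>"

# 3. Pull one image to confirm end-to-end access
docker pull "$REGISTRY/<image-name>:<tag>"
```

If step 1 times out, outbound access to the registry is still blocked; if step 3 fails after a successful login, check whether images need to be mirrored to a local registry instead.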
Deployment instructions received (and shared with the deployment team) (Docker Compose, Kubernetes (Helm))
Infrastructure provisioned (compute, storage, networking, DNS as needed) ⏰
Infrastructure readiness confirmed (required compute/storage available, DNS prepared if used)
Syntho deployed and reachable by intended users
UI reachable over HTTP(S). DNS/TLS and ingress/proxy posture defined (if applicable) ⏰
If using DNS/TLS, hostname resolves and HTTPS works without browser warnings
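The DNS/TLS item above can be verified from the command line before asking users to test in a browser. A sketch, with `syntho.example.com` standing in for your actual hostname:

```shell
# Sketch: check DNS resolution and TLS for the Syntho UI.
# "syntho.example.com" is a placeholder for your real hostname.
HOST=syntho.example.com

# DNS: hostname resolves
getent hosts "$HOST"

# TLS: curl fails on untrusted or expired certificates, so a clean
# exit here roughly corresponds to "no browser warnings"
curl -sSf -o /dev/null "https://$HOST/" && echo "HTTPS OK"

# Inspect certificate validity dates
echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -dates
```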
Deployment validated (UI loads, admin login works, Ray dashboard visible, logs clean) (see Logs and monitoring and troubleshooting: Docker Compose, Kubernetes)
Backups confirmed functional (policy defined + restore tested):
Docker Compose: Back up PostgreSQL
Kubernetes: Back up PostgreSQL
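A backup is only proven functional once a restore has been exercised. A sketch of a restore test for the PostgreSQL backup, assuming `pg_dump`/`pg_restore` access to the database; host, user, and database names are placeholders, and the exact procedure should follow the Docker Compose or Kubernetes backup guide linked above:

```shell
# Sketch: verify a PostgreSQL backup can actually be restored.
# <postgres-host>, <user>, and <syntho-db> are placeholders.
PGHOST=<postgres-host>; PGUSER=<user>; DB=<syntho-db>

# 1. Take a logical backup in custom format
pg_dump -h "$PGHOST" -U "$PGUSER" -Fc "$DB" > syntho-backup.dump

# 2. Restore into a scratch database
createdb -h "$PGHOST" -U "$PGUSER" "${DB}_restore_test"
pg_restore -h "$PGHOST" -U "$PGUSER" -d "${DB}_restore_test" syntho-backup.dump

# 3. Spot-check the restored schema, then clean up
psql -h "$PGHOST" -U "$PGUSER" -d "${DB}_restore_test" -c '\dt'
dropdb -h "$PGHOST" -U "$PGUSER" "${DB}_restore_test"
```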
C-2 - Databases
Data owner(s) identified for the priority use-case dataset/database
Source and destination access request process started (firewall/proxy rules, permissions) ⏰
Source database prepared for read-only access (the database must remain static during data generation; use a consistent snapshot) ⏰
Destination database prepared for writing synthetic data (schema aligned) ⏰
Source DB connectivity + read-only permissions validated (Step 1. Validate source db, Connect to a database)
Destination DB connectivity + write/truncate permissions validated (Step 2. Validate destination db, Connect to a database)
First end-to-end data generation job completed successfully
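Before running the validation steps in the UI, the source and destination permissions above can be pre-checked from the command line. A sketch assuming PostgreSQL databases; hostnames, users, and database names are placeholders:

```shell
# Sketch: pre-check source (read-only) and destination (write/truncate)
# access, assuming PostgreSQL. All <angle-bracket> values are placeholders.

# Source: connects and can read
psql -h <source-host> -U <syntho_ro_user> -d <source-db> -c 'SELECT 1;'

# Source: writes should be rejected (expect a "permission denied" error)
psql -h <source-host> -U <syntho_ro_user> -d <source-db> \
  -c 'CREATE TABLE _syntho_write_probe (id int);' \
  && echo 'WARNING: source user can write' \
  || echo 'OK: source is read-only'

# Destination: connects, and write + truncate both succeed
psql -h <dest-host> -U <syntho_rw_user> -d <dest-db> <<'SQL'
CREATE TABLE IF NOT EXISTS _syntho_probe (id int);
INSERT INTO _syntho_probe VALUES (1);
TRUNCATE _syntho_probe;
DROP TABLE _syntho_probe;
SQL
```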
C-3 - Users
Admin user exists: Manage admin users
Non-admin users provisioned (Owner/Editor/Reader): Manage non-admin users
If using SSO, configured and tested: Single Sign-On (SSO) in Azure ⏰
Database users created for Super Users (source read-only, destination write, alter & truncate) ⏰
Super User credentials shared securely (platform + database)
Super Users can log in and complete the full flow without errors
Connect source and destination (Connect to a database)
Create a workspace (Create a workspace)
Validate source and destination (Step 1. Validate source db, Step 2. Validate destination db)
Run a generation job end-to-end (Step 3. Generate)
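The Super User database accounts above (source read-only; destination write, alter & truncate) could be provisioned along these lines, assuming PostgreSQL; role names, passwords, and the `public` schema are placeholder assumptions to adapt to your environment:

```shell
# Sketch: provision Super User database accounts, assuming PostgreSQL.
# Role names, passwords, hosts, and schema are placeholders.

# Source database: read-only role
psql -h <source-host> -U <admin> -d <source-db> <<'SQL'
CREATE ROLE syntho_reader LOGIN PASSWORD '<change-me>';
GRANT USAGE ON SCHEMA public TO syntho_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO syntho_reader;
SQL

# Destination database: write & truncate role
psql -h <dest-host> -U <admin> -d <dest-db> <<'SQL'
CREATE ROLE syntho_writer LOGIN PASSWORD '<change-me>';
GRANT USAGE, CREATE ON SCHEMA public TO syntho_writer;
GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE
  ON ALL TABLES IN SCHEMA public TO syntho_writer;
SQL
```

Note that in PostgreSQL, `ALTER TABLE` requires table ownership rather than a grantable privilege, so if the destination tables must be alterable, make `syntho_writer` their owner.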