Get sensible PostgreSQL configuration recommendations from a simple GET request. Built for CI, provisioning, and one-liners—no UI required.
Tip: use with curl, wget, or your config management of choice.
{ "settings": { "shared_buffers": "8GB", "effective_cache_size": "24GB", "work_mem": "64MB", "maintenance_work_mem": "2GB", "wal_buffers": "auto", "checkpoint_completion_target": 0.9, "default_statistics_target": 100, "random_page_cost": 1.1, "effective_io_concurrency": 200, "max_connections": 200, ... }, "warnings": []}
Same inputs → same outputs. Easy to diff and commit.
Just call the endpoint. Great for cloud-init/Ansible/Terraform.
Sane recommendations that won’t shock your cluster.
Fill in your server profile and copy the URL or curl command. The page won’t make network calls—no CORS headaches.
https://api.example.com/v1/tune?memory_gb=32&cpus=8&storage_type=ssd&workload=oltp
curl -sS "https://api.example.com/v1/tune?memory_gb=32&cpus=8&storage_type=ssd&workload=oltp
wget -qO- "https://api.example.com/v1/tune?memory_gb=32&cpus=8&storage_type=ssd&workload=oltp
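Because the output is deterministic, one workable CI pattern is to commit a snapshot of the response and fail the build on drift. A minimal sketch, assuming `jq` is installed and the snapshot lives at `config/pg-tune.json` (both are illustrative choices, not part of the API):

```sh
# Fetch with pinned inputs and sort keys so the file diffs cleanly.
curl -sS "https://api.example.com/v1/tune?os=linux&memory_gb=32&cpus=8&storage_type=ssd&workload=oltp&num_disks=1&db_size_gb=100&version=17" \
  | jq -S . > config/pg-tune.json

# In CI: fail if today's recommendation no longer matches the committed snapshot.
git diff --exit-code -- config/pg-tune.json
```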
Returns recommended postgresql.conf settings based on the provided server profile.
Name | Type | Required | Description |
---|---|---|---|
version | number | no | PostgreSQL major version (default: 17). |
os | enum | yes | linux \| macos \| windows |
memory_gb | number | yes | Total RAM in GB (e.g., 32). |
cpus | number | yes | Logical CPU count (e.g., 8). |
storage_type | enum | yes | hdd \| ssd \| network |
workload | enum | yes | webapp \| oltp \| warehouse \| desktop \| mixed |
max_conn | number | no | Target max_connections (default derived from the profile). |
num_disks | number | yes | Number of disks used by the cluster. |
backup_method | enum | no | pg_dump (default) \| pg_basebackup \| pglogical (a catch-all for any logical backup). |
num_replicas | number | no | Number of active replicas. |
db_size_gb | number | yes | Projected database size in GB (e.g., 100). |
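For instance, a request that overrides the derived defaults might look like the following. Parameter names come from the table above; the specific values are only illustrative:

```sh
# Pin the major version, cap max_connections, and declare the backup method and replica count.
curl -sS "https://api.example.com/v1/tune?os=linux&memory_gb=64&cpus=16&storage_type=ssd&workload=warehouse&num_disks=4&db_size_gb=500&version=16&max_conn=300&num_replicas=2&backup_method=pg_basebackup"
```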
All values are returned as strings or numbers suitable for postgresql.conf. You can template or write them directly.
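For the "write them directly" route, a sketch like the following flattens the `config` object (documented below) into config-file syntax. The `conf.d` include path is an assumption; make sure your postgresql.conf actually includes that directory:

```sh
# Convert the "config" object into postgresql.conf syntax, one "key = value" per line.
curl -sS "https://api.example.com/v1/tune?os=linux&memory_gb=32&cpus=8&storage_type=ssd&workload=oltp&num_disks=1&db_size_gb=100&version=17" \
  | jq -r '.config | to_entries[] | "\(.key) = \(.value)"' \
  | sudo tee /etc/postgresql/conf.d/tuning.conf > /dev/null
```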
{ "config": { "checkpoint_timeout": duration, "checkpoint_completion_target": float, "wal_compression": on | off, "wal_buffers": size, "wal_writer_delay": duration, "wal_writer_flush_after": size, "shared_preload_libraries": string, "track_io_timing": on | off, "track_functions": string, "max_worker_processes": int, "max_parallel_workers_per_gather": int, "max_parallel_workers": int, "bgwriter_delay": duration, "bgwriter_lru_maxpages": int, "bgwriter_lru_multiplier": float, "bgwriter_flush_after": int, "enable_partitionwise_join": on | off, "enable_partitionwise_aggregate": on | off, "jit": on | off, "track_wal_io_timing": on | off, "wal_recycle": on | off, "max_slot_wal_keep_size": size, "archive_mode": on | off | undefined, "archive_command": string | undefined, "min_wal_size": size, "max_wal_size": size, "max_parallel_maintenance_workers": int, "wal_level": string | undefined, "max_wal_senders": int, "max_connections": int, "superuser_reserved_connections": int, "shared_buffers": size, "effective_cache_size": size, "maintenance_work_mem": size, "huge_pages": try | off, "default_statistics_target": int, "random_page_cost": float, "wal_keep_size": size, "effective_io_concurrency": int, "work_mem": size }, "warnings": [string] | undefined}
{ "config": { "checkpoint_timeout": "15min", "checkpoint_completion_target": 0.9, "wal_compression": "on", "wal_buffers": -1, "wal_writer_delay": "200ms", "wal_writer_flush_after": "1MB", "shared_preload_libraries": "'pg_stat_statements'", "track_io_timing": "on", "track_functions": "pl", "max_worker_processes": 8, "max_parallel_workers_per_gather": 2, "max_parallel_workers": 8, "bgwriter_delay": "200ms", "bgwriter_lru_maxpages": 100, "bgwriter_lru_multiplier": 2.0, "bgwriter_flush_after": 0, "enable_partitionwise_join": "on", "enable_partitionwise_aggregate": "on", "jit": "on", "track_wal_io_timing": "on", "wal_recycle": "on", "max_slot_wal_keep_size": "1GB", "archive_mode": "on", "archive_command": "/bin/true", "min_wal_size": "1GB", "max_wal_size": "4GB", "max_parallel_maintenance_workers": 2, "wal_level": "minimal", "max_wal_senders": 0, "max_connections": 100, "superuser_reserved_connections": 3, "shared_buffers": "1GB", "effective_cache_size": "3GB", "maintenance_work_mem": "256MB", "huge_pages": "try", "default_statistics_target": 100, "random_page_cost": 1.1, "wal_keep_size": "3GB", "effective_io_concurrency": 200, "work_mem": "16MB" }, "warnings": [ "WARNING this tool not being optimal for very high memory systems" ]}
curl -sS "https://pgconfig.com//api/v1/tune?memory_gb=32&cpus=8&storage_type=ssd&workload=oltp&num_disks=1&num_replicas=0&db_size_gb=1&version=17&os=linux&backup_method=pg_dump"
curl -sS "https://pgconfig.com//api/v1/tune?memory_gb=32&cpus=8&storage_type=ssd&workload=oltp&num_disks=1&num_replicas=0&db_size_gb=1&version=17&os=linux&backup_method=pg_dump"
```yaml
# Facts report memory in MB; the API expects GB.
- name: Fetch PG tuning
  ansible.builtin.uri:
    url: "https://api.example.com/v1/tune?\
      memory_gb={{ (ansible_memory_mb.real.total / 1024) | int }}&\
      cpus={{ ansible_processor_vcpus }}&\
      storage_type=ssd&\
      workload={{ profile }}&\
      num_disks={{ (ansible_devices.keys() | length) - 1 }}&\
      num_replicas=2&\
      db_size_gb=100&\
      version=17&\
      os=linux&\
      backup_method=pg_basebackup"
    method: GET
    return_content: yes
  register: pg_tune

- name: Show warnings
  when:
    - pg_tune.json.warnings is defined
    - pg_tune.json.warnings | length > 0
  ansible.builtin.debug:
    var: pg_tune.json.warnings

# Render each recommended setting from the "config" object as "key = value".
- name: Write postgresql.conf overrides
  ansible.builtin.copy:
    dest: /etc/postgresql/postgresql.conf
    content: |
      {% for setting in pg_tune.json.config | ansible.builtin.dict2items %}
      {{ setting.key }} = {{ setting.value }}
      {% endfor %}
```
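After the overrides are written, it is worth checking that the file parses and then reloading; note that several of the recommended settings (shared_buffers, max_connections, wal_level, max_worker_processes) only take effect after a full restart. A shell sketch, assuming local superuser access via psql:

```sh
# Surface any parse errors in the config files before they bite on the next restart.
psql -Atc "SELECT name || ': ' || error FROM pg_file_settings WHERE error IS NOT NULL;"

# Apply the reloadable subset of the new settings.
psql -Atc "SELECT pg_reload_conf();"
```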
Status | Meaning |
---|---|
200 | JSON payload with recommended settings. |
400 | Missing or invalid parameters. Returns { "error": "message" }. |
500 | Unexpected failure; retry or report it. |
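In provisioning scripts it helps to distinguish a 400 (fix the parameters) from a transient 5xx (retry). A sketch using curl's status capture; the temp path is arbitrary:

```sh
status=$(curl -sS -o /tmp/pg_tune.json -w '%{http_code}' \
  "https://api.example.com/v1/tune?os=linux&memory_gb=32&cpus=8&storage_type=ssd&workload=oltp&num_disks=1&db_size_gb=100&version=17")

if [ "$status" != "200" ]; then
  # The 400 body carries { "error": "message" }, which is worth logging verbatim.
  echo "tune request failed with HTTP $status: $(cat /tmp/pg_tune.json)" >&2
  exit 1
fi
```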