Tracking Schema Changes in Production with PostgreSQL Event Triggers

Harsh Vardhan Goswami

Jul 25, 2025

Product Development

Keeping a precise audit trail of database schema changes is essential for compliance, troubleshooting, and collaboration. PostgreSQL’s event triggers provide a robust, programmable method for monitoring, recording, and visualizing DDL activity—making it straightforward to see who changed what, when, and how.

Why Track Schema Changes?

  • Visibility: Know exactly what structural changes were made, and by whom.

  • Compliance: Satisfy auditing and security requirements.

  • Troubleshooting: Quickly pinpoint the origin and context of breaking migrations or unexpected alterations.

Quick Overview

Event triggers in PostgreSQL can respond to Data Definition Language (DDL) events such as CREATE, ALTER, or DROP, allowing you to record details about all changes to your database structure in real time.
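
PostgreSQL offers several firing points for event triggers, including ddl_command_start, ddl_command_end, sql_drop, and table_rewrite; this guide hooks into ddl_command_end. To see which event triggers a database already has, you can query the pg_event_trigger system catalog:

-- List any event triggers defined in the current database
SELECT evtname, evtevent, evtenabled
FROM pg_event_trigger;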

Step 1: Set Up the Audit Schema and Tables

Keep audit records isolated by creating a dedicated schema and two audit tables:

  • transaction_audit: Logs metadata for each transaction.

  • ddl_audit: Stores individual DDL statements, referencing their enclosing transaction.

CREATE SCHEMA IF NOT EXISTS audit;

CREATE TABLE audit.transaction_audit (
  transaction_id         BIGINT PRIMARY KEY,
  event_time             TIMESTAMPTZ NOT NULL DEFAULT now(),
  database_user          TEXT NOT NULL,
  application_user_name  TEXT,
  application_user_email TEXT,
  client_address         INET
);

CREATE TABLE audit.ddl_audit (
  id                  BIGSERIAL PRIMARY KEY,
  transaction_id      BIGINT NOT NULL REFERENCES audit.transaction_audit(transaction_id) ON DELETE CASCADE,
  command_tag         TEXT NOT NULL,
  object_identity     TEXT,
  query_text          TEXT NOT NULL
);
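
PostgreSQL does not automatically index foreign key columns, so if you expect to join or filter the audit tables by transaction (as the reporting queries later in this post do), an explicit index keeps those lookups fast. An optional sketch, with an illustrative index name:

CREATE INDEX IF NOT EXISTS ddl_audit_transaction_id_idx
  ON audit.ddl_audit (transaction_id);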

Step 2: Create the Event Trigger Function

Define a PostgreSQL function that logs schema changes. This function will:

  • Insert transaction metadata if it’s not already present.

  • Log each DDL statement executed in the transaction.

CREATE OR REPLACE FUNCTION audit.log_schema_changes()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
  obj record;
  -- txid_current() assigns a transaction ID if one has not been assigned yet;
  -- txid_current_if_assigned() can return NULL here, which would violate the
  -- primary key on transaction_audit.
  tx BIGINT := txid_current();
  -- Application context passed in via connection settings; with missing_ok
  -- set to true, current_setting() returns NULL instead of raising an error
  -- when the setting is absent.
  usr_name TEXT := current_setting('audit.user_name', true);
  usr_email TEXT := current_setting('audit.user_email', true);
BEGIN
  -- Record the transaction once; subsequent DDL statements in the same
  -- transaction hit the conflict and are skipped.
  INSERT INTO audit.transaction_audit (
    transaction_id, database_user, application_user_name, application_user_email, client_address
  ) VALUES (
    tx, session_user, usr_name, usr_email, inet_client_addr()
  )
  ON CONFLICT (transaction_id) DO NOTHING;

  -- pg_event_trigger_ddl_commands() is only callable from a ddl_command_end
  -- event trigger; it returns one row per DDL command executed.
  FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
    INSERT INTO audit.ddl_audit (
      transaction_id, command_tag, object_identity, query_text
    ) VALUES (
      tx, obj.command_tag, obj.object_identity, current_query()
    );
  END LOOP;
END;
$$;

Step 3: Attach the Event Trigger to DDL Events

Wire up the logging function to relevant DDL activity:

CREATE EVENT TRIGGER track_schema_changes
ON ddl_command_end
EXECUTE FUNCTION audit.log_schema_changes();
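
To verify the pipeline end to end, run a throwaway DDL statement and read the audit tables back. The setting names match the ones the trigger function reads; the name and email values here are placeholders:

-- Simulate the context a migration job would pass in
SET audit.user_name = 'Jane Doe';
SET audit.user_email = 'jane@example.com';

CREATE TABLE smoke_test (id int);

-- The trigger should have recorded both the transaction and the DDL command
SELECT t.application_user_name, d.command_tag, d.object_identity
FROM audit.ddl_audit d
JOIN audit.transaction_audit t USING (transaction_id)
ORDER BY d.id DESC
LIMIT 5;

DROP TABLE smoke_test;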

Step 4: Integrate with Production Workflows

Schema migrations should generally be automated through your CI/CD pipeline, not applied by hand. To track who made changes:

  • Create a CI-only PostgreSQL role with narrowly scoped permissions for migrations.

  • Use environment variables and connection options to pass contextual information (committer name, email, etc.) to the database.

Example: Creating a ci_user for migrations

CREATE ROLE ci_user WITH LOGIN PASSWORD '';  -- supply a strong password
GRANT CREATE ON DATABASE yourdb TO ci_user;
GRANT USAGE, CREATE ON SCHEMA public TO ci_user;
GRANT USAGE, CREATE ON SCHEMA audit TO ci_user;
-- The event trigger function runs with the privileges of the user executing
-- the DDL, so ci_user needs write access to the audit tables and sequences:
GRANT INSERT, SELECT ON ALL TABLES IN SCHEMA audit TO ci_user;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA audit TO ci_user;
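
Note that GRANT ... ON ALL TABLES covers only tables that exist when it runs. If you expect to add more tables to the audit schema later, default privileges can cover them automatically; a sketch (default privileges apply to objects created by the role running the statement, so add FOR ROLE if a different role creates the tables):

ALTER DEFAULT PRIVILEGES IN SCHEMA audit
  GRANT INSERT, SELECT ON TABLES TO ci_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA audit
  GRANT USAGE ON SEQUENCES TO ci_user;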

In your CI pipeline (e.g., GitHub Actions), set connection settings:

- name: Set PGOPTIONS for audit
  run: |
    echo "PGOPTIONS=-c audit.user_name=${{ github.event.head_commit.author.name }} -c audit.user_email=${{ github.event.head_commit.author.email }}" >> $GITHUB_ENV

Step 5: Visualize the Audit Log with Read-Only Access

To safely inspect audit records:

  • Create a dedicated, read-only role with access restricted to the audit schema.

CREATE ROLE audit_reader WITH LOGIN PASSWORD '';  -- supply a strong password
-- These REVOKEs undo only grants made directly to audit_reader; privileges
-- granted to PUBLIC (such as the defaults on the public schema) must be
-- revoked from PUBLIC separately if you want them gone.
REVOKE ALL ON SCHEMA public FROM audit_reader;
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM audit_reader;
REVOKE ALL ON ALL FUNCTIONS IN SCHEMA public FROM audit_reader;

GRANT USAGE ON SCHEMA audit TO audit_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA audit TO audit_reader;

With this role in place, you can safely build a dashboard or reporting interface to review schema changes, filtered by time, user, command, or affected object.
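
For example, a simple report of the past week's changes, attributed to the code author where available, might look like this (the seven-day window is arbitrary):

SELECT t.event_time,
       coalesce(t.application_user_name, t.database_user) AS author,
       d.command_tag,
       d.object_identity
FROM audit.ddl_audit d
JOIN audit.transaction_audit t USING (transaction_id)
WHERE t.event_time > now() - interval '7 days'
ORDER BY t.event_time DESC;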

Pro tips

  • Isolate Audit Data: Keep audit data separate from application tables for simpler permissions and safer access.

  • Automate Migrations: Always apply migrations via automated tools and CI/CD, never by hand.

  • Pass Context Explicitly: Use connection settings to associate schema changes with the actual code author.

  • Review Regularly: Periodically inspect or alert on unexpected manual changes or drift from expected migration flows; a sample drift query is sketched below.
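
As a starting point for that drift check, the following query flags DDL executed by anyone other than the CI role; the ci_user name and the one-day window come from the setup in this post and should be adjusted to yours:

-- DDL in the last 24 hours that did not come from the migration pipeline
SELECT t.event_time, t.database_user, d.command_tag, d.object_identity
FROM audit.ddl_audit d
JOIN audit.transaction_audit t USING (transaction_id)
WHERE t.database_user <> 'ci_user'
  AND t.event_time > now() - interval '1 day'
ORDER BY t.event_time DESC;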

Tracking schema changes with PostgreSQL event triggers enables traceability, enhances security, and helps ensure smoother, safer database evolution.