Enhanced Reports

Overview

Enhanced Reports is a powerful feature that gives you direct SQL query access to your PostGrid data. With Enhanced Reports, you can analyze your mail campaigns, track performance metrics, generate custom exports, and gain deep insights into your print and mail operations—all through standard SQL queries.

What Enhanced Reports Offers

Unlimited Analysis Capabilities

Enhanced Reports gives you broad flexibility to analyze your data:

  • Custom Analytics: Create any report you can imagine using SQL — from simple status summaries to complex multi-table analysis
  • Near Real-time Insights: Query recently synchronized data; incremental updates typically reach the data lake within a few hours
  • Parameterized Queries: Build reusable reports with dynamic parameters for different date ranges, statuses, or campaigns
  • Export Capabilities: Download full result sets as CSV files for further analysis in Excel, Google Sheets, or your BI tools
  • Preview Mode: Test and validate queries with sample results (up to 1000 rows) before running full exports
  • Saved Reports: Save frequently-used queries and run them anytime with just a few clicks
  • Flexible Filtering: Filter by any field combination—dates, statuses, campaigns, tracking numbers, and more
  • Aggregation Power: Calculate totals, averages, counts, and other aggregates across your entire dataset
  • Join Multiple Tables: Combine data from letters, postcards, contacts, and tracking information in a single query

Why Use Enhanced Reports?

Traditional reporting tools often force you into predefined templates with limited customization. Enhanced Reports breaks these constraints by giving you:

  1. Complete Control: Write any SQL query to answer your specific business questions
  2. Speed: Query optimized data structures built specifically for analytical workloads
  3. Flexibility: No need to wait for new features—if you can express it in SQL, you can report on it
  4. Integration Ready: Export data in CSV format for seamless integration with your existing tools
  5. Scalability: Handle large datasets efficiently with DuckDB's columnar storage engine
  6. Version Control: Save and version your queries as you refine your analysis over time

Getting Started

Initial Setup

When Enhanced Reports is first enabled for your account, there's an initial provisioning period:

⏱️ Initial Provisioning Time: ~60 minutes

During this time, your data lake is being created. This is a one-time setup process that:

  • Extracts data from your PostGrid account
  • Structures it into optimized SQL tables
  • Prepares the query engine for high-performance analytics

After the initial provisioning, your data lake will be kept up-to-date automatically through incremental synchronization.

Note: If you try to run a query before provisioning is complete, you'll see a message: "Your data is still being synchronized. Please try again in an hour."

Accessing Enhanced Reports

  1. Log into your PostGrid Dashboard
  2. Navigate to the Reports section in the main menu
  3. Click "Run Query" to create a new ad-hoc query
  4. Write your SQL query in the editor
  5. Click "Run" to preview results or "Create Export" to generate a full CSV export

Your First Query

Here's a simple example to get you started:

SELECT
    id,
    status,
    sendDate,
    to_id
FROM letters
WHERE sendDate > '2024-01-01'
ORDER BY sendDate DESC

This query will show you all letters sent after January 1st, 2024, with their IDs, statuses, send dates, and recipient contact IDs.

Available Data Tables

Your data lake contains the following tables with your PostGrid data. All tables are automatically kept in sync with your live data.

Mail Item Tables

letters

Complete data for all letter mailings.

Key Fields:

  • id (VARCHAR) - Unique letter identifier
  • createdAt (TIMESTAMP) - When the letter was created
  • updatedAt (TIMESTAMP) - Last modification time
  • sendDate (TIMESTAMP) - Scheduled send date
  • status (VARCHAR) - Current status (e.g., 'ready', 'in_transit', 'delivered', 'returned')
  • from_id (VARCHAR) - Return address contact ID
  • to_id (VARCHAR) - Recipient contact ID
  • user (VARCHAR) - User ID who created the letter
  • campaign (VARCHAR) - Associated campaign ID
  • trackingNumber (VARCHAR) - USPS/carrier tracking number
  • pageCount (INTEGER) - Number of pages
  • color (BOOLEAN) - Whether printed in color
  • doubleSided (BOOLEAN) - Whether printed double-sided
  • size (VARCHAR) - Letter size (e.g., 'us_letter')
  • envelope (VARCHAR) - Envelope type
  • returnEnvelope (VARCHAR) - Return envelope ID if included
  • addressPlacement (VARCHAR) - Address window placement
  • express (BOOLEAN) - Whether express delivery
  • mailingClass (VARCHAR) - USPS mailing class
  • proofOfMailing (VARCHAR) - URL to proof of mailing
  • proofOfRejection (VARCHAR) - URL to proof of rejection if applicable
  • imbStatus (VARCHAR) - Intelligent Mail Barcode status
  • imbZIPCode (VARCHAR) - IMB ZIP code
  • imbDate (TIMESTAMP) - IMB scan date
  • mergeVariables (JSON) - Template merge variables used
  • description (VARCHAR) - Optional description
  • metadata (JSON) - Custom metadata object

postcards

Complete data for all postcard mailings.

Key Fields:

  • id (VARCHAR) - Unique postcard identifier
  • createdAt (TIMESTAMP)
  • updatedAt (TIMESTAMP)
  • sendDate (TIMESTAMP)
  • status (VARCHAR)
  • from_id (VARCHAR)
  • to_id (VARCHAR)
  • user (VARCHAR)
  • campaign (VARCHAR)
  • trackingNumber (VARCHAR)
  • size (VARCHAR) - Postcard size (e.g., '4x6', '6x9', '6x11')
  • express (BOOLEAN)
  • mailingClass (VARCHAR)
  • pageCount (INTEGER)
  • proofOfMailing (VARCHAR)
  • proofOfRejection (VARCHAR)
  • imbStatus (VARCHAR)
  • imbZIPCode (VARCHAR)
  • imbDate (TIMESTAMP)
  • mergeVariables (JSON)
  • description (VARCHAR)
  • metadata (JSON)

cheques

Complete data for all check mailings.

Key Fields:

  • id (VARCHAR) - Unique check identifier
  • createdAt (TIMESTAMP)
  • updatedAt (TIMESTAMP)
  • sendDate (TIMESTAMP)
  • status (VARCHAR)
  • from_id (VARCHAR)
  • to_id (VARCHAR)
  • user (VARCHAR)
  • campaign (VARCHAR)
  • trackingNumber (VARCHAR)
  • bankAccount (VARCHAR) - Bank account ID used
  • amount (INTEGER) - Check amount in cents
  • currencyCode (VARCHAR) - Currency code (e.g., 'USD', 'CAD')
  • size (VARCHAR) - Check size
  • envelope (VARCHAR) - Envelope type
  • digitalOnly (JSON) - Digital-only check configuration
  • express (BOOLEAN)
  • mailingClass (VARCHAR)
  • pageCount (INTEGER)
  • proofOfMailing (VARCHAR)
  • proofOfRejection (VARCHAR)
  • mergeVariables (JSON)
  • description (VARCHAR)
  • metadata (JSON)

selfmailers

Complete data for all self-mailer mailings.

Key Fields:

  • id (VARCHAR) - Unique self-mailer identifier
  • createdAt (TIMESTAMP)
  • updatedAt (TIMESTAMP)
  • sendDate (TIMESTAMP)
  • status (VARCHAR)
  • from_id (VARCHAR)
  • to_id (VARCHAR)
  • user (VARCHAR)
  • campaign (VARCHAR)
  • trackingNumber (VARCHAR)
  • size (VARCHAR) - Self-mailer size (e.g., '6x18_bifold', '11x17_bifold')
  • express (BOOLEAN)
  • mailingClass (VARCHAR)
  • pageCount (INTEGER)
  • proofOfMailing (VARCHAR)
  • proofOfRejection (VARCHAR)
  • imbStatus (VARCHAR)
  • imbZIPCode (VARCHAR)
  • imbDate (TIMESTAMP)
  • mergeVariables (JSON)
  • description (VARCHAR)
  • metadata (JSON)

Contact Table

contacts

All contact records (both return addresses and recipients).

Key Fields:

  • id (VARCHAR) - Unique contact identifier
  • createdAt (TIMESTAMP)
  • updatedAt (TIMESTAMP)
  • firstName (VARCHAR)
  • lastName (VARCHAR)
  • email (VARCHAR)
  • phoneNumber (VARCHAR)
  • addressLine1 (VARCHAR)
  • addressLine2 (VARCHAR)
  • city (VARCHAR)
  • provinceOrState (VARCHAR)
  • postalOrZip (VARCHAR)
  • countryCode (VARCHAR) - ISO country code (e.g., 'US', 'CA')
  • addressStatus (VARCHAR) - Verification status (e.g., 'verified', 'unverified')
  • addressErrors (JSON) - Validation errors if any
  • skipVerification (BOOLEAN) - Whether verification was skipped
  • forceVerifiedStatus (BOOLEAN) - Whether status was manually forced
  • description (VARCHAR)
  • metadata (JSON)

Tracking Table

trackervisits

Records of tracker link visits (when tracking URLs are clicked).

Key Fields:

  • id (VARCHAR) - Unique visit identifier
  • createdAt (TIMESTAMP) - When the link was clicked
  • updatedAt (TIMESTAMP)
  • tracker (VARCHAR) - Tracker ID
  • orderID (VARCHAR) - Associated mail item ID
  • device (VARCHAR) - Device type (e.g., 'mobile', 'desktop')
  • ipAddress (VARCHAR) - Visitor IP address
  • description (VARCHAR)
  • metadata (JSON)

SQL Query Syntax (DuckDB)

Enhanced Reports uses DuckDB, a high-performance analytical database engine. DuckDB supports standard SQL with some powerful extensions.

Basic Query Structure

SELECT column1, column2, ...
FROM table_name
WHERE condition
ORDER BY column
LIMIT number

Common SQL Operations

Filtering by Date

-- Letters sent in the last 30 days
SELECT * FROM letters
WHERE sendDate >= CURRENT_DATE - INTERVAL '30 days'

-- Letters sent in a specific month
SELECT * FROM letters
WHERE sendDate >= '2024-01-01'
  AND sendDate < '2024-02-01'

Filtering by Status

-- All delivered letters
SELECT * FROM letters
WHERE status = 'delivered'

-- Letters that were returned
SELECT * FROM letters
WHERE status = 'returned_to_sender'

Aggregations

-- Count letters by status
SELECT status, COUNT(*) as count
FROM letters
GROUP BY status
ORDER BY count DESC

-- Average page count by campaign
SELECT campaign, AVG(pageCount) as avg_pages
FROM letters
WHERE campaign IS NOT NULL
GROUP BY campaign

Joining Tables

-- Letters with recipient contact information
SELECT
    l.id,
    l.status,
    l.sendDate,
    c.firstName,
    c.lastName,
    c.city,
    c.provinceOrState
FROM letters l
JOIN contacts c ON l.to_id = c.id
WHERE l.sendDate >= '2024-01-01'

Working with JSON Fields

DuckDB provides powerful JSON functions:

-- Extract a specific metadata field
SELECT
    id,
    json_extract_string(metadata, '$.customField') as custom_value
FROM letters
WHERE metadata IS NOT NULL

-- Query merge variables
SELECT
    id,
    json_extract_string(mergeVariables, '$.name') as recipient_name
FROM letters
WHERE mergeVariables IS NOT NULL

Time-Based Analysis

-- Letters by week
SELECT
    DATE_TRUNC('week', sendDate) as week,
    COUNT(*) as letter_count
FROM letters
GROUP BY week
ORDER BY week DESC

-- Daily delivery rate
SELECT
    DATE_TRUNC('day', sendDate) as day,
    COUNT(*) FILTER (WHERE status = 'delivered') * 100.0 / COUNT(*) as delivery_rate
FROM letters
GROUP BY day
ORDER BY day DESC

Advanced Features

Window Functions

-- Running total of letters by date
SELECT
    sendDate,
    COUNT(*) as daily_count,
    SUM(COUNT(*)) OVER (ORDER BY sendDate) as running_total
FROM letters
GROUP BY sendDate
ORDER BY sendDate

Common Table Expressions (CTEs)

-- Multi-step analysis
WITH monthly_stats AS (
    SELECT
        DATE_TRUNC('month', sendDate) as month,
        COUNT(*) as total_letters,
        COUNT(*) FILTER (WHERE status = 'delivered') as delivered
    FROM letters
    GROUP BY month
)
SELECT
    month,
    total_letters,
    delivered,
    (delivered * 100.0 / total_letters) as delivery_rate
FROM monthly_stats
ORDER BY month DESC

CASE Expressions

-- Categorize letters by status
SELECT
    CASE
        WHEN status IN ('delivered', 'in_transit') THEN 'Successful'
        WHEN status = 'returned_to_sender' THEN 'Returned'
        ELSE 'Other'
    END as status_category,
    COUNT(*) as count
FROM letters
GROUP BY status_category

Parameterized Queries

Parameterized queries let you create reusable reports where values can be changed each time you run the query.

Parameter Formats

Enhanced Reports supports multiple parameter formats:

Positional Parameters

-- Using ? placeholders
SELECT * FROM letters
WHERE sendDate >= ?
  AND status = ?

-- Using $1, $2, etc.
SELECT * FROM letters
WHERE sendDate >= $1
  AND status = $2

Named Parameters

-- Using named parameters
SELECT * FROM letters
WHERE sendDate >= $start_date
  AND sendDate <= $end_date
  AND status = $status

Type Casting Parameters

Since all parameters are passed as strings, you may need to cast them:

-- Cast a date parameter
SELECT * FROM letters
WHERE sendDate >= CAST($start_date AS DATE)

-- Cast to integer
SELECT * FROM cheques
WHERE amount >= CAST($min_amount AS INTEGER)

-- Alternative syntax
SELECT * FROM letters
WHERE sendDate >= $start_date::DATE

Parameter Examples

When you use parameters in your query, the dashboard will automatically show input fields for each parameter. For example:

Query:

SELECT
    status,
    COUNT(*) as count,
    COUNT(*) FILTER (WHERE express = true) as express_count
FROM letters
WHERE sendDate >= $start_date::DATE
  AND sendDate <= $end_date::DATE
  AND ($campaign IS NULL OR campaign = $campaign)
GROUP BY status
ORDER BY count DESC

Parameters to fill in when running:

  • start_date: "2024-01-01"
  • end_date: "2024-12-31"
  • campaign: "summer_2024" (or leave empty for all campaigns)
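
If you run parameterized reports from code rather than the dashboard, the parameter values travel with the request. Here is a minimal sketch in Python; the `parameters` field name is an illustrative assumption based on the report endpoints listed under API Integration, not a confirmed schema:

```python
import json

def build_sample_request(start_date, end_date, campaign=None):
    """Build a JSON body for running a parameterized report.

    The "parameters" key is a hypothetical payload shape, not a
    documented schema. All values are sent as strings; the query
    itself casts them server-side (e.g. $start_date::DATE).
    """
    return json.dumps({
        "parameters": {
            "start_date": start_date,
            "end_date": end_date,
            "campaign": campaign,
        }
    })

body = build_sample_request("2024-01-01", "2024-12-31", "summer_2024")
```

Whatever the real payload shape turns out to be, keeping parameter values as strings and casting inside the SQL (as shown above) avoids type mismatches.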

Example Queries

Campaign Performance Analysis

-- Comprehensive campaign performance report
SELECT
    campaign,
    COUNT(*) as total_sent,
    COUNT(*) FILTER (WHERE status = 'delivered') as delivered,
    COUNT(*) FILTER (WHERE status = 'in_transit') as in_transit,
    COUNT(*) FILTER (WHERE status = 'returned_to_sender') as returned,
    (COUNT(*) FILTER (WHERE status = 'delivered') * 100.0 / COUNT(*)) as delivery_rate,
    AVG(pageCount) as avg_pages,
    MIN(sendDate) as first_send,
    MAX(sendDate) as last_send
FROM letters
WHERE campaign IS NOT NULL
  AND sendDate >= '2024-01-01'
GROUP BY campaign
ORDER BY total_sent DESC

Geographic Distribution

-- Letters by state with delivery rates
SELECT
    c.provinceOrState as state,
    COUNT(l.id) as total_letters,
    COUNT(*) FILTER (WHERE l.status = 'delivered') as delivered,
    (COUNT(*) FILTER (WHERE l.status = 'delivered') * 100.0 / COUNT(l.id)) as delivery_rate
FROM letters l
JOIN contacts c ON l.to_id = c.id
WHERE l.sendDate >= CURRENT_DATE - INTERVAL '90 days'
  AND c.countryCode = 'US'
GROUP BY c.provinceOrState
ORDER BY total_letters DESC
LIMIT 50

Delivery Time Analysis

-- Average delivery time by mailing class
WITH delivery_times AS (
    SELECT
        l.mailingClass,
        l.sendDate,
        l.imbDate,
        DATE_DIFF('day', l.sendDate, l.imbDate) as days_to_deliver
    FROM letters l
    WHERE l.imbDate IS NOT NULL
      AND l.sendDate >= CURRENT_DATE - INTERVAL '6 months'
)
SELECT
    mailingClass,
    COUNT(*) as sample_size,
    AVG(days_to_deliver) as avg_days,
    MIN(days_to_deliver) as min_days,
    MAX(days_to_deliver) as max_days,
    MEDIAN(days_to_deliver) as median_days
FROM delivery_times
GROUP BY mailingClass
ORDER BY avg_days

Cost Analysis (for Checks)

-- Total check values by month
SELECT
    DATE_TRUNC('month', sendDate) as month,
    COUNT(*) as check_count,
    SUM(amount) / 100.0 as total_amount_dollars,
    AVG(amount) / 100.0 as avg_amount_dollars
FROM cheques
WHERE status = 'delivered'
  AND sendDate >= CURRENT_DATE - INTERVAL '12 months'
GROUP BY month
ORDER BY month DESC

Tracker Engagement Analysis

-- Engagement rates by campaign
SELECT
    l.campaign,
    COUNT(DISTINCT l.id) as total_sent,
    COUNT(DISTINCT tv.orderID) as engaged,
    COUNT(tv.id) as total_clicks,
    (COUNT(DISTINCT tv.orderID) * 100.0 / COUNT(DISTINCT l.id)) as engagement_rate,
    (COUNT(tv.id) * 1.0 / NULLIF(COUNT(DISTINCT tv.orderID), 0)) as avg_clicks_per_engaged
FROM letters l
LEFT JOIN trackervisits tv ON tv.orderID = l.id
WHERE l.campaign IS NOT NULL
  AND l.sendDate >= CURRENT_DATE - INTERVAL '3 months'
GROUP BY l.campaign
HAVING COUNT(DISTINCT l.id) >= 10
ORDER BY engagement_rate DESC

Address Quality Report

-- Address verification status breakdown
SELECT
    addressStatus,
    COUNT(*) as contact_count,
    COUNT(*) FILTER (WHERE addressErrors IS NOT NULL) as with_errors,
    COUNT(*) FILTER (WHERE skipVerification = true) as skipped_verification
FROM contacts
WHERE updatedAt >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY addressStatus
ORDER BY contact_count DESC

Multi-Format Comparison

-- Compare performance across different mail formats
WITH all_mail AS (
    SELECT
        'Letter' as format,
        id,
        status,
        sendDate,
        pageCount as pages
    FROM letters

    UNION ALL

    SELECT
        'Postcard' as format,
        id,
        status,
        sendDate,
        1 as pages
    FROM postcards

    UNION ALL

    SELECT
        'Self-Mailer' as format,
        id,
        status,
        sendDate,
        pageCount as pages
    FROM selfmailers
)
SELECT
    format,
    COUNT(*) as total_sent,
    COUNT(*) FILTER (WHERE status = 'delivered') as delivered,
    (COUNT(*) FILTER (WHERE status = 'delivered') * 100.0 / COUNT(*)) as delivery_rate,
    AVG(pages) as avg_pages
FROM all_mail
WHERE sendDate >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY format
ORDER BY total_sent DESC

Working with Results

Preview Mode

When you click "Run" on your query:

  • Results are limited to 1000 rows by default
  • Results appear in a scrollable table below the query editor
  • Column headers show all field names from your SELECT clause
  • You can download the preview as CSV using the "Download CSV" button
  • This is perfect for testing and validating your query before creating a full export

Note: If your query would return more than 1000 rows, you'll see a warning: "Note that the results may have been truncated to 1000 records. Please use the report exports API if you need to download the full results."

Creating Exports

When you need the complete result set:

  1. Click "Create Export" instead of "Run"
  2. The system will process your query in the background
  3. The dashboard polls the export every 2 seconds until it completes (up to a 2-minute maximum)
  4. Once ready, the CSV file downloads automatically
  5. The filename format: report-{reportID}-export.csv
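
If you fetch exports from your own tooling, you can mirror the dashboard's 2-second/2-minute polling pattern. A sketch with the status check injected as a function, so it works with any HTTP client (the `check_status` callable and its status strings are stand-ins, not a PostGrid SDK call):

```python
import time

def wait_for_export(check_status, interval=2.0, timeout=120.0):
    """Poll until check_status() reports the export is complete.

    check_status is any zero-argument callable returning a status
    string such as 'processing' or 'completed'. It stands in for a
    call to GET /reports/:reportID/exports/:exportID; the exact
    status values are illustrative assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_status() == "completed":
            return True
        time.sleep(interval)
    return False  # gave up after the timeout (~2 minutes by default)

# Demo against a fake status source that completes on the third poll:
responses = iter(["processing", "processing", "completed"])
done = wait_for_export(lambda: next(responses), interval=0.01)
```

Injecting the status check also makes the loop trivially testable without touching the network.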

Export Specifications:

  • Maximum file size: Limited by system configuration
  • File format: CSV with headers
  • Character encoding: UTF-8
  • Date format: ISO 8601 (e.g., "2024-01-15T14:30:00.000Z")
  • NULL values: Represented as "NULL" in the CSV
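
Because NULLs arrive as the literal string "NULL" and timestamps as ISO 8601 with a trailing Z, downstream code usually needs a small normalization pass. A sketch using only the Python standard library (the heuristic of treating columns ending in "Date"/"At" as timestamps is an assumption for illustration):

```python
import csv
import io
from datetime import datetime

def parse_row(row):
    """Convert export conventions back into Python values."""
    out = {}
    for key, value in row.items():
        if value == "NULL":  # exports encode NULL as the string "NULL"
            out[key] = None
        elif key.endswith("Date") or key.endswith("At"):
            # ISO 8601 with a trailing Z; fromisoformat() only accepts
            # "Z" on Python 3.11+, so normalize it to an offset first.
            out[key] = datetime.fromisoformat(value.replace("Z", "+00:00"))
        else:
            out[key] = value
    return out

sample = (
    "id,status,sendDate\n"
    "letter_123,delivered,2024-01-15T14:30:00.000Z\n"
    "letter_456,NULL,NULL\n"
)
rows = [parse_row(r) for r in csv.DictReader(io.StringIO(sample))]
```

The same pass works for spreadsheet imports: replace "NULL" cells before loading so they register as empty rather than as text.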

Saving Reports

To save a query for future use:

  1. Write and test your query
  2. Click "Save as Report"
  3. Enter a descriptive name (e.g., "Monthly Campaign Performance")
  4. The report appears in your Reports list
  5. Click any saved report to view, edit, or run it again

Saved reports can be:

  • Updated with new SQL queries
  • Run with different parameters
  • Exported multiple times
  • Shared via API (reportID)
  • Deleted when no longer needed

Query Limits and Best Practices

System Limits

  • Sample Query Timeout: Typically 10-30 seconds
  • Sample Result Limit: 1000 rows maximum
  • Export File Size: Large exports may be truncated (typically several GB limit)
  • Parameter Limits: Maximum 32 parameters per query
  • Parameter Length: Maximum 4,096 characters per parameter value
  • Query Length: Maximum 128 KB per query

Performance Best Practices

1. Use WHERE Clauses Efficiently

-- Good: Filter early
SELECT * FROM letters
WHERE sendDate >= '2024-01-01'
  AND status = 'delivered'

-- Less efficient: applying a function to the column prevents efficient filtering
SELECT * FROM letters
WHERE DATE_PART('year', sendDate) = 2024

2. Limit Results for Testing

-- Add LIMIT when testing
SELECT * FROM letters
WHERE sendDate >= '2024-01-01'
LIMIT 10

3. Use Appropriate JOIN Types

-- Use INNER JOIN when you only want matches
SELECT l.*, c.city
FROM letters l
INNER JOIN contacts c ON l.to_id = c.id

-- Use LEFT JOIN when you want all letters even without contacts
SELECT l.*, c.city
FROM letters l
LEFT JOIN contacts c ON l.to_id = c.id

4. Index-Friendly Queries

The following fields are optimized for filtering:

  • All id fields (primary keys)
  • sendDate in mail item tables
  • createdAt and updatedAt in all tables
  • status in mail item tables

5. Aggregate Before Joining

-- Good: Aggregate first
WITH letter_counts AS (
    SELECT campaign, COUNT(*) as count
    FROM letters
    GROUP BY campaign
)
SELECT * FROM letter_counts
WHERE count > 100

-- Less efficient: Aggregate after join
SELECT campaign, COUNT(*)
FROM letters l
LEFT JOIN contacts c ON l.to_id = c.id
GROUP BY campaign
HAVING COUNT(*) > 100

Query Optimization Tips

  1. Select only needed columns: Don't use SELECT * if you only need a few fields
  2. Filter early: Apply WHERE clauses to reduce data before JOIN operations
  3. Use appropriate data types: Cast parameters to correct types for comparisons
  4. Test with LIMIT: Always test complex queries with LIMIT first
  5. Break complex queries into CTEs: Use Common Table Expressions for readability and debugging

Troubleshooting

Common Error Messages

"Your data is still being synchronized"

Cause: The data lake hasn't finished initial provisioning yet.

Solution: Wait approximately 60 minutes after Enhanced Reports was enabled for your account. If the issue persists beyond this time, contact support.

"Your query took too long to run"

Cause: The query exceeded the time limit (typically 10-30 seconds for samples).

Solution:

  • Add more specific WHERE clauses to filter data
  • Reduce the date range you're querying
  • Simplify complex JOINs or aggregations
  • Use the export feature instead of sample preview for long-running queries

"Binder Error: Referenced column not found"

Cause: You referenced a column that doesn't exist in the table.

Solution: Check the table schema above and verify column names. Remember that column names are case-sensitive.

"Parser Error: syntax error"

Cause: The SQL syntax is invalid.

Solution:

  • Check for missing commas, parentheses, or quotes
  • Verify SQL keyword spelling (SELECT, FROM, WHERE, etc.)
  • Ensure string values are in single quotes: 'value'
  • Ensure table and column names are valid

"Catalog Error: Table with name [table_name] does not exist"

Cause: You're trying to query a table that doesn't exist.

Solution: Check the list of available tables in this documentation. Table names are lowercase and case-sensitive.

Getting Help

If you encounter issues not covered here:

  1. Check the query syntax carefully
  2. Test with a simpler version of your query first
  3. Verify table and column names match the documentation
  4. Contact support at [email protected] with:
    • Your organization ID
    • The report ID (if using a saved report)
    • The SQL query you're trying to run
    • The complete error message

Security and Privacy

Data Isolation

  • Your data lake is completely isolated to your organization
  • Test mode and live mode data are kept in separate data lakes
  • You can only query data belonging to your organization
  • No cross-organization queries are possible

Query Sandboxing

All queries run in a secure sandbox environment that:

  • Prevents access to the file system
  • Blocks external network access
  • Prohibits persistent storage
  • Limits resource usage
  • Isolates each query execution

Data Retention

  • Exported CSV files are temporarily stored and accessible via signed URLs
  • Export files are automatically deleted after 30 days
  • Sample query results are not stored persistently
  • Saved reports store only the SQL query, not the results

Best Practices for Sensitive Data

  1. Don't export more data than needed: Use specific SELECT columns rather than SELECT *
  2. Use appropriate filters: Limit date ranges and row counts to minimize data in exports
  3. Secure your exports: Downloaded CSV files are not encrypted—store them securely
  4. Limit parameter sharing: If sharing reports via API, be cautious about parameter values containing sensitive information

API Integration

Enhanced Reports can be accessed programmatically via the PostGrid API. This allows you to:

  • Create and manage reports programmatically
  • Schedule automated report generation
  • Integrate reporting data into your applications
  • Build custom dashboards

API endpoints include:

  • POST /reports - Create a new saved report
  • GET /reports/:id - Retrieve a report definition
  • POST /reports/:id/sample - Run a report preview
  • POST /reports/:reportID/exports - Create a full export
  • GET /reports/:reportID/exports/:exportID - Check export status
  • DELETE /reports/:reportID/exports/:exportID - Delete an export

For complete API documentation, refer to your PostGrid API reference guide.
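
As a sketch of the flow, creating an export from code might look like the following. The base URL, the x-api-key header, and the empty JSON body are assumptions for illustration; check the API reference for the authoritative contract:

```python
import json
import urllib.request

API_BASE = "https://api.postgrid.com/print-mail/v1"  # assumed base URL

def build_export_request(report_id, api_key):
    """Build (but don't send) a POST /reports/:reportID/exports request.

    The header name and body shape are illustrative assumptions;
    consult the PostGrid API reference for the real schema.
    """
    return urllib.request.Request(
        f"{API_BASE}/reports/{report_id}/exports",
        data=json.dumps({}).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_export_request("report_123", "test_sk_example")
# Sending would be: urllib.request.urlopen(req) -- omitted here.
```

From there, poll the matching GET endpoint until the export completes, then download the CSV from the returned URL.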

Frequently Asked Questions

How often is the data updated?

Your data lake is updated incrementally throughout the day. Most changes to your mail items, contacts, and tracking data will appear in the data lake within a few hours.

Can I query data from both test and live mode together?

No. Test mode and live mode maintain separate data lakes. You'll need to switch modes in the dashboard to query each environment separately.

What happens if I modify a saved report?

When you modify and save a report, the SQL query is updated immediately. Any subsequent runs of that report will use the new query. Previous exports remain unchanged.

Can I schedule reports to run automatically?

Scheduled reporting is available via the API. You can set up a cron job or scheduled task that calls the report export endpoint at your desired frequency.

Is there a limit to how many reports I can save?

There's no hard limit on the number of saved reports. However, we recommend organizing and cleaning up unused reports periodically.

Can I share reports with team members?

Yes, reports are shared at the organization level. Any user in your organization with access to the Enhanced Reports feature can view, run, and modify saved reports.

What if my export gets truncated?

Exports have size limits to ensure system stability. If your export is truncated:

  1. Add more specific WHERE clauses to reduce the result set
  2. Consider breaking large exports into smaller date ranges
  3. Use aggregation to summarize data instead of exporting raw rows
  4. Contact support if you regularly need very large exports
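
Breaking a large export into smaller date ranges (item 2) is easy to automate. A small helper that yields month boundaries suitable for $start_date/$end_date style parameters, using only the Python standard library:

```python
from datetime import date

def month_ranges(start, end):
    """Yield (month_start, next_month_start) pairs covering [start, end).

    Each pair can feed a parameterized query such as:
    WHERE sendDate >= $start_date::DATE AND sendDate < $end_date::DATE
    """
    current = date(start.year, start.month, 1)
    while current < end:
        if current.month == 12:
            nxt = date(current.year + 1, 1, 1)
        else:
            nxt = date(current.year, current.month + 1, 1)
        yield current, min(nxt, end)
        current = nxt

chunks = list(month_ranges(date(2024, 1, 1), date(2024, 4, 1)))
# chunks -> [(Jan 1, Feb 1), (Feb 1, Mar 1), (Mar 1, Apr 1)]
```

Using half-open ranges (>= start, < end) means the chunks never overlap and never drop a day at month boundaries.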

Can I export data in formats other than CSV?

Currently, only CSV export is supported. However, CSV is widely compatible with Excel, Google Sheets, database import tools, and programming languages.

How do I see the schema for a table?

Use this query to see all columns in a table:

PRAGMA table_info('table_name');

For example:

PRAGMA table_info('letters');

DuckDB also supports the shorthand DESCRIBE:

DESCRIBE letters;

Can I use database functions like NOW() or CURRENT_DATE?

Yes, DuckDB supports standard SQL functions including:

  • CURRENT_DATE - Current date
  • CURRENT_TIMESTAMP - Current timestamp
  • DATE_TRUNC() - Truncate dates to specific intervals
  • DATE_DIFF() - Calculate date differences
  • EXTRACT() - Extract parts of dates

For a complete list, refer to the DuckDB SQL functions documentation.

Advanced Topics

Incremental Data Analysis

For analyzing changes over time:

-- Items created or updated in the last 24 hours
SELECT * FROM letters
WHERE updatedAt >= CURRENT_TIMESTAMP - INTERVAL '24 hours'

-- Track daily creation trends
SELECT
    DATE_TRUNC('day', createdAt) as day,
    COUNT(*) as created_count
FROM letters
WHERE createdAt >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY day
ORDER BY day

Complex Aggregations

-- Percentile analysis of delivery times
WITH delivery_data AS (
    SELECT
        DATE_DIFF('day', sendDate, imbDate) as delivery_days
    FROM letters
    WHERE imbDate IS NOT NULL
      AND sendDate >= CURRENT_DATE - INTERVAL '90 days'
)
SELECT
    quantile_cont(delivery_days, 0.25) as p25,
    quantile_cont(delivery_days, 0.5) as median,
    quantile_cont(delivery_days, 0.75) as p75,
    quantile_cont(delivery_days, 0.95) as p95
FROM delivery_data

Cohort Analysis

-- Monthly cohorts by first send date
WITH first_sends AS (
    SELECT
        user,
        DATE_TRUNC('month', MIN(sendDate)) as cohort_month
    FROM letters
    GROUP BY user
),
monthly_activity AS (
    SELECT
        fs.cohort_month,
        DATE_TRUNC('month', l.sendDate) as activity_month,
        COUNT(DISTINCT l.user) as active_users
    FROM letters l
    JOIN first_sends fs ON l.user = fs.user
    GROUP BY fs.cohort_month, activity_month
)
SELECT
    cohort_month,
    activity_month,
    DATE_DIFF('month', cohort_month, activity_month) as months_since_first,
    active_users
FROM monthly_activity
ORDER BY cohort_month, activity_month

Ready to unlock the full potential of your PostGrid data? Start exploring Enhanced Reports today and discover insights that drive your business forward. From simple status reports to complex multi-dimensional analysis, the possibilities are truly endless.