Many moons ago, before SparkPost was even conceived, its predecessor had a tight integration with PostgreSQL. That software was “Momentum” and it used PostgreSQL for a variety of things because of its power, ease of use, and best of all, its price: free. PostgreSQL’s fast response times were leveraged for things like template management and user substitution data, which it handled very well. Over time we realized that our core competency was message delivery more so than the campaign management and content tools that our experiments with PostgreSQL were leading to. Our partners do a much better job of the kind of things we were using PostgreSQL for, so much of that work was not carried forward into the brave new world of SparkPost.

Fast forward to 2017, and SparkPost is the fastest-growing message delivery service on the planet. Many customers who were previously running Momentum on-prem are moving to SparkPost in the cloud (either pure SparkPost or a hybrid model) to take advantage of the speed and scaling technology it provides. Many of them had extended their PostgreSQL deployments to include interesting things like real-time suppression lists, auto-responders, and content snippet libraries, to name only a few. So now the question becomes: how can you leverage PostgreSQL with SparkPost’s cloud services to end up with the same types of features?

SparkPost is a powerful delivery engine, but being a cloud service, it doesn’t provide the same customization opportunities as on-prem software like Momentum. We’ve built an extremely comprehensive API and webhook service to expose huge amounts of data, empowering our users to build complex integrations around our API and event data. Using the configurable webhooks, you can receive data in real time, store it in your external PostgreSQL database, and process it on your own schedule. You can then tie this real-time data, which includes spam complaints, unsubscribes, and invalid-address errors, directly into your list hygiene or real-time suppression systems. To build messages, you can combine our Transmissions API with a library of content snippets stored in PostgreSQL to create dynamically relevant messages on the fly.

So, let’s look at the example of creating a real-time suppression list. On the data generation side, you’ll need a SparkPost webhook configured to deliver only spam complaint, unsubscribe, and bounce events. We care specifically about “code 10” bounces, which indicate an invalid email address. You will also need a “collector” to accept the feed, plus some process to load that feed into your PostgreSQL database (ETL). You could build that yourself, and there are a number of samples available, but you could also use one of our partner integrations like Stitch Data, which can take your webhook feed directly and manage the entire collection and ETL process for you.
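As a sketch of what the collector’s filtering step might look like, here is a minimal Python function that reduces a SparkPost webhook batch to the addresses worth suppressing. The payload shape (a list of objects keyed under `msys`) follows the SparkPost event format, but treat the exact field names here as assumptions to verify against the event documentation:

```python
def extract_suppressions(batch):
    """Reduce a SparkPost webhook batch to (address, reason) pairs to suppress."""
    suppress = []
    for item in batch:
        # Each batch item nests one event group under the "msys" key.
        for group in item.get("msys", {}).values():
            etype = group.get("type")
            if etype in ("spam_complaint", "list_unsubscribe", "link_unsubscribe"):
                suppress.append((group.get("rcpt_to"), etype))
            elif etype == "bounce" and group.get("bounce_class") == "10":
                # "Code 10" bounces indicate an invalid email address.
                suppress.append((group.get("rcpt_to"), "invalid_address"))
    return suppress
```

From here, the ETL step is just inserting each pair into your suppression table.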

I recently built out exactly that pipeline using SparkPost, Stitch Data, and Amazon RDS for PostgreSQL. The entire system took less than 30 minutes to build, configure, and test. Excluding addresses that have opted out can now be done with a simple database lookup against your local suppression data before sending.
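The pre-send check can be as simple as this sketch, which uses SQLite as a stand-in for PostgreSQL so it runs self-contained; the `suppressions` table name is my own, and with psycopg2 against PostgreSQL the query is identical apart from the `%s` placeholder style:

```python
import sqlite3

def filter_suppressed(conn, recipients):
    """Return only the recipients absent from the local suppression table."""
    cur = conn.cursor()
    keep = []
    for rcpt in recipients:
        # With psycopg2 against PostgreSQL, swap "?" for "%s".
        cur.execute("SELECT 1 FROM suppressions WHERE address = ?", (rcpt,))
        if cur.fetchone() is None:
            keep.append(rcpt)
    return keep
```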

Another great example of leveraging PostgreSQL is building highly dynamic content. The SparkPost template system and substitution language can support arrays of data as well as dynamic content with embedded substitutions, and populating that content can be automated with some logic and a content library in PostgreSQL. This can yield highly personalized content while using a standardized template.

In its simplest form, you could dynamically create messages using content chunks stored in your database, targeted using the sorts of data you have on your users.

For instance:

$snippet = "SELECT snippet FROM library INNER JOIN interests USING (interest) WHERE interest = 'email'";

Might yield:

"<p>Hey, {{first_name}}! This is where the content goes for your interest/hobby.</p>"

Now you can expand that variable in the content before injecting.

"html": "' . $snippet . '"
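Putting those pieces together, a hypothetical helper might wrap the snippet pulled from PostgreSQL into a Transmissions API payload; the sender name and address here are placeholders, not SparkPost requirements:

```python
def build_transmission(snippet, recipients, subject):
    """Wrap a DB-sourced HTML snippet into a Transmissions API payload."""
    return {
        "recipients": [{"address": {"email": rcpt}} for rcpt in recipients],
        "content": {
            # Placeholder sender details -- substitute your own.
            "from": {"name": "Example Sender", "email": "sender@example.com"},
            "subject": subject,
            # Substitution tags like {{first_name}} pass through untouched;
            # SparkPost expands them per recipient at send time.
            "html": snippet,
        },
    }
```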

That seems pretty basic, though, and you can do much more with SparkPost. Let’s say you have a target group of a thousand customers, and in addition to the normal field substitutions, they also have variable interests based on past purchase history. Instead of creating separate messages for each interest, you can put the variable data in an array in a single template.

Let’s assume you have copious volumes of data in PostgreSQL (Amazon RDS in this example) and you want to send customers offers based on their closest airport. You can use some logic in an application server, or possibly a serverless AWS Lambda script, to search and pull arrays of data from PostgreSQL, then put the ENTIRE ARRAY into the SparkPost template. Make a single call and have every customer get a unique version of the message.
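The array-building step might look like this sketch, which shapes rows pulled from PostgreSQL into the `Airports` and `Products` substitution arrays; the field names mirror the example that follows, not a fixed SparkPost schema:

```python
def build_substitution_data(airport_rows, product_rows):
    """Shape DB result rows into template-level substitution arrays.

    airport_rows: (code, city, name) tuples.
    product_rows: (pc, ap, product_url, price, title, content) tuples.
    """
    return {
        "Airports": [
            {"code": code, "city": city, "name": name}
            for code, city, name in airport_rows
        ],
        "Products": [
            {"PC": pc, "AP": ap, "product_url": url,
             "price": price, "title": title, "content": content}
            for pc, ap, url, price, title, content in product_rows
        ],
    }
```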

You might first collect arrays of all your customers’ data and current product offers (excuse my shorthand):

$airportArray = "SELECT DISTINCT code, city, name FROM airports_list";
$products = "SELECT pc, ap, product_url, price, title, content FROM current_offers";

Which, once sanitized and JSON-ified, might yield something like this:

$airportsArray = [
    { "code": "YYC", "city": "Calgary", "name": "Calgary International Airport" },
    { "code": "YYZ", "city": "Toronto", "name": "Toronto International Airport" },
    { "code": "SFO", "city": "San Francisco", "name": "San Francisco International Airport" },
    { "code": "DEN", "city": "Denver", "name": "Denver International Airport" },
    { "code": "BWI", "city": "Baltimore", "name": "Baltimore-Washington International Airport" }
]

$products = [
    { "PC": "17823", "AP": "YYC", "product_url": "", "price": 299, "title": "Sink or swim with these jet powered fins!", "content": "<font color=#B150D6>offer ends …</font>" },
    { "PC": "18056", "AP": "YYZ", "product_url": "", "price": 350, "title": "Who needs glasses when you have these?", "content": "<font color=#C150D6>offer only through …</font>" },
    { "PC": "18111", "AP": "BWI", "product_url": "", "price": 15, "title": "This is why dogs and cats don’t get along", "content": "<font color=#B140D6>You really need this.</font>" }
]

A partial template sample is shown below. Variable data would be pulled from PostgreSQL, escaped/sanitized, and interpolated into the JSON request before being sent to the API.

"substitution_data": {
    "Airports": $airportsArray,
    "Products": $products
},
"recipients": [
    {"address": {"email": "", "name": "Tom Morris"},
     "substitution_data": {"AP": "YYC", "lang": "en"}},
    {"address": {"email": "kevin@gmall.con", "name": "Kevin TheMinion"},
     "substitution_data": {"AP": "YYZ", "lang": "fr"}},
    {"address": {"email": "", "name": "Thomasina Marriot"},
     "substitution_data": {"AP": "DEN", "lang": "en"}},
    {"address": {"email": "", "name": "Tim Mays"},
     "substitution_data": {"AP": "BWI", "lang": "en"}}
],
"content": {
    "from": {"name": "Fred Flintstone", "email": ""},
    "subject": "{{address.name}}, You are AWESOME!",
    "reply_to": "Daily Sales <>",
    "headers": {"X-Customer-Campaign-ID": "daily_campaign", "X-Binding": "outbound"},
    "text": "Hello {{address.name}}\nThis message was sent to {{address.email}}. Please view this with HTML on",
    "html": "<p>{{ each Airports }} {{ if Airports.code == AP }}
      <p>Hi {{address.name}}, here are some deals for you in {{Airports.city}} because you are near {{}}</p>
      <table>{{ each Products }}
        <tr><td align=\"center\">
          <a href=\"{{{Products.product_url}}}\">
            <img alt=\"{{Products.title}}\" src=\"{{{Products.product_url}}}\" width=100 height=100>
          </a><br>$ {{Products.price}} - {{{Products.content}}}
        </td></tr>
      {{ end }}</table>
    {{ end }}
  {{ end }}</p>"
}
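To make that single call, a minimal sketch using only the Python standard library might look like this; the endpoint and raw-API-key `Authorization` header follow the SparkPost Transmissions API, while the helper name is my own:

```python
import json
import urllib.request

SPARKPOST_URL = "https://api.sparkpost.com/api/v1/transmissions"

def make_transmission_request(api_key, payload):
    """Build the single POST that fans out to every recipient."""
    return urllib.request.Request(
        SPARKPOST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send: urllib.request.urlopen(make_transmission_request(key, payload))
```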

What all of this means is that you can continue to leverage the PostgreSQL deployment and data you have now alongside SparkPost, without having to create a tight integration. If you are not already doing so, this may be an opportunity to explore PostgreSQL and SparkPost working together.

Finally, if you have other questions around SparkPost or PostgreSQL, you can leave a comment below or find us on Twitter or our Community Slack.

-Tom Mairs