Sending and tracking large volumes of email
To transmit a large volume of email through PostageApp in a batch, it’s necessary to split the sending into smaller units that fit within the API payload limits.
Depending on the approach used, some alterations to the sending application may be necessary. Where adding tables or columns to the application’s database is impractical or inconvenient, use data already present in the application instead, such as ID values already in use for existing record types.
Each API call can have a custom UID, or “User-Defined Identifier”, a free-form value that can contain a cryptographic hash, a human-readable string, or some other encoding that may be readable by the sending application. This value, if provided, must be between 1 and 255 characters.
This UID can be used to follow up on any API calls made to ensure that the payload was received and processed correctly. These UIDs must be unique within a given project and should not be recycled or reused.
NOTE: If two API calls are made to the same project with the same UID, the second and subsequent calls are considered duplicated and will be ignored.
An ideal UID is one that can either be saved in the application’s database or procedurally generated on-demand by a deterministic function. It can also be trivially parsed and broken down into its constituent components for diagnostic purposes.
This is best illustrated by some examples.
When sending to a mailing list on a daily basis a UID that incorporates the list name, the date, and the payload sequence in the overall batch would help with tracking and coordination.
For example, to the list “active-customers” on February 29th, 2020 the sequence of UIDs may look like:
announce:active-customers:20200229:001/025
announce:active-customers:20200229:002/025
:
announce:active-customers:20200229:024/025
announce:active-customers:20200229:025/025
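A sequence like this can be generated deterministically from the batch parameters. Below is a minimal sketch in Python; the `make_daily_uids` helper and its exact format string are illustrative, not part of the PostageApp API:

```python
from datetime import date

def make_daily_uids(prefix: str, list_name: str, day: date, total: int) -> list[str]:
    """Build the full, ordered set of UIDs for one day's batch.

    The zero-padded "NNN/TTT" suffix records each payload's position
    in the batch, so progress can be audited at a glance.
    """
    stamp = day.strftime("%Y%m%d")
    return [
        f"{prefix}:{list_name}:{stamp}:{i:03d}/{total:03d}"
        for i in range(1, total + 1)
    ]

uids = make_daily_uids("announce", "active-customers", date(2020, 2, 29), 25)
```

Because the function is deterministic, the application can regenerate the same UID list at any time to reconcile against the API without storing anything extra.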
This UID can be easily read, and encodes all of the pertinent information about the payload as it relates to the overall batch.
Note that these UIDs must never be used twice. To support multiple mailings on the same calendar day, additional context must be added, such as the hour of the day, or even a UNIX-style numerical timestamp in place of the date, although at the expense of human readability.
When sending batches that may not necessarily follow a fixed schedule, a more flexible structure should be employed where there’s no possibility of UID conflict.
Where a record can be created in the sending application that represents this particular batch, that record’s ID can be used as an identifier.
In the case where the record created is Alert #29751 for a “new-stock” event relating to product #1920 the sequence of UIDs would look like:
alert-29751:type=new-stock:product=1920:001/016
alert-29751:type=new-stock:product=1920:002/016
:
alert-29751:type=new-stock:product=1920:015/016
alert-29751:type=new-stock:product=1920:016/016
As with the previous example, this UID is easily read and encodes all of the pertinent information about the payload’s place in the overall batch.
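Because the components are consistently delimited, a UID in this shape can also be split back into its constituent parts for diagnostics, as mentioned earlier. A minimal parsing sketch (the field names and "alert-" layout mirror the example above and are illustrative):

```python
def parse_alert_uid(uid: str) -> dict:
    """Split an alert-style UID back into its components.

    Expects the "alert-ID:key=value:...:NNN/TTT" layout shown above;
    raises ValueError if the shape doesn't match.
    """
    parts = uid.split(":")
    if len(parts) < 2 or not parts[0].startswith("alert-"):
        raise ValueError(f"unrecognized UID layout: {uid!r}")
    seq, _, total = parts[-1].partition("/")
    fields = dict(p.split("=", 1) for p in parts[1:-1])
    return {
        "alert_id": int(parts[0].removeprefix("alert-")),
        "sequence": int(seq),
        "total": int(total),
        **fields,
    }

info = parse_alert_uid("alert-29751:type=new-stock:product=1920:015/016")
```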
In some cases it may be advantageous to add a tracking record within the application that can be used to pull down and aggregate statistics for a given mailing.
As PostageApp will give detailed delivery data, as well as a breakdown of click and open activity if those features were enabled, this information may be useful to incorporate into application dashboards or metrics.
When initiating the sending of a batch the API call payloads must be generated and forwarded to the PostageApp API. Where the API responds with a 200 status code, the payload has been received and will be scheduled for processing.
NOTE: The API is rate-limited and clients should space out calls to no more than 5 per second, or 5 concurrent calls, whichever limit comes first.
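One simple way to stay under that ceiling from a single worker is to pace successive calls. The sketch below spaces calls at least 1/5th of a second apart; the 5 calls-per-second figure comes from the note above, while the pacing helper itself is illustrative:

```python
import time

class CallPacer:
    """Spaces successive calls at least `interval` seconds apart,
    keeping a single worker at or under the calls-per-second limit."""

    def __init__(self, calls_per_second: float = 5.0):
        self.interval = 1.0 / calls_per_second
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough that this call starts no sooner than
        # `interval` seconds after the previous one.
        now = time.monotonic()
        delay = self._last + self.interval - now
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()

pacer = CallPacer()
start = time.monotonic()
for _ in range(3):
    pacer.wait()  # then make the API call
elapsed = time.monotonic() - start
```

If multiple workers send concurrently, the budget must be shared between them, for example by dedicating the sending to one queue worker.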
The send_message endpoint allows sending to one or more recipients in the same call.
Ideally, each payload should include as many recipients as can be accommodated alongside the payload content while still fitting within the account’s API payload-size limit.
This can be done by accumulating data in an object and serializing it as JSON, adding more and more data until the desired payload size has been reached.
While this is optimal in terms of the number of API calls, producing these batches can be computationally expensive, and each batch segment will have an unpredictable number of recipients associated with it.
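The greedy, size-based approach looks roughly like the following sketch, which packs recipients into payloads until the serialized JSON would exceed the limit. The payload structure here is illustrative, not the exact send_message schema, and re-serializing on every addition is precisely the computational cost noted above:

```python
import json

def chunk_by_payload_size(recipients, base_payload, limit_bytes=2 * 1024 * 1024):
    """Greedily pack recipients into payloads whose JSON encoding is at
    most `limit_bytes`, yielding one payload dict per batch.

    Note: serializing the candidate payload for every recipient is
    O(n^2) in total; this is the expense the surrounding text warns of.
    """
    batch = []
    for recipient in recipients:
        candidate = dict(base_payload, recipients=batch + [recipient])
        if batch and len(json.dumps(candidate).encode()) > limit_bytes:
            yield dict(base_payload, recipients=batch)
            batch = [recipient]
        else:
            batch.append(recipient)
    if batch:
        yield dict(base_payload, recipients=batch)

batches = list(chunk_by_payload_size(
    [{"email": f"u{i}@example.com"} for i in range(10)],
    {"subject": "hi"},
    limit_bytes=200,  # tiny limit purely for demonstration
))
```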
In practice a simpler approach is to determine, on average, how much data each recipient requires to describe when encoded in the JSON payload.
Where the payload may include file attachments or other variable length content a fair amount of margin in the computation should be included.
For example, if an average recipient requires ~320 bytes to describe when encoded in JSON, and the content is typically 500KB when encoded, for a 2MB API call:
(2*1024*1024 - 500*1024) * 0.75 / 320 = 3715.2
Around 3,715 recipients can be described per batch. Note that this number will vary considerably from one application to another, as applications that personalize each email to a very high degree will need far more data to describe each recipient.
Where there’s considerable variability in the amount of contextual information provided, make calculations using a worst-case scenario. Before sending any complex batches it’s worth running some simple tests to see how much data will be necessary per-recipient.
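The arithmetic above generalizes into a small helper. The 0.75 safety margin mirrors the worked example; the numbers are illustrative and should be replaced with measurements from your own payloads:

```python
def recipients_per_batch(limit_bytes: int, content_bytes: int,
                         bytes_per_recipient: int, margin: float = 0.75) -> int:
    """Estimate how many recipients fit in one API call, keeping only
    `margin` of the space left after the shared content as usable."""
    usable = (limit_bytes - content_bytes) * margin
    return int(usable // bytes_per_recipient)

# The worked example: 2MB call, ~500KB content, ~320 bytes per recipient.
n = recipients_per_batch(2 * 1024 * 1024, 500 * 1024, 320)
```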
In practice there is usually more than enough room to accommodate 500-1000 recipients per payload within a 2MB API call, notwithstanding the presence of any large file attachments.
When using a simple N-per-payload approach, all that’s necessary is to divide up the list into chunks of N and begin sending payloads where each is identified with a meaningful UID.
These calls can be sent directly from the application. Where a higher volume of calls is being made, it may be advantageous to shift this to some kind of background process or queue that can better coordinate the sending activity, and retry in the case of a call failure, crash, or other malfunction.
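Putting the pieces together, the N-per-payload flow is a straightforward loop. In this sketch, `post_send_message` is a hypothetical placeholder for whatever HTTP client call the application uses to reach the send_message endpoint; injecting it also makes the loop easy to test and to move into a background worker:

```python
def chunk(seq, size):
    """Yield successive size-length slices of seq."""
    for i in range(0, len(seq), size):
        yield seq[i : i + size]

def send_batch(recipients, batch_name, per_payload, post_send_message):
    """Split recipients into fixed-size payloads, tagging each call
    with a "name:NNN/TTT" UID so it can be followed up on later."""
    chunks = list(chunk(recipients, per_payload))
    total = len(chunks)
    uids = []
    for i, group in enumerate(chunks, start=1):
        uid = f"{batch_name}:{i:03d}/{total:03d}"
        post_send_message(uid=uid, recipients=group)  # injected HTTP call
        uids.append(uid)
    return uids

sent = []
uids = send_batch([f"u{i}@example.com" for i in range(7)],
                  "announce:list:20200229", 3,
                  lambda uid, recipients: sent.append((uid, recipients)))
```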
Once the batch has been chunked and delivered as send_message calls, some follow-up calls are advised to ensure that everything is proceeding smoothly, and to pull down useful delivery and metric data.
A detailed breakdown of the result of each send_message call can be obtained from the get_message_transmissions endpoint. This uses the UID in the original call to fetch information. Where no UID was supplied, one would have been returned in the API response and would need to be tracked by the application.
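A follow-up check can be driven off the stored UIDs. The sketch below only builds the request body for a get_message_transmissions call; the `{"api_key", "uid"}` shape is an assumption for illustration, so consult the PostageApp API reference for the exact schema:

```python
import json

def transmissions_request(api_key: str, uid: str) -> str:
    """Build a JSON body asking for the transmission breakdown of one
    earlier send_message call, keyed by its UID.

    NOTE: the {"api_key", "uid"} body shape is an assumption made for
    illustration; check the PostageApp API docs for the real schema.
    """
    return json.dumps({"api_key": api_key, "uid": uid})

body = transmissions_request("PROJECT_API_KEY",
                             "announce:active-customers:20200229:001/025")
```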
Each project will maintain a list of recipients and their deliverability status that should be periodically inspected in order to ensure that non-deliverable addresses are no longer part of any regular mailings.
NOTE: Sending to dead or undeliverable addresses has a considerable negative impact on your sending reputation and can lead to high levels of filtering or blocking if not addressed in a timely manner.
Suppression information is made available through the dashboard as well as via the get_suppression_list API endpoint. Real-time delivery and engagement information can also be forwarded through webhooks for applications that can receive those events.