When your application needs to send thousands or millions of emails, how you structure and pace those sends directly affects deliverability, performance, and cost. This guide covers practical strategies for batching, rate management, and recipient grouping at scale.

Batch Sending with the API

Lettr’s API supports sending to multiple recipients in a single request. This reduces HTTP overhead and simplifies your sending logic.

Single Request, Multiple Recipients

const response = await fetch('https://app.lettr.com/api/emails', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    from: 'updates@mail.example.com',
    to: [
      { email: 'user1@example.com', name: 'Alice' },
      { email: 'user2@example.com', name: 'Bob' },
      { email: 'user3@example.com', name: 'Charlie' }
    ],
    subject: 'Your weekly summary',
    templateId: 'weekly-summary',
    substitutionData: {
      // Shared data for all recipients
      companyName: 'Acme Inc'
    }
  })
});

Per-Recipient Substitution Data

When each recipient needs different content (personalized data, unique links), pass substitution data per recipient:
const response = await fetch('https://app.lettr.com/api/emails', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    from: 'orders@mail.example.com',
    to: [
      {
        email: 'alice@example.com',
        substitutionData: {
          name: 'Alice',
          orderId: 'ORD-1001',
          total: '$49.99'
        }
      },
      {
        email: 'bob@example.com',
        substitutionData: {
          name: 'Bob',
          orderId: 'ORD-1002',
          total: '$129.00'
        }
      }
    ],
    templateId: 'order-confirmation'
  })
});
Per-recipient substitution data with templates is far more efficient than making a separate API call for each recipient: a single request with 100 recipients completes much faster than 100 individual requests.

Structuring High-Volume Sends

When sending to large lists (tens of thousands or more), you need to break the work into manageable batches and pace them appropriately.

Batch Size

Keep each API request to a reasonable number of recipients. Larger payloads take longer to process and are more likely to time out.
| List Size | Recommended Batch Size | Approach |
|---|---|---|
| < 1,000 | All in one request | Single API call |
| 1,000–10,000 | 100–500 per request | Loop with sequential requests |
| 10,000–100,000 | 100–500 per request | Queue-based with rate limiting |
| 100,000+ | 100–500 per request | Queue-based with backpressure and monitoring |

Queue-Based Architecture

For large sends, use a job queue to manage the workload. This gives you control over pacing, retry logic, and failure handling.
// Producer: Split recipients into batches and enqueue
async function enqueueBulkSend(recipients, templateId) {
  const BATCH_SIZE = 200;

  for (let i = 0; i < recipients.length; i += BATCH_SIZE) {
    const batch = recipients.slice(i, i + BATCH_SIZE);

    await queue.add('send-email-batch', {
      recipients: batch,
      templateId,
      batchNumber: Math.floor(i / BATCH_SIZE) + 1,
      totalBatches: Math.ceil(recipients.length / BATCH_SIZE)
    });
  }
}

// Consumer: Process each batch with rate limiting
queue.process('send-email-batch', async (job) => {
  const { recipients, templateId } = job.data;

  const response = await fetch('https://app.lettr.com/api/emails', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LETTR_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      from: 'newsletter@mail.example.com',
      to: recipients,
      templateId
    })
  });

  if (response.status === 429) {
    const retryAfter = response.headers.get('Retry-After');
    throw new Error(`Rate limited. Retry after ${retryAfter}s`);
  }

  if (!response.ok) {
    throw new Error(`Send failed: ${response.status}`);
  }
});
Configure your queue workers with appropriate concurrency limits. Running too many workers in parallel will hit Lettr’s rate limits. Start with 2–3 concurrent workers and adjust based on observed throughput.
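The worker-concurrency idea can be approximated even without a queue library: a small promise pool that keeps at most N batches in flight at once. This is a self-contained sketch, not Lettr or queue-library API; the task functions would wrap the batch-send logic above.

```javascript
// Minimal promise pool: run async tasks with at most `concurrency`
// in flight. `tasks` is an array of zero-argument async functions,
// e.g. () => sendBatch(batch).
async function runWithConcurrency(tasks, concurrency) {
  const results = [];
  let next = 0;

  // Each worker pulls the next unclaimed task until none remain.
  // `next++` is synchronous, so two workers never claim the same index.
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, tasks.length) }, worker)
  );
  return results;
}
```

With concurrency set to 2–3 this mirrors the worker guidance above: throughput stays steady without bursting past the rate limit.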

Rate Management

Lettr enforces rate limits to protect deliverability for all senders. Understanding and working within these limits is essential for high-volume sending.

Handling Rate Limit Responses

When you exceed the rate limit, the API returns a 429 Too Many Requests response with a Retry-After header indicating how long to wait.
async function sendWithRetry(payload, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://app.lettr.com/api/emails', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.LETTR_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '10', 10);
      console.log(`Rate limited. Waiting ${retryAfter}s before retry...`);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
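A 429 response may occasionally arrive without a usable Retry-After value. A common fallback in that case (an assumption here, not documented Lettr behavior) is exponential backoff with full jitter:

```javascript
// Exponential backoff with full jitter: wait a random duration between
// 0 and min(cap, base * 2^attempt) seconds. The randomness spreads out
// retries so many clients don't all retry at the same instant.
function backoffDelay(attempt, baseSeconds = 1, capSeconds = 60) {
  const ceiling = Math.min(capSeconds, baseSeconds * 2 ** attempt);
  return Math.random() * ceiling;
}
```

Inside `sendWithRetry`, this could replace the fixed 10-second fallback: use the header value when present, `backoffDelay(attempt)` when it is not.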

Proactive Rate Limiting

Rather than hitting rate limits and retrying, pace your sends proactively:
// Simple sliding-window rate limiter
class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  async waitForSlot() {
    const now = Date.now();
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);

    if (this.timestamps.length >= this.maxRequests) {
      const oldestInWindow = this.timestamps[0];
      const waitTime = this.windowMs - (now - oldestInWindow);
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }

    this.timestamps.push(Date.now());
  }
}

// Usage: 300 requests per 5 minutes
const limiter = new RateLimiter(300, 5 * 60 * 1000);

for (const batch of batches) {
  await limiter.waitForSlot();
  await sendBatch(batch);
}

Recipient Grouping Strategies

How you segment your recipients for bulk sends affects both deliverability and engagement.

Group by Engagement Level

Send to your most engaged recipients first. This front-loads positive signals (opens, clicks) that improve your reputation for the remainder of the send.
| Send Order | Segment | Why |
|---|---|---|
| First | Opened/clicked in last 7 days | Highest engagement probability; builds positive signals |
| Second | Opened/clicked in last 30 days | Still engaged, reinforces positive reputation |
| Third | Opened/clicked in last 90 days | Moderate risk, but still opted-in |
| Last (or skip) | No engagement in 90+ days | Highest risk; consider a re-engagement campaign first |
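The tiers above translate directly into code. This sketch assumes each recipient record carries a `lastEngagedAt` timestamp in milliseconds (a hypothetical field name, not part of Lettr's data model):

```javascript
// Partition recipients into send-order tiers by days since last
// engagement. `lastEngagedAt` is an assumed per-recipient field:
// milliseconds since epoch, or null if the recipient never engaged.
function groupByEngagement(recipients, now = Date.now()) {
  const DAY = 24 * 60 * 60 * 1000;
  const tiers = { first: [], second: [], third: [], last: [] };

  for (const r of recipients) {
    const days = r.lastEngagedAt ? (now - r.lastEngagedAt) / DAY : Infinity;
    if (days <= 7) tiers.first.push(r);
    else if (days <= 30) tiers.second.push(r);
    else if (days <= 90) tiers.third.push(r);
    else tiers.last.push(r);
  }
  return tiers;
}
```

Send `tiers.first` through `tiers.third` in order, and hold `tiers.last` back for a separate re-engagement campaign.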

Group by Domain

When sending to large lists, consider grouping recipients by their email domain. This lets you monitor deliverability per provider and respond if a specific provider starts throttling.
function groupByDomain(recipients) {
  const groups = {};
  for (const recipient of recipients) {
    // Normalize case: domains are case-insensitive
    const domain = recipient.email.split('@')[1].toLowerCase();
    if (!groups[domain]) groups[domain] = [];
    groups[domain].push(recipient);
  }
  return groups;
}

// Send to each domain group with independent monitoring
const groups = groupByDomain(allRecipients);
for (const [domain, recipients] of Object.entries(groups)) {
  console.log(`Sending ${recipients.length} emails to ${domain}`);
  await enqueueBulkSend(recipients, templateId);
}

Monitoring Bulk Sends

Track the health of your bulk sends in real time using webhooks.

Key Metrics to Track

| Metric | Healthy | Warning | Action Required |
|---|---|---|---|
| Delivery rate | > 95% | 90–95% | Investigate bounce reasons |
| Bounce rate | < 2% | 2–5% | Clean list, check data quality |
| Complaint rate | < 0.1% | 0.1–0.3% | Review content and targeting |
| Deferral rate | < 5% | 5–15% | Reduce sending speed |
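These thresholds can be encoded as a small classifier so alerts are driven by data rather than ad-hoc checks. The function and constant names below are illustrative, and the bounds mirror the table (rates expressed as fractions):

```javascript
// Classify a rate against (healthy, warning) bounds from the table.
// `higherIsBetter` flips the comparison for delivery rate, where a
// high value is good; for the others a low value is good.
function classifyMetric(value, healthy, warning, higherIsBetter = false) {
  if (higherIsBetter) {
    if (value > healthy) return 'healthy';
    if (value >= warning) return 'warning';
    return 'action-required';
  }
  if (value < healthy) return 'healthy';
  if (value <= warning) return 'warning';
  return 'action-required';
}

// Bounds from the table above, as fractions.
const THRESHOLDS = {
  deliveryRate:  { healthy: 0.95,  warning: 0.90, higherIsBetter: true },
  bounceRate:    { healthy: 0.02,  warning: 0.05 },
  complaintRate: { healthy: 0.001, warning: 0.003 },
  deferralRate:  { healthy: 0.05,  warning: 0.15 }
};
```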

Real-Time Dashboard

Set up a simple counter to track bulk send progress:
const bulkSendMetrics = {
  sent: 0,
  delivered: 0,
  bounced: 0,
  complained: 0,
  deferred: 0
};

app.post('/webhooks/lettr', (req, res) => {
  const event = req.body;

  switch (event.type) {
    case 'email.delivered': bulkSendMetrics.delivered++; break;
    case 'email.bounced': bulkSendMetrics.bounced++; break;
    case 'email.complained': bulkSendMetrics.complained++; break;
    case 'email.deferred': bulkSendMetrics.deferred++; break;
  }

  // Alert if complaint rate exceeds threshold
  const total = bulkSendMetrics.delivered + bulkSendMetrics.bounced;
  if (total > 100 && bulkSendMetrics.complained / total > 0.003) {
    alertTeam('Complaint rate exceeding 0.3% — consider pausing send');
  }

  res.sendStatus(200);
});
Always set up monitoring before starting a bulk send. Discovering a problem after sending to your entire list is much worse than catching it after the first few thousand and pausing.
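"Consider pausing" needs a mechanism: workers should check a shared flag before each batch. Here is a minimal in-process sketch; in a real deployment the flag would live in shared storage such as Redis, or you would use your queue library's own pause feature, so that all workers see it.

```javascript
// Minimal pause gate: workers await waitIfPaused() before each batch
// and block while the flag is set. In-process only; a multi-worker
// deployment needs this flag in shared storage (e.g. Redis).
class PauseGate {
  constructor() { this.paused = false; }
  pause()  { this.paused = true; }
  resume() { this.paused = false; }

  // Resolves immediately when not paused; otherwise polls until resumed.
  async waitIfPaused(pollMs = 1000) {
    while (this.paused) {
      await new Promise(resolve => setTimeout(resolve, pollMs));
    }
  }
}
```

The webhook handler above could call `gate.pause()` alongside `alertTeam(...)`, and each queue worker would `await gate.waitIfPaused()` before sending its batch.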

Common Mistakes

Blasting your full list as fast as possible. This overwhelms receiving servers and triggers rate limiting or blocks. Pace your sends and start with engaged recipients.
Skipping the suppression check. Sending to previously bounced or complained addresses damages your reputation with every hit. Lettr’s suppression list handles this automatically, but you should also maintain your own internal suppression logic.
Treating 429 responses as permanent failures. Rate limits are temporary; implement retry logic with backoff, then wait and retry.
Launching a bulk send and walking away. Without real-time monitoring, you won’t catch deliverability problems until it’s too late.
Sending 50,000 individual API requests when you could batch 200 recipients per request (250 requests total). Batching is dramatically more efficient.