Lettr delivers incoming emails to your relay webhook, and you can implement your own spam filtering logic in your webhook handler based on content analysis, sender reputation, and other signals.
Spam Detection
You can analyze incoming emails for spam characteristics by examining the content and headers delivered in the relay webhook payload. Score each message based on suspicious patterns, sender reputation, and content analysis.
| Score Range | Interpretation |
| --- | --- |
| 0.0 – 2.0 | Very unlikely to be spam |
| 2.0 – 4.0 | Probably legitimate |
| 4.0 – 6.0 | Suspicious, review recommended |
| 6.0 – 8.0 | Likely spam |
| 8.0 – 10.0 | Almost certainly spam |
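If you want to surface these bands in logs or an admin UI, a small helper can map a 0–10 score to its interpretation. The function name `interpretSpamScore` is illustrative, not part of the Lettr SDK:

```javascript
// Map a 0-10 spam score to the interpretation bands above.
// Illustrative helper; not part of the Lettr SDK.
function interpretSpamScore(score) {
  if (score < 2) return 'Very unlikely to be spam';
  if (score < 4) return 'Probably legitimate';
  if (score < 6) return 'Suspicious, review recommended';
  if (score < 8) return 'Likely spam';
  return 'Almost certainly spam';
}

console.log(interpretSpamScore(7.2)); // "Likely spam"
```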
Basic Spam Filtering
Implement basic spam filtering based on the score:
```javascript
app.post('/webhooks/inbound', express.json(), async (req, res) => {
  for (const event of req.body) {
    const relay = event.msys?.relay_message;
    if (!relay) continue;

    const { msg_from, content } = relay;
    const { score: spamScore } = analyzeForSpam({
      from: msg_from,
      subject: content.subject,
      text: content.text,
      html: content.html
    });

    // Reject obvious spam
    if (spamScore >= 8) {
      console.log(`Rejected spam from ${msg_from}: score ${spamScore}`);
      await logRejectedSpam(relay);
      continue;
    }

    // Quarantine suspicious emails
    if (spamScore >= 5) {
      console.log(`Quarantined suspicious email from ${msg_from}: score ${spamScore}`);
      await quarantineEmail(relay);
      continue;
    }

    // Process legitimate emails
    await processEmail(relay);
  }

  res.sendStatus(200);
});
```
Configuring Spam Sensitivity
You can configure spam filtering sensitivity for your inbound domain through the Lettr dashboard under Domains → Inbound, then selecting your domain.
Regardless of your dashboard filtering configuration, you can always implement your own filtering logic in your webhook handler.
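One way to keep code-level filtering adjustable alongside the dashboard setting is to read your thresholds from configuration instead of hard-coding them. The environment variable names here are assumptions for the sketch:

```javascript
// Sketch: configurable filtering thresholds. The env var names
// (SPAM_REJECT_THRESHOLD, SPAM_QUARANTINE_THRESHOLD) are assumed,
// not Lettr-defined; defaults match the examples in this guide.
const REJECT_THRESHOLD = Number(process.env.SPAM_REJECT_THRESHOLD ?? 8);
const QUARANTINE_THRESHOLD = Number(process.env.SPAM_QUARANTINE_THRESHOLD ?? 5);

function decideAction(score) {
  if (score >= REJECT_THRESHOLD) return 'reject';
  if (score >= QUARANTINE_THRESHOLD) return 'quarantine';
  return 'process';
}
```

This lets you tighten or loosen filtering per environment without a code change.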
Advanced Spam Detection
Combine spam score with additional checks:
```javascript
function analyzeForSpam(email) {
  const signals = [];
  let score = 0;

  // Check for common spam patterns
  const subject = (email.subject || '').toLowerCase();
  const body = (email.text || email.html || '').toLowerCase();

  // Suspicious subject patterns
  const spamSubjectPatterns = [
    /\bfree\b.*\bmoney\b/,
    /\burgent\b.*\baction\b/,
    /\bwinner\b/,
    /\bcongratulations\b.*\bwon\b/,
    /\blottery\b/,
    /\bnigerian\b.*\bprince\b/i
  ];

  for (const pattern of spamSubjectPatterns) {
    if (pattern.test(subject)) {
      score += 2;
      signals.push(`Suspicious subject pattern: ${pattern}`);
    }
  }

  // Check for excessive links
  const linkCount = (body.match(/https?:\/\//g) || []).length;
  if (linkCount > 10) {
    score += 1;
    signals.push(`Excessive links: ${linkCount}`);
  }

  // Check for URL shorteners
  const shorteners = ['bit.ly', 'tinyurl.com', 'goo.gl', 't.co'];
  for (const shortener of shorteners) {
    if (body.includes(shortener)) {
      score += 1.5;
      signals.push(`URL shortener detected: ${shortener}`);
    }
  }

  // Check sender domain
  const senderDomain = email.from.split('@')[1];
  if (isSuspiciousDomain(senderDomain)) {
    score += 2;
    signals.push(`Suspicious sender domain: ${senderDomain}`);
  }

  return {
    score: Math.min(score, 10),
    signals,
    isSpam: score >= 6
  };
}
```
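The `isSuspiciousDomain` helper is left for you to implement. One minimal sketch checks the domain against a known-bad list and a set of frequently abused TLDs; the lists here are illustrative placeholders, not a vetted reputation feed:

```javascript
// Minimal sketch of an isSuspiciousDomain helper.
// The domain list and TLDs are illustrative, not a real reputation feed.
const knownBadDomains = new Set(['spamdomain.example']);
const suspiciousTlds = ['.xyz', '.top', '.click'];

function isSuspiciousDomain(domain) {
  if (!domain) return true; // malformed sender address
  if (knownBadDomains.has(domain)) return true;
  return suspiciousTlds.some((tld) => domain.endsWith(tld));
}
```

In production you would back this with a reputation service or a regularly updated blocklist rather than a hard-coded set.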
Allowlisting and Blocklisting
Maintain lists of trusted and blocked senders:
```javascript
const allowlist = new Set(['trusted@partner.com', 'noreply@bank.com']);
const blocklist = new Set(['spammer@spam.com']);
const allowedDomains = new Set(['trustedcompany.com']);
const blockedDomains = new Set(['spamdomain.com']);

function checkLists(email) {
  const from = email.from.toLowerCase();
  const domain = from.split('@')[1];

  // Check exact email allowlist
  if (allowlist.has(from)) {
    return { action: 'allow', reason: 'Email allowlisted' };
  }

  // Check domain allowlist
  if (allowedDomains.has(domain)) {
    return { action: 'allow', reason: 'Domain allowlisted' };
  }

  // Check exact email blocklist
  if (blocklist.has(from)) {
    return { action: 'block', reason: 'Email blocklisted' };
  }

  // Check domain blocklist
  if (blockedDomains.has(domain)) {
    return { action: 'block', reason: 'Domain blocklisted' };
  }

  return { action: 'check', reason: 'Not in lists' };
}
```
```javascript
app.post('/webhooks/inbound', express.json(), async (req, res) => {
  for (const event of req.body) {
    const relay = event.msys?.relay_message;
    if (!relay) continue;

    const email = {
      from: relay.msg_from,
      subject: relay.content.subject,
      text: relay.content.text,
      html: relay.content.html
    };

    // Check allowlist/blocklist first
    const listResult = checkLists(email);
    if (listResult.action === 'block') {
      await logBlockedEmail(email, listResult.reason);
      continue;
    }
    if (listResult.action === 'allow') {
      await processEmail(relay);
      continue;
    }

    // Apply spam filtering for unlisted senders
    const { score: spamScore } = analyzeForSpam(email);
    if (spamScore >= 6) {
      await quarantineEmail(relay);
      continue;
    }

    await processEmail(relay);
  }

  res.sendStatus(200);
});
```
Rate Limiting by Sender
Protect against email flooding:
```javascript
import { RateLimiterMemory } from 'rate-limiter-flexible';

const senderLimiter = new RateLimiterMemory({
  points: 10,  // 10 emails
  duration: 60 // per minute
});

const domainLimiter = new RateLimiterMemory({
  points: 50,  // 50 emails
  duration: 60 // per minute
});

async function checkRateLimits(sender) {
  const from = sender.toLowerCase();
  const domain = from.split('@')[1];

  try {
    await senderLimiter.consume(from);
    await domainLimiter.consume(domain);
    return { allowed: true };
  } catch (rateLimitError) {
    return {
      allowed: false,
      reason: 'Rate limit exceeded',
      retryAfter: Math.ceil(rateLimitError.msBeforeNext / 1000)
    };
  }
}
```
```javascript
app.post('/webhooks/inbound', express.json(), async (req, res) => {
  for (const event of req.body) {
    const relay = event.msys?.relay_message;
    if (!relay) continue;

    const rateCheck = await checkRateLimits(relay.msg_from);
    if (!rateCheck.allowed) {
      console.warn(`Rate limited: ${relay.msg_from}`);
      await logRateLimited(relay);
      continue;
    }

    // Continue with spam checking and processing
    await processEmail(relay);
  }

  res.sendStatus(200);
});
```
Quarantine Management
Implement a quarantine system for suspicious emails:
```javascript
class EmailQuarantine {
  constructor(storage) {
    this.storage = storage;
  }

  async quarantine(email, reason) {
    await this.storage.save({
      id: email.id,
      email: email,
      reason: reason,
      quarantinedAt: new Date().toISOString(),
      status: 'pending',
      expiresAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toISOString()
    });
  }

  async release(emailId) {
    const record = await this.storage.get(emailId);
    if (!record) throw new Error('Email not found');
    await this.storage.update(emailId, { status: 'released' });
    return record.email;
  }

  async delete(emailId) {
    await this.storage.update(emailId, { status: 'deleted' });
  }

  async listPending() {
    return this.storage.query({ status: 'pending' });
  }
}
```
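The `storage` object passed to `EmailQuarantine` just needs `save`, `get`, `update`, and `query` methods. One possible in-memory backend, suitable for local testing (use a database in production so quarantined emails survive restarts):

```javascript
// One possible in-memory implementation of the storage interface the
// EmailQuarantine class expects (save/get/update/query).
// For local testing only; use a real database in production.
class InMemoryQuarantineStorage {
  constructor() {
    this.records = new Map();
  }

  async save(record) {
    this.records.set(record.id, record);
  }

  async get(id) {
    return this.records.get(id) ?? null;
  }

  async update(id, fields) {
    const record = this.records.get(id);
    if (record) Object.assign(record, fields);
  }

  // Return all records whose fields match the filter exactly
  async query(filter) {
    return [...this.records.values()].filter((record) =>
      Object.entries(filter).every(([key, value]) => record[key] === value)
    );
  }
}
```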
```javascript
const quarantine = new EmailQuarantine(quarantineStorage);

app.post('/webhooks/inbound', express.json(), async (req, res) => {
  for (const event of req.body) {
    const relay = event.msys?.relay_message;
    if (!relay) continue;

    const email = {
      from: relay.msg_from,
      subject: relay.content.subject,
      text: relay.content.text
    };

    const { score: spamScore } = analyzeForSpam(email);
    if (spamScore >= 5) {
      await quarantine.quarantine(relay, `Spam score: ${spamScore}`);
      continue;
    }

    await processEmail(relay);
  }

  res.sendStatus(200);
});
```
```javascript
// Admin endpoints to review and release quarantined emails
app.get('/admin/quarantine', async (req, res) => {
  const pending = await quarantine.listPending();
  res.json(pending);
});

app.post('/admin/quarantine/:id/release', async (req, res) => {
  const email = await quarantine.release(req.params.id);
  await processEmail(email);
  res.json({ success: true });
});
```
Reporting Spam
When spam gets through your filters, maintain a local blocklist to prevent future deliveries from the same sender:
```javascript
// User reports an email as spam
app.post('/api/report-spam', async (req, res) => {
  const { emailId } = req.body;

  // Add sender to local blocklist
  const email = await getEmailById(emailId);
  await addToBlocklist(email.from);

  // Log for analysis and tuning
  await logSpamReport({
    emailId,
    sender: email.from,
    spamScore: email.spamScore,
    reportedAt: new Date()
  });

  res.json({ success: true });
});
```
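The `addToBlocklist` helper above is left to you. A minimal sketch extends the in-memory `blocklist` Set from the allowlist/blocklist example (redeclared here so the snippet is self-contained); in production, persist the entry so the block survives restarts and is shared across instances:

```javascript
// Sketch of an addToBlocklist helper. Uses the in-memory Set from the
// allowlist/blocklist example; a production version would also persist
// the entry (the db call below is a hypothetical placeholder).
const blocklist = new Set();

async function addToBlocklist(sender) {
  blocklist.add(sender.toLowerCase());
  // e.g. await db.blocklist.insert({ sender, addedAt: new Date() });
}
```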
Best Practices
Start with moderate filtering
Begin with moderate spam filtering and adjust based on your false positive/negative rates. It’s better to quarantine than outright reject initially.
Log all filtering decisions
Keep detailed logs of why emails were filtered, quarantined, or blocked. This helps debug issues and tune your filters.
Provide user feedback mechanisms
Allow users to report spam that got through and legitimate emails that were blocked. Use this feedback to improve your filtering.
Review quarantine regularly
Set up processes to review quarantined emails and release false positives promptly.
Don’t rely solely on the spam score. Combine it with allowlists, blocklists, rate limiting, and content analysis.
Be careful with aggressive spam filtering. False positives can cause you to miss important emails from customers or partners.