Vanessa Lozzardo · 8 min read

Building an AI-powered email assistant with Sendkit

Combine an LLM with the Sendkit API to build an assistant that drafts, sends, and tracks email on behalf of your users.

AI · Email API · Tutorial · Automation

Your user types "email John about moving the standup to 3 PM." Thirty seconds later, a well-written message lands in John's inbox — drafted by an LLM, approved by the user, sent through your infrastructure, and tracked end-to-end. That's the AI email assistant pattern, and it's surprisingly straightforward to build.

This tutorial walks through the architecture, the code, and the sharp edges you need to file down before shipping this to production. We'll use OpenAI for the LLM layer and the Sendkit email API for sending and delivery tracking.

What an AI email assistant actually does

Strip away the hype and you get a pipeline with four steps:

  1. Parse intent — The user says something like "email John about the meeting." The LLM extracts the recipient, topic, and tone.
  2. Draft — The LLM generates a subject line and body based on the extracted intent plus any available context (previous threads, calendar data, user preferences).
  3. Approve — The draft is shown to the user. They can edit, regenerate, or send.
  4. Send and track — The approved email goes out via the Sendkit API. Webhooks report delivery, bounces, and opens back to your app.

Every step matters. Skip the approval step and you'll eventually send something embarrassing. Skip delivery tracking and you're flying blind.

Architecture overview

Here's the flow laid out:

User input → LLM (draft) → Preview UI → User approves → Sendkit API (send)
                                                              ↓
                                              Webhook events (delivered/bounced/opened)
                                                              ↓
                                              Feed status back to assistant context

The backend is a Node.js service. The LLM call and the email send are separate API calls — never combine them into a single step. You want a human gate between "the AI wrote something" and "that something got sent to a real person."
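
Here's a rough sketch of how that separation might look as two Express routes. The route paths are illustrative, and draftEmail, saveDraft, and sendApprovedDraft are built in the sections below:

import express from 'express';

const app = express();
app.use(express.json());

// Steps 1 and 2: parse intent and draft. Nothing is sent from this route.
app.post('/assistant/draft', async (req, res) => {
  const { intent, recipient } = req.body;
  const draft = await draftEmail(intent, recipient);
  const draftId = saveDraft(draft, recipient.email);
  res.json({ draftId, draft });
});

// Steps 3 and 4: the human gate. Only an explicit approval triggers a send.
app.post('/assistant/send', async (req, res) => {
  const result = await sendApprovedDraft(req.body.draftId);
  res.json(result);
});

app.listen(3000);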

Building the draft step

The draft step calls your LLM with the user's intent and returns structured output. Use function calling or a JSON schema to enforce the response shape.

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const draftEmail = async (userIntent, recipientContext) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content: `You are an email drafting assistant. Given user intent and context, produce a JSON object with "subject" (string) and "body" (string, plain text). Keep the tone professional but natural. Do not invent facts.`,
      },
      {
        role: 'user',
        content: `Intent: ${userIntent}\nRecipient context: ${JSON.stringify(recipientContext)}`,
      },
    ],
    response_format: { type: 'json_object' },
  });

  return JSON.parse(response.choices[0].message.content);
};

// Usage
const draft = await draftEmail('Tell John the standup is moving to 3 PM tomorrow', {
  name: 'John Park',
  email: '[email protected]', // illustrative address
  relationship: 'teammate',
});

console.log(draft.subject); // "Standup time change — moving to 3 PM"
console.log(draft.body); // "Hey John, quick heads up..."

Force JSON output. If you let the LLM return free-form text, you'll spend half your time parsing it and the other half debugging why it broke.
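
If the model you're using supports structured outputs, you can go a step further and pin the exact shape with a JSON schema instead of the looser json_object mode. A sketch; the schema name is arbitrary:

// Same call as before, but with a strict schema enforcing the response shape
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages, // the same system and user messages as above
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'email_draft',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          subject: { type: 'string' },
          body: { type: 'string' },
        },
        required: ['subject', 'body'],
        additionalProperties: false,
      },
    },
  },
});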

Human-in-the-loop

This is the part people skip when they're excited about AI. Don't skip it.

Present the draft to the user in your UI. Give them three options: Edit, Regenerate, or Send. Store the draft server-side with a unique ID so the send action references an approved payload, not whatever the client posts.

import { randomUUID } from 'crypto';

// In-memory store for the example; use a database in production
const drafts = new Map();

const saveDraft = (draft, recipientEmail) => {
  const id = randomUUID();
  drafts.set(id, {
    to: recipientEmail,
    subject: draft.subject,
    body: draft.body,
    approved: false,
    createdAt: Date.now(),
  });
  return id;
};

const approveDraft = draftId => {
  const draft = drafts.get(draftId);
  if (!draft) throw new Error('Draft not found');
  draft.approved = true;
  return draft;
};

The key principle: the LLM proposes, the user disposes. Never send from a draft that hasn't been explicitly approved.
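
That createdAt field earns its keep here too: expire stale drafts so an approval can't fire long after the context has changed. A sketch of an expiry-aware approveDraft, with an arbitrary 15-minute TTL:

const DRAFT_TTL_MS = 15 * 60 * 1000; // illustrative; tune to your product

const approveDraft = draftId => {
  const draft = drafts.get(draftId);
  if (!draft) throw new Error('Draft not found');
  if (Date.now() - draft.createdAt > DRAFT_TTL_MS) {
    drafts.delete(draftId);
    throw new Error('Draft expired; please regenerate');
  }
  draft.approved = true;
  return draft;
};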

Sending via Sendkit

Once the user approves, send through the Sendkit API. Install the SDK:

npm install @sendkitdev/sdk

Then send the approved draft:

import { Sendkit } from '@sendkitdev/sdk';

const sendkit = new Sendkit(process.env.SENDKIT_API_KEY);

// Escape HTML special characters so the body can't inject markup
const escapeHtml = str =>
  str.replace(/[&<>"']/g, c =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]));

const sendApprovedDraft = async draftId => {
  const draft = approveDraft(draftId);

  const { data, error } = await sendkit.emails.send({
    from: '[email protected]', // replace with your verified sending domain
    to: draft.to,
    subject: draft.subject,
    html: `<p>${escapeHtml(draft.body).replace(/\n/g, '</p><p>')}</p>`,
    text: draft.body,
  });

  if (error) {
    console.error('Send failed:', error.message);
    return { success: false, error: error.message };
  }

  console.log('Sent:', data.id);
  return { success: true, messageId: data.id };
};

The SDK returns { data, error } instead of throwing. Check error first. The data.id value is your handle for tracking delivery later. Always include both html and text — some mail clients still prefer plain text. See our full Node.js transactional email guide for more patterns.

Tracking delivery with webhooks

Sending is half the job. You need to know if the email actually landed. Configure a webhook endpoint in your Sendkit dashboard to receive delivery events.

import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/sendkit', (req, res) => {
  // Verify the webhook signature before trusting the payload (see the Sendkit docs)
  const { type, data } = req.body;

  switch (type) {
    case 'email.delivered':
      console.log(`Delivered: ${data.messageId} to ${data.to}`);
      // Update assistant context: "Your email to John was delivered"
      break;
    case 'email.bounced':
      console.log(`Bounced: ${data.messageId} (${data.bounceReason})`);
      // Alert user: "Email to John bounced — check the address"
      break;
    case 'email.opened':
      console.log(`Opened: ${data.messageId}`);
      // Optional: "John opened your email"
      break;
  }

  res.sendStatus(200);
});

Feed these events back into the assistant's context. When the user asks "did John get my email?", the assistant can answer with actual delivery data instead of guessing. For a deeper dive into bounce handling, see how to handle email bounces.
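
One way to wire that up: keep the latest event per message, keyed by the messageId from the send response, and let the assistant read from it. A minimal in-memory sketch (in production this belongs in your database):

// In the webhook handler above: deliveryStatus.set(data.messageId, type);
const deliveryStatus = new Map();

const describeDelivery = messageId => {
  const status = deliveryStatus.get(messageId);
  if (!status) return 'No delivery events yet; it may still be in transit.';
  const summaries = {
    'email.delivered': 'Delivered.',
    'email.bounced': 'It bounced. Double-check the address.',
    'email.opened': 'Delivered and opened.',
  };
  return summaries[status] ?? status;
};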

Smart features worth adding

Once the core loop works, layer on these:

Recipient validation — Before the LLM even drafts, validate the recipient's email address using the Sendkit email validation API. Catch typos and disposable addresses early. No point drafting a great email to an address that can't receive it.

const { data: validation } = await sendkit.emailValidations.validate({
  email: recipientEmail,
});

if (validation.result !== 'deliverable') {
  // Ask user to double-check the address
}

Tone detection — Add a system prompt that classifies the draft's tone (formal, casual, urgent) and surfaces it to the user. "This sounds formal — want me to make it more casual?" Small touch, big usability win.
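
A minimal sketch of that classifier, using a second, cheaper model call (the model choice and label set are assumptions):

const classifyTone = async body => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content:
          'Classify the tone of the email below as exactly one of: formal, casual, urgent. Reply with the single word only.',
      },
      { role: 'user', content: body },
    ],
  });
  return response.choices[0].message.content.trim().toLowerCase();
};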

Send-time suggestions — If you have timezone data for the recipient, suggest optimal send times. A Tuesday at 10 AM in the recipient's timezone beats a Saturday at midnight.
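
A sketch of the check, assuming you store an IANA timezone per recipient (the business-hours window is a heuristic, not a rule):

// Current hour of day in the recipient's timezone
const localHour = timeZone =>
  Number(
    new Intl.DateTimeFormat('en-US', {
      timeZone,
      hour: 'numeric',
      hourCycle: 'h23',
    }).format(new Date()),
  );

const suggestSendTime = timeZone => {
  const hour = localHour(timeZone);
  if (hour >= 9 && hour < 17) return { sendNow: true };
  return {
    sendNow: false,
    hint: "It's outside the recipient's business hours. Schedule for their morning?",
  };
};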

Thread awareness — Pass previous email threads into the LLM context so replies stay coherent. The assistant should know what John said last, not just what the user wants to say now.
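
In practice that means prepending the thread to the prompt. A sketch, assuming threads are stored as arrays of { from, body } messages:

const draftReply = (userIntent, thread, recipientContext) => {
  // Flatten the prior thread so the model sees what was already said
  const history = thread.map(m => `${m.from}: ${m.body}`).join('\n---\n');
  return draftEmail(
    `${userIntent}\n\nPrevious thread:\n${history}`,
    recipientContext,
  );
};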

Security considerations

An AI email assistant with access to a send API is a footgun if you're careless. Lock it down.

Recipient allowlisting — Never let the LLM choose the recipient. The user provides the recipient, your code resolves it to an email address from your contact database, and the LLM only drafts the content. If the user says "email the whole company," that's a business logic decision, not an LLM decision.
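
A sketch of that resolution step. The contact store is hypothetical; the point is that the address comes from your data, never from model output:

// Hypothetical per-user contact store: userId -> Map of lowercase name -> email
const contactsByUser = new Map();

const resolveRecipient = (userId, contactName) => {
  const email = contactsByUser.get(userId)?.get(contactName.toLowerCase());
  if (!email) {
    throw new Error(`No contact named "${contactName}" in your address book`);
  }
  return email; // the LLM only ever drafts content for this resolved address
};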

Content sanitization — The LLM might generate HTML-like content or injection attempts (prompt injection via email body is a real attack surface). Sanitize the body before sending. Strip scripts, limit HTML tags, and escape anything suspicious.
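
If you render the body as HTML, a library like sanitize-html can enforce an allowlist. A sketch; trim the tag list to what your templates actually need:

import sanitizeHtml from 'sanitize-html';

const cleanBody = body =>
  sanitizeHtml(body, {
    allowedTags: ['p', 'br', 'a', 'strong', 'em'],
    allowedAttributes: { a: ['href'] },
  });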

Rate limiting — Cap sends per user per hour. An enthusiastic user (or a buggy client) shouldn't be able to blast 10,000 emails through your assistant. Sendkit's pricing is usage-based, so a runaway loop hits your wallet fast.
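
A minimal fixed-window limiter, in-memory for the sketch (use Redis or similar if you run multiple instances, and pick a cap that fits your product):

const sendWindows = new Map(); // userId -> { count, windowStart }
const HOURLY_SEND_CAP = 20; // illustrative cap

const allowSend = userId => {
  const now = Date.now();
  const window = sendWindows.get(userId);
  if (!window || now - window.windowStart > 60 * 60 * 1000) {
    sendWindows.set(userId, { count: 1, windowStart: now });
    return true;
  }
  if (window.count >= HOURLY_SEND_CAP) return false;
  window.count += 1;
  return true;
};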

API key isolation — Use a dedicated Sendkit API key for your assistant with domain-restricted sending permissions. Don't reuse your main application key. If the assistant's key leaks, the blast radius is contained.

Audit logging — Log every send with the user ID, draft ID, recipient, and timestamp. When something goes wrong (and it will), you need a trail.
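
Even a structured console line beats nothing, though in production you'd write this to durable storage:

const auditSend = ({ userId, draftId, to, messageId }) =>
  console.log(
    JSON.stringify({
      event: 'assistant.email.sent',
      userId,
      draftId,
      to,
      messageId,
      at: new Date().toISOString(),
    }),
  );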

Putting it all together

The AI email assistant pattern is four concerns stitched together: intent parsing, content generation, human approval, and reliable delivery. The LLM handles the first two. Your code handles the approval gate. Sendkit handles the last mile.

Start with the basic send loop — draft, approve, send. Add webhook tracking once that works. Layer on validation and smart features after you've shipped the first version and have real users telling you what they actually need.

The full Sendkit docs cover API keys, domain verification, webhook signatures, and rate limits. If you're building AI agents that send email autonomously (no human-in-the-loop), read our guide on AI agents that send email for the additional guardrails that requires.

Build the simple version first. Ship it. Then make it smart.
