Using Smart Compose
✨ Nylas Smart Compose is new in v3.
For many people, writing clear, concise, grammatically correct, and well-structured email messages takes a lot of time. Writing isn't everyone's superpower, and it doesn't need to be. With the Nylas Smart Compose endpoints, end users can generate well-written email messages in a few seconds, based only on a prompt and email context.
Currently, Smart Compose can do the following:
- Generate new email messages (for example, “Write an email to Ron with a business brief about Nylas”).
- Respond to email messages (for example, “Reply to my friend Emily and RSVP yes to the party”).
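Both use cases take a short natural-language prompt in the request body. The sketch below builds such a request using only the Python standard library. The endpoint path, region host (`api.us.nylas.com`), and the `prompt` body field are assumptions based on the general shape of the Nylas v3 API; confirm them against the Nylas API reference before use.

```python
import json

# Assumption: US-region API host. Use your region's base URL.
API_BASE = "https://api.us.nylas.com"

def build_smart_compose_request(grant_id: str, api_key: str, prompt: str):
    """Return the URL, headers, and JSON body for a Smart Compose call.

    The endpoint path and body shape are assumptions -- check the
    Nylas API reference for the exact schema.
    """
    url = f"{API_BASE}/v3/grants/{grant_id}/messages/smart-compose"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"prompt": prompt})
    return url, headers, body

url, headers, body = build_smart_compose_request(
    "GRANT_ID",
    "NYLAS_API_KEY",
    "Write an email to Ron with a business brief about Nylas",
)
# Send the request with your HTTP client of choice
# (for example, urllib.request or requests).
```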
Before you begin
Before you start making Smart Compose requests, you need the following prerequisites:
- A v3 Nylas application.
- A v3 provider auth app (Google or Azure), and a connector for that auth app.
- A Google or Microsoft grant.
💡 Make sure your project includes a field where your end users can type instructions for the AI. Otherwise, they won't be able to use Smart Compose.
Future plans
In the future, you'll also need to connect your own LLM (large language model) account to use Smart Compose. For example, you might connect your organization's OpenAI account. This allows you to use your own LLM model and parameters, so you can customize the output of the AI.
Response options
Nylas' Smart Compose endpoints support two ways to get AI responses: you can either receive them as a REST response in a single JSON blob, or use SSE (Server-Sent Events) to stream the response tokens as Nylas receives them.
To receive responses using the REST method, either add the `Accept: application/json` header to your request, or omit the `Accept` header entirely.
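With the REST method, the entire suggestion arrives in one JSON object. The minimal sketch below extracts the generated text from such a response; the `data.suggestion` field name is an assumption, so confirm the exact response schema against the Nylas API reference.

```python
import json

# Example REST-style response body. The "suggestion" field name is an
# assumption -- verify it against the Nylas API reference.
raw = '{"request_id": "abc-123", "data": {"suggestion": "Hi Ron, ..."}}'

payload = json.loads(raw)
suggestion = payload["data"]["suggestion"]
print(suggestion)
```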
To enable SSE, add the `Accept: text/event-stream` header to your Smart Compose request. Your project must be able to accept streaming events and render them for your end users.
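In an SSE stream, each event arrives as a `data: <payload>` line, and you concatenate the tokens as they arrive. The sketch below parses raw SSE lines; the per-event `suggestion` field and the `[DONE]` end-of-stream sentinel are assumptions, so check the Nylas API reference for the actual event format.

```python
import json

def tokens_from_sse(lines):
    """Yield token text from raw SSE lines.

    Assumptions: each event carries a JSON payload with a "suggestion"
    field, and the stream may end with a "[DONE]" sentinel. Confirm both
    against the Nylas API reference.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alive lines, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload).get("suggestion", "")

# Simulated stream, as your HTTP client might deliver it line by line.
sample = [
    'data: {"suggestion": "Hi "}',
    'data: {"suggestion": "Ron,"}',
    "data: [DONE]",
]
print("".join(tokens_from_sse(sample)))
```

Rendering each token as it arrives (rather than waiting for the full message) is what makes SSE feel responsive to end users.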
Smart Compose limitations
Keep the following limitations in mind as you work with Nylas Smart Compose:
- Latency varies depending on the length and complexity of the prompt. You might want to add a "working" indicator to your UI so your end users know to wait for a response.
- Prompts sent to the Nylas LLM can be up to 1,000 tokens long. Longer prompts return an error.
- For more information about LLM tokens, see the official Microsoft documentation.
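Because over-long prompts are rejected, you might add a rough client-side length check before sending. The sketch below uses a four-characters-per-token heuristic, which is only an approximation; use a real tokenizer if you need an exact count.

```python
MAX_TOKENS = 1000
CHARS_PER_TOKEN = 4  # rough heuristic, not an exact tokenizer

def estimate_tokens(prompt: str) -> int:
    """Crudely estimate the token count of a prompt."""
    return max(1, len(prompt) // CHARS_PER_TOKEN)

prompt = "Reply to my friend Emily and RSVP yes to the party"
if estimate_tokens(prompt) > MAX_TOKENS:
    raise ValueError("Prompt likely exceeds the 1,000-token limit")
```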