Usage Limits
Atom itself has no usage limits. You can send as many messages as you want, whenever you want. The limits you may run into come from your AI provider: OpenAI by default, or Anthropic if you choose it.
What uses your allowance
Every message you send and every response Atom receives consumes tokens. Some things use more than others.
- Simple edits like renaming a layer or changing a color use very little.
- Multi-step automations like building a comp with keyframes and expressions use a moderate amount.
- Large project scans where Atom reads hundreds of layers use more.
- Long conversations where you build on context over many messages add up, because the conversation history is sent along with each new message.
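If you want a rough feel for what a message costs, a common rule of thumb (an approximation, not your provider's actual tokenizer) is about four characters of English text per token:

```python
# Ballpark estimate: ~4 characters per English token.
# This is a common heuristic, not the exact tokenizer
# OpenAI or Anthropic uses -- treat the result as an order of magnitude.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

prompt = "Add a 0.5s ease-in opacity fade to the selected layer"
print(estimate_tokens(prompt))  # a targeted prompt: roughly a dozen tokens
```

The same arithmetic explains why long conversations add up: each new message is accompanied by the accumulated history, so the per-message token count grows over the session.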
Staying efficient
- Be specific. Targeted prompts use fewer tokens than vague ones. “Add a 0.5s ease-in opacity fade to the selected layer” is more efficient than “make it look nice.”
- Start fresh when the context gets stale. Long conversations accumulate tokens. If you’ve switched tasks, start a new chat.
- Use skills for repetitive work. Saved prompts reduce back-and-forth.
- Work in focused sessions. Batching related edits into one session is more efficient than scattering single requests throughout the day.
Provider limits
OpenAI
If you use Atom’s built-in chat, your limits come from OpenAI. Depending on how Codex is configured on your machine, that usually means:
- an OpenAI plan that includes the AI features Atom uses,
- a ChatGPT Business or Enterprise account, or
- OpenAI API billing
Anthropic
If you use Claude Code, your limits come from Anthropic. Depending on how you sign in, that usually means:
- an Anthropic plan that includes Claude Code, or
- Anthropic API billing
In either case, subscription plans usually have rolling limits that reset on a schedule, while API billing is typically pay-per-token.
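Under pay-per-token API billing, the arithmetic is simple: cost is tokens used times the per-token rate, usually quoted per million tokens and priced differently for input and output. A sketch, with placeholder rates (check your provider's pricing page; these numbers are assumptions, not real prices):

```python
# Hypothetical per-million-token rates -- placeholders for illustration,
# not actual OpenAI or Anthropic pricing.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens (assumed)

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost = (tokens / 1M) * per-million rate, summed for input and output."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# e.g. 2M input tokens + 0.5M output tokens in a month:
print(round(monthly_cost(2_000_000, 500_000), 2))  # → 13.5
```

Output tokens are typically priced higher than input tokens, which is another reason long, verbose responses cost more than targeted edits.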