AI Writing Features Roundtable — Thank You!

Huge thanks to everyone who joined us for Tuesday’s exclusive demo of upcoming AI writing features in Coda!

For those who couldn’t make it, here’s what you missed:

What We Showed

We demonstrated some exciting capabilities coming to Coda in the next few months:

  • Multi-page AI workspaces — Your entire doc becomes context for AI, with research pages, outlines, and drafts all working together

  • Internet-powered research — AI can generate research pages with citations and structured findings

  • Context-aware writing assistance — Smart suggestions that understand where you are in your document

  • Cross-page content generation — Pull insights from one page and draft content in another

  • Show changes mode — Visual diff to review and revert AI additions easily

  • Voice transcription — Speak your ideas and refine them with AI

Your Feedback Shapes Our Roadmap

We heard you loud and clear on several important topics, and we’re bringing these insights back to the team:

  • Grammar assistance needs context awareness — You want control over when and where inline suggestions appear (especially in tables or informal notes)

  • Table users need love too — AI features for data-heavy workflows are on your radar

  • Security and privacy clarity — Questions about page-level permissions for AI context and how chat history works in collaborative docs

While not every piece of feedback will translate into product changes, your input directly informs how we think about and prioritize these features. We’re committed to keeping this conversation going.

Open Questions We’re Tracking

Several great questions came up that we’re still working through:

  • Custom LLM model options and API key integration

  • Idea bar customization and memory

  • Templating mechanisms for compiled document outputs

  • Sync page integration with AI-generated content

We’ll keep you updated as we have answers!

Timeline

These features are being developed now and will roll out to Coda in the coming months. We’re combining the strengths of the Coda and Grammarly teams to make sure we get this right.

Our Commitment

Coda remains the surface where your work gets done. We’re building AI that enhances your workflows without getting in the way, whether you’re a document person or a table person.

Thanks again for your time, feedback, and continued passion for Coda. Stay tuned for more updates, and have a great weekend!

Nice. I expect Superhuman to become the integrator of Coda, Grammarly, and Superhuman Mail in an increasingly concrete way, so that the Coda workspace isn’t siloed from my Grammarly or Mail. It’s good to see that you’re thinking about making writing work at the “workspace level” rather than at the page level, as it does now.

One thing I still can’t wrap my head around is lead times, which are enormous (i.e., if you say “coming months,” I’ll understand around Christmas). It was the same with “single-page sharing or workspace tables”, and the list is very long.

Btw… is Superhuman actually using Coda as the surface where your work gets done, or is this message only promoted outwards without being practiced much internally?

@Ruggy_Joesten1
Any chance to share a recording?

Any plans to add AI features to the free plan? I’d like a few daily credits to spend on testing workflows, prompts, and analysing results. Then, once I’ve validated a working workflow, I’d upgrade to a paid plan for the complete experience.

When I see feature requests like this, it troubles me, because it suggests the Coda product managers are disconnected from the market.

The ability to align tools with designated frontier model providers has topped the list of customer expectations for almost three years. It amazes me that something so basic can miss the opportunity to be a straight-up checkmark, because enterprises often have provider contracts that insulate their data.

If, when asked, this doesn’t trigger an immediate and exuberant YES!, it telegraphs a shaky awareness of market reality. It causes me to wonder about, and investigate, several other AI integration factors as I attempt to understand what else has been missed.

I know that some well-versed AI enthusiasts will counter: “Models are unique and require system-prompt dependencies that optimize their performance, so supporting all models is not practically possible.” If that is the reason, it’s another signal: not enough was invested in insulating customers from AI change, and there is an over-dependence on prompt architectures, which has profound implications for cost and latency.

Unfortunately, due to privacy concerns about sharing recording links without first getting consent from everyone on the call, we can’t share publicly. But we will continue to send over recaps like these after our roundtable events.

We’re excited to share more specifics soon, but here’s what we can say: unified branding is coming, and our vision is clear. We’re building toward a future where work happens on one integrated surface, from start to finish. This isn’t just about the final output, it’s about the entire journey of getting work done.

I hear you, and I really appreciate you taking the time to share this. Honestly, getting my arms around all the product requests, especially the ones that came up before I joined, is one of my top priorities right now. Your feedback, both what you’ve shared before and what you’re sharing today, genuinely helps shape how we’re thinking about the path forward. Once I’ve got that data organized, I’d love your help unpacking what’s most important and understanding the full context behind it.

I’ve always felt that the Codans have a good handle on short and long-term feature strategies. Overall, the company’s track record of decisions concerning fixes and feature additions has been excellent. Most importantly, the quality of the implementations ranks very high in my experience. But that’s not what my comment is fundamentally about.

In and of itself, this is a key issue in AI architecture. But again, this instance pales in comparison to a fundamental business requirement that becomes obvious when demonstrating ANY generative AI feature that magically accomplishes a task. Regardless of cohort—users, Makers, your own internal developers—the first question asked is almost universally:

What model is this?

The second question, again, is almost universally:

Can I configure it to use a different model with my API key?

Predicting these questions well in advance is not difficult because they’ve been asked repeatedly for at least two AI eons.
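To make this concrete: the reason those two questions feel “basic” is that, once requests follow a single chat schema, swapping the model or key is pure configuration. A minimal sketch of that idea in Python — the provider names, base URLs, and request schema below are illustrative assumptions in the style of common OpenAI-compatible endpoints, not anything Coda has announced:

```python
from dataclasses import dataclass

# Hypothetical provider registry. The point: "bring your own model"
# reduces to a handful of user-supplied strings once the request
# shape is standardized.
PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "mistral": "https://api.mistral.ai/v1",
}

@dataclass
class ModelConfig:
    provider: str
    model: str
    api_key: str  # customer-owned key, so data stays under the customer's contract

    def chat_request(self, prompt: str) -> dict:
        """Build one provider-agnostic request (OpenAI-style chat schema)."""
        return {
            "url": f"{PROVIDER_BASE_URLS[self.provider]}/chat/completions",
            "headers": {"Authorization": f"Bearer {self.api_key}"},
            "body": {
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }

# Swapping providers changes three configuration strings; nothing else.
cfg = ModelConfig(provider="mistral", model="mistral-large-latest", api_key="sk-...")
req = cfg.chat_request("Summarize this page")
```

The sketch deliberately builds the request rather than sending it; the argument in the thread is about the configuration surface, not the transport.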

As such, two disturbing signals emerge:

  1. Opaquely anchoring the product to a single frontier model provider speaks volumes to an internal business requirement that is not in alignment with your [Coda] customers’ interests.
  2. It subliminally forecasts a near-term future of AI-related customer dissatisfaction.

As to #2, we saw first-hand the erosion of AI performance in Coda. As frontier models advanced rapidly, Coda AI was stuck in first gear. Customers had no “out”. They couldn’t adapt the product in the face of change—certain change.

As to #1, most customers expect agility in Coda. Its founding principles are based on agility, particularly in configuration and integration.

The hidden side of these observations suggests that the AI team is not moving toward a “thick-prompting” strategy that allows AI users to trade CFL or Pack code for heavier [custom] system prompts. This is typically the third predictable question concerning AI.

Can I change the system prompt?

Products designed to fully insulate users, especially Makers, from modifying system prompts are deeply disconnected from AI trends and market realities. Ideally, this should be made possible dynamically through CFL, a clear opportunity with explosive potential.
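The “thick-prompting” idea amounts to treating the system prompt as data a Maker can supply rather than a constant baked into the product. A toy sketch of that distinction in generic Python — the function name and prompt strings are hypothetical, not Coda’s or CFL’s actual surface:

```python
# Hypothetical default baked into the product.
DEFAULT_SYSTEM_PROMPT = "You are a helpful writing assistant."

def build_messages(user_text: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> list:
    """Exposing the system prompt as a parameter is the entire feature:
    the request shape stays identical; only the first message varies."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# Product default vs. a Maker-supplied "thick" prompt:
default = build_messages("Draft a launch email")
custom = build_messages(
    "Draft a launch email",
    system_prompt="You write terse, bullet-point launch emails for engineers.",
)
```

Because only the system message differs, the Maker-supplied prompt can evolve with the model without any change to the surrounding product code — which is the insulation from AI change the comment above argues for.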

I understand, but it’s a shame, because the call falls at a late hour for some time zones.
