Thank you to everyone who joined yesterday’s roundtable on the future of tables and databases in Coda. The energy in the room was great: your questions, ideas, and real-world use cases made for a genuinely rich conversation.
@nathan from the product team walked through where we’re headed: improvements to scale, performance, and how shared data will work across docs, then opened things up for live Q&A with the community.
Here are the key takeaways.
What got people excited
Performance at scale
Teams managing large tables or running operational workflows in Coda felt this one immediately. The new architecture is built to handle significantly larger datasets while keeping docs fast and responsive. For many attendees, it was the most anticipated improvement on the list.
A real home for shared data
One of the themes that came up again and again: important data shouldn’t have to live inside an individual doc. The idea of databases as a dedicated source of truth, something multiple workflows can connect to reliably, resonated strongly with teams who’ve been working around this limitation for a while.
Live data across docs
The move toward live syncing landed well. Rather than managing manual refreshes or chasing delayed updates, the new model keeps changes connected wherever the data is used. Makers could immediately see how much this would simplify their setups.
More powerful relational workflows
This one sparked plenty of imagination. Attendees immediately mapped it to real workflows: planning systems, project tracking, CRM-style use cases, and operational dashboards. The ability to connect related tables more reliably across docs opens the door to building more sophisticated systems on top of Coda.
What y’all are watching closely
Alongside the excitement, the conversation raised sharp, practical questions about how this will work in practice.
Feature parity for advanced workflows
Many builders rely on buttons, automations, and complex formulas to run their day-to-day operations. Several attendees asked how these capabilities will evolve alongside the new architecture and when full feature support will be available.
Migration from existing setups
A common thread: what does the transition actually look like? Migration tools are planned, but people wanted to understand how their current tables and formulas will carry over, and what adjustments might be needed along the way.
Permissions and access control
Teams managing shared operational data had questions about row- and column-level access controls—how robust the permission model will be, and when support will be available.
External integrations
Teams syncing from HubSpot and other packs asked how those integrations will work with the new architecture and when support for larger synced datasets will be available.
Local flexibility within shared data
Some asked a more nuanced question: how will local calculations, annotations, and doc-specific views work when you’re pulling from a shared database? It’s the kind of question that shows people are already thinking seriously about how to build with this.
All of these are exactly the right questions to ask, and they’re actively shaping the roadmap as development progresses through beta.
Q&A from the live call
@Max_OBrien — How will buttons work in the new database model?
There are no fundamental architectural blockers to supporting buttons. Simpler button behaviors are already available today, and broader support will follow as features are integrated into the new architecture. Most button functionality is expected to be supported over time.
@Heather_Donnithorne — Will buttons work across connected docs, and how will teams interact with shared databases?
The goal is for button columns to remain functional when database tables are used across connected docs. The new system should also offer more flexible permissions, allowing teams to interact with shared data without requiring full control over the underlying database structure.
@Turner_Gunn2 — Will existing large tables be able to transition into databases?
Migration from existing tables is a priority. The team plans to provide tools to move current tables into databases while preserving existing connections where possible. Some migration capabilities may come later in the beta as the tooling matures.
@Turner_Gunn2 (follow-up) — What about complex formulas during migration?
Some formula patterns may need adjustments. Certain behaviors don’t scale well in the new architecture. During the beta, the team plans to gather real examples from users and provide concrete guidance on adapting those workflows.
@Tim_Richardson1 — Will the new model make it easier to reuse shared tables across multiple docs?
Improving discoverability and consistency of shared data is a core goal. Shared data sources will be easier to find and connect to, and references will always point back to the original source table rather than disconnected copies.
@Hugo_Assuncao1 — Will the system support loading only the rows relevant to a given user or view?
Yes, the architecture loads only the rows needed for the view a user is actively working in, which is a significant performance improvement. Initially, users will be able to filter which rows sync into each doc. Over time, the architecture may support more advanced row-level permissioning.
@Nick_HE — Is column-level security planned?
Not for the short term, but the architecture is designed to support this type of access control down the road.
@Tim_Richardson1 — Will databases be included in existing plans or be a separate offering?
Pricing hasn’t been finalized yet. When Nathan referred to databases as a “separate product,” he was describing the user experience, not necessarily how it will be packaged commercially.
@Chris_Strom — How will databases work with external data sources like HubSpot packs?
Pack integrations will be supported on the new architecture later in the year. The plan is to build the core database foundation first, then extend it to support pack-backed tables, which will enable syncing of much larger datasets from external systems.
@Hugo_Assuncao1 — Will it be possible to add local columns or calculations to synced database data?
Yes, through what Nathan called “annotation columns” — columns created locally within a doc that attach to synced database data. This lets teams layer in calculations or doc-specific context without touching the underlying database.
@Turner_Gunn2 — Will lookup and relation columns be supported from the start?
Lookup and relation columns are already working in the new architecture and are expected to be available from the initial release.
Outstanding questions, answered!
You can find a list of the many questions we couldn’t get to during the live call, answered by Nathan here. Click on the comment triangle in the top left-hand corner of each row to see the answers.
What’s next
A beta is planned for the coming months, with a broader launch later this year. If you’re interested in testing the new database capabilities and sharing feedback, click here!
Thanks again to everyone who joined and brought such thoughtful questions to the call! If you have more to add, keep the conversation going right here. This kind of community input is exactly what shapes the product’s next steps.