Roundtable Recap: Tables + Databases

Thank you to everyone who joined yesterday’s roundtable on the future of tables and databases in Coda. The energy in the room was great: your questions, ideas, and real-world use cases made for a genuinely rich conversation.

@nathan from the product team walked through where we’re headed: improvements to scale, performance, and how shared data will work across docs, then opened things up for live Q&A with the community.

Here are the key takeaways.

:clap: What got people excited

Performance at scale

Teams managing large tables or running operational workflows in Coda felt this one immediately. The new architecture is built to handle significantly larger datasets while keeping docs fast and responsive. For many attendees, it was the most anticipated improvement on the list.

A real home for shared data

One of the themes that came up again and again: important data shouldn’t have to live inside an individual doc. The idea of databases as a dedicated source of truth, something multiple workflows can connect to reliably, resonated strongly with teams who’ve been working around this limitation for a while.

Live data across docs

The move toward live syncing landed well. Rather than managing manual refreshes or chasing delayed updates, the new model keeps changes connected wherever the data is used. Makers could immediately see how much this would simplify their setups.

More powerful relational workflows

This one sparked considerable imagination. Attendees immediately mapped it to real workflows: planning systems, project tracking, CRM-style use cases, and operational dashboards. The ability to connect related tables more reliably across docs opens the door to building more sophisticated systems on top of Coda.

:eyes: What y’all are watching closely

Alongside the excitement, the conversation raised sharp, practical questions about how this will work in practice.

Feature parity for advanced workflows

Many builders rely on buttons, automations, and complex formulas to run their day-to-day operations. Several attendees asked how these capabilities will evolve alongside the new architecture and when full feature support will be available.

Migration from existing setups

A common thread: what does the transition actually look like? Migration tools are planned, but people wanted to understand how their current tables and formulas will carry over, and what adjustments might be needed along the way.

Permissions and access control

Teams managing shared operational data had questions about row- and column-level access controls—how robust the permission model will be, and when support will be available.

External integrations

Teams syncing from HubSpot and other packs asked how those integrations will work with the new architecture and when support for larger synced datasets will be available.

Local flexibility within shared data

Some asked a more nuanced question: how will local calculations, annotations, and doc-specific views work when you’re pulling from a shared database? It’s the kind of question that shows people are already thinking seriously about how to build with this.

All of these are exactly the right questions to ask, and they’re actively shaping the roadmap as development progresses through beta.

:woman_raising_hand: Q&A from the live call

@Max_OBrien — How will buttons work in the new database model?

There are no fundamental architectural blockers to supporting buttons. Simpler button behaviors are already available today, and broader support will follow as features are integrated into the new architecture. Most button functionality is expected to be supported over time.

@Heather_Donnithorne — Will buttons work across connected docs, and how will teams interact with shared databases?

The goal is for button columns to remain functional when database tables are used across connected docs. The new system should also offer more flexible permissions, allowing teams to interact with shared data without requiring full control over the underlying database structure.

@Turner_Gunn2 — Will existing large tables be able to transition into databases?

Migration from existing tables is a priority. The team plans to provide tools to move current tables into databases while preserving existing connections where possible. Some migration capabilities may come later in the beta as the tooling matures.

@Turner_Gunn2 (follow-up) — What about complex formulas during migration?

Some formula patterns may need adjustments. Certain behaviors don’t scale well in the new architecture. During the beta, the team plans to gather real examples from users and provide concrete guidance on adapting those workflows.

@Tim_Richardson1 — Will the new model make it easier to reuse shared tables across multiple docs?

Improving discoverability and consistency of shared data is a core goal. Shared data sources will be easier to find and connect to, and references will always point back to the original source table rather than disconnected copies.

@Hugo_Assuncao1 — Will the system support loading only the rows relevant to a given user or view?

Yes, the architecture loads only the rows needed for the view the user is actively using, which is a significant performance improvement. Initially, users will be able to filter which rows sync into each doc. Over time, the architecture may support more advanced row-level permissioning.

@Nick_HE — Is column-level security planned?

Not for the short term, but the architecture is designed to support this type of access control down the road.

@Tim_Richardson1 — Will databases be included in existing plans or be a separate offering?

Pricing hasn’t been finalized yet. When Nathan referred to databases as a “separate product,” he was describing the user experience, not necessarily how it will be packaged commercially.

@Chris_Strom — How will databases work with external data sources like HubSpot packs?

Pack integrations will be supported on the new architecture later in the year. The plan is to build the core database foundation first, then extend it to support pack-backed tables, which will enable syncing of much larger datasets from external systems.

@Hugo_Assuncao1 — Will it be possible to add local columns or calculations to synced database data?

Yes, through what Nathan called “annotation columns”: columns created locally within a doc that attach to synced database data. This lets teams layer in calculations or doc-specific context without touching the underlying database.
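As a rough illustration of what an annotation column could hold, here is a doc-local formula layered on a synced table. The table name Projects and the columns DueDate and Status are hypothetical examples, not names from the call; only the If(), Today(), and thisRow constructs are standard Coda formula language.

```
If(thisRow.DueDate < Today() and thisRow.Status != "Done", "Overdue", "On track")
```

The idea is that the formula lives only in the consuming doc, so each team can compute its own status flags without modifying the shared database schema.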

@Turner_Gunn2 — Will lookup and relation columns be supported from the start?

Lookup and relation columns are already working in the new architecture and are expected to be available from the initial release.

:exclamation_question_mark: Outstanding questions, answered!

You can find a list of the many questions we couldn’t get to during the live call, answered by Nathan here. Click on the comment triangle in the top left-hand corner of each row to see the answers.

:next_track_button: What’s next

A beta is planned for the coming months, with a broader launch later this year. If you’re interested in testing the new database capabilities and sharing feedback, click here!

Thanks again to everyone who joined and brought such thoughtful questions to the call! If you have more to add, keep the conversation going right here. This kind of community input is exactly what shapes the product’s next steps.

18 Likes

We’ve been talking a ton about “lowering the floor of Coda” for new users. As important as that is, this announcement is the one all of us professional makers were eagerly waiting for to finally start “raising the ceiling” for more app-like workflows! I’m so thrilled to have this confirmation that Coda is heading in the right direction! Can’t wait to have so many use cases unleashed for our clients!!

12 Likes

@Daniel_Robertshaw1

You asked for an update on some improvements in the “we’re listening” thread.

This is the latest information.

3 Likes

@Piet_Strydom

Thank you! I totally missed this, exactly the update I was after. Can’t wait to see more!

3 Likes

Thanks @Nathan_Penner for your feedback in the Databases Roundtable Dory doc. Glad to see we are moving forward!

3 Likes

I think I missed everything about this roundtable. I see all the questions being asked, but I don’t really understand what is happening. Are tables just changing to be DBs? Or are tables and DBs going to be two different things?

4 Likes

Hi @Samuel_Langford, to my understanding we will have three types of tables:
simple tables (grid)
standard tables (we have today)
power tables (up to 1M rows)

The power tables will live at the workspace level and can be brought into your docs, eventually with permissions at the row and column level. As such, they offer a faster and more reliable alternative to cross-doc, thanks to support for larger data sets and instant modifications.

I expect an update this spring from @Nathan_Penner on it when a (closed) beta will be launched. Make sure to be part of it :wink:

Cheers, Christiaan

12 Likes

Thanks! It kind of sounds just like how the cross-doc system works now, right?

1 Like

Hi @Samuel_Langford, I haven’t seen it in action yet, but I believe we can expect a smarter setup compared to the current cross-doc logic, thanks to the architecture Nathan’s team is putting in place. That said, the 10,000-row limit will be gone as well.

An essential element, from my point of view, is the permissions on the base table and on the views of these tables we can bring into our docs. Performance without permissions is like a sports car without brakes.

Cheers, Christiaan

8 Likes

I’m excited for it. I’m also really happy to see updates being made to usability rather than AI. This is rad.

4 Likes

Please consider adding some features like:

  • Data input validation
  • Required fields
  • Possibility to add optional column display names to make them more reader-friendly for consumers

:crossed_fingers:

7 Likes

That is great feedback, @Gabriel_Lopes1. Can you add it here as well: Databases Roundtable Dory, so the team can follow up in an organized manner?

Merci, Christiaan

1 Like

Two questions:

  1. Are there any column types currently supported in tables that the team does not plan to support in the Power Tables? I am thinking of canvas columns, specifically.
  2. Is the thinking that permission will be dictated by the doc we pull the data into? So, pull a database/power table in, specify data-based filters, and then anyone in that doc has full access to that data?

Thank you so much for navigating the immense complexity of this transition!!
Astha

4 Likes

Hi @Astha_Parmar!
We plan to support all column types, though we’ll enable some post-beta and a few post-launch as we sequence the integration work.

Canvas columns will have some changes to support scale. We’ll limit tables to one canvas column each, accessible from the row modal. These won’t support filtering, sorting, or formula access, as those are inherently difficult to support at scale for rich content with live collaboration. There may also be some limits on nested objects.

Would love to hear where that does vs doesn’t support your use cases.

Permissions will be similar to cross-doc today and work as you describe. You can pull in a full database to a doc or a filtered subset and then anyone in the doc will have access to that data. Let me know if that’s what you’d expect, or if you prefer a different model.

6 Likes

simple tables (grid)
standard tables (we have today)
power tables (up to 1M rows)

I love the sound (and promise) of these forthcoming improvements. The power tables announcement should be a game changer, and I’m really looking forward to it.

But…
(and I hope this does not come across as ingratitude) I’d love to see the team consider pushing the row limit well beyond 1M rows if at all possible.

While 1M rows is a massive step up from where Coda is today, it’s also Excel’s row limit, which frequently proves limiting these days.

Platforms like RowZero.io have shown that a spreadsheet-like experience can be put in front of large-scale analytical storage (of billions of rows), and the gap between ‘collaborative document tool’ and ‘analytical platform’ is narrowing. Coda is already in my daily toolkit precisely because it offers the combination of structured data with flexible docs.

Reducing the friction of having to think about going elsewhere to handle large files (e.g. files that Excel can’t handle, forcing me to find an alternative) would be amazing.

There are no doubt real architectural trade-offs involved (formulas and real-time collaboration are quite different from columnar storage for analytical workloads), and the team understands those constraints far better than I do. But if power tables are built on a new, modern storage layer, I’d love it if higher limits, e.g. tens of millions of rows, could be possible.

I guess the obvious pushback would then be about having to draw the line somewhere: if tens of millions, why not ‘just’ 100 million rows, why not 200, or 500, etc.? But even a limit significantly higher than Excel’s (and more performant at that scale) would be so helpful.

simple tables (grid)

I love grids and frequently prefer them over a table. However, I have often thought it would be so useful to have cell-based, row/column-style referencing, even just for pretty basic spreadsheet-like interactivity and formula capabilities, and for canvas formulas to quickly reference the value of a specific cell in a grid.
Perhaps this is something that will come along with the Rows acquisition? I’ll keep my fingers crossed.

Thanks for focusing on these DB / table improvements. Genuinely excited for what they will bring and unlock.

3 Likes

Oh wow, I’ve sure been missing out lately :smiling_face_with_tear:

Is there a recording of that call anywhere? Super excited for tru(er) database functionality in Coda. And super excited to hear that Coda ain’t going to be demoted to just a “Grammarly doc editor” but you guys are actually going to grow it into a backoffice powerhouse.

11 Likes

Recognising I am an external party and missing a lot of context, can you please share the logic behind having super tables and regular tables?

I understand the difference but I think super tables will simply make regular tables moot. If I have the option to build a supertable, why would I build a regular one?

By keeping both, you introduce a two-tier database system, which invites complexity and mess (read: higher churn).

My recommendation is to take the Fibery approach:

  1. Make all tables super tables.
  2. Allow users to create databases at a workspace level. This will be important to define access between teams.
  3. Each record is a first class citizen. Docs are also first class citizens. Everything is interconnectable with the standard relationships - one-to-one, one-to-many, etc.
  4. If you implement this architecture, you eliminate the need for cross-doc syncing (which, frankly, is a workaround necessitated by poor architecture design early on).
2 Likes

@Victor_Kalchev two posts in and already asking the hard questions. We love to see it! :sparkles:

On the “why keep both” question: the tiered approach isn’t about creating two classes of tables. It’s about the fact that power tables and standard tables are genuinely optimized for different jobs, and the engine that makes one great would actually make the other worse.

Power tables are being built on a storage layer designed to scale to millions of rows. That same engine, by design, isn’t built for the kind of instant, real-time, cell-level responsiveness you want when you’re collaborating on a project tracker or a simple contact list. If everything defaulted to the power table engine, your everyday tables would feel slower and heavier for no good reason. It’s less “premium vs. budget” and more sports car vs. semi truck: the semi can carry more, but you don’t want to drive it to the grocery store.

The goal over time is for Coda to make that distinction invisible, so you’re never really choosing between them. The right engine just runs under the hood based on what you’re actually doing.

On Fibery: fair comparison to raise. They made a principled architectural bet that everything is a database and every record is a first-class entity, which creates a really coherent, powerful system. It has genuine fans, especially among technical teams building complex relational workflows. The tradeoff is a steep learning curve, and getting the most out of it requires real technical comfort around setup and configuration. Coda is trying to serve a much wider spectrum of users in the same product, which is a harder design problem and one worth watching closely as this roadmap unfolds.

On cross-doc syncing: fair point. Some of that friction comes from early architectural constraints, and reducing it is genuinely on the roadmap. Still in progress, but real.

4 Likes

Thank you for the thoughtful response, @Ruggy-Joesten. I love the future state where the engine chooses the right table type for the user depending on the use case.

Related to the supertables:

I’m curious: on what DBMS will they be built? I assume Postgres?

In their fully implemented commercial state, how close will they be to a platform like Supabase?

On Fibery: you’re right, their product is quite complex, and they’ve turned this complexity into a positioning moat. Notwithstanding, I love the fact that there are virtually unlimited ways you can connect your data (except for externally shareable dashboards, but that’s a different story).

On the Coda side, I’m personally struggling with the data siloing at a doc level. Just today, I ran into an issue where a cross-doc sync wouldn’t load and remained stuck at the permission verification step. Within a doc, things work wonderfully. I’d love the same connectability at a workspace level.

3 Likes

My pleasure! And to answer some of the other items you flagged:

On the DBMS question: I don’t want to speculate on the infrastructure specifics and get something wrong in public. Going to loop in the product team and see if I can get you a real answer, so hang tight.

On the Supabase comparison: it’s an interesting one, though I’d push back slightly on the framing. Supabase is purpose-built for developers as a backend platform, so the analogy only goes so far. Where Coda is headed is more about bringing that level of data power into a collaborative workspace that anyone on a team can use, not just the technical members. Different destination, even if some of the underlying ambitions rhyme.

On the cross-doc sync issue: that’s genuinely frustrating and I’m sorry you hit it. Same deal, I’m going to flag this to the product team and see if I can get some clarity on where things stand and what’s coming. Your framing of “the same connectability at a workspace level” is honestly a perfect way to put it, and it’s the direction things are heading.

More to come!

4 Likes