Got a client project coming up and I'm trying to avoid over-engineering it. They want something similar to a tiny Wikibase: basically a bunch of triples (subject - predicate - object). Nothing massive, probably a few million rows at most, and reads will dominate.

My first instinct was to keep it boring and do it in Postgres. One table:

(subject_id, predicate_id, object_id)

with indexes on subject/predicate. But the queries might start chaining relations a bit (following links across entities, filtering by properties, that kind of stuff), so now I'm wondering if I'll regret not using a graph/RDF DB later. At the same time… introducing another database engine for a relatively small dataset feels unnecessary.

Has anyone here actually run something like this in Postgres? Did it hold up fine, or did you end up moving to a graph DB once queries got more complex?
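For context, the kind of "chaining relations" I mean would be multi-hop traversal, which a recursive CTE over that one table can handle. A quick sketch of the idea — using SQLite via Python as a stand-in for Postgres since the `WITH RECURSIVE` syntax is close; the table name, indexes, and predicate IDs here are all made up for illustration:

```python
import sqlite3

# One-table triple layout, as described above. SQLite stands in for
# Postgres here; the recursive-CTE syntax is nearly identical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE triples (
    subject_id   INTEGER NOT NULL,
    predicate_id INTEGER NOT NULL,
    object_id    INTEGER NOT NULL,
    PRIMARY KEY (subject_id, predicate_id, object_id)
);
-- Covering the other access paths: lookups by predicate and by object.
CREATE INDEX idx_pred_obj ON triples (predicate_id, object_id);
CREATE INDEX idx_obj ON triples (object_id);
""")

# Hypothetical data: predicate 1 = "links_to", forming a chain 1 -> 2 -> 3 -> 4.
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)",
                 [(1, 1, 2), (2, 1, 3), (3, 1, 4)])

# Follow "links_to" edges transitively from entity 1.
# UNION (not UNION ALL) deduplicates and guards against cycles.
rows = conn.execute("""
WITH RECURSIVE reachable(id) AS (
    SELECT object_id FROM triples
     WHERE subject_id = 1 AND predicate_id = 1
    UNION
    SELECT t.object_id
      FROM triples t JOIN reachable r ON t.subject_id = r.id
     WHERE t.predicate_id = 1
)
SELECT id FROM reachable ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # -> [2, 3, 4]
```

This is roughly the query shape I'd expect once links start chaining; my worry is whether it stays fast and readable at a few million rows, or whether graph-DB query languages pay off there.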