System architects face a critical choice between specialized vector databases such as ChromaDB and general-purpose options such as PostgreSQL with the PGVector extension. This decision profoundly impacts total cost of ownership (TCO) and system viability, yet holistic performance data under resource constraints is scarce. We investigate whether a specialized or a generalized architecture provides superior operational efficiency and accuracy when resources are limited, providing an evidence-based guide for navigating the trade-offs between cost, speed, and accuracy. We conducted 119 tests on the Deep1M dataset within a resource-constrained 4 GB RAM Docker container, measuring latency, ingestion speed, storage overhead, and recall accuracy. The results reveal a stark architectural trade-off. ChromaDB delivers highly consistent, low query latency, with only a 1.3-fold performance degradation as data scales. However, this speed comes with significant operational costs: massive storage inefficiency averaging 395 times the raw data size, and severe ingestion bottlenecks showing a 491.7-fold slowdown. Conversely, PostgreSQL with PGVector demonstrates resource efficiency. Its storage overhead is minimal at 3-4 times the raw data size, and it provides 7.0 times better ingestion scalability. Crucially, it achieves statistically superior accuracy at production scale (≥250K vectors), delivering near-perfect 99.6-99.8% recall compared to ChromaDB's 91-95%. The trade-off is performance variability: poorly tuned PostgreSQL queries can be up to 16.6 times slower than ChromaDB. We conclude that PGVector is more viable for dynamic production applications where TCO, scalability, and high accuracy are priorities. ChromaDB's predictable latency is better suited to latency-critical applications with static data, but only if its high operational costs are acceptable.
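The recall figures cited above measure the overlap between an index's approximate nearest-neighbor results and exact ground truth. A minimal sketch of how such a recall@k metric is typically computed (the function name and sample IDs here are illustrative, not taken from the benchmark itself):

```python
def recall_at_k(approx_ids, exact_ids, k=10):
    """Fraction of the true k nearest neighbors returned by the ANN index."""
    return len(set(approx_ids[:k]) & set(exact_ids[:k])) / k

# Illustrative example: the ANN index misses one of the ten true neighbors.
exact = list(range(10))                       # ground-truth neighbor IDs
approx = [0, 1, 2, 3, 4, 5, 6, 7, 8, 42]     # ANN result with one miss
print(recall_at_k(approx, exact))             # → 0.9
```

Averaging this score over a query set yields the per-system recall percentages reported above.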