In a partitioned world, don’t violate the core directive
This is another short post stemming from a recent talk I gave on Azure Cosmos Db vs. SQL Database, and there will be more based on the discussion and feedback I received and the things I learnt along the way.
The point I want to make is that when you are implementing a scale-out data store, regardless of whether you are considering Azure SQL Database, Cosmos Db or another storage engine, you have to think differently about your read and write patterns. To paraphrase Conor Cunningham (linkedin | blog) from his excellent OLTP Sharding Techniques for Massive Scale presentation at SQL PASS in 2014, “don’t violate the core directive”.
For this reason, I encourage anybody considering an implementation against Cosmos Db to take the time to watch the presentation. In fact, I recommend that everyone watches the presentation.
Coming back to Cosmos Db for a moment, if we think about it simply, then at its core it is a scale-out database. To compare terminology, a physical partition in Cosmos Db is equivalent to a shard in SQL Database, and a Cosmos Db logical partition, defined by a partition key, is equivalent to a shardlet in SQL Database.
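To make that terminology mapping concrete, here is a minimal sketch of partition-key routing. This is not how Cosmos Db actually hashes keys internally (its algorithm is an implementation detail), and the partition count and function names are my own assumptions purely for illustration; it only shows the general idea of a partition key binding a logical partition (shardlet) to a physical partition (shard):

```python
# Illustrative sketch only: maps a partition key (logical partition /
# shardlet) to a physical partition (shard). Cosmos Db's real internal
# hashing differs; PHYSICAL_PARTITIONS is an assumed value.
import hashlib

PHYSICAL_PARTITIONS = 4  # hypothetical number of physical partitions/shards


def physical_partition(partition_key: str) -> int:
    """Deterministically map a partition key to a physical partition."""
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % PHYSICAL_PARTITIONS


# Every document sharing a partition key lands on the same physical
# partition, which is what keeps single-partition operations cheap.
assert physical_partition("customer-42") == physical_partition("customer-42")
assert 0 <= physical_partition("customer-42") < PHYSICAL_PARTITIONS
```

The point of the sketch is the design consequence, not the hash function: because the mapping is deterministic per key, operations scoped to one partition key stay on one physical partition, while operations spanning many keys fan out across partitions, and that is exactly where the read/write patterns the presentation discusses start to matter.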
Finally, the relevance of the presentation is that when working with scale-out databases the design and performance concerns are largely the same regardless of vendor or technology. And again, to think about it simply, if violating the core directive is unavoidable then perhaps a scale-out database solution is not what you need.
Update: Nov 23, 2017
A colleague pointed out that I have not explained the core directive, which is true, since I want readers to watch the presentation. The response was: “A short post that requires me to watch a 90 MINUTE video!”.
True, and you don’t have to do both in one sitting. Really, the point I want to emphasize is that the presentation will introduce you to the fundamentals of a scale-out database implementation and the potential pitfalls that are relevant regardless of vendor or technology.