Hi! It’s Anton here and I’m the author of datastrophic.io. I’m a technical leader and a software engineer specializing in distributed systems, data platforms, and AI infrastructure. I’ve spent more than 15 years in the industry, working on high-load, distributed, big data, workload, and container orchestration systems. If you’d like to connect or learn more about my background, the best way to do it is via LinkedIn.
So what does datastrophic actually mean?
The name originates from around 2015, when I was helping startups design their in-house data platforms. We talked a lot about issues including, but not limited to, data loss, consistency, delivery semantics, idempotent writes, and backups, and about the impact they might have on the business. Not surprisingly, a breach of SLAs or an irrecoverable data loss quite often had catastrophic consequences.
datastrophic (adj.) - concerned with a critical data [platform] architecture that can lead to a catastrophic situation resulting in severe damage. Architecture drawbacks, technology choices, and scalability (or the lack thereof) are some of the many reasons that lead to a datastrophic event.
Although there’s no single theme in this blog, its main focus is on architecture design and knowledge sharing. The most recent posts are related to Kubeflow and Kubernetes. Before Kubernetes, it was mostly about Spark and Mesos, but time flies fast.
All the information in this blog is the author’s personal opinion and does not represent the views or position of any person, company, or organisation. None of the content is sponsored, and all the information presented in the blog is for educational purposes only.