
SpecMesh - Provisioning Kafka resources using the AsyncAPI Spec, May 2023
In this blog, I will discuss how SpecMesh utilizes the AsyncAPI spec to …
Specification-driven data mesh for the enterprise
For organisations to successfully adopt data mesh, setting up and maintaining infrastructure needs to be easy. We believe the best way to achieve this is to leverage the learnings from building a ‘central nervous system’, commonly used in modern data-streaming ecosystems. This approach formalises and automates the manual parts of building a data mesh.
SpecMesh combines Kafka best practices, blueprints, domain-driven design concepts, data modelling, GitOps and chargeback. But rather than talk about it, we decided to build it!
A developer toolkit for streaming data mesh built on Apache Kafka
Build your Kafka applications using the industry-standard AsyncAPI spec.
Specs capture 'Aggregates' that represent units of business functionality. This hierarchy provides a structure for clear and concise ownership and governance rules.
SpecMesh models topics, schemas, and permissions as a unified configuration, rather than using the disparate configurations employed by other tools (see the example spec after this list).
By modeling a collection of topics under a single app that uses a common hierarchy, SpecMesh is able to report on storage and consumption metrics that can be used to build chargeback systems.
Topic names can use a '_protected' label; this allows a tag to be added through which the application owner can grant permissions to other domain-ids (principals).
SpecMesh will support a dataflow-centric visualization of all related specifications (apps), their relationships, as well as producers and consumers (coming soon).
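To make the spec-driven approach above concrete, here is a minimal sketch of an AsyncAPI spec of the kind SpecMesh consumes. The domain id, channel names, Kafka bindings and the grant-access tag syntax shown are illustrative assumptions based on the AsyncAPI format, not the authoritative SpecMesh reference; see the GitHub repository for the definitive layout.

```yaml
# Illustrative sketch only: field names and tag syntax are assumptions,
# not the authoritative SpecMesh format.
asyncapi: '2.5.0'
id: 'urn:acme.lifestyle.onboarding'        # the owning domain/app (hypothetical id)
info:
  title: Acme Lifestyle Onboarding
  version: '1.0.0'
channels:
  _public.user_signed_up:                  # '_public': consumable by any domain
    bindings:
      kafka:
        partitions: 3
        replicas: 1
        configs:
          retention.ms: 604800000          # topic config provisioned from the spec
    publish:
      message:
        schemaFormat: "application/vnd.apache.avro+json;version=1.9.0"
        payload:
          $ref: "/schema/acme.lifestyle.onboarding._public.user_signed_up.avsc"
  _protected.purchase_history:             # '_protected': shared only with named principals
    publish:
      tags:
        - name: "grant-access:acme.finance"   # hypothetical grant to another domain-id
      message:
        payload:
          $ref: "/schema/acme.lifestyle.onboarding._protected.purchase_history.avsc"
```

Because topics, schemas and access grants all hang off one domain-owned hierarchy in a single file, provisioning, governance and per-domain chargeback reporting can be derived from the same source.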
With decades of insights and expertise, we’re reimagining streaming data so that you can focus on your business.
Looking back at Kafka Summit 2023 (OSO) Kafka Summit 2023: Key takeaways …
A Kafka Meetup tech talk that is targeted at Kafka devs. Summary: Kafka is …
‘We spent 2 years trying to build this and what you guys have built is better’ Architect @Tier-1 Investment bank - London
‘3 years ago we started evolving to this, instead I wish we could use SpecMesh now’ Manager @One of Europe's largest retailers
‘SpecMesh is much better thought out than our current solution that took a team over 2 years to develop and we still can't solve chargeback’ Architect @Tier-1 Investment bank - London
‘Why doesn't Kafka have this already? It just makes sense...’ Attendee @Kafka Meetup London
‘We are planning to ditch our broken solution and use this approach’ Manager @Nordic Shipping firm - Big Data London
‘Self-governance and chargeback and modelling using an AsyncAPI spec just makes sense’ Attendee @BigDataLondon
‘We will now use Terraform just for server infra, and make SpecMesh GitOps developer-led. It just makes sense’ Founder @EdTech startup
‘We currently use JulieOps but need features that it looks like you guys will develop (and it doesn't have)’ Tier-1 Investment bank - London
It was created by Neil Avery (Ex-Confluent), Sion Smith (OSO DevOps CTO) and Andy Coates (Ex-Confluent).
Yes, absolutely. Either start a chat via an Issue, Enhancement or PR and we can go from there.
It's first-come, first-served; however, ultimately, it's the three of us and whoever else is showing interest.
It uses the Admin Client (like all Kafka admin tools - Ansible, Terraform and the Kafka scripts). This means it works with open-source Apache Kafka, AWS MSK, Redpanda, Confluent Cloud and Confluent Platform (RBAC pending).
It's unlikely; there is too much value in building up the stack.
It will focus on LinkedIn's DataHub.
Liquidlabs and OSO DevOps.
Create an issue in GitHub and we will be notified!