I find myself saying these repeatedly in numerous meetings, discussions, and architecture reviews. I have no idea where these ideas originated from, but they are harmful and lead to entangled, distributed messes.
- Using events does not magically make you loosely coupled.
- Using events does not mean you shouldn't care and understand what data your consumers want.
What I've started noticing more and more lately is that managers preach psychological safety, empowerment and openness... until people disagree with them on something. Then it all goes out the window. I reckon it's correlated to the current job market conditions.
Yet countless other organisations implement those same "radical" ideas brought about by the same "purists" to thrive in increasingly competitive markets because they are not afraid to experiment, fail fast and deliver value sooner (& safer & happier).
There is a fixed mindset in many organisations that makes change infinitely harder. There is an inexplicable clinginess to the status quo. New ideas are labelled "radical" and people bringing them are "purists".
I am no security expert, but making everything publicly accessible because "we implement 'zero-trust'" doesn't sit right with me. You instantly significantly increase the attack surface area and neglect 'defence in depth'.
@einarwh.bsky.social I could have sworn that this image had a post/explanation attached to it, no? I wanted to refer someone to it. Am I imagining it, or did I read it on another platform?
to aggregate and transform logs to a vendor-specific format before forwarding it. #Observability
I was just told at work that we don't need OpenTelemetry and this Collector malarkey. We are already decoupled from any vendor - every microservice (400 in total) logs everything to stdout, we then have scrapers (managed/deployed individually, mind you, so huge maintenance overhead)
In databases (and lakes and warehouses) you handle data you have. In streaming you handle data that is coming at you, while it is coming at you. You have a "primary key" that is the time axis and a context set by a partition or sorting key.

What's special about it is that you don't have to query for that data. It shows up from the end of a pipe that others push into. The data is a sequence of events, which are by nature change notifications. "In the stream" you can uniquely react to things happening and accumulate aggregates because you can understand time and context and the specific change in the moment. You leverage the "impulse" of the event. That is much harder to do once the event has been poured into the vastness of a lake, where everything you need to do with a query is far slower.

Example: inside the stream you can fairly easily compute 30, 15, 5 and 1 minute avg/stddev/min/max/etc. over a sensor value that comes in at 100Hz **at 100Hz**, amend the stream with those values as you stream it onwards into a database, and also instantly spawn an alert event as, say, the averages leave a threshold corridor.

Event metadata registries like the one we're defining in #xRegistry provide the metadata framework for streams. Databases have a schema for the data you have. Event stream metadata registries hold schema for data that you don't have yet.
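Not part of the original post, but a minimal Python sketch of the windowed-aggregate idea it describes: rolling stats computed per event as the stream flows, with an alert emitted the moment an average leaves a threshold corridor. All names, window sizes and thresholds are illustrative, not from any real pipeline.

```python
from collections import deque
from statistics import mean, stdev

class WindowStats:
    """Rolling statistics over the most recent `window_s` seconds of samples."""
    def __init__(self, window_s, rate_hz):
        # A fixed-size deque keeps exactly the samples that fit in the window.
        self.samples = deque(maxlen=int(window_s * rate_hz))

    def push(self, value):
        self.samples.append(value)

    def snapshot(self):
        s = list(self.samples)
        return {
            "avg": mean(s),
            "stddev": stdev(s) if len(s) > 1 else 0.0,
            "min": min(s),
            "max": max(s),
        }

def process(stream, rate_hz=100, windows_s=(60, 300), corridor=(15.0, 30.0)):
    """Amend each event with rolling stats per window; flag an alert when the
    shortest window's average leaves the threshold corridor."""
    stats = {w: WindowStats(w, rate_hz) for w in windows_s}
    lo, hi = corridor
    for event in stream:
        for w in stats.values():
            w.push(event["value"])
        # Enrich the event in-flight, at the event rate, without any query.
        enriched = {**event, **{f"{w}s": st.snapshot() for w, st in stats.items()}}
        avg = enriched[f"{windows_s[0]}s"]["avg"]
        if not (lo <= avg <= hi):
            enriched["alert"] = f"avg {avg:.2f} outside corridor {corridor}"
        yield enriched
```

The point the post makes survives in the sketch: the stats are a by-product of the event arriving, not of a query run later against a lake.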
In databases (and lakes and warehouses), you handle data you have. In streaming, you handle data that is coming at you, while it is coming at you.
www.linkedin.com/posts/clemen...
How can an organisation accept that observability is essential and not optional when its people's mental model of distributed systems is simple lines and boxes where everything is predictable and known?
Which is why I moved away from that initial requirement. The problem is they can't agree on what they want + no one wants to put the work in. They want to change nothing but somehow get everything.
'Architecture' wanted to understand system behaviour in prod, Developers didn't want to change anything as they are under constant pressure to deliver features, Compliance didn't know PII data was logged and want it removed, Security wanted to analyse logs in specialised tools, etc..
However, it quickly became apparent that that meant different things to different people, and speaking to them surfaced a few. Top brass wanted to cut costs, SREs wanted more detailed telemetry + better alerting, DevOps (yes, they have a separate team) wanted standardised logging across microservices
That is part of the problem. They don't know what they want and the deeper I go the more dysfunctional I discover the org is and the more cans of worms I open. They work in silos and don't talk to each other. But when I inherited it the goal was centralised logging.
Today felt like the last straw. People who have tried/are selling observability to/at an org: at what point do you give up?
"We can't send our telemetry data to SaaS vendors; they contain all sorts of PII data", "how do you investigate issues without PII data?", "OK. We realise we shouldn't log PII data but..", "Don't care about all these benefits, only costs."
"We don't need this", "engineers don't have time to implement it", "every engineering director needs to be the 'go-to person' in big outages", "we build it, but we don't run it or design it - so what's in it for us?",
Joined a new company and took over an enterprise-wide initiative to improve logging/monitoring across their microservices arch. Directed the org towards OTel and Observability and have been extolling their virtues since only to be continually met with:
The "Is TDD Dead?" debate with Kent Beck & Martin Fowler vs DHH helped me see the light way back then
Someone did create a... oh, I get it :) I was rejected by JustEat around 10 years ago because my tests didn't isolate the units of code, and the structure of my test project didn't mirror that of the production code. Hopefully, that someone fixed all this now that he works there
Started building a new "dotnet new" template for creating a library as a NuGet package, including @nuke.build build pipeline, unit tests, proper .editorconfig, Roslyn analyzers, a nice readme and more. See github.com/dennisdoomen...
I used consumers here because I have seen this implemented in many ways:
- Using orchestrator services
- Using the UI to orchestrate
- Relying on knowledge in users' heads
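To make the "relocated complexity" concrete, here's a tiny hypothetical sketch (not from the linked post; all service and rule names are invented): two CRUD services hold no business logic, so the rule ends up in the consumer that orchestrates them.

```python
# Hypothetical CRUD services: thin wrappers over storage, no business rules.
class CustomerService:
    def __init__(self):
        self._db = {}
    def put(self, cid, data):
        self._db[cid] = data
    def get(self, cid):
        return self._db[cid]

class OrderService:
    def __init__(self):
        self._db = {}
    def put(self, oid, data):
        self._db[oid] = data
    def get(self, oid):
        return self._db[oid]

# The business rule ("VIP customers get 10% off") lives in the consumer,
# which must know about, call, and coordinate both services itself.
def place_order(customers, orders, cid, oid, amount):
    customer = customers.get(cid)
    if customer["tier"] == "vip":  # rule leaked out of the services
        amount *= 0.9
    orders.put(oid, {"customer": cid, "amount": amount})
    return orders.get(oid)
```

Every consumer (another service, the UI, or a human following a runbook) now has to reimplement or remember that rule, which is exactly the hidden coupling the posts above describe.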
www.ashrafmageed.com/the-hidden-c...
Instead of business processes and rules being baked into your services with the data they operate on, they are pushed up to the consumers and the interactions between them.
CRUD (micro)services usually have no, or very little, business logic under the guise of simplicity. However, this doesn't simplify your system; it just relocates complexity.
Thanks for the link. I'll go check it out and see if I can spot the ones that are wrong!
That would come in handy where I work :) Well, that + "97 Things You Should Know About Microservices".
You said in your latest blog post that you had to whittle it down from 160... I'd love to read a book about the ones that didn't make the cut and why - "The Forgotten 63" maybe
I know the feeling: digital unread books for me.
I was watching a talk by @kevlin.bsky.social last week where he said: "many people buy certain books but never
read them but they hope the fact that they've got them will transfer the knowledge to their head".. so now I'm remedying that
Does an ARB fit with agile/modern development & architecture? Should you try to make it fit? Regardless, having to re-book an ARB session (and wait for the next weekly slot) every time the design changes is not (never?) a good idea.