WHY CIAM: A DISCUSSION WITH GEETHIKA COORAY AND DAVID DEBOISBLANC

DAVID DEBOISBLANC INTERVIEWS GEETHIKA

David deBoisblanc interviews Geethika Cooray, VP and GM of Customer Identity and Access Management (CIAM) at WSO2. WSO2 is an exciting company that we have found to be a powerful presence at our Fortune 500 clients. They offer both API management and CIAM/IAM, and their CIAM has over a billion identities under management. IAM and CIAM capabilities share a single codebase and are available as IDaaS, in a private cloud, or on premises.

While CIAM is a very strong security tool, it is differentiated from IAM in that it focuses on external users and serves as an experience-enhancement tool as well. Whether the context is B2C, B2B, or B2G (government), it offers scale, rich API and SDK connectivity to enterprise systems, and the ability to track and develop customers throughout their relationship journey with the company.

The links below are to the three-minute trailer and the full half-hour interview.

Trailer:

Full Video:

Thanks

David deBoisblanc

Message in a Bottle: Stream Processing

[Image: 3D illustration of an artery constricted by arteriosclerosis, restricting blood flow]

I couldn’t resist the reference to “a message in a bottle”. The name is good, but a better metaphor would be data flow as the lifeblood of a system. Unfortunately, that probably would not make a great title.

In 2018, I had the opportunity to be hosted in Sri Lanka by WSO2 for a dive into their technology stack (both present and future). One of the applications that caught my attention at that time was their open-source WSO2 Stream Processor. That led me to investigate stream processing technology, methods, and use cases more deeply.

Stream processing is a big data method and technology. It also falls into the category of “event-driven design”: a continuous stream of data is queried, identifying and reacting to defined conditions within a range of time (milliseconds). This is WSO2’s definition, and it is a pretty accurate encapsulation.

Here is another useful definition:

Stream processing is the processing of data in motion, or in other words, computing on data directly as it is produced or received.

This post is not meant to be a deep dive but rather an overview of the theme.

Use Case Overview

Recently I have had the opportunity to work in the healthcare space and in the logistics space; a key part of both projects leveraged this technology.

For healthcare, it was predictive patient care; for the logistics company, supply chain optimization, traffic monitoring, and route optimization. Both of these domain-driven designs had stream processing at their core.

Other use cases that I have been involved with:

  • Oil Company Spot Trading
  • IoT Equipment Monitoring
  • Auction Applications
  • Manufacturing Production Line Monitoring and Optimization

Additionally, stream processing has cross-cutting service applications, from complex system security and intrusion monitoring to predictive performance monitoring.

Deploying Stream Processing

One of our solution architects and I were introduced to a client who had “rolled their own” stream processing application. They had routed streams into RabbitMQ as events, coded event topics, and published the results. This is a simplification of what was involved, but it worked; however, it was labor-intensive to design, deploy, and maintain.

Now, in 2021, this is simpler because of the rise in adoption of dedicated stream processing platforms. These products capture the data, route it to its domain-specific logic (called actors), orchestrate the flow, handle performance scaling, and provide error handling.

So what is the point of streaming data through a processor as events? There is a whole catalog of ever-expanding tools and techniques for consuming this data in useful ways. Central to them is the concept of Streaming SQL.

A streaming SQL language runs continuous queries against the streaming data. This is a continuously running process whose output is itself a stream. This output stream is filtered for “events” that are configured to be acted upon. The resulting events can either trigger a service or be published for subscribers. The event streams are processed directly, and only a meaningful subset of the data is subsequently persisted.
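
To make the idea concrete, here is a minimal, self-contained Java sketch of what a streaming query does under the hood (this is not WSO2’s actual streaming SQL dialect; the class, names, and threshold are invented for illustration). It keeps a sliding time window over a stream of readings and emits an “event” whenever the windowed average crosses a threshold:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

/** A toy continuous query: average a sliding time window of readings
 *  and emit an "event" whenever the average crosses a threshold. */
public class SlidingWindowQuery {
    record Reading(Instant at, double value) {}

    private final Deque<Reading> window = new ArrayDeque<>();
    private final Duration span;
    private final double threshold;

    SlidingWindowQuery(Duration span, double threshold) {
        this.span = span;
        this.threshold = threshold;
    }

    /** Called for every element of the input stream. */
    void onReading(Reading r) {
        window.addLast(r);
        // Evict readings that have fallen out of the time window.
        while (!window.isEmpty()
                && window.peekFirst().at().isBefore(r.at().minus(span))) {
            window.removeFirst();
        }
        double avg = window.stream().mapToDouble(Reading::value).average().orElse(0);
        if (avg > threshold) {
            // In a real system this would trigger a service or publish to subscribers.
            System.out.printf("EVENT: avg %.1f over threshold at %s%n", avg, r.at());
        }
    }

    public static void main(String[] args) {
        SlidingWindowQuery q = new SlidingWindowQuery(Duration.ofSeconds(10), 100.0);
        Instant t0 = Instant.now();
        q.onReading(new Reading(t0, 90));
        q.onReading(new Reading(t0.plusSeconds(2), 120)); // pushes the average over 100
    }
}
```

A real streaming SQL engine expresses the same window, aggregation, and filter declaratively and handles the state, scaling, and error handling for you.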

Each stream processor maintains its own database and state; this results in a decoupled and more cohesive design. For example (a code sketch follows these examples):

  1. A certain product is viewed online and a second, related product is then examined; those events are added to a persistence layer so that merchandising analytics can access that correlation and an advertisement can be targeted at the shopper.
  2. A traffic accident slows down a delivery vehicle; the data could be used to track the frequency and timing of that type of event and to handle a series of late deliveries.
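
Here is a hedged Java sketch of the first example (all names are hypothetical): a stateful processor that owns its state locally, rather than sharing a central database, and persists only the meaningful subset, the correlated pair, downstream:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a stateful stream processor: it owns its state (lastViewed)
 *  instead of coupling to a central database, and emits only the
 *  meaningful subset (the correlated pair) for persistence downstream. */
public class ViewCorrelator {
    record View(String sessionId, String productId) {}
    record CorrelationEvent(String sessionId, String first, String second) {}

    // Local, processor-owned state: the last product viewed per session.
    private final Map<String, String> lastViewed = new HashMap<>();

    /** Consume one view event; maybe emit a correlation for analytics/ads. */
    CorrelationEvent onView(View v) {
        String previous = lastViewed.put(v.sessionId(), v.productId());
        if (previous != null && !previous.equals(v.productId())) {
            return new CorrelationEvent(v.sessionId(), previous, v.productId());
        }
        return null; // nothing meaningful to persist
    }

    public static void main(String[] args) {
        ViewCorrelator c = new ViewCorrelator();
        c.onView(new View("s1", "tent"));
        CorrelationEvent e = c.onView(new View("s1", "sleeping-bag"));
        System.out.println(e); // persisted for merchandising analytics
    }
}
```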

This eliminates reliance on a heavy, centralized database; coupling infrastructure and data handlers is an anti-pattern for microservices.

Stateful stream processing joins the database or key/value tables and the event-driven application or analytics logic into an efficient unit.

If you would like to dive deeper into this subject just comment below and let me know.

David deBoisblanc is the managing partner at Duczer East. David has a 25-year history in the software and systems engineering domain.

Importance of “And” in Microservices

David deBoisblanc, April 13, 2021

What is the purpose of this particular microservice?

If the answer contains the word “and”, maybe you need to reexamine your design.

The result of “and” is a microservice that has more than one functional purpose. As you scale and introduce new business requirements, the application becomes increasingly difficult to maintain. Perhaps more egregiously, the communication pattern gains an additional multiplier that can lead to “circuit breaker” hyperactivity. It also increases the complexity of isolating issues for resolution and remediation.

Combining purposes in a particular microservice can appear to be a more expedient way to get new functionality implemented. Hence, the temptation to do it. The usual result, however, is an overall extension of the timeline as challenges arise.

Single Responsibility Principle

What we are talking about is the “S” in the SOLID design principles as first laid out by Robert C. Martin. The Single Responsibility Principle is the first tenet of SOLID:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion

It was originally defined as:

“A class should have one, and only one, reason to change.”

-Robert C. Martin

Many experienced developers already strive to practice this in their development and know the idea even if they have never studied SOLID design. At the coding level, it drives plenty of refactoring into objects or functional designs by developers who take pride in their code. The principle holds for classes, components, and microservices alike.
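
A minimal, hypothetical Java illustration of the “and” test at the class level: the first version “stores the order and emails the customer”, so it has two reasons to change; the refactored version gives each responsibility its own home.

```java
// Before: "stores the order AND emails the customer" — two reasons to change.
class OrderService {
    void placeOrder(String orderId) {
        saveToDatabase(orderId);
        sendConfirmationEmail(orderId); // unrelated responsibility tagging along
    }
    private void saveToDatabase(String orderId) { /* persistence logic */ }
    private void sendConfirmationEmail(String orderId) { /* notification logic */ }
}

// After: each class has one, and only one, reason to change.
class OrderRepository {
    void save(String orderId) { /* persistence concerns only */ }
}

class OrderNotifier {
    void sendConfirmation(String orderId) { /* notification concerns only */ }
}
```

The same test scales up: if a microservice’s stated purpose needs an “and”, the service boundary deserves the same split.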

Single Responsibility Principle in Microservices

Design

Martin Fowler has laid out nine principles for microservices that we strongly agree with and have applied in our deployments. Here is my condensed version based on those:

  1. Componentization via services organized around business capabilities and cross cutting services.
  2. Smart Endpoints and Dumb Pipes
  3. Decentralized Governance and Data Management
  4. Infrastructure Automation
  5. Design for Failure

The Single Responsibility Principle is encapsulated in the first of these principles (Componentization) and is an enabler of the fifth (Design for Failure).

Often the technology stack is selected first, driven by the current inventory of skills and licenses. As a result, there is plenty of focus on that selection process, with only a notional understanding of the application architecture. However, putting strong focus on the following two steps is a better use of energy toward a successful product:

  1. Mapping the components: utilizing the Single Responsibility Principle, the services and their respective boundaries are defined.
  2. Designing the communication pattern.

These two steps are critical for a successful, high-performing product, and they are very difficult to reverse when inadequately designed. As an extra benefit, these steps can lead to a modification of the technology stack, especially since the technology stacks of individual services can be independent of each other.

Changing or new requirements are a fact of life for developers and architects. In the Agile methodology this fact is embraced and embedded in the process, yet this does not reduce the challenges in designing and developing systems. In fact, it amplifies the need for proper design and the proper coding execution of those designs.

Changes not only to business functions but also to operational code (e.g., new security settings) can be relentless. Operational code changes are especially prevalent because of the rapid evolution of the technology stack components. As an aside, the decoupling of operational code by a service mesh greatly mitigates the complications of these changes. This involves several of the other SOLID principles, but more on that in a later article. For those interested in the service mesh, I have another article that gives a good overview (Should I Consider a Service Mesh).

If your microservice encompasses multiple responsibilities, those responsibilities are no longer independent. By simple probability, a microservice needs to change more often if it has more responsibilities.

That may appear a small problem at first blush, but it also affects all the other dependencies of that service. Furthermore, despite our best coding efforts, the odds of the Single Responsibility Principle also being violated within the microservice’s code increase substantially. This would result in changes being required in the other responsibility that was not the initial target of the change. Unintended consequences would then likely cascade through its dependencies.

The microservices design principle of smart endpoints and dumb pipes is also impacted by the Single Responsibility Principle. It is not only an outcome of the principle but is subject to it as well.

Tactical Execution

Good tactics can save even the worst strategy. Bad tactics will destroy even the best strategy.

General George S. Patton

I love that quote, as I have seen its truth in many different manifestations. The second sentence can be applied to our domain with this modification: “bad coding will destroy even the best design”.

As mentioned earlier, the Single Responsibility Principle is extremely important in the actual coding of microservices applications. Microservices “creep” happens when coding starts to create functionality that expands the purpose of the original design. This can happen as a result of expediting a problem resolution or from a misunderstanding of where a function should reside. Whatever the case, team leads and architects must bring rigor to the execution of development code. Code reviews with the Single Responsibility Principle as one of their checkpoints greatly improve outcomes. This is a process and culture characteristic that takes a team to the next level of maturity.

The principle is of course still critical at the individual developer level. For example, if OOP in Java is the paradigm, the key concepts of OOP come into play:

  1. Abstraction
  2. Encapsulation
  3. Inheritance
  4. Polymorphism

The first two of these concepts are an embodiment of the Single Responsibility Principle. As teams grow bigger, the proportion of new developers increases. These team members need to be trained, mentored, and evaluated on learning and executing these principles.
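
For instance, in a hypothetical Java sketch (the interface and provider names are invented), abstraction and encapsulation work together to confine a single reason to change behind one narrow interface:

```java
// Abstraction: callers depend on one narrowly scoped capability...
interface PaymentGateway {
    boolean charge(String accountId, long amountCents);
}

// Encapsulation: the details of *how* charging works are hidden, so the
// one reason to change (the payment provider) is contained in one place.
class StripeGateway implements PaymentGateway {
    @Override
    public boolean charge(String accountId, long amountCents) {
        // provider-specific calls would live here, invisible to callers
        return true;
    }
}
```

Swapping providers then touches exactly one class; nothing that depends on the abstraction needs to change.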

Dogmatism and Antipattern

The Single Responsibility Principle should not be taken to the extreme. The less common but opposite problem is a microservice, class, or function defined at so granular a level that purpose is split into atomic particles, with the boomerang consequence that the design becomes impossible to understand due to the complexity of its dependencies and communication pattern.

As with many things related to system and product development, fully grasping the principle while keeping a healthy balance of practicality is where the art in architecture and development lives.

The emergence of the hybrid “modular monolith” is a manifestation of this practicality. It is important to discern the difference between a modular monolith that is intentionally deployed and one that is unintentionally arrived at through poor microservices design.

The former is rationally understood and validated; it usually involves stable and established flows, for example, combining GL, AR, and AP. The latter is accidental and likely chaotic.

The principle is not the Bible.

There are times and places where it makes sense to violate the boundary; however, they are rare. Transgressing the principle should be eyed with scrutiny, and the case for this “sin” must be airtight.

What is more common is a violation of the principle with full consciousness and with the intention of correcting it later. This, of course, is the genesis of technical debt, as “someday never comes”. Or worse, someday arrives as a catastrophic system event.

The Single Responsibility Principle is time proven and applies through all layers of system design. I know that many of you are aware of it, but like many things fundamental, we all need to be reminded of it and conscious of it in our daily efforts.

If I can help you in any way, please contact me through the contact page.

David Duczer deBoisblanc is the managing partner at Duczer East. David has a 25-year history in the software and systems engineering domain.

Should I Consider a Service Mesh?

-David deBoisblanc, April 7, 2021

Having been heavily involved in microservices over the past 9 years, at multiple clients and across many projects, I have watched the entire ecosystem evolve. What was on IT thought leadership’s mind about the subject has changed in just the last year and a half.

The bewildering array of cloud options, development paradigms, toolsets, and open-source offerings is converging with the trend toward distributed microservices architecture. This has opened the door to new performance possibilities and improved system maintainability.

Increasingly, the challenge of performance optimization has centered on service-to-service communication, which has emerged as the critical path for many implementations. Simply put, the network is a key constraint. This challenge has sharpened focus on containerization, orchestration, and proxies/gateways (e.g., Docker, Kubernetes, and Envoy).

A simplified view of this not-so-simple set of challenges:

  1. Routing (service to service communication)
  2. Security and Reliability of communications
  3. Observability

Enter the service mesh, which manages service-to-service communication.

Service Mesh: What and Why

The key attributes of the service mesh are:

  1. Routing:

Traffic needs to be routed and controlled while also being extremely resilient.

Sidecar proxies are deployed alongside each service, seamlessly routing all traffic. These proxies operate at, and are aware of, Layer 7 in the stack. The result is that routing decisions and metric classification can utilize application-layer metadata such as HTTP headers. This presents a tremendous opportunity to use service mesh configuration to optimize communications and to observe traffic patterns.

A service mesh provides dynamic service discovery and traffic management. From a DevOps perspective, testing, canary releases, and incremental rollouts are simplified and made reliable through traffic shadowing and splitting.

  2. Security and Reliability:

Through the service mesh, “cross-cutting” functions such as security standards and reliability requirements can be systematically designed and enforced. Things like root certificates, access control lists (ACLs), and mTLS can be managed. Also, circuit breakers, retries, and rate limiting are all managed by the control plane of the service mesh.

  3. Observability:

In a distributed microservices environment, a service makes a request to another service, but because you don’t know which replica will actually take that request, there is an observability problem. Of course, this compounds with the complexity of traffic. This makes diagnosing problems, and predictably avoiding them, a great challenge.

Because the service mesh is managing all traffic, it can also observe the distributed trace of a request, error codes, and latency. Whether in production or in the development of new services, this is invaluable for system stability and remediation. In fact, this feature alone has driven the decision to deploy a service mesh at some enterprises.

Network Abstraction

At a higher level, network abstraction is achieved. In a microservices architecture, we most frequently find the operational code and the business logic coupled within each respective logical microservice. The consequence is that any small change to the operational layer, like a security setting, requires redeploying that application.

A service mesh decouples the operational code from the business logic. This makes it possible to change operational code without impacting the business logic, and to do so more frequently and far more simply.

This higher-level abstraction, together with human-focused control planes, puts capabilities into the hands of both global and technical architects.

High Level View

A service mesh has two basic conceptual components: the data plane and the control plane.

The data plane does the heavy lifting. It handles packets/requests, service discovery, routing, load balancing, health checking, authentication, and observation.

The control plane is the “brains” and creates a distributed system from the sidecars.

As mentioned earlier, there is a decoupling of operational code from business logic as the proxies under the direction of the control plane are handling the operational requirements.

Also as mentioned, there are differences in where and how the proxies are deployed by the various providers; however, the conceptual architecture is the same. Very typically, a service mesh is deployed in conjunction with Envoy.

The communication pattern focus is “east-west” remote procedure calls, that is, service-to-service traffic. This is as opposed to an API gateway, which handles “north-south” traffic traveling into the network.

This is an important distinction, as I have been asked about the need for a service mesh when an API gateway is already deployed. They serve different purposes. An API gateway provides traffic control for ingress, the outside traffic entering the network, basically coordinating APIs and related services. This is the north-south axis.

The east-west axis of a service mesh is concerned with sibling traffic between services: as mentioned earlier, controlling and optimizing routing, standardization, and observability.

Adoption and Risk Considerations

Service meshes are increasingly becoming part of the landscape of cloud-native application platforms. While a service mesh brings plenty of capability to solve problems arising in the microservices environment, as with most things in IT, it is not a magic bullet free from risks.

Some potential issues to consider are the following:

Operational cost: There is an operational cost to deploying and operating a service mesh: deployment time, adoption by the production team, and an additional layer in the microservices environment. There may also be a CPU impact, though it is typically very minimal. Even though the mesh introduces two new “hops”, these typically happen over the localhost or loopback network interface with minimal latency. However, testing should be performed first to ensure acceptable performance as a risk-reduction step before moving forward.

Traffic management layer conflict: Layer conflicts are possible when concerns like retries are both policy-controlled from the service mesh control plane and handled within the application code. Serious conflicts can arise, like duplicated transactions (see the sketch below).
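
A hypothetical Java sketch of how doubled-up retry policies multiply: the application retries at its own layer while the mesh proxy retries underneath it, so one logical call can fan out into many physical attempts (the helper and the commented usage are invented for illustration).

```java
import java.util.function.Supplier;

/** Application-level retry wrapper. If the service mesh is *also*
 *  configured to retry (say, 3 attempts at the sidecar proxy), a single
 *  logical call can fan out to 3 x 3 = 9 attempts — and a non-idempotent
 *  operation like "charge the card" may execute more than once. */
class RetryingClient {
    static <T> T withRetries(int attempts, Supplier<T> call) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // the mesh proxy may have already retried underneath us
            }
        }
        throw last;
    }
    // Usage: RetryingClient.withRetries(3, () -> paymentService.charge(order));
}
```

The remedy is to decide which layer owns each policy, and to keep retried operations idempotent.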

Multiple Service Meshes: Through acquisitions and multiple business segments, several disparate service meshes may be deployed. This presents a consolidation challenge, as several control planes are present and there are differences in the architecture of the various service mesh options. For example, Istio and Linkerd have very different paradigms on where to deploy the “sidecar” proxies. The APIs are different, and the configuration parameters can conflict.

“Enterprise Service Bus” pitfalls: You may have recognized the similarity between the service mesh concept and the ESBs of the SOA era, and there are some lessons to be learned from the SOA mistakes: for example, the tight coupling of business logic into the communication bus, and the lack of interoperability between disparate ESBs (see above). However, there are design decisions that greatly mitigate this risk and the previous one.

Incorrectly planned deployment: Big bang vs. iterated rollout. Both have their pros and cons, though the iterated rollout usually carries less risk. But even an iterated rollout needs to be carefully planned, and then a consistent vision needs to be maintained over a longer period.

This list of antipatterns is not exhaustive, but these top the list of issues that we and others have seen.

Deployment Considerations

In deployment, careful planning around several topics is critical.

Incremental and iterated rollout is almost always the best option, though there can be use cases for big bang. Nevertheless, incremental rollout also requires careful planning. I am probably stating the obvious, though I have seen cases where the team just started with no real plan and generally had to retreat and start over.

We have been deploying with an “abstract the abstraction” step, where we utilize tools and architecture to simplify the control plane deployment, allow for flexibility with disparate service meshes, and ease adoption by the support team that will manage the system in production.

A careful understanding of deployed processes (like retries) is needed to make sure that duplicate and conflicting policies and procedures do not plague the system.

Service Mesh Providers:

As mentioned before, there are several popular options being deployed today.

Providers of service mesh tools and frameworks:

  • Istio
  • Kuma
  • Linkerd
  • AWS App Mesh
  • Maesh
  • Consul

There is plenty of buzz surrounding the subject of service mesh, and with good reason. As a concept, it is increasingly being embraced and deployed.

As always if you have any questions, please contact me and I will be happy to help.

David Duczer deBoisblanc is the managing partner at Duczer East. David has a 25-year history in the software and systems engineering domain.

Introducing DE IQ

We are consolidating our blog activities into this new blog website, DE IQ.

We will have regular posts on architecture subjects like microservices, CIAM, service mesh, serverless, low code, etc.

The complexity of the posts will vary from high-level to deep technical dives, bringing something for everyone. The majority will be high-to-medium complexity, delivering news and overviews of emerging technologies and trends, or reinforcing concepts that are sometimes forgotten during execution.

Client architects and thought leaders are encouraged to contribute; I think most people are especially keen to hear your success stories and war stories. We will create a submission link and look forward to your participation. Our competitors are also welcome to submit posts; we just ask that no direct advertising be used.

We hope that this brings value to your world.

Any suggestions or thoughts are welcome.

David Duczer deBoisblanc, Duczer East