Open Source Daily recommends one high-quality GitHub open source project and one hand-picked English technology or programming article every day. Keep reading Open Source Daily and maintain the good habit of learning something new every day.
Today's recommended open source project: Code in the Dark (codeinthedark.github.io)
Today's recommended English article: Microservices Overview

Today's recommended open source project: Code in the Dark (codeinthedark.github.io). Project link: GitHub link
Why we recommend it: This time we are introducing a rather fun event, Code in the Dark. In short, contestants have 15 minutes, the provided assets and their own skill to recreate a web page... from a screenshot of it; all that matters is what it ends up looking like, and the audience decides the result. The biggest constraint is that you are not allowed to preview your work, so you can only roughly imagine what it will look like, which means solid fundamentals are essential to do well. If you are interested, it also makes a fun activity to play today.
Today's recommended English article: Microservices Overview, by Josep Bernabé
Original link: https://medium.com/@jbgisbert/microservices-overview-30e505316a8
Why we recommend it: An introduction to microservices.

Microservices Overview

The Origin of Microservices

One of the first scenarios we at Kumori thought about when designing our platform was how to survive an eventual success. Creating an SLA-driven platform to automatically deploy, configure and run a bunch of small services and applications can be complex, but what if we got a humongous number of services and applications instead, some of them huge?

Several companies have faced similar problems in the past. I personally like Jim Gray's 2006 interview with Amazon CTO and VP Werner Vogels (A Conversation with Werner Vogels). Actually, that interview probably describes the first and one of the most famous microservices-based systems put into production (Netflix is another great example). This system was designed in the late 20th century, before the microservices buzzword even existed. At the time, Amazon was basically a monolithic application running on a web server connected to a database. At some point, they realized that this architecture would not scale anymore. Evolving the application was nearly impossible due to the complexity of the code and the high coupling level among its pieces, mainly because of the resources they shared (like the database). It was difficult to guess who owned which part of the system and who was going to be affected by a given change.

As a result, Amazon came up with a new design based on a radical interpretation of Service-Oriented Architecture (SOA). The single monolithic application became a net of interconnected services. Each service was responsible for a very specific set of business capabilities and data. The one and only access to services was through a well-defined REST-like or SOAP interface. Each service was also assigned to a single team, usually small, which was in charge of the service's entire lifecycle, that is, from its definition and design to the service's operation once in production. In some sense, they were using devops before the word devops became popular. The number of services was (and is) so huge that a single hit on amazon.com may call more than 100 services before having all the necessary data to construct the webpage.

Elasticity

Microservice architectures are usually elastic systems. A system is considered elastic if it efficiently adapts to volatile environments. Users/clients and infrastructure are part of those environments. An environment is volatile if the workload generated by users/clients changes frequently and, sometimes, dramatically, and the same happens with the underlying infrastructure. The infrastructure changes if its topology varies (for example, machines are added or removed) or if its elements crash, malfunction or underperform. The individual probability of a crash, malfunction or degradation can be small, but the cumulative probability can be high for big hardware topologies.

Managing such a system efficiently usually involves:
  • Scalability: the amount of infrastructure needed should be able to grow and shrink with the amount of cashable workload, and do so at a cost lower than the generated income.
  • Quality of service: the system must behave as expected by users. This perception is usually a combination of usability, availability, performance and security. Usability is a very important issue, but it mainly depends on the user interface design. So, from the microservices perspective, I focus on the last three: availability, performance and security.
To accomplish efficiency in a volatile environment, we must design our system carefully to avoid compromising scalability, introducing bottlenecks, provoking cascade failures or introducing security vulnerabilities. This becomes even harder if the system cannot be tested under production-like conditions, which can be due to a variety of reasons. Two frequent reasons are the cost of simulating a production environment for large systems, and the impossibility of predicting production workload patterns due to their variability. As a result, the system cannot trust its own pieces in production, especially when they are under pressure.

The Microservices Architectural Pattern

For me, microservices is more a buzzword, a concept or a set of high-level architectural recommendations than an architectural pattern. As with many other buzzwords, there is no single, common definition of what a microservices-based architecture is, and what it is good for. However, it is commonly considered a good approach to develop elastic software.

As we have seen before, elastic systems run software on top of volatile environments, and must be prepared to work on a permanently degraded state (i.e., most of the time something is not working properly or not working at all). Some other elements usually associated with elastic systems are:
  • Pay-per-use approach. The amount of money paid by customers depends on how many times, and how, they use the elastic software. That is because you also pay per use for your infrastructure, as explained in the following point.
  • Infrastructure as a Service (IaaS). Elastic software usually runs over infrastructure provided as a service and billed following a pay-per-use approach. That's why systems should be elastic: to book just what you need to fulfill your Service Level Agreement (SLA) with your customers, and keep costs reasonable.
  • High availability. Customers expect the software to always be available for them.
  • Continuous evolution. The software is continuously being upgraded, either to add new features, to improve the existing ones or to fix bugs.
  • Information is distributed and heterogeneous instead of persisted in a single central database.
To achieve these goals, microservices commonly promote the following precepts (Microservices: a definition of this new architectural term):
  • Components are deployed as services: software is usually split into pieces or components. In a microservices-based architecture, components are deployed as autonomous services, which can only be accessed through a well-defined API (like a REST API). Each service is executed as a separate process in a separate context. This approach enforces component encapsulation, preventing dirty accesses between components, since they do not share the same memory space or even the same computer (a minimal service sketch follows this list).
  • Design following a business-capabilities-driven architecture: each component or microservice covers one business capability or a small set of them. Each microservice should also be small, with a well-defined set of responsibilities. A business capability represents a feature from the business perspective. For example, package shipping can be a business capability but data persistence cannot (Using domain analysis to model microservices).
  • Smart endpoints and dumb pipes. A distributed communication topology is preferred over a monolithic, centralized communication mechanism like a central bus (Microservices Principles: Smart Endpoints and Dumb Pipes). Central buses and communication structures can easily become complex to manage and a potential bottleneck and/or scalability limitation.
  • A single team is responsible for a microservice during its entire lifecycle (i.e., from design to operation). Teams should also be small. Two-pizza teams (8–9 individuals) are commonly considered the maximum size ("If you can't feed a team with two large pizzas, it's too large." — Jeff Bezos). This "you build it, you run it" approach forces development teams to stay in touch with their software's users and maintenance pitfalls.
  • Decentralized governance: there is no committee of wise people defining a reference architecture for all microservices and blessing each team's designs and technical decisions. Teams can choose their own tools and technologies to develop and manage their services. There are always common tools like ticketing or CI/CD systems, but each team should have a considerable amount of flexibility to choose its own technology stack.
  • Decentralized data: each microservice manages its own data using its own format and database management systems. There is no central database accessed by everyone. If microservice A needs data managed by microservice B, A should ask B for that data using B's well-known API. A microservice only has direct access to its own database (if it has one). With this approach, a central database will never be a bottleneck and an update in a database schema will only affect a single microservice.
  • Automated management: microservices are automatically deployed, configured, updated, scaled and recovered when they crash. Human intervention is obviously allowed, but the system must be able to react by itself if needed. Autonomous predictive analysis algorithms can also be included to foresee hazardous scenarios.
  • Fault-tolerance: the system must be built to tolerate the crash or malfunction of some of its microservices. That usually means redundancy by replication, but not only. Each microservice must also withstand crashes or malfunctions of its dependencies. For example, if microservice A needs something from microservice B, A must keep running even if B crashes, malfunctions or underperforms, to avoid cascading failures (a small client-side sketch with a timeout and a fallback follows this list). It might also be necessary to over-replicate some critical microservices to avoid chain reactions due to the pressure increase on the surviving instances when one of the replicas fails (Release It! Second Edition).
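To make the first precept concrete, here is a minimal sketch, in Go, of a component deployed as an autonomous service behind a small REST API, owning the package-shipping capability used as an example above. The route, types and in-memory store are illustrative assumptions, not anything prescribed by the article.

```go
// A minimal sketch of a microservice that owns a single business capability
// ("package shipping"). All names, routes and data are illustrative only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Shipment is the only data this service owns; no other service touches it directly.
type Shipment struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// In-memory store standing in for the service's private database.
var shipments = map[string]Shipment{
	"s-1": {ID: "s-1", Status: "in-transit"},
}

// GET /shipments/{id} is the service's well-defined API; everything else is internal.
func getShipment(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Path[len("/shipments/"):]
	s, ok := shipments[id]
	if !ok {
		http.NotFound(w, r)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(s)
}

func main() {
	http.HandleFunc("/shipments/", getShipment)
	log.Println("shipping service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```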
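And here is a matching client-side sketch for the decentralized-data and fault-tolerance precepts: microservice A asks microservice B (the shipping service above) for data it does not own, but bounds the call with a timeout and falls back to a degraded default so that a slow or crashed B cannot take A down with it. The service URL, timeout value and fallback are assumptions for illustration, not part of the original article.

```go
// A minimal sketch of microservice A fetching data owned by microservice B
// through B's API, with a timeout and a fallback to avoid cascading failures.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type Shipment struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// fetchShipment asks service B for a shipment; it never blocks longer than the
// deadline and returns a degraded default instead of propagating the failure.
func fetchShipment(ctx context.Context, id string) Shipment {
	ctx, cancel := context.WithTimeout(ctx, 300*time.Millisecond)
	defer cancel()

	url := fmt.Sprintf("http://shipping-service:8080/shipments/%s", id)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return Shipment{ID: id, Status: "unknown"} // fallback
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return Shipment{ID: id, Status: "unknown"} // B is down or too slow: degrade, don't crash
	}
	defer resp.Body.Close()

	var s Shipment
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return Shipment{ID: id, Status: "unknown"}
	}
	return s
}

func main() {
	s := fetchShipment(context.Background(), "s-1")
	fmt.Println("shipment status:", s.Status)
}
```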
These precepts have the following advantages:
  • Divide and conquer: the microservices approach divides huge problems into small pieces called microservices. Each microservice is managed and maintained by its own team and can be developed using standard, well-known development tools.
  • Improved encapsulation: the microservices approach enforces encapsulation, which facilitates setting up fault tolerance and security countermeasures.
  • Fine-grained monitoring and scaling: since each microservice replica is executed in its own process, each process can be monitored separately, providing a better overview of the system's behaviour, and fine-grained replication policies can be applied. Replicating the entire system is no longer needed when a single component is overloaded.
  • Weaker dependencies on a specific technological stack: since microservices do not have to share the same technology, one service's stack can be changed without affecting the others as long as the API remains unchanged.
But they also have some disadvantages:
  • Complex global design and topology: each microservice can be simple, but the overall system, composed of hundreds of microservices, is complex to deploy, coordinate, manage and test.
  • Complex data integrity management: data integrity in classic monolithic systems can be enforced by the underlying database management system. With microservices architectures, the data is spread among the microservices. Atomic operations involving data from several microservices can result in integrity violations if not managed carefully. Dealing with distributed transactions can be challenging and jeopardize the entire system's scalability. For this reason, such operations are strongly discouraged unless they are strictly necessary.
  • Network congestion and increased latency: calls between layers in monolithic applications are performed inside the same process. Calls between microservices are performed between processes and even between machines. This increases the latency and might cause network congestion in case of chatty communications. So, fewer messages with bigger payloads should be preferred over too many small messages (see the batching sketch after this list).
  • API coupling: microservices might decouple component code but increase coupling between APIs if they are not designed carefully. So, techniques like "be liberal in what you accept and conservative in what you send" (Enterprise Integration Using REST: Use versioning only as a last resort) are strongly encouraged to avoid unnecessary headaches when microservices APIs change (a tolerant-reader sketch follows this list).
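As a rough illustration of the "fewer, bigger messages" advice, the sketch below contrasts a chatty, one-call-per-item client with a single batched call. The /shipments:batch endpoint and its payload shape are hypothetical assumptions, not a real API from the article.

```go
// A minimal sketch contrasting chatty calls with a single batched call.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatty: one round trip per shipment, so N requests and N network latencies.
func fetchOneByOne(ids []string) {
	for _, id := range ids {
		resp, err := http.Get("http://shipping-service:8080/shipments/" + id)
		if err == nil {
			resp.Body.Close() // response handling omitted for brevity
		}
	}
}

// batched: one round trip for all shipments, a single, larger payload.
func fetchBatch(ids []string) error {
	body, _ := json.Marshal(map[string][]string{"ids": ids}) // marshal error ignored in this sketch
	resp, err := http.Post("http://shipping-service:8080/shipments:batch",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	ids := []string{"s-1", "s-2", "s-3"}
	fetchOneByOne(ids) // N round trips
	if err := fetchBatch(ids); err != nil {
		fmt.Println("batch call failed:", err)
	}
}
```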
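And a minimal tolerant-reader sketch of "be liberal in what you accept": the consumer declares only the fields it actually depends on, so the producer can add new fields to its payload without breaking this client. The payload and struct are illustrative assumptions.

```go
// A minimal tolerant-reader sketch: decode only the fields you need and ignore
// the rest, so a growing producer payload does not break the consumer.
package main

import (
	"encoding/json"
	"fmt"
)

// The consumer's view of a shipment: only the fields it depends on.
type shipmentView struct {
	Status string `json:"status"`
}

func main() {
	// A newer producer version added "carrier" and "eta";
	// encoding/json silently ignores fields the struct does not declare.
	payload := []byte(`{"id":"s-1","status":"in-transit","carrier":"ACME","eta":"2024-01-01"}`)

	var v shipmentView
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("status:", v.Status) // still works after the API grew
}
```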

Conclusions

Microservices were born to face the complexities of managing elastic systems. These systems must efficiently and effectively serve users 24/7 in environments with complex and dynamic workload patterns, prone to degradation due to frequent updates, failures and malfunctions. This is what we call a volatile environment. Resource consumption must also be efficient, especially if the system is hosted on an IaaS.

The underlying idea of microservices-based architectures is to design the elastic system as a topology of microservices. Each microservice runs in its own process and can be hosted on a different machine. There can be hundreds of microservices in a single system. A microservice is small and responsible for a single business capability or a small subset of them, including the related data. A single team is responsible for a microservice's entire lifecycle. Each microservice can evolve independently and has its own technological stack, but API coupling must be taken into account.

Fine-grained scaling, failure and security management policies can be applied with this architecture. However, the management of the entire system and its orchestration/choreography becomes more complex, and automation mechanisms become mandatory. Special attention should be paid when designing the system's internal communication protocols to avoid network congestion and overcome the increased latencies.
Download the Open Source Daily app: https://openingsource.org/2579/
Join us: https://openingsource.org/about/join/
Follow us: https://openingsource.org/about/love/