The cloud native concept emerged as cloud computing entered a period of rapid development. Cloud native has become extremely popular; these days, you risk being considered outdated if you cannot explain what it is.
Although many people talk about cloud native, few can tell you exactly what it is. Even after finding and reading some cloud-native material, you may still feel confused and lack a complete picture. At this point, you might start to doubt your own intelligence. In my case, whenever I fail to understand an article, I tend to blame the author rather than myself. That is not necessarily fair, but it keeps self-doubt from holding me back and helps me stay positive.
The reason cloud native is so hard to pin down is that it lacks a single, clear definition. Because it is still developing and changing constantly, no individual or organization holds the absolute authority to define it.
Technical change is always heralded by an ideology, much as Marxism paved the way for proletarian revolutions.
Cloud native is an approach to building and running applications, backed by a systematized set of techniques and methodologies. The term combines "cloud" and "native": "cloud" means the application lives in the cloud rather than in a traditional data center, while "native" means the application is designed for the cloud from the very start, so that it can take full advantage of the cloud's elasticity and distributed nature.
The cloud native concept was first put forward by Matt Stine of Pivotal in 2013. In 2015, when cloud native was just becoming popular, Matt Stine described several characteristics of cloud-native architectures in the book Migrating to Cloud-Native Application Architectures: 12-factor applications, microservices, self-service agile infrastructure, API-based collaboration, and anti-fragility. In an InfoQ interview in 2017, Matt Stine revised the list to six characteristics: modularity, observability, deployability, testability, disposability, and replaceability. The latest description of cloud-native architectures on Pivotal's official website lists four key characteristics: DevOps, continuous delivery, microservices, and containers.
In 2015, the Cloud Native Computing Foundation (CNCF) was founded. CNCF originally defined three characteristics of cloud-native architectures: containerized encapsulation, automated management, and microservices. In 2018, CNCF updated the definition with two new features: service meshes and declarative APIs.
As we can see, different individuals and organizations define cloud-native architectures differently, and even the same individual or organization defines them differently at different points in time. This inconsistency made it hard for me to form a clear understanding. After a while, I settled on a simple solution: pick just one definition that is easy to remember and understand (in my case: DevOps, continuous delivery, microservices, and containers).
In short, a cloud-native application is expected to: be containerized using open-source stacks such as K8s and Docker; gain flexibility and maintainability from a microservices architecture; adopt agile methods and DevOps to support continuous iteration and automated O&M; and use cloud platform facilities for elastic scaling, dynamic scheduling, and efficient resource usage.
Cloud native makes applications simple and fast to build, easy to deploy, and able to scale on demand. Compared with traditional web frameworks and IT models, cloud-native architectures bring many advantages and, seemingly, almost no disadvantages. They sound like a powerful secret weapon for the industry.
Microservices: Almost every definition of cloud native includes microservices. Microservices stand in contrast to monolithic applications, and how to split services is often guided by Conway's law, which, roughly speaking, says that a system's design tends to mirror the communication structure of the organization that builds it, and which is not easy to grasp. In fact, I suspect no theory or law is simple and easy to understand; otherwise it would not sound professional enough to be called a theory or a law. The main point is that the structure of the system ends up determining the form of the product. I am not sure whether this, too, follows from Marx's view on the relationship between productive forces and the relations of production.
In a microservices architecture, once services are split by function, they become more loosely coupled and more cohesive, and therefore easier to develop, test, and maintain. Domain-driven design (DDD) is said to be another technique for splitting services. Unfortunately, I do not know much about DDD.
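To give a feel for how small a function-scoped service can be, here is a minimal sketch of one such microservice in Go; the "order" domain, route, and field names are invented purely for illustration and are not from the article.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Order is the only entity this service knows about; user accounts,
// payments, and inventory would live in services of their own.
type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	mux := http.NewServeMux()

	// One small, well-defined API surface per service.
	mux.HandleFunc("/orders/", func(w http.ResponseWriter, r *http.Request) {
		order := Order{ID: "demo-123", Amount: 42.0}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(order)
	})

	log.Println("order service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", mux))
}

Each such service can then be packaged into its own container and deployed, scaled, and replaced independently of the rest of the system.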
Containers: Docker is the most widely used container engine and is heavily used in the infrastructure of companies like Cisco and Google. Docker was originally built on top of LXC. Containers isolate applications from one another and provide the packaging guarantees that make microservices practical. K8s is a container orchestration system, originally built by Google, that manages containers and balances load across them. Both Docker and K8s are written in Go, and both are really good systems.
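To make "K8s manages containers" slightly more concrete, here is a minimal sketch, assuming the official client-go library, of a program that asks the Kubernetes API server which pods (groups of containers) it is currently running; the kubeconfig path and the namespace are illustrative.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load cluster credentials from a local kubeconfig file (illustrative path).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List the pods the cluster is scheduling in the "default" namespace.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name, pod.Status.Phase)
	}
}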
DevOps: DevOps is a clipped compound of "development" and "operations", but it implies a relationship quite different from the traditional split between development and operations teams; in practice it covers testing as well. DevOps is an agile mindset, a communication culture, and an organizational form whose goal is to enable continuous delivery of cloud-native applications.
Continuous delivery: Continuous delivery means releasing frequently and without delay, and updating software without downtime, in contrast to the traditional waterfall development model. It requires the development version and the stable version to co-exist, which in turn calls for many supporting processes and tools.
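As one small illustration of what "co-existence" of a stable version and a new version can look like, here is a hedged sketch in Go of a canary-style traffic splitter that routes a fraction of requests to a new build; the ports and the 10% split are invented for illustration and are not from the article.

package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy that forwards requests to one backend.
func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	stable := proxyTo("http://localhost:9001") // current stable release
	canary := proxyTo("http://localhost:9002") // new version being delivered

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Send roughly 10% of traffic to the canary; set to 0 to roll back.
		if rand.Float64() < 0.10 {
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

In a real pipeline this routing decision would live in a load balancer, service mesh, or deployment tool rather than in hand-written code, which is exactly the kind of supporting tooling continuous delivery depends on.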
First, cloud native is a result of cloud computing. Cloud native would not have come into existence without the development of cloud computing. Cloud computing is the basis of cloud native.
With virtualization technologies maturing and distributed frameworks gaining popularity, migrating applications to the cloud has become an irreversible trend. The trend is further driven by open-source work on container technologies, continuous delivery, and orchestration systems, as well as by development ideas such as microservices.
The three layers of cloud computing, IaaS, PaaS, and SaaS, provide both the technical base and the direction for cloud native. True cloudification involves more than changes to infrastructure and platforms; it also requires changes to the applications themselves. Applications must abandon traditional methods and be re-designed around the characteristics of the cloud across architecture design, development models, deployment, and maintenance. The result is a new kind of cloud-based application: the cloud-native application.
Clearly, implementing cloud-native applications calls for a new, cloud-native way of developing them. Cloud native touches many areas: infrastructure services, virtualization, containerization, container orchestration, and microservices. Fortunately, open-source communities have contributed enormously here, and many open-source frameworks and facilities are directly available. After its release in 2013, Docker quickly became the de facto container standard. K8s, open-sourced in 2014, has since stood out among the many container orchestration systems. These technologies have greatly lowered the threshold for developing cloud-native applications.
Even though the introductory material on cloud native arguably shows a trace of exaggeration, nitpicky as I am, I am still amazed by the advantages it lists. Cloud-native architectures sound perfect. Does that mean every application should switch to them immediately? The ideal is perfect and tempting, right up until you try to bend reality to match it. My view is to decide based on actual needs: ask whether your current problems really hinder your business, and whether you can afford to re-design your applications.
Software design has two critical goals: high cohesion and low coupling. These two goals give rise to many specific design principles, including the single responsibility principle, the open–closed principle, Liskov substitution, dependency inversion, interface segregation, and least knowledge (the law of Demeter).
Software engineers have always been striving to achieve the two goals and write clearer, more robust software that is also easier to scale and maintain.
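To make a couple of these principles concrete, here is a small Go sketch of interface segregation and dependency inversion; every type and function name in it is invented for illustration, not taken from any particular system.

package main

import "fmt"

// A narrow interface: callers depend only on the behavior they need
// (interface segregation), not on a concrete storage type.
type OrderStore interface {
	Save(orderID string) error
}

// High-level business logic depends on the abstraction above rather than
// on a concrete database client (dependency inversion).
type Checkout struct {
	store OrderStore
}

func (c *Checkout) Place(orderID string) error {
	// Pricing, validation, and so on would go here.
	return c.store.Save(orderID)
}

// A concrete implementation can be swapped without touching Checkout,
// which keeps the module open for extension but closed for modification.
type memoryStore struct{ saved []string }

func (m *memoryStore) Save(orderID string) error {
	m.saved = append(m.saved, orderID)
	return nil
}

func main() {
	c := &Checkout{store: &memoryStore{}}
	if err := c.Place("demo-1"); err != nil {
		fmt.Println("failed:", err)
		return
	}
	fmt.Println("order placed")
}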
Later, more needs emerged. Software development was expected to be simpler and faster. Programmers wanted to write fewer lines of code, and non-professionals wanted to be able to build applications too. Easier programming languages appeared for people without programming skills, along with more technologies and ideas: libraries, components, and cloud infrastructure.
As a result, many technologies, however advanced in themselves, are needed less and less in day-to-day work. Many software engineers have taken on new roles as parameter-tuning engineers, API-calling experts, library masters, and component specialists. This is an inevitable outcome of the efficient division of labor and of technological progress.
Looking back over nearly 20 years of Internet technology, the mainstream trend has been applying technology to specific business fields. Especially with the growth and popularity of cloud computing in recent years, infrastructure has become more solid, and building a business on top of it is ever easier and demands less technical expertise. At the same time, small enterprises and teams are no longer plagued by problems of performance, load, security, and scalability. This worries many middle-aged people in the Internet industry, who feel they could soon be out of a job.
Although it is undeniable that technology is becoming less of a barrier in this industry, we do not have to be so pessimistic. Similar arguments were made when VB, Delphi, and MFC appeared in the PC programming era: what you see is what you get, and you can build a PC program simply by clicking your mouse. Isn't that cool? Programmers at the time probably had similar worries. Yet as the Internet industry grew and backend development split off as its own discipline, they soon found a new battleground full of new ideas: networking, distributed services, databases, support for massive traffic, and disaster recovery.
If the basic infrastructure of the PC programming era was the control library, and the infrastructure of the Internet era is the cloud, what will the infrastructure of the AI era be? And what high-end technologies will emerge with it?