For resource-constrained edge nodes, especially edge devices with low hardware specifications, measures must be taken to effectively reduce the resource footprint and workload on the nodes.
Environmental differences between the cloud center and edge nodes hinder rapid application deployment and iteration. A cloud native architecture on the edge nodes shields these infrastructure differences and enables unified scheduling and orchestration of the underlying infrastructure resources. Standardized container images enable automated deployment of application workloads. Extending cloud native technology from the center to the edge unifies the cloud-edge technology stack and allows services to be orchestrated and deployed freely across cloud and edge nodes.
At present, a large number of heterogeneous infrastructure resources exist at edge nodes and on customer sites. To build an edge computing platform that meets business demands, it is necessary to integrate and utilize these existing infrastructure resources and extend the computing power of the cloud center down to edge nodes and sites, so that massive existing workloads can be managed and controlled from the cloud.
Edge computing nodes based on cloud native technology are compatible with various edge terminals and provide a unified technology stack for lightweight edge facilities, data centers and gateways. The platform complies with standard cloud native service requirements and can extend seamlessly to edge nodes. A cloud-based training / edge inference mode supports edge-cloud collaborative AI processing as well as model publication, update and push, forming a complete closed loop for model optimization.
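The train-in-the-cloud, infer-at-the-edge closed loop described above can be illustrated with a minimal sketch. All class names, fields and the toy "model" below are hypothetical, not a real platform API; they only show the publish → infer → observe → retrain cycle.

```python
# Hypothetical sketch of a cloud-train / edge-infer closed loop.
# Names (CloudTrainer, EdgeNode, ...) are illustrative, not a real API.

class CloudTrainer:
    """Trains in the cloud and pushes versioned models to edge nodes."""
    def __init__(self):
        self.version = 0
        self.subscribers = []          # edge nodes receiving model pushes

    def register(self, edge):
        self.subscribers.append(edge)

    def train_and_publish(self, samples):
        # Toy "training": the model is just the mean of observed values.
        model = {"version": self.version + 1,
                 "mean": sum(samples) / len(samples)}
        self.version = model["version"]
        for edge in self.subscribers:  # push the update to every edge node
            edge.receive_model(model)

class EdgeNode:
    """Runs inference locally and feeds observations back for retraining."""
    def __init__(self, trainer):
        self.model = None
        self.buffer = []
        trainer.register(self)

    def receive_model(self, model):
        self.model = model

    def infer(self, x):
        # Toy inference: deviation of the input from the learned mean.
        return x - self.model["mean"]

    def observe(self, x):
        self.buffer.append(x)          # data later used to retrain in cloud

cloud = CloudTrainer()
edge = EdgeNode(cloud)
cloud.train_and_publish([1.0, 2.0, 3.0])   # initial model, version 1
print(edge.infer(5.0))                      # 5.0 - 2.0 = 3.0
edge.observe(5.0)
cloud.train_and_publish(edge.buffer)        # retrain: closes the loop
print(edge.model["version"])                # 2
```

The retraining step is what makes the loop "closed": edge observations flow back to the cloud, and the improved model is pushed out again.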
In scenarios with rapidly changing service traffic, additional edge container instances must be provisioned at peak hours so that rapid elastic scaling can absorb traffic bursts; idle resources are automatically released at off-peak hours to reduce costs.
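The scale-out-at-peak, scale-in-at-off-peak behavior can be sketched with the proportional rule used by the Kubernetes Horizontal Pod Autoscaler (desired = ceil(current × currentMetric / targetMetric)); the function name and bounds below are illustrative.

```python
import math

def desired_replicas(current, metric, target, min_r=1, max_r=20):
    """HPA-style rule: scale replica count in proportion to load vs. target,
    clamped to a [min_r, max_r] range."""
    want = math.ceil(current * metric / target)
    return max(min_r, min(max_r, want))

# Peak hour: per-replica load (180) far above target (60) -> scale out.
print(desired_replicas(current=4, metric=180, target=60))   # 12
# Off-peak: load (10) below target -> scale in, releasing idle resources.
print(desired_replicas(current=12, metric=10, target=60))   # 2
```

The clamp keeps a traffic spike from requesting more capacity than the edge node can actually provide, which matters on resource-constrained nodes.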
The platform connects to DevOps pipelines to manage application releases, gray (canary) rollouts and the application life cycle across multiple edge computing nodes. It removes the complicated configuration and initialization of development, testing and production clusters, and loosely couples the release logic from the underlying clusters, making business releases flexible to extend and manage and accelerating the iteration and expansion of edge services.
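A gray rollout splits traffic between the stable and the new version by weight. A minimal sketch, assuming a made-up hash-bucket router (the function and version labels are hypothetical, not the platform's API):

```python
import zlib

def pick_version(request_id: str, canary_percent: int) -> str:
    """Deterministic gray-release routing: hash the request id into one of
    100 buckets; the lowest canary_percent buckets go to the new version.
    The same id always routes the same way (sticky sessions)."""
    bucket = zlib.crc32(request_id.encode()) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

# At a 20% canary weight, roughly a fifth of requests reach the new
# version; in practice the weight is raised stage by stage (5% -> 25%
# -> 100%) only after the canary passes health checks on edge clusters.
share = sum(pick_version(f"req-{i}", 20) == "v2-canary"
            for i in range(1000)) / 1000
```

Hash-based routing is deterministic, so a given client sees a consistent version throughout the rollout, which simplifies debugging a misbehaving canary.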
Multiple service forms: Supporting proprietary/managed K8S clusters, serverless edge containers and edge container instances.
Elastic computing power: Seamless connection with ECX, and automatic scaling of edge container clusters.
Multi-architecture support: X86, ARM and edge device access, with compatibility with native K8S APIs.
Distributed image distribution: Improving application deployment efficiency, and providing full lifecycle and O&M management.
Application deployment: Supporting unified distributed application deployment.
Unified management: Shielding the differences of heterogeneous resources, creating clusters at the edge nodes of eSurfing Cloud and at any position on customer sites, and supporting cluster and multi-cloud management by customers.
Intelligent dispatching: Precise scheduling of applications to nearby nodes.
Multi-network support: Various container network plug-ins and capabilities to meet the network requirements of different applications.
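The intelligent dispatching feature above (scheduling applications to nearby nodes) can be sketched as a filter-then-score step, in the spirit of the Kubernetes scheduler. The node list, field names and latency figures are invented for illustration:

```python
# Hypothetical nearby-node scheduler sketch: filter nodes by capacity,
# then score by network latency. Data and field names are made up.
def schedule(pod, nodes):
    """Place the pod on the lowest-latency node with enough free CPU;
    returns None when no node fits."""
    fits = [n for n in nodes if n["free_cpu"] >= pod["cpu"]]  # filter step
    if not fits:
        return None
    return min(fits, key=lambda n: n["latency_ms"])           # score step

nodes = [
    {"name": "edge-site-a",   "free_cpu": 2.0,  "latency_ms": 8},
    {"name": "edge-site-b",   "free_cpu": 8.0,  "latency_ms": 35},
    {"name": "central-cloud", "free_cpu": 64.0, "latency_ms": 70},
]
pod = {"name": "video-analytics", "cpu": 4.0}
print(schedule(pod, nodes)["name"])   # edge-site-b: nearest node that fits
```

Note that the closest node (edge-site-a) is skipped because it lacks capacity; "precise scheduling to nearby nodes" is a trade-off between proximity and available resources, not proximity alone.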