Dell Technologies this week updated its edge computing platform to make it simpler to programmatically provision infrastructure using DevOps best practices.
Phil Burt, senior director of product management for edge computing at Dell Technologies, said version 2.0 of Dell NativeEdge adds more declarative blueprints based on YAML files and the Topology and Orchestration Specification for Cloud Applications (TOSCA) framework.
Blueprints are provided for Telit, Litmus and PTC for manufacturing use cases; Deep North for retail; the Dell Streaming Data Platform for analytics; and Rancher Labs K3s for Kubernetes environments.
Those blueprints make it simpler to deploy applications either on a lightweight operating system that Dell developed and optimized for edge computing platforms or on any other software stack an IT organization may prefer, he added.
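To make the declarative model concrete, the sketch below shows what a minimal TOSCA-style blueprint can look like and how a provisioning pipeline might load it. The node type comes from the TOSCA Simple Profile standard, but the properties, image name and overall shape are hypothetical placeholders, not Dell's actual NativeEdge schema:

```python
# Illustrative only: a minimal TOSCA-style blueprint sketch. The properties
# and image name are hypothetical placeholders, not Dell's NativeEdge schema.
import yaml  # PyYAML

BLUEPRINT = """
tosca_definitions_version: tosca_simple_yaml_1_3

description: >
  Hypothetical blueprint declaring a single containerized
  application to be provisioned on an edge endpoint.

topology_template:
  node_templates:
    edge_app:
      type: tosca.nodes.Container.Application   # standard TOSCA node type
      properties:
        image: registry.example.com/line-monitor:1.4   # hypothetical image
        replicas: 1
"""

# Parse the document the way a provisioning pipeline might before
# handing the desired topology to an orchestrator.
blueprint = yaml.safe_load(BLUEPRINT)
print(blueprint["topology_template"]["node_templates"]["edge_app"]["properties"])
```

The point of the format is that the blueprint declares the desired end state rather than scripting imperative installation steps, which is what makes it amenable to version control and DevOps-style automation.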
Dell is also adding support for the virtual Trusted Platform Module (vTPM) and UEFI Secure Boot capabilities to apply zero-trust IT principles to edge computing platforms using hardware-based cryptography and secure storage for encryption keys, certificates and passwords.
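Dell has not published how NativeEdge wires these capabilities up internally, but the generic pattern is well established in open source virtualization stacks. The sketch below, which assumes QEMU, swtpm and OVMF firmware with hypothetical file paths, shows how a vTPM and Secure Boot-capable firmware are typically attached to a virtual machine:

```python
# Generic sketch of attaching a vTPM and UEFI Secure Boot firmware to a VM
# using QEMU, swtpm and OVMF. Paths and image names are hypothetical; this
# is the common open source pattern, not Dell's implementation.
import os
import subprocess

TPM_DIR = "/var/lib/edge-vm/tpm"                     # hypothetical TPM state dir
OVMF_CODE = "/usr/share/OVMF/OVMF_CODE.secboot.fd"   # Secure Boot-enabled firmware
OVMF_VARS = "/var/lib/edge-vm/OVMF_VARS.fd"          # per-VM UEFI variable store

os.makedirs(TPM_DIR, exist_ok=True)

# 1. Start a software TPM 2.0 emulator the guest will see as a hardware TPM.
subprocess.Popen([
    "swtpm", "socket", "--tpm2",
    "--tpmstate", f"dir={TPM_DIR}",
    "--ctrl", f"type=unixio,path={TPM_DIR}/swtpm-sock",
])

# 2. Boot the guest with the vTPM attached and Secure Boot-capable firmware.
subprocess.run([
    "qemu-system-x86_64",
    "-machine", "q35,smm=on",
    "-global", "driver=cfi.pflash01,property=secure,value=on",
    "-chardev", f"socket,id=chrtpm,path={TPM_DIR}/swtpm-sock",
    "-tpmdev", "emulator,id=tpm0,chardev=chrtpm",
    "-device", "tpm-tis,tpmdev=tpm0",
    "-drive", f"if=pflash,format=raw,readonly=on,file={OVMF_CODE}",
    "-drive", f"if=pflash,format=raw,file={OVMF_VARS}",
    "-drive", "file=edge-app.qcow2,format=qcow2",    # hypothetical disk image
])
```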
Finally, Dell is making available a three-year subscription plan for NativeEdge to lower the total cost of deploying edge computing platforms.
Dell relies on virtual machines it developed for edge computing platforms to deliver software. IT teams can then opt to deploy monolithic applications or container-based applications running on an instance of Kubernetes deployed as an extension of the virtual machine provided by Dell. Those platforms can then also automatically connect to a network to provide the required connectivity.
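For the container path, that typically comes down to applying a standard Kubernetes Deployment to the K3s cluster running alongside the VM. A minimal sketch using the official Kubernetes Python client follows; the kubeconfig path matches the K3s default, while the application name and image are placeholders:

```python
# Minimal sketch: deploying a containerized application to a K3s cluster
# with the official Kubernetes Python client. Names and image are placeholders.
from kubernetes import client, config

# K3s writes its kubeconfig to this path by default.
config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

labels = {"app": "edge-app"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="edge-app",
                    image="registry.example.com/edge-app:1.0",  # placeholder image
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```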
Regardless of approach, Dell is providing a method to programmatically orchestrate the management of highly distributed edge computing environments, noted Burt.
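In practice, orchestration at that scale usually means driving an API in a loop over many sites rather than touching machines one at a time. The sketch below illustrates the general pattern with an entirely hypothetical REST endpoint, payload and site list; it is not NativeEdge's actual API:

```python
# Purely illustrative: fanning one blueprint out across a fleet of edge
# sites via a hypothetical orchestrator REST API.
import requests

SITES = ["factory-01", "factory-02", "store-17"]   # hypothetical edge sites
BLUEPRINT_ID = "k3s-line-monitor"                  # hypothetical blueprint

for site in SITES:
    resp = requests.post(
        f"https://edge-orchestrator.example.com/v1/sites/{site}/deployments",
        json={"blueprint": BLUEPRINT_ID},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"{site}: deployment accepted")
```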
In the longer term, Dell is also researching how to apply generative artificial intelligence (AI) to the management of edge computing environments using the data its monitoring tools provide, he added.
While edge computing has been around in one form or another for decades, the number of applications deployed at the network edge has increased substantially as more organizations process and analyze data at the point where it is created and consumed. As such, the number of applications that need to be built and deployed at the edge using DevOps best practices is growing rapidly. Managing those workloads will only become more challenging as more AI models are deployed at the network edge.
Of course, not every workload at the edge is managed by an IT team. For years, many organizations have relied on operational technology (OT) teams that have historically deployed applications at the network edge using manual processes. Organizations that embrace DevOps practices to automate the deployment of workloads at the edge will naturally have to navigate some cultural issues alongside the technical challenges encountered when programmatically deploying software.
It’s not clear how much software will be running at the network edge in the years ahead, but as the capabilities of the IT infrastructure made available at the edge continue to increase, so will the complexity of the applications deployed on it. The challenge, as always, is going to be finding the best way to manage all that distributed software at what will soon be an unprecedented level of scale.