Containers at the edge: Why the hold-up?
Cloud-native software like Kubernetes is well-positioned to facilitate innovation in IoT and edge hardware
When Kubernetes was created, it was a simple container orchestration tool. Over the years, however, it has grown into a complete platform for deploying, monitoring, and managing apps and services across cloud environments. Organizations now seek to manage containers, microservices, and distributed platforms in one fell swoop, across both hybrid and multi-cloud infrastructures. 451 Research, for example, found that more than 90% of companies, across various organizational types, expect to standardize on Kubernetes within three to five years.
The same cannot be said for the edge. In a 2020 poll, just 10% of respondents said they had deployed containers directly at the edge. This reluctance comes down to compatibility issues and a still-limited set of proven use cases, as organizations confront the complexity of implementing containers to meet their requirements.
Managing this complexity successfully could unlock the long-term benefits of containers: reduced costs, processing efficiencies, and consistency within edge environments. The way to do this is with the right orchestration tools, such as Juju. To bring the edge closer to central clouds, businesses need to take sensible, careful steps; if they do, the potential for smarter infrastructure, dynamic orchestration, and automation is just around the corner.
Why deploy containers near the edge?
Most devices used at the edge - whether in an IoT or a micro cloud context - have restricted real estate, which makes a small operating system vital. Add to this the need for continuous software patches - both to fend off evolving security vulnerabilities and to benefit from iterative updates - and the importance of cloud-native technology comes to the forefront. Containerization and container orchestration allow developers to swiftly deploy atomic security updates or new features, all without affecting the day-to-day operation of IoT and edge solutions.
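As an illustration, here is a minimal sketch of what such an atomic update could look like in practice, using the official Kubernetes Python client ("kubernetes" on PyPI) to patch a deployment's container image and trigger a zero-downtime rolling update. The deployment name, namespace, and image reference are hypothetical examples, not from the article.

```python
# A minimal sketch of shipping an atomic application update to an edge
# cluster with the official Kubernetes Python client.
# The deployment name, namespace, and image reference are hypothetical.
from kubernetes import client, config

def roll_out_update(deployment: str, namespace: str, image: str) -> None:
    """Patch the deployment's container image; Kubernetes then performs a
    rolling update, replacing pods gradually so the service stays available."""
    config.load_kube_config()  # use config.load_incluster_config() on-cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    # Assumes the container is named after the deployment.
                    "containers": [{"name": deployment, "image": image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    # Hypothetical sensor-gateway service receiving a security patch.
    roll_out_update("sensor-gateway", "edge", "registry.example.com/sensor-gateway:1.4.2")
```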
Containers and Kubernetes also offer a contingency framework for IoT solutions. Many applications require cloud-like elasticity along with highly available compute resources; in fact, we are now witnessing individual IoT projects that measure in the millions of nodes and sensors. Managing the physical devices, the messaging, and the massive data volumes demands infrastructure that can scale up promptly. Micro clouds (e.g. a combination of LXD and MicroK8s) bring cloud-native support for microservices applications closer to the consumer, serving the data- and messaging-intensive characteristics of IoT while boosting flexibility. The result is a technology approach that encourages innovation and reliability throughout the cyber-physical lifecycle of an IoT device.
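To make the micro cloud idea more concrete, here is a rough sketch of bootstrapping a single node of the LXD + MicroK8s combination mentioned above, driven from Python. It assumes LXD and the lxc CLI are installed and initialized; the node name is an arbitrary example, and a production setup would look rather different.

```python
# A rough sketch of standing up one micro cloud node: an LXD virtual machine
# running MicroK8s. The node name "edge-node-1" is an arbitrary example.
# (Running MicroK8s in an LXD container rather than a VM requires a custom
# LXD profile, which is omitted here.)
import subprocess
import time

def run(*cmd: str) -> None:
    """Echo and execute a command, raising on failure."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Launch an Ubuntu VM to host the Kubernetes node.
run("lxc", "launch", "ubuntu:22.04", "edge-node-1", "--vm")

# Crude wait for the VM's guest agent; production code would poll instead.
time.sleep(30)

# Install MicroK8s from the snap store and block until the node is ready.
run("lxc", "exec", "edge-node-1", "--", "snap", "install", "microk8s", "--classic")
run("lxc", "exec", "edge-node-1", "--", "microk8s", "status", "--wait-ready")
```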
Why are they not currently being deployed?
The uptake of Kubernetes at the edge has been slow for several reasons. One is that it has not been optimized for all use cases. Let's split the edge into two classes of compute: IoT, with EdgeX applications, and micro clouds, serving computing services near consumers. IoT applications often see Docker containers used in a non-ideal way: OCI containers were designed to enable cloud elasticity with the rise of microservices, not to make the most of a constrained physical device while still isolating an application and its updates - which is exactly what snaps were designed for.
Another reason is the lack of trusted provenance. Edge is everywhere and at the centre of everything, operating across applications and industries. This is why software provenance is critical. The rise of containers coincided with a rise of open-source projects with a wide range of dependencies, yet there needs to be one trusted provider that can commit to being the interface between open-source software and the enterprises using it. Containers are an easy and adaptable way to package and distribute this software through trusted channels - assuming you can trust the provenance.
The third factor relates to the move from development into strict field-production constraints. Docker containers remain popular with developers and technical audiences - they are a brilliant tool for accelerating, standardizing, and improving the quality of software projects. Containers are also having great success in cloud production environments, largely thanks to the adoption of Kubernetes and related platforms.
In edge environments, production constraints are much stricter than anywhere else, and the business models are not those of software-as-a-service (SaaS). There is a need for minimal container images built for the edge, backed by the support and security commitments required to maintain safety. Containers were historically designed for the horizontal scaling of (largely) single-function, stateless work units deployed on clouds. The edge, by contrast, makes sense wherever there is sensitivity to bandwidth, latency, or jitter requirements.
In short, Canonical’s approach to edge computing is open-source micro clouds. They provide the same capabilities and APIs as cloud computing, trading near-unlimited elasticity for the low latency, resiliency, privacy, and governance that real-world applications demand. While containers don’t necessarily need ‘edge’ variants, they do need to mature and to come from a trusted provider with matching security and support guarantees. For the other half of the edge, IoT, we recommend using snaps.
Prioritizing containers at the edge
The case for bringing containers to the edge rests on three main benefits.

The first is compatibility: containers contribute a layer between the hosting platform and the applications, allowing those applications to run on many platforms and for longer.

The second is security: although running a service in a container is not in itself enough to make it secure, workload isolation is a security improvement in many respects. The third is transactional updates: software can be delivered in smaller chunks, without having to manage entire platform dependency trees.
Kubernetes also has innate qualities that naturally benefit edge systems. One example is elasticity: in the case of micro clouds, some elasticity is needed as demand may vary, and access to cloud-like APIs is one of the main goals in most use cases. Flexibility is another: being able to dynamically change which applications are available, and at what scale, is a typical micro cloud requirement that Kubernetes handles well.
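As a minimal sketch of that elasticity, the Kubernetes Python client can declare a HorizontalPodAutoscaler so a service grows and shrinks with demand at an edge site. The "telemetry-api" deployment, the "edge" namespace, and the replica bounds below are hypothetical examples chosen for illustration.

```python
# A minimal sketch of edge elasticity: a HorizontalPodAutoscaler that scales
# a (hypothetical) "telemetry-api" deployment with CPU demand.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="telemetry-api"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="telemetry-api"
        ),
        min_replicas=1,  # shrink to a single pod when the edge site is quiet
        max_replicas=5,  # cap growth to fit constrained edge hardware
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="edge", body=hpa)
```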
Looking towards the future
As Kubernetes continues to develop and become more robust, it will also become more efficient. Its support for scalability and portability will become even more relevant to edge use cases, and to the enormous numbers of nodes, devices, and sensors out in the world. All of this will come with greater productivity, thanks to more lightweight, purpose-built distributions of Kubernetes.
Cloud-native software such as Kubernetes is well-positioned to facilitate innovation in IoT and edge hardware, and its lightweight, scalable nature will line up with improvements in hardware such as the Raspberry Pi or the Jetson Nano. In short, containers at the edge will soon be common practice, and the benefits await any enterprise that is ready, with the right specifications in mind.
Valentin Viennot is Product Manager at Canonical