Edge Computing

Building for Real Edge Deployments

Why the future of edge isn't just Kubernetes - it's a hybrid of containers, binaries, and air-gapped realities.

We are witnessing a quiet shift in how infrastructure is built. For the last decade, the center of gravity was the Cloud. The goal was centralization: move everything to AWS, Azure, or GCP, and orchestrate it with Kubernetes.

In sectors like energy, transportation, industrial manufacturing, and retail, the pendulum is swinging back. The need for resilience against network failure, low latency, and data sovereignty is forcing compute power back to the edge - into substations, factory floors, or rolling stock. Once these nodes are expected to host multiple workloads from different, increasingly intelligent systems, this is no longer "edge computing" in the traditional sense. It is fog computing: autonomous, multi-tenant, locally operated infrastructure acting as an extension of the cloud. Fog nodes are re-emerging not as a theoretical model, but as a structural requirement for operating large industrial fleets at scale.

However, the tooling we have today hasn't caught up with this shift, driving up the operational cost of edge solutions. We are currently trying to force-fit "cloud native" tools into environments that are fundamentally different from a hyperscaler's data center.

We will first explore the unique realities and constraints of edge deployments, then define the three architectural pillars required for a reliable, enterprise-grade solution.

The specific architecture of the edge

First, let's contrast the architecture of the modern cloud with the demands of the edge.

The cloud's primary strength is its architectural homogeneity: a consistent landscape of containerized microservices, caches, and databases, managed by Kubernetes and distributed across regions.

In contrast, edge solutions are inherently diverse and operate in a mixed reality. A complete solution is not just a container; it is a wider system, requiring orchestration across several distinct layers. To deploy even a simple application securely and reliably, an enterprise needs to consider:

  • Workloads: They come in diverse forms, containerized or not: applications running in containers, native C++ and Rust executables, and legacy SCADA interfaces requiring direct hardware access - CAN bus, PLCs, ...
  • Cloud dependencies: The edge interacts with cloud resources that must be provisioned: S3 buckets for logs, Pub/Sub topics for telemetry, and IAM roles for identity.
  • Device configuration: The OS layer itself needs to be managed - network interfaces, firewall rules (UFW/iptables), and kernel modules.
  • Cyber compliance: Security is not a "day 2" concern, and security posture often requires enforcing system-level policies - disk encryption, secure boot, SELinux/AppArmor profiles.
  • Observability: Metrics might come from Kubernetes pods, but also from systemd services, the hardware sensors (temperature, voltage) underneath, and the network interfaces.

What is missing in the tooling landscape for edge applications

Modern tooling works well for the cloud-native layer, but it is fragmented. The current offering can be summarized as two extremes, neither of which fully solves the problem.

The first set of solutions is bespoke and fragmented, leading to "script hell". Teams cobble together Bash scripts, Ansible playbooks, and manual SSH sessions. At scale, this approach becomes brittle: if - or, more pragmatically, when - a network connection drops during an update, devices are bricked, left in an inconsistent state. There is no unified state, just a collection of disconnected scripts, and the resulting maintenance burden effectively turns platform teams into glue-code maintainers.

Hyperscalers propose an alternative: on-site, tethered extensions of their data centers - AWS Outposts, Google Anthos, Azure Stack. The edge is treated as a limb of the cloud brain. This model assumes constant connectivity; offline is treated as an error state. But in a factory or on a ship, offline is the standard state. A fog node must be the autonomous brain for its local environment, capable of healing and operating indefinitely without a cloud heartbeat.

A sound solution to support edge deployments

So, what should an edge computing platform look like? It isn't just "better scripts" or a "cloud extension". We believe three pillars define the foundation for a sound, scalable and reliable platform.

Unified System Definition

The system's target state must be explicitly defined. Current tools force you to manage the OS with Ansible, the containers with Kubernetes, and the cloud with Terraform. This is why teams spend weeks debugging issues that are actually OS drift, not application bugs. A sound solution requires a declarative definition - a Blueprint - that captures the entire state of the solution and its dependencies in a single view.

The shift: you define your AI model (containers), your camera driver (systemd), your firewall rules (OS config), and your S3 bucket (cloud) in a unified view. If part of the definition changes, the corresponding part of the system adapts everywhere.
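To make the idea concrete, here is a minimal sketch of such a unified view as a single declarative document, written as a Python dataclass. Everything in it is illustrative - the `Blueprint` name, field layout, image reference, and Pub/Sub URI are assumptions for the example, not Alpha's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """One declarative view of every layer the node must converge to."""
    # Containerized workloads: name -> image reference
    containers: dict[str, str] = field(default_factory=dict)
    # Native services managed as systemd units
    systemd_units: list[str] = field(default_factory=list)
    # OS-level configuration, e.g. firewall rules
    firewall_rules: list[str] = field(default_factory=list)
    # Cloud dependencies the node relies on: name -> resource URI
    cloud_resources: dict[str, str] = field(default_factory=dict)

blueprint = Blueprint(
    containers={"ai-model": "registry.example/vision:1.4.2"},
    systemd_units=["camera-driver.service"],
    firewall_rules=["allow 8443/tcp"],
    cloud_resources={"telemetry-topic": "pubsub://fleet/telemetry"},
)
```

Because the containers, the systemd unit, the firewall rule, and the cloud topic live in one structure, changing any field changes the single source of truth that every layer is reconciled against.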

Reconciliation Everywhere

The edge is too fragile for "push" pipelines. Push-based automation follows a "fire-and-forget" model: the final state of each individual system is rarely known. In contrast, a reconciliation loop enforces an "ensure and repair" logic: state is known, and transitions are applied step by step.

The shift: edge nodes must run an intelligent agent that continuously pulls the system's target state - the blueprint - and reconciles against it. If a script fails or a config drifts (e.g., someone disables a firewall manually), the agent automatically reverts it to the safe state. Crucially, the reconciliation loop respects maintenance windows, so operators can still debug manually during emergencies.
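The "ensure and repair" logic can be sketched as a pure diff between desired and observed state. This is a simplified illustration, not Alpha's agent: the flat key-value state model and the `in_maintenance` flag are assumptions made for brevity.

```python
def reconcile(desired: dict, actual: dict, in_maintenance: bool = False) -> dict:
    """Compute the actions needed to converge `actual` toward `desired`.

    Returns a mapping of key -> target value (None means "remove").
    During a maintenance window the loop observes but does not act,
    so operators can debug by hand without fighting the agent.
    """
    if in_maintenance:
        return {}
    actions = {}
    for key, want in desired.items():
        if actual.get(key) != want:
            actions[key] = want      # create missing state or repair drift
    for key in actual.keys() - desired.keys():
        actions[key] = None          # prune state the blueprint no longer declares
    return actions
```

Run in a loop, this converges the node step by step instead of firing a one-shot push: a manually disabled firewall (`{"firewall": "off"}` against a desired `{"firewall": "on"}`) yields the repair action `{"firewall": "on"}` on the next cycle, and an empty diff once the node matches the blueprint.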

Unified Telemetry

We cannot afford to look at infrastructure components in isolation. A running container means nothing if the underlying hardware sensor is overheating.

The shift: the platform must normalize signals across all layers - systemd, Docker, and hardware. It must compute a global solution health as a product of all dependency healths. If the hardware runs hot, the agent detects the anomaly and can either alert or safely execute a defined remediation policy (e.g. a graceful restart), preventing the "flapping" that plagues naive automation.
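"Health as a product of dependencies" can be expressed in a few lines. The sketch below is a hedged illustration: the layer names, the scores, and the 0.8 remediation threshold are arbitrary assumptions, not a fixed policy.

```python
from math import prod

def solution_health(layer_scores: dict[str, float]) -> float:
    """Global health as the product of per-layer scores in [0.0, 1.0].

    A single degraded layer (score near 0) drags the whole solution
    down, mirroring how a hot sensor invalidates a "green" container.
    """
    return prod(layer_scores.values()) if layer_scores else 1.0

# An overheating sensor halves the thermal score, so global health
# drops to 0.5 even though the container and service look healthy.
scores = {"container": 1.0, "systemd": 1.0, "thermal": 0.5}
health = solution_health(scores)

# Below an (illustrative) threshold, escalate to a defined remediation
# policy instead of blindly restarting - this is what avoids flapping.
action = "remediate" if health < 0.8 else "ok"
```

The multiplicative form is the key design choice: averaging the scores would report 0.83 and mask the thermal problem, while the product surfaces it immediately.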

Conclusion

Edge is not a smaller version of the cloud. It is a fundamentally different environment, shaped by physical constraints, intermittent connectivity, and mixed execution models.

A robust edge deployment platform cannot rely on push-based automation, assume permanent connectivity, or tolerate observability blind spots. It must be autonomous by default, built around reconciliation as its core capability, able to enforce policy, and able to assess its own health locally.

This is the architectural direction we are taking with Alpha.

In the next post, we dive deep into how these principles translate into a concrete implementation, and why we deliberately built Alpha on open standards, so the platform remains transparent, interoperable, and never a black box.

If you are operating air-gapped or mixed runtime fleets at scale, we are onboarding a small number of design partners. Reach out to us!