Header image for Simulink to Cloud blog post

Simulink to Cloud: Deploying Digital Twins with Docker and UDP

simulink udp digital-twin docker python skill

Today, I’m building a Simulink model that I’ll deploy to a Dockerized environment and integrate with other components to build an application that will run 24/7. Although the model is simple, the architecture is a good starting point for building hybrid digital twins for predictive maintenance and monitoring applications. I’ll show you how to transform a Simulink model into a portable binary that acts as a lightweight live digital twin within a Docker composition.

My model is designed to run continuously in the cloud, consuming sensor data and control setpoints streamed externally over UDP. It processes the data and pushes the results to downstream Python services, allowing users to monitor outputs through a dashboard and control the twin by adjusting the setpoint value.
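To make the UDP data flow concrete, here is a minimal sketch of such a streaming source in Python. The packet layout (two little-endian doubles), the `digital-twin` host name, and the port number are my own illustrative assumptions, not details taken from the actual project.

```python
import math
import socket
import struct
import time

# Hypothetical packet layout: two little-endian doubles (sensor value, timestamp).
PACKET_FMT = "<dd"

def make_packet(value: float, timestamp: float) -> bytes:
    """Serialize one sensor sample into a UDP payload."""
    return struct.pack(PACKET_FMT, value, timestamp)

def parse_packet(payload: bytes) -> tuple:
    """Inverse of make_packet, as the receiving twin would decode it."""
    return struct.unpack(PACKET_FMT, payload)

def stream_sensor(host: str = "digital-twin", port: int = 16384, rate_hz: float = 10.0):
    """Continuously push a synthetic sinusoid to the twin's UDP input."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t0 = time.time()
    while True:
        t = time.time() - t0
        sock.sendto(make_packet(math.sin(t), t), (host, port))
        time.sleep(1.0 / rate_hz)
```

Because UDP is connectionless, the publisher can fire-and-forget at a fixed rate; the twin simply decodes whatever datagrams arrive.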

This is a classic blueprint for cloud-native digital twins featuring live data ingestion, high-fidelity physical models, and IT deployment. I'm sharing the full overview, the source code, and an agent skill that you can use in your own agentic applications.

Architecture diagram of Simulink Digital Twin with UDP and Docker

The publisher acts as a sensor data provider, continuously generating synthetic data and control input. The control input, which typically comes through a PLC in industrial settings, is provided manually via the dashboard. The Simulink model processes both the sensor and control inputs, then forwards the results to a subscriber, which stores data in a database. Finally, end users monitor the data through a real-time dashboard.
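The subscriber end of that pipeline can be sketched in a few lines of Python. The SQLite schema and the single-double payload below are illustrative assumptions, not the project's actual storage format.

```python
import sqlite3
import struct

# Assumed payload: one little-endian double per datagram (the twin's output value).
PACKET_FMT = "<d"

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Create the results table the dashboard would read from (schema is illustrative)."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS twin_output (ts REAL, value REAL)")
    return conn

def store_datagram(conn: sqlite3.Connection, payload: bytes, ts: float) -> None:
    """Decode one UDP datagram from the twin and persist it for the dashboard."""
    (value,) = struct.unpack(PACKET_FMT, payload)
    conn.execute("INSERT INTO twin_output VALUES (?, ?)", (ts, value))
    conn.commit()
```

In the real stack the subscriber would sit in a `recvfrom` loop and call something like `store_datagram` for each packet, decoupling the twin's output rate from the dashboard's read rate.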

Simulink Model

Simulink model showing UDP Sensor and SetPressure blocks

The UDP_Sensor input is a sinusoid, continuously streamed over UDP. The UDP_SetPressure input is the control setpoint adjusted from the dashboard, and the digital twin subsystem computes the sum of the two input signals.
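Since the twin's behavior is just a sum, it is easy to mirror in plain Python as a sanity check for the deployed binary. The amplitude, frequency, sample period, and setpoint below are arbitrary example values, not parameters from the model.

```python
import math

def sensor_signal(t: float, amplitude: float = 1.0, freq_hz: float = 0.5) -> float:
    """Synthetic sinusoid standing in for the UDP_Sensor stream."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def twin_step(sensor: float, set_pressure: float) -> float:
    """The digital-twin subsystem: the sum of the two input signals."""
    return sensor + set_pressure

# A few sample steps at a 0.1 s period with a fixed setpoint of 2.0:
outputs = [twin_step(sensor_signal(k * 0.1), 2.0) for k in range(5)]
```

Comparing a reference trace like `outputs` against what the compiled binary emits is a cheap way to confirm the codegen pipeline preserved the model's logic.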

Embedded Coder and Simulink Coder are used to generate C++ code from the model, which is subsequently compiled into a binary. Once the binary is compiled, it is integrated into a Docker image. Since I want to ensure the portability of my executable, I collect the necessary dependencies, store them alongside the main executable, and load them during execution.
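The underlying idea (resolve the binary's shared-library dependencies and ship them next to it, to be picked up via something like `LD_LIBRARY_PATH` at run time) can be sketched as follows. The ldd-style output parsing is my assumption about how the dependency list is obtained; the project may use a different mechanism.

```python
import re
import shutil
from pathlib import Path

# Matches ldd lines like "libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f...)"
LDD_LINE = re.compile(r"^\s*\S+\s+=>\s+(/\S+)")

def shared_libs(ldd_output: str) -> list:
    """Extract resolved shared-library paths from ldd-style output."""
    return [m.group(1) for line in ldd_output.splitlines()
            if (m := LDD_LINE.match(line))]

def bundle(libs: list, dest: str) -> None:
    """Copy the libraries next to the executable so they travel with it."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for lib in libs:
        shutil.copy2(lib, dest_dir / Path(lib).name)
```

Virtual entries such as `linux-vdso.so.1` have no `=>` path and are skipped automatically, which is what you want when bundling.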

The speed at which data is streamed to the dashboard is controlled by the pacing_rate parameter defined in the project-wide configuration. The final executable, along with its dependencies, is published as a release asset on GitHub.
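A pacing loop built around such a parameter might look like the sketch below. Only the `pacing_rate` name comes from the post; the JSON config shape, its interpretation as samples per second, and the default value are my assumptions.

```python
import json
import time

# Assumed project-wide config; only the pacing_rate key is taken from the post.
DEFAULT_CONFIG = {"pacing_rate": 10.0}  # samples per second pushed downstream

def load_pacing_interval(config_json: str) -> float:
    """Return the sleep interval (seconds) between streamed samples."""
    config = {**DEFAULT_CONFIG, **json.loads(config_json)}
    return 1.0 / config["pacing_rate"]

def paced(samples, interval: float):
    """Yield samples no faster than one per `interval` seconds."""
    for sample in samples:
        yield sample
        time.sleep(interval)
```

Keeping the rate in one shared config means the publisher, twin, and dashboard all agree on cadence without recompiling the binary.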

Real-time dashboard monitoring the digital twin

Deployment into Docker Composition

To put all the containers together, I use Docker Compose, pointing it at the Docker images that are built locally and pushed to the GitHub Container Registry. Running docker compose up starts the entire stack, and the dashboard is served at localhost:5000.

Docker Compose starting the digital twin application

The Docker images used in the stack and their sizes:

IMAGE                                            ID             DISK USAGE   CONTENT SIZE   EXTRA
ghcr.io/samarkanov/udp-sse-dashboard:latest      0510147770b9        265MB           67MB
ghcr.io/samarkanov/udp-sse-digital-twin:latest   69e3a610ae71        260MB           79MB
ghcr.io/samarkanov/udp-sse-publisher:latest      b402b1427e89        259MB         65.8MB
ghcr.io/samarkanov/udp-sse-subscriber:latest     de72c2aaf22f        259MB         65.8MB
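A quick way to check the stack's total footprint is to sum the DISK USAGE column of that listing, for example:

```python
def total_mb(rows: list) -> float:
    """Sum the DISK USAGE column (third field, e.g. '265MB') of an image listing."""
    return sum(float(row.split()[2].rstrip("MB")) for row in rows)

# The four image rows from the listing above:
listing = [
    "ghcr.io/samarkanov/udp-sse-dashboard:latest      0510147770b9   265MB   67MB",
    "ghcr.io/samarkanov/udp-sse-digital-twin:latest   69e3a610ae71   260MB   79MB",
    "ghcr.io/samarkanov/udp-sse-publisher:latest      b402b1427e89   259MB   65.8MB",
    "ghcr.io/samarkanov/udp-sse-subscriber:latest     de72c2aaf22f   259MB   65.8MB",
]
```

The four images together use just over a gigabyte on disk, which is modest for a stack that bundles a compiled Simulink binary plus three Python services.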

Agent skill

I created an agent skill for end-to-end automation of the development workflow and tested it with the Gemini CLI, using this prompt:

I want to build a real-time Digital Twin from scratch. Please use the digital-twin-builder skill to guide the process. The project should include a MATLAB/Simulink model that generates a portable C++ binary, Python microservices for publishing/subscribing via UDP, and a Flask dashboard with Server-Sent Events. Set up Docker Compose and a GitHub Actions workflow for a 'Local Build -> Release Asset' deployment strategy.

Then the following steps were executed:

sequenceDiagram
    autonumber
    accTitle: Digital Twin Build & Deployment Lifecycle
    accDescr: A sequence diagram showing the interaction between the User, Gemini AI, and the Environment.
    participant U as 👤 USER
    participant G as 🤖 GEMINI (AI)
    participant T as 🛠️ TOOLS / ENV
    Note over U, T: PHASE 1: INITIALIZATION & STRATEGY
    U->>G: Request DT Ecosystem
    G->>T: Activate Skill [digital-twin-builder]
    T-->>G: Resources Loaded
    rect rgb(24, 28, 37)
        Note right of G: ENTER PLAN MODE
        G->>T: Research & Strategy
        G->>T: Write Implementation Plan
        T-->>G: EXIT PLAN MODE
    end
    Note over U, T: PHASE 2: SCAFFOLDING & CODING
    G->>T: Project Scaffold
    G->>T: Code Implementation
    G-->>U: Scaffold Complete
    Note over U, T: PHASE 3: BUILD & ITERATIVE DEBUGGING
    U->>G: "Build & Start"
    G->>T: MATLAB Inspection
    G->>T: Create Model (.slx)
    G->>T: Run Build Script
    T-->>G: ❌ ERROR: Header missing
    rect rgb(24, 28, 37)
        Note right of G: ITERATIVE DEBUGGING LOOP
        G->>T: 16-19. Fix Includes, Links, Types, Symbols
    end
    T-->>G: ✅ 20. BUILD SUCCESS
    Note over U, T: PHASE 4: DEPLOYMENT & NETWORK PATCHING
    G->>T: Docker Orchestration
    G->>T: Verification (logs, ps)
    G-->>U: Stack Running
    U->>G: "No data plotted"
    G->>T: Debug Logs & Trace
    G->>T: Fix Model Ports
    G->>T: Patch Hostnames

Source Code

The full source code, including the MATLAB scripts, Dockerfiles, and Python services, is available on GitHub: