Large language models (LLMs) like GPT, Claude, and Gemini have changed the way developers write code. But writing code is only a fraction of embedded development. Most of the work happens in specialized tools like debuggers, test runners, and static and dynamic analyzers, each with its own interface, its own data, and its own workflow. And until now, AI couldn’t touch any of it.
The Challenge: A Fragmented Toolchain
Embedded development is uniquely tool-heavy. A typical workflow involves a compiler, a debugger connected to real hardware, a profiler for timing analysis, a test framework for unit and integration tests, and a static analysis tool for MISRA-C compliance. These tools are essential, but they don’t talk to each other and they certainly don’t talk to AI.
The result is a constant context switch. You ask an AI assistant to help optimize a function, but it can’t see the profiling data. You ask it to debug unexpected behavior, but it can’t set a breakpoint or read a variable. You ask it to write tests, but it can’t execute them. Until now, the AI has been limited to what you copy and paste into a chat window.
The Solution: MCP Servers and Skills
Two building blocks close this gap: MCP servers and agent skills.
MCP Servers: Giving AI Eyes and Hands

The Model Context Protocol (MCP) is an open standard that lets AI agents call external tools through a structured interface. An MCP server wraps a tool’s API and exposes a set of actions the AI can invoke: read a variable, set a breakpoint, start a profiling session, run a test. The AI doesn’t need to know the tool’s internal workings; it just calls the right action and gets structured results back.
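As a conceptual sketch, the "structured interface" can be pictured as a dispatch table that maps action names to handlers returning structured results. This is not the TASKING implementation or the actual MCP SDK; the tool names (`read_variable`, `set_breakpoint`), payloads, and return values below are all hypothetical placeholders:

```python
import json

# Hypothetical debugger actions an MCP-style server might expose.
# A real server would wrap the debugger's API; these handlers just
# return canned, structured results to illustrate the shape.
def read_variable(name: str) -> dict:
    # Placeholder: a real handler would query the target hardware.
    return {"variable": name, "value": 42, "type": "uint32_t"}

def set_breakpoint(file: str, line: int) -> dict:
    return {"file": file, "line": line, "status": "set"}

# The "server": a registry of callable tools plus a dispatcher that
# accepts a structured request and returns a structured result.
TOOLS = {"read_variable": read_variable, "set_breakpoint": set_breakpoint}

def handle_request(request_json: str) -> str:
    request = json.loads(request_json)
    handler = TOOLS[request["tool"]]
    result = handler(**request["arguments"])
    return json.dumps(result)

# The agent only needs the action name and its arguments:
print(handle_request('{"tool": "read_variable", "arguments": {"name": "filter_state"}}'))
```

The point of the sketch is the separation of concerns: the agent picks an action and supplies arguments; the server owns every tool-specific detail behind it.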
This is what turns a chatbot into an agent. Instead of generating text about what you could do, the AI does it: measures, iterates, and verifies. And because MCP is a standard protocol, the same servers work across agentic IDEs like Cursor, GitHub Copilot, AWS Kiro, and Claude Code.
Agent Skills: Giving AI Domain Knowledge
MCP servers give AI the ability to act, but acting effectively in embedded development requires domain knowledge. That’s where skills come in. A skill is a reusable prompt template that encodes context, constraints, and best practices for a specific task, like how to use a particular SDK, which MISRA-C rules to follow, or how to structure a test specification.
Think of it this way: MCP servers are the tools in a workshop, and skills are the training that teaches you how to use them well. Together, they let AI work within the same constraints and standards that a human engineer would follow.
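To make the idea concrete, a skill can be pictured as a reusable template that injects task-specific constraints into the prompt the agent works from. The template text and the two example rules below are illustrative only, not an actual TASKING skill file:

```python
# Illustrative sketch: a "skill" as a prompt template that bundles
# domain constraints (e.g., coding-standard rules) with the user's task.
SKILL_TEMPLATE = """You are assisting with embedded C development.
Follow these constraints:
{constraints}

Task: {task}"""

# Hypothetical example rules, paraphrased in the spirit of MISRA-C.
MISRA_CONSTRAINTS = [
    "Do not use dynamic memory allocation.",
    "Every switch statement must have a default clause.",
]

def render_skill(task: str, constraints: list) -> str:
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return SKILL_TEMPLATE.format(constraints=bullet_list, task=task)

print(render_skill("Refactor the median filter.", MISRA_CONSTRAINTS))
```

Because the same template is reused across tasks, the constraints travel with every request instead of depending on what a developer remembers to paste into the chat.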
A Practical Example
At Embedded World 2026, TASKING showcases a fully functional agentic workflow on an NXP S32K344. The demo walks through four scenarios:
- Understand - The AI created a visualization script to display the control loop behavior, using a skill that taught it how to use the winIDEA SDK proficiently.
- Optimize - The AI profiled the median filter, identified bubble sort as a bottleneck (9.3 μs), and replaced it with a sorting network (2.0 μs) — a 4.7x speedup, verified by the on-chip timing measurements performed by the winIDEA profiler.
- Test - The AI derived unit tests from the requirements, the abstract design, and the detailed design, without accessing the source code. It executed the unit tests using testIDEA and reported code coverage.
- Comply - The AI ran MISRA-C analysis using LDRA certified tools, summarized the violations, and offered to fix them.
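The optimization in the second scenario swaps a general-purpose sort for a fixed-size sorting network. As a rough illustration of the technique (in Python rather than the demo’s C, and assuming a 5-sample window, which the demo does not specify), a median filter can replace the sort loop with a hard-wired sequence of compare-and-swap steps:

```python
# Illustrative sketch: median of a 5-sample window via a sorting
# network. Each pair below is one compare-and-swap; the sequence is
# fixed at compile time, with no data-dependent loop bounds, which is
# why such networks tend to beat bubble sort on small fixed windows.
NETWORK_5 = [(0, 1), (3, 4), (2, 4), (2, 3), (0, 3), (0, 2),
             (1, 4), (1, 3), (1, 2)]

def median5(samples):
    s = list(samples)
    for i, j in NETWORK_5:
        if s[i] > s[j]:
            s[i], s[j] = s[j], s[i]  # compare-and-swap
    return s[2]  # middle element after the network has sorted the window

print(median5([10, 2, 7, 4, 9]))
```

In C, each compare-and-swap typically compiles to a branch-free min/max pair, so the worst-case path is constant, a property that shows up directly in on-chip timing measurements like the ones the profiler captured.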
No copy-pasting. No context switching. The AI worked through the tools, measured real results, and iterated, just like a developer would.

AI agent uses the winIDEA SDK skill to create a live visualization of the application behavior.
AI agent uses LDRA tool suite to generate and execute unit tests.
What’s next?
This is an active area of development. The current MCP servers cover debugging, profiling, coverage, testing, and static analysis: the core of an embedded workflow. We’re expanding capabilities, refining the integration, and working with early adopters to validate the approach in real projects.
Q&A
Will my data be used to train AI?
The TASKING MCP server and agent skill files do not have any direct integration with the LLM. They are connected through the agentic development environment, where the user is free to choose any LLM that matches their data security requirements. This is why our solutions work with any LLM, giving you full control over what happens with your data.
Is winIDEA MCP already available?
The winIDEA MCP server is currently in active development and is available to selected users. The official release is planned for Q3 2026.
What do I need to connect my LLM to winIDEA?
To connect your LLM to winIDEA, you will need:
- An agentic IDE of your choice (Cursor, GitHub Copilot, AWS Kiro, Claude Code, …)
- winIDEA 9.21.393 or newer
- Python 3.10 or newer
- The winIDEA MCP server
- A valid winIDEA AI Server Add-on license
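Once those pieces are in place, the MCP server is registered in the agentic IDE’s MCP configuration. The exact entry will depend on the winIDEA MCP release; the snippet below only illustrates the common `mcpServers` registration shape these IDEs use, and the `winidea_mcp` module name is a hypothetical placeholder:

```json
{
  "mcpServers": {
    "winidea": {
      "command": "python",
      "args": ["-m", "winidea_mcp"]
    }
  }
}
```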
How can I try it?
If you’re interested in trying it out, reach out to your TASKING sales representative.
The tools are available for early access to select users, and we’d love to hear how AI-assisted embedded development fits into your workflow.
