Transforming embedded software development with AI

Large language models (LLMs) like GPT, Claude, and Gemini have changed the way developers write code. But writing code is only a fraction of embedded development. Most of the work happens in specialized tools like debuggers, test runners, and static and dynamic analyzers, each with its own interface, its own data, its own workflow. And until now, AI couldn't touch any of it.

The Challenge: A Fragmented Toolchain

Embedded development is uniquely tool-heavy. A typical workflow involves a compiler, a debugger connected to real hardware, a profiler for timing analysis, a test framework for unit and integration tests, and a static analysis tool for MISRA-C compliance. These tools are essential, but they don't talk to each other, and they certainly don't talk to AI.

The result is constant context switching. You ask an AI assistant to help optimize a function, but it can't see the profiling data. You ask it to debug unexpected behavior, but it can't set a breakpoint or read a variable. You ask it to write tests, but it can't execute them. Until now, the AI was limited to whatever you copy-pasted into a chat window.

The Solution: MCP Servers and Skills

Two building blocks close this gap: MCP servers and agent skills.

MCP Servers: Giving AI Eyes and Hands

The Model Context Protocol (MCP) is an open standard that lets AI agents call external tools through a structured interface. An MCP server wraps a tool's API and exposes a set of actions the AI can invoke: read a variable, set a breakpoint, start a profiling session, run a test. The AI doesn't need to know the tool's internal workings; it just calls the right action and gets structured results back.

This is what turns a chatbot into an agent. Instead of generating text about what you could do, the AI does it: measures, iterates, and verifies. And because MCP is a standard protocol, the same servers work across agentic IDEs like Cursor, GitHub Copilot, AWS Kiro, and Claude Code.

Agent Skills: Giving AI Domain Knowledge

MCP servers give AI the ability to act, but acting effectively in embedded development requires domain knowledge. That's where skills come in. A skill is a reusable prompt template that encodes context, constraints, and best practices for a specific task, like how to use a particular SDK, which MISRA-C rules to follow, or how to structure a test specification.
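As a rough illustration of "skill as reusable prompt template", the sketch below fills a template with task-specific context. The template text, rule list, and helper function are invented examples, not the contents of any actual TASKING skill file.

```python
# Illustrative sketch: a "skill" as a reusable prompt template that encodes
# domain constraints for a specific task. The rules and wording are invented.
from string import Template

MISRA_SKILL = Template("""\
You are assisting with embedded C development.
Constraints:
- Follow MISRA-C:2012 rules, especially: $rules
- Target hardware: $target
Task: $task
""")


def render_skill(task: str, target: str, rules: list[str]) -> str:
    """Fill the skill template for one concrete task."""
    return MISRA_SKILL.substitute(
        task=task, target=target, rules=", ".join(rules)
    )


prompt = render_skill(
    task="Refactor the median filter without dynamic allocation.",
    target="NXP S32K344",
    rules=["21.3 (no malloc)", "17.2 (no recursion)"],
)
print(prompt)
```

Each rendered prompt carries the same constraints and standards into every task, which is what lets the agent behave consistently across a project.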

Think of it this way: MCP servers are the tools in a workshop, and skills are the training that teaches you how to use them well. Together, they let AI work within the same constraints and standards that a human engineer would follow.

A Practical Example

At Embedded World 2026, TASKING is showcasing a fully functional agentic workflow on an NXP S32K344. The demo walks through four scenarios:

  1. Understand - The AI created a visualization script to display the control loop behavior, using a skill that taught it how to use the winIDEA SDK proficiently.
  2. Optimize - The AI profiled the median filter, identified bubble sort as a bottleneck (9.3 μs), and replaced it with a sorting network (2.0 μs): a 4.7x speedup, verified by on-chip timing measurements performed by the winIDEA profiler.
  3. Test - The AI derived unit tests from the requirements and the abstract and detailed design, without accessing the source code. It executed the unit tests using testIDEA and reported code coverage.
  4. Comply - The AI ran MISRA-C analysis using LDRA certified tools, summarized the violations, and offered to fix them.
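The demo's actual code is not shown here, so the sketch below only illustrates the kind of rewrite described in the Optimize step: replacing a data-dependent bubble sort with a fixed 9-comparator sorting network (Knuth's optimal network for 5 elements). On a microcontroller the network's fixed, branch-light compare-exchange sequence is what makes it both faster and more predictable; the window size of 5 is an assumption for illustration.

```python
# Illustrative sketch of the optimization described above: median of a
# 5-sample window, first via bubble sort, then via a fixed sorting network.

def median5_bubble(samples):
    """Baseline: bubble-sort the window, take the middle element."""
    a = list(samples)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a[2]


# Optimal 5-element sorting network: 9 compare-exchange pairs.
NETWORK5 = [(0, 1), (3, 4), (2, 4), (2, 3), (0, 3),
            (0, 2), (1, 4), (1, 3), (1, 2)]


def median5_network(samples):
    """Same result via a fixed comparator sequence, no data-dependent loops."""
    a = list(samples)
    for i, j in NETWORK5:
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a[2]
```

Both functions return identical medians for every input; in embedded C the network would typically be written as a straight-line sequence of compare-exchange macros so the compiler emits constant-time code.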

No copy-pasting. No context switching. The AI worked through the tools, measured real results, and iterated, just like a developer would.

AI agent uses a winIDEA SDK skill to create a live visualization of the application behavior.

AI agent uses the LDRA tool suite to generate and execute unit tests.

What’s next?

This is an active area of development. The current MCP servers cover debugging, profiling, coverage, testing, and static analysis: the core of an embedded workflow. We're expanding capabilities, refining the integration, and working with early adopters to validate the approach in real projects.

Q&A

Will my data be used to train AI?

The TASKING MCP server and agent skill files have no direct integration with the LLM. They are connected through the agentic development environment, where the user is free to choose any LLM that matches their data security requirements. This is why our solutions work with any LLM, giving you full control over what happens with your data.

Is winIDEA MCP already available?

The winIDEA MCP server is currently in active development and is available to selected users. The official release is planned for Q3 2026.

What do I need to connect my LLM to winIDEA?

To connect your LLM to winIDEA, you will need:

  • An agentic IDE of your choice (Cursor, GitHub Copilot, AWS Kiro, Claude Code, …)
  • winIDEA 9.21.393 or newer
  • Python 3.10 or newer
  • The winIDEA MCP server
  • A valid winIDEA AI Server Add-on license

How can I try it?

If you’re interested in trying it out, reach out to your TASKING sales representative.

The tools are available for early access to select users, and we’d love to hear how AI-assisted embedded development fits into your workflow.
