
Packing Large AI Into Small Embedded Systems

Not every microcontroller can handle artificial intelligence and machine learning (AI/ML) chores. Simplifying the models is one way to squeeze algorithms into a more compact embedded compute engine. Another way is to pair the microcontroller with an AI accelerator such as Femtosense's SPU-001 Sparse Processing Unit (SPU) and take advantage of sparsity in AI/ML models.
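As a rough illustration of why sparsity matters (this is a generic sketch, not the SPU-001's actual dataflow), consider a fully connected layer where many weights have been pruned to zero and many activations are already zero after a ReLU. Any multiply-accumulate with a zero operand contributes nothing and can be skipped, which is the savings a sparsity-aware accelerator exploits:

```python
# Minimal sketch: counting how many multiply-accumulates actually matter
# when both weights and activations are sparse. Illustrative only; the
# layer sizes and 90% pruning level are assumptions, not Femtosense specs.
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: 256 inputs -> 128 outputs.
weights = rng.standard_normal((128, 256)).astype(np.float32)
activations = np.maximum(rng.standard_normal(256).astype(np.float32), 0.0)  # ReLU zeros ~half

# Weight sparsity: prune the smallest-magnitude 90% of weights.
threshold = np.quantile(np.abs(weights), 0.90)
weights[np.abs(weights) < threshold] = 0.0

# A MAC is only "useful" when both the weight and the activation are nonzero.
useful = np.count_nonzero((weights != 0) & (activations != 0)[None, :])
print(f"dense MACs: {weights.size}, useful MACs: {useful} "
      f"({useful / weights.size:.1%} of the dense work)")
```

With 90% weight sparsity and roughly 50% activation sparsity, only a few percent of the dense layer's multiply-accumulates carry information, which is why skipping zeros can shrink both compute and memory traffic on a small device.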

In this episode, Sam Fok, CEO at Femtosense, talks about AI/ML at the edge, the company's dual-sparsity design, and how the small, low-power SPU-001 can augment a host processor.

