About us
Mirai builds the fastest on-device inference engine for Apple Silicon. In under a year, our 14-person team has built the full stack, from model optimization to a proprietary runtime, that outperforms MLX and llama.cpp on supported models.
We’re making local inference practical, fast, and reliable for real products.
Why us?
Mirai was founded by proven entrepreneurs who built and scaled consumer AI leaders like Reface (200M+ users, backed by Andreessen Horowitz) and Prisma (100M+ users). Our team is small, senior, and deeply technical. We ship fast and own problems end-to-end.
We’re advised by a former Apple Distinguished Engineer who worked on MLX, and backed by leading AI-focused funds and individuals.
Bonus:
You’re probably a good fit if you have a thorough understanding of concepts such as:
Remote / SF / Europe