The XPU Enabler team ensures AI models run smoothly across a wide range of devices. Whenever a new AI model or hardware device emerges, we address compatibility issues and build automated performance profiling services to create seamless connections between models and devices. Our team brings together specialists from various domains—ML Engineers, Embedded Engineers, Backend/Frontend/Mobile Engineers, and AI Software Engineers—working closely together to solve complex technical challenges by leveraging their individual areas of expertise.
📌 What You’ll Do in This Role
You’ll collaborate with top-tier NPU vendors and clients looking to optimize AI models for deployment on various NPUs. Your focus will be on LLMs and similar models, building a strong foundation in core technologies such as static graph transformation, optimization, quantization, and device-level profiling. Through this, you’ll gain a deep understanding of NPU-specific optimization methods and acquire hands-on experience in model deployment—developing your capabilities as a highly skilled AI engineer.
The XPU Enabler team takes full ownership of the entire pipeline—from AI model development to real-world device deployment. We don’t limit ourselves to specific frameworks or devices. Instead, we thrive on flexibility and a strong problem-solving mindset as we work across diverse models and NPU environments.
Our team values technical depth, cross-functional respect, and shared growth. If you enjoy exploring new technologies and tackling unfamiliar challenges with curiosity and creativity, you'll feel right at home with us.
🔎 Helpful materials