The XPU Enabler Team conducts advanced research to secure long-term technical competitiveness for NetsPresso. Our primary mission involves analyzing model architectures, establishing optimization strategies, and verifying hardware compatibility to ensure the stable deployment of state-of-the-art AI models in Edge AI environments.
We participate in major government-led AI advancement projects (e.g., K-AI projects and national R&D initiatives), shaping the technical standards and direction of Edge and On-device AI. This position focuses on advanced research that supports both national projects and the long-term roadmap of NetsPresso products, rather than immediate feature development.
As a member of this team, you will proactively verify the feasibility of running AI models in Edge environments. You will perform model optimization, conversion, and performance analysis to deploy new models onto existing or emerging Edge devices. You will play a pivotal technical role in South Korea’s K-AI projects, with opportunities to engage in the following:
1. Advanced Research & Optimization for Edge AI
2. Model Conversion & Execution Verification
3. Accuracy & Performance Trade-off Analysis
4. Execution of K-AI & Government R&D Projects
5. Technical Support for NetsPresso Product Development
(Additional responsibilities may be assigned as projects progress.)
Our team covers the entire model lifecycle—from model conversion and graph optimization to vendor compiler integration, device runtime configuration, and deployment—ensuring that LLMs and Computer Vision models run seamlessly on various NPUs, GPUs, and AI accelerators.
Our core mission is to solve challenges that hardware manufacturers do not address, such as front-end/middle-end optimization, graph surgery, and operator modification. We strive to make any model run in any device environment.