NetsPresso converts various framework IRs into a unified IR and optimizes models for each hardware environment to run faster and more efficiently. The Core Research team develops the core modules that enable this, working on key optimization techniques such as graph optimization, quantization, and compression. We focus on improving performance and compatibility so models run reliably across a wide range of devices, from edge hardware to data center servers.
📌 What You’ll Do at This Position
You will research NetsPresso’s core technologies, including the latest quantization and graph optimization methods, and explore ways to increase model speed while maintaining accuracy across different hardware environments. You will also take part in designing Nota’s optimization strategies and implementing them in the actual product, gaining broad experience with on-device AI models and optimization techniques.
👉 Check out what the Core Research team has done here.
On-device AI Model Optimization and Conversion
Support Optimization for Various AI Models
(Additional assignments may be included during the process.)
A strong curiosity about new technologies and the ability to turn ideas into real impact are essential in this role. You won't be doing research for its own sake; you'll be developing original model-optimization technologies that directly power the NetsPresso service. Because the modules are tightly interconnected, we place great value on active communication and a proactive mindset. If you enjoy diving deep into complex technical challenges and growing through close collaboration, you'll thrive and achieve meaningful results with this team.
🔎 Helpful materials