[EdgeFM] AI Software Engineer
Job group
R&D
Experience Level
3+ years of experience
Job Types
Full-time
Locations
Nota, 16F Parnas Tower, 521 Teheran-ro, Gangnam-gu, Seoul, Republic of Korea

👋 About the Team

The EdgeFM team is responsible for enhancing the performance of EdgeFM-Engine, a core module of NVA (Nota Vision Agent), the flagship product in Nota’s AI-based solution business. We focus on developing the inference modules of Large Vision-Language Models while improving overall inference performance through hardware-specific optimization and model lightweighting. At the forefront of the rapidly evolving Foundation Model field, we continuously adopt new technologies and explore innovative ways to meet real-world industrial demands. This environment offers engineers the opportunity to gain deep, hands-on experience and technical expertise as AI researchers and developers.



📌 What You’ll Do at This Position

In this position, you will be responsible for the software development and management of the VLM Inference Engine, the final deliverable of the EdgeFM team and a critical component of the NVA solution. Your duties include architecture design, device compatibility management, and maintenance to ensure that advanced research outcomes are delivered to the solution reliably. You will contribute to the team's goals by proving the engine's reliability with objective data through the building and management of benchmark pipelines.




✅ Key Responsibilities

  • Design and improvement of the VLM Inference Engine
    • Design a highly scalable architecture that allows for the continuous integration of the latest AI inference optimization techniques.
    • Perform software optimization for highly efficient resource management throughout the entire inference process.
  • Ensure reliability and stability of the VLM Inference Engine
    • Secure and manage compatibility across diverse deployment environments (devices).
    • Develop monitoring metrics to ensure the observability of the EFM-Engine (see the sketch after this list).
    • Analyze and address bugs occurring within the engine.
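
As a rough, hypothetical illustration of the monitoring-metrics responsibility above, the sketch below shows one way a Python inference service could expose latency and request-count metrics with the prometheus_client library. The metric names and the run_inference stub are invented for this example and are not part of the actual EFM-Engine.

```python
# Hypothetical sketch: exposing inference metrics via prometheus_client.
# Metric names and run_inference() are illustrative, not the real engine.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("vlm_requests_total", "Total inference requests received")
LATENCY = Histogram("vlm_inference_seconds", "End-to-end inference latency in seconds")


def run_inference(prompt: str) -> str:
    # Placeholder for the real engine call.
    time.sleep(random.uniform(0.05, 0.2))
    return f"response to: {prompt}"


@LATENCY.time()  # records the duration of each call in the histogram
def handle_request(prompt: str) -> str:
    REQUESTS.inc()
    return run_inference(prompt)


if __name__ == "__main__":
    start_http_server(8000)  # metrics become scrapeable at :8000/metrics
    while True:
        handle_request("describe the scene")
```

Metrics exposed this way can be scraped by Prometheus and visualized in Grafana, which ties into the monitoring tools mentioned under Pluses.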



✅ Requirements

  • 3+ years of practical experience in a related field.
  • Proficiency in Python and a strong foundation in computer science (data structures, algorithms, etc.).
  • Experience building services by customizing LLM/VLM inference frameworks such as vLLM (see the sketch after this list).
  • Deep understanding and hands-on experience with Docker container technology.
  • Familiarity with Git for version control and collaborative development.
  • Clear communication skills for seamless collaboration and the ability to proactively solve problems.
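
As context for the inference-framework requirement above, the following is a minimal, hypothetical sketch of vLLM's offline generation API. The model name and prompt are placeholders (a text-only example is used for brevity; a real VLM workload would also pass image inputs), and this is not the team's actual serving code.

```python
# Hypothetical sketch of vLLM offline inference; model and prompt are placeholders.
from vllm import LLM, SamplingParams

# Load a model supported by vLLM (text-only here to keep the example small).
llm = LLM(model="facebook/opt-125m")

sampling = SamplingParams(temperature=0.2, max_tokens=64)

prompts = ["Summarize what an edge inference engine does in one sentence."]
outputs = llm.generate(prompts, sampling)

for out in outputs:
    print(out.outputs[0].text)
```

Customizing a framework like this for a production service typically means going beyond the public API, for example adjusting scheduling, batching, or memory management for the target devices.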



✅ Pluses

  • Experience developing or designing AI software in constrained environments such as Edge Devices or Workstations.
  • Experience directly developing or designing high-efficiency inference engines utilizing AI models (e.g., CV, LLM/VLM).
  • Experience building and operating monitoring systems (e.g., Grafana, Prometheus).



✅ Hiring Process

  • Document Screening → Screening Interview → 1st Interview & Assignment → 2nd Interview

(Additional assignments may be included during the process.)



🤓 A Message from the Team

The EdgeFM team drives a wide range of R&D initiatives—from model optimization and lightweighting to specialized model development—to meet the speed and accuracy required in real-world applications. This work plays a key role in shaping the core value of our solutions, offering opportunities to gain direct and indirect experience with cutting-edge Foundation Model technologies and AI infrastructure.
If you aspire to grow as an AI software expert, you will find meaningful challenges and growth opportunities within the EdgeFM team.



Please Check Before Applying! 👀

  • This position is open on a rolling basis and may close early once the hiring process is complete.
  • Resumes that include sensitive personal information, such as salary details, may be excluded from the review process.
  • Providing false information in the submitted materials may result in the cancellation of the application.
  • Please be aware that references will be checked before finalizing the hiring decision.
  • Compensation will be discussed separately upon successful completion of the final interview.
  • There is a probationary period after joining; terms of employment and treatment remain unchanged during this period.
  • Veterans and individuals with disabilities will receive preferential treatment in accordance with relevant regulations.


