
AI Senior Software Engineer – AI Agent Development

Lattice Semiconductor
1 day ago
Full-time
On-site
Malaysia

Lattice Overview

There is energy here… energy you can feel crackling at any of our international locations. It’s an energy generated by enthusiasm for our work, for our teams, for our results, and for our customers. Lattice is a worldwide community of engineers, designers, and manufacturing operations specialists, in partnership with world-class sales, marketing, and support teams, developing programmable logic solutions that are changing the industry. Our focus is on R&D, product innovation, and customer service, and to that focus we bring total commitment and a keen competitive edge.

Energy feeds on energy. If you flourish in a fast-paced, results-oriented environment, if you want to achieve individual success within a “team first” organization, and if you believe you can contribute and succeed in a demanding yet collegial atmosphere, then Lattice may well be just what you’re looking for.

Responsibilities & Skills

Role Summary

We are seeking an AI Software Engineer to design and build AI agents that perform complex, goal-oriented tasks within an intelligent software platform. This role focuses on developing agent behavior, reasoning logic, task execution, and interaction patterns rather than underlying AI infrastructure.

The ideal candidate has experience building AI-driven systems that combine language models, structured workflows, tool invocation, and rule-based logic to deliver reliable, controllable outcomes. The emphasis is on agent design, behavior modeling, and task execution, building on top of a shared AI platform and infrastructure.

Key Responsibilities

  • Design, implement, and maintain AI agents that can interpret tasks, reason over context, and execute multi-step workflows.

  • Define agent capabilities, behaviors, and responsibilities using structured logic, prompts, rules, and policies.

  • Implement agent reasoning patterns such as task decomposition, decision-making, validation, and iterative refinement.

  • Develop agent-to-tool integrations that allow agents to invoke software tools, APIs, and services in a controlled manner.

  • Build mechanisms for state management, context handling, memory, and task progression within agent workflows.

  • Collaborate with platform and infrastructure teams to integrate agents into a shared AI framework.

  • Ensure agent behavior is predictable, auditable, and aligned with safety and governance requirements.

  • Improve agent reliability through testing, evaluation, and controlled experimentation.

  • Contribute reusable agent patterns, templates, and best practices that accelerate development across multiple use cases.

  • Participate in code reviews, design discussions, and technical documentation related to agent development.

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field, or equivalent experience.

  • Strong programming skills in Python (or similar high-level languages).

  • Experience developing AI/ML-powered applications, particularly systems built on large language models (LLMs).

  • Familiarity with designing task-oriented or goal-driven AI systems.

  • Experience working with structured prompts, rules, policies, or workflow logic.

  • Understanding of software design principles, testing, and debugging.

  • Ability to reason about correctness, edge cases, and failure modes in AI-driven systems.

  • Strong communication skills and ability to collaborate with cross-functional teams.

Preferred Qualifications

  • Experience building AI agents, assistants, or automation systems that perform multi-step tasks.

  • Familiarity with agent orchestration frameworks, workflow engines, or state-machine-based systems.

  • Experience integrating agents with external tools, APIs, or domain-specific software.

  • Knowledge of prompt engineering, evaluation techniques, and agent behavior tuning.

  • Experience implementing guardrails, validation, or policy enforcement for AI systems.

  • Exposure to AI governance, responsible AI practices, or human-in-the-loop workflows.

  • Background in technical or engineering domains where correctness and reliability are critical.