Position: AI Engineer - Iris
Team: Bridge Innovation (Iris/Fern Product Pod)
Reports to: Head of Innovation
Location: Remote | Working hours: 12:00 PM–9:00 PM IST (to ensure overlap with US Eastern hours)
Type: Full-time / Contract
About This Role
Iris is a personal AI assistant with a local-first memory architecture — it remembers, learns, and evolves with the user over time. The consumer product, Fern, is powered by Iris and is approaching its first public release.
We're looking for a backend engineer to own the brain: the memory system, the AI integration layer, and all the intelligence that makes Iris more than a chat wrapper. You'll work directly with the Head of Innovation (who designed and built the architecture) and alongside a native app engineer and a product designer in a small, collaborative product pod.
This is ground-floor work on a product with real technical depth. If you're drawn to hard problems in AI memory, token efficiency, and context management — and you want to build something genuinely new rather than maintain something existing — this is that opportunity.
What You'll Own
- Memory compression and retrieval — reducing what loads into context per session to cut token costs while keeping Iris sharp
- Vector database integration for semantic memory retrieval
- Onboarding data model — defining what information Iris needs from a new user and in what structure, so the product is useful from day one
- Token cost optimisation — identifying runaway API call patterns, routing tasks to the right model tier
- Ingest pipeline improvements — handling large files, correct model routing, reliability fixes
- Memory consolidation engine — ongoing improvements to how Iris processes, compresses, and reflects on its own memory
- Cross-platform brain compatibility (macOS and Windows)
- Architecture advisory — working with the Head of Innovation to identify what ships first, what can wait, and the fastest path to a stable v1
What We're Looking For
- Strong Python — this is a Python-first codebase
- Hands-on experience with LLM integration: API design, context window management, embeddings, prompt engineering
- Enough product sense to make good architectural decisions without being handed exhaustive specifications
- Comfortable working in an evolving codebase with an ambitious roadmap — you'll shape the architecture, not just implement tickets
- Fluent with AI-assisted development tools (Claude Code, Cursor, or equivalent) — this is a core requirement, not a nice-to-have
- Collaborative by default — this is a product pod, not a siloed team; you'll work closely with the native app engineer, the designer, and the Head of Innovation every day
Nice to Have
- Experience with vector databases (Pinecone, Weaviate, ChromaDB, or similar)
- Familiarity with token-level cost analysis and optimisation strategies
- Interest in memory systems, knowledge representation, or cognitive architecture
How We Work
This is a product studio, not a development shop. There are no handoffs between BA, UX, engineering, and DevOps — the pod works together toward a shared goal. You'll collaborate directly with the Head of Innovation on product direction, pair with the native app engineer on integration points, and work with the designer on how the brain's capabilities surface in the user experience. We move fast, we iterate, and we expect everyone to think about the product — not just their slice of it.