Full Stack AI Engineer

0



Full-Stack AI Engineer

Location: Hybrid | Global | Flexible Schedule

Type: Contract


About Bridge BSS

Bridge BSS delivers cutting-edge software solutions, AI innovations, and IT staff augmentation for global enterprises.

We partner with renowned brands and foster an inclusive, high-energy, learning-driven culture.


The Role

We’re hiring a Full-Stack AI Engineer to build AI-powered products end-to-end.

You will actively use both Node.js (Fastify) and Python (FastAPI) to design, ship, and operate production services and modern web apps (Next.js/React + TypeScript).

You’ll collaborate with product, design, data science, and platform engineering to turn ambiguous problems into safe, delightful, and measurable user experiences.


What You’ll Do

Prompt Engineering & LLM Integration

  • Design, test, and refine prompts (zero-shot, few-shot, chain-of-thought, ReAct) across models (GPT-4/4o, Claude, Mistral, Llama, etc.).
  • Implement tool/function calling, structured outputs (typed/JSON schema), and guardrails for reliability, safety, and cost control.
  • Build production RAG pipelines: chunking strategies, embeddings, retrieval orchestration, caching, and evaluation.
  • Analyze outputs for hallucinations, bias, tone, factual accuracy, and business alignment.


Backend & Platform (Node + Python)

  • Build and own polyglot services in Node.js/TypeScript (Fastify) and Python (FastAPI) with shared standards for auth, observability, and reliability.
  • Implement OAuth2/JWT, rate limiting, retries/backoff, and real-time streaming via SSE/WebSockets across both runtimes.
  • Integrate with vector databases (Pinecone, Weaviate, pgvector), relational (Postgres), non-relational (MongoDB), and caches.
  • Ship with CI/CD, containerization (Docker), and cloud deploys (AWS/GCP/Azure); instrument cost/latency telemetry, logging, metrics, and tracing.


Frontend (Next.js)

  • Build user-facing experiences with Next.js (TypeScript, App Router), React Server Components, and Server Actions.
  • Deliver chat UIs, evaluators/annotators, admin dashboards, and prompt versioning UIs with real-time streaming.
  • Ensure UX clarity, accessibility, and responsive performance; collaborate closely with design (shadcn/ui + Tailwind).


LLMOps, Quality, and Safety

  • Maintain a version-controlled prompt library with documentation, test cases, and use-case mappings.
  • Run structured experiments, A/B tests, and quantitative/qualitative evaluations; automate golden sets and regression checks.
  • Apply responsible AI practices; mitigate misuse, prompt injection, toxic content, and data-privacy/PII risks.
  • Stay current on emerging techniques and share best practices with the team.


Required Skills & Experience

Experience

  • 3+ years professional full-stack development (shipping production APIs and web apps).
  • 2+ years hands-on AI/LLM experience in production (prompt engineering, LLM APIs, RAG, evals, guardrails).


Languages & Frameworks

  • Advanced JavaScript and TypeScript.
  • Node.js with Fastify/Express (required).
  • Python with FastAPI (required). You’ll use both Node and FastAPI regularly.


Backend & APIs

  • REST and GraphQL; async I/O; OAuth2/JWT; rate limiting; retries/backoff.
  • Real-time streaming with SSE and WebSockets; resilient job/queue patterns.


Data & ORMs

  • Postgres and MongoDB; schema design and query optimization.
  • ORM (required); experience with Prisma/TypeORM/Drizzle or SQLAlchemy is beneficial.
  • Migrations (e.g., Prisma Migrate), connection pooling, and transaction patterns.


LLMs & Tooling

  • OpenAI and Anthropic APIs; familiarity with Mistral and Llama ecosystems.
  • Prompt strategies (zero/few-shot, CoT, ReAct), function calling, structured outputs, and guardrails.
  • LangChain and/or LlamaIndex in production; prompt/versioning/eval tools (e.g., PromptLayer, Cursor, Lovely).


Retrieval & Evaluation

  • Vector stores: Pinecone, Weaviate, pgvector.
  • Embeddings, chunking, hybrid retrieval, caching; offline/online evaluation frameworks.


DevOps & Reliability

  • Docker and Kubernetes; CI/CD pipelines.
  • Observability: logging, metrics, tracing; cost and latency telemetry for LLM usage.
  • Secrets management and secure configuration.


Security & Safety

  • Content safety techniques, prompt-injection defenses.
  • Data privacy and PII handling/compliance best practices.


Collaboration

  • Git, code review, excellent written communication and documentation.
  • Comfortable collaborating via Jira/Teams/Figma with cross-functional teams.


Why Bridge BSS

  • Own high-impact, end-to-end AI product experiences.
  • Collaborative, learning-driven culture with global teammates.
  • Modern stack, real users, and meaningful scale.


You have to wait 20 seconds

Generating Apply Link...

Post a Comment

0 Comments
* Please Don't Spam Here. All the Comments are Reviewed by Admin.
Post a Comment (0)
Our website uses cookies to enhance your experience. Learn More
Accept !
X

Join Our WhatsApp Channel to get latest Updates Join Now

Link Copied