Adding AI to your engineering stack, efficiently

AI can make your product feel elevated. It can also make it feel unreliable, expensive, and risky.

Our team at Jots has been working with LLMs over the last few months to expand and improve our offerings.

This article was first published on dev.to, and we’re sharing an adapted version here with context on how this thinking shaped our product.

Here are our key learnings:

Focus on mindset, not models

Ask yourself:

  • What user pain does this solve today?
  • What's the "manual" version of this flow, and how does AI improve the experience without taking control away?

Spend time on technical discovery

  • Write acceptance criteria (what “good” looks like)
  • Sketch the visual journey
  • Define the scope for faster iterations and early feedback

Treat every AI feature like a pipeline, not a simple function call

  1. Normalize input
  2. Sanitize and redact (privacy-first)
  3. Schema validation and assertions
  4. Trigger logic (thresholds)
  5. Generation logic (prompt + tool choices)
  6. Post-processing (format, structure, safety)
  7. Delivery (UI controls, logging, persistence)
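
To make that concrete, below is a minimal sketch of such a pipeline in Python. Every name here (normalize, redact, should_trigger, PipelineResult, the generate callback) is illustrative rather than our production code; the point is that the model call is just one step surrounded by checks.

```python
# A minimal sketch of an AI feature as a pipeline rather than a single call.
# Every name here is illustrative, not production code.
import json
import re
from dataclasses import dataclass


@dataclass
class PipelineResult:
    ok: bool
    output: str | None = None
    reason: str | None = None


EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def normalize(text: str) -> str:
    # Step 1: normalize input (trim, collapse whitespace).
    return " ".join(text.split())


def redact(text: str) -> str:
    # Step 2: sanitize and redact obvious PII before it leaves the system.
    return EMAIL_RE.sub("[redacted-email]", text)


def should_trigger(text: str, min_words: int = 20) -> bool:
    # Step 4: trigger logic -- don't call the model on inputs that are too thin.
    return len(text.split()) >= min_words


def run_pipeline(raw_input: str, generate) -> PipelineResult:
    """`generate` is whatever calls the model; injected so it can be faked in tests."""
    text = redact(normalize(raw_input))                             # steps 1-2
    if not text:
        return PipelineResult(ok=False, reason="empty_input")       # step 3: assertions
    if not should_trigger(text):
        return PipelineResult(ok=False, reason="below_threshold")   # step 4

    raw = generate(text)                                            # step 5: prompt + tool choices

    try:
        data = json.loads(raw)                                      # steps 3 & 6: validate shape, post-process
        output = data["prompt"].strip()
    except (json.JSONDecodeError, KeyError, TypeError, AttributeError):
        return PipelineResult(ok=False, reason="bad_model_output")

    # Step 7: delivery is left to the caller (UI controls, logging, persistence).
    return PipelineResult(ok=True, output=output)


def fake_generate(text: str) -> str:
    # Stand-in for the real model call, returning the JSON shape the pipeline expects.
    return json.dumps({"prompt": "What made today feel productive?"})


if __name__ == "__main__":
    journal_entry = "Shipped the new onboarding flow today and paired on the billing bug. " * 3
    print(run_pipeline(journal_entry, fake_generate))
```

Because every stage can fail closed, a bad input or a malformed model response means the user simply sees the non-AI experience, and the caller can log the reason for later analysis.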

This pipeline approach solves two problems at once:

  • It tames non-deterministic outputs with constraints and checks
  • It makes the system observable, so potential risks surface early instead of going unnoticed

Defend against latency and cost spikes

  • Feature-level AI usage logging (to narrow down what is expensive)
  • Rate limiting + hard daily caps (avoid surprise bills)
  • Deduping (event table + hash keys; see the sketch after this list)
  • Similarity checks to avoid “same output, different words” fatigue
  • Feature flags to ship safely and roll back fast
  • Breadcrumbs + alerts in Sentry across the whole pipeline for visibility
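
To make a couple of these concrete, here is a small sketch of dedup keys, a hard daily cap, and a crude similarity check. The in-memory structures and the DAILY_CAP number are placeholders for a real event table and configuration, and difflib is a rough stand-in: an embedding-based check would catch paraphrases better.

```python
# Sketch: dedup via content hashes, a hard daily cap, and a crude similarity check.
# The in-memory structures stand in for a real event table / counter store.
import hashlib
from datetime import date
from difflib import SequenceMatcher

_seen_keys: set[str] = set()
_daily_calls: dict[str, int] = {}

DAILY_CAP = 500  # illustrative ceiling on model calls per day


def dedup_key(feature: str, normalized_input: str) -> str:
    # Same feature + same normalized input => same key => no second model call.
    digest = hashlib.sha256(normalized_input.encode("utf-8")).hexdigest()
    return f"{feature}:{digest}"


def allow_call(feature: str, normalized_input: str) -> bool:
    key = dedup_key(feature, normalized_input)
    if key in _seen_keys:
        return False  # duplicate request: reuse the stored result instead

    today = date.today().isoformat()
    if _daily_calls.get(today, 0) >= DAILY_CAP:
        return False  # cap reached: fall back to the non-AI path

    _seen_keys.add(key)
    _daily_calls[today] = _daily_calls.get(today, 0) + 1
    return True


def too_similar(new_output: str, previous_outputs: list[str], threshold: float = 0.9) -> bool:
    # Rough "same output, different words" guard; embeddings would catch paraphrases better.
    return any(
        SequenceMatcher(None, new_output, prev).ratio() >= threshold
        for prev in previous_outputs
    )
```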

Evaluate like an engineer, not like a researcher

Early on, don’t start with “LLM-as-a-judge.” It adds complexity fast.

Start simpler:

  • Manual review of recent traces
  • Bottom-up error analysis: group failures, count patterns, fix the top ones (see the sketch after this list)
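
In practice that can be as small as tagging each failed trace with an error category and counting the buckets. The trace shape below is made up for illustration; the point is to count before you fix anything.

```python
# Sketch: bottom-up error analysis over recent failed traces.
# The trace shape is made up; the point is to count buckets before fixing anything.
from collections import Counter

failed_traces = [
    {"id": "t1", "error": "bad_model_output"},
    {"id": "t2", "error": "below_threshold"},
    {"id": "t3", "error": "bad_model_output"},
    {"id": "t4", "error": "timeout"},
]

counts = Counter(trace["error"] for trace in failed_traces)

for error, n in counts.most_common():
    print(f"{error}: {n}")
# bad_model_output: 2   <- fix this bucket first
# below_threshold: 1
# timeout: 1
```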

Examples

Recent examples of AI features we implemented for Jots:

  • AI-generated prompts
  • AI-generated tasks

Each feature has its own pipeline. That separation gives clear ownership and keeps changes localized. We also created reusable utility functions to be shared across these pipelines.

One UX decision made a big difference: users can accept or reject the AI output.

That does two things:

  • It keeps the user in control
  • It gives us a clean usefulness signal from real users (a built-in feedback loop; see the sketch below)
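
Captured naively, that signal can be as simple as an accept/reject event per feature. The in-memory list below is a stand-in for whatever store you already have, and the feature names are invented.

```python
# Sketch: accept/reject decisions recorded as a per-feature usefulness signal.
# The list stands in for whatever event store you already have.
from collections import defaultdict

decisions: list[dict] = []


def record_decision(feature: str, accepted: bool) -> None:
    decisions.append({"feature": feature, "accepted": accepted})


def acceptance_rates() -> dict[str, float]:
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [accepted, total]
    for d in decisions:
        totals[d["feature"]][1] += 1
        if d["accepted"]:
            totals[d["feature"]][0] += 1
    return {feature: accepted / total for feature, (accepted, total) in totals.items()}


record_decision("ai_tasks", True)
record_decision("ai_tasks", False)
record_decision("ai_prompts", True)
print(acceptance_rates())  # {'ai_tasks': 0.5, 'ai_prompts': 1.0}
```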

Practical learnings

If you’re adding AI to your product this week, do the following:

Start with one question: What improves for the user if AI is added? If the answer is "nothing," you're adding complexity without any value. If there is a real improvement, ship the smallest AI feature that has:

  • A clear input contract (see the sketch after this list)
  • A pipeline (even a simple one)
  • Logging + cost visibility
  • A UX escape hatch (edit, reject, fallback)
  • Security layers in place
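
For the input contract in particular, even a plain dataclass gets you most of the way. The field names and limits below are invented for illustration, and a validation library would work just as well.

```python
# Sketch: a clear input contract for the smallest possible AI feature.
# Field names and limits are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptRequest:
    user_id: str
    journal_text: str
    max_words: int = 120

    def __post_init__(self) -> None:
        # Fail loudly at the boundary instead of deep inside the pipeline.
        if not self.user_id:
            raise ValueError("user_id is required")
        if not self.journal_text.strip():
            raise ValueError("journal_text must not be empty")
        if not 10 <= self.max_words <= 300:
            raise ValueError("max_words must be between 10 and 300")
```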

Keep your mindset clear:

  • AI is a tool, not the product
  • Your UX should still make sense without it
  • Don’t let the model steer the roadmap

Conclusion

Many of the principles we rely on in software engineering apply just as well to life in general.

At Jots, we think of AI as a power tool. Used well, it cuts out the mundane parts. Used poorly, it cuts into your judgment.

Our advice is to keep a balance: use AI to accelerate repetitive work and explore options. But keep the important decisions and reflections grounded in your own thinking.

And if you're into improving your critical thinking skills as an engineer, give Jots a try. We use research-backed frameworks and AI assistance to prompt you with the right questions, so you can reflect and learn more effectively.