Every time I go through discussions here, I notice the shift from "prompt engineering" as a one-off trick to something that feels more like end-to-end system design. It's no longer just about writing a clever sentence, but about structuring context windows without drowning in token costs; setting up feedback/eval loops so that prompts don't devolve into spaghetti; treating prompts like design blueprints (role → context → output → constraints) rather than static strings; and knowing when things should stay small and modular versus when to rely on multi-stage or self-critique flows.

In my own work (building an AI product in the recruiting space), I keep running into the same realization: what we call "prompt engineering" bleeds into backend engineering, UX design, and even copywriting. The best flows I've seen don't come from isolated prompt hacks, but from people who understand how to combine structure, evaluation, and human-friendly conversation design.

Curious how others here think about it: Do you see "LLM engineering" as its own emerging discipline, or just a new layer on top of existing roles (ML engineer, backend dev, UX writer)? For those who have worked with strong practitioners, what backgrounds or adjacent skills made them effective? (I've seen people from linguistics, product design, and classic ML bring very different strengths.) Not looking for a silver bullet, just genuinely interested in how this community sees the profile of people who can bridge prompting, infra, and product experience as we try to build real, reliable systems.
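For concreteness, here is a minimal sketch of what I mean by "prompt as blueprint" plus a simple self-critique pass. It's purely illustrative: the names (`PromptBlueprint`, `call_llm`, `draft_and_critique`) and the recruiting example are hypothetical, not from any real codebase, and `call_llm` stands in for whatever client you actually use.

```python
from dataclasses import dataclass, field


@dataclass
class PromptBlueprint:
    """A prompt as a structured blueprint instead of a static string."""
    role: str                # who the model is supposed to be
    context: str             # grounding material / retrieved facts
    output: str              # what the answer should look like
    constraints: list[str] = field(default_factory=list)  # hard rules

    def render(self) -> str:
        rules = "\n".join(f"- {c}" for c in self.constraints)
        return (
            f"Role:\n{self.role}\n\n"
            f"Context:\n{self.context}\n\n"
            f"Output:\n{self.output}\n\n"
            f"Constraints:\n{rules}"
        )


def call_llm(prompt: str) -> str:
    """Placeholder for your actual model client (hosted API, local model, ...)."""
    raise NotImplementedError


def draft_and_critique(blueprint: PromptBlueprint) -> str:
    """Two-stage flow: draft an answer, then have the model check and revise it."""
    draft = call_llm(blueprint.render())
    critique_prompt = (
        "Here is a draft answer:\n"
        f"{draft}\n\n"
        "Check it against these constraints and rewrite it if any are violated:\n"
        + "\n".join(f"- {c}" for c in blueprint.constraints)
    )
    return call_llm(critique_prompt)


# Illustrative recruiting-flavored example:
bp = PromptBlueprint(
    role="You are a recruiter writing candidate outreach messages.",
    context="Candidate: senior backend engineer, 6 years of Go, based in Berlin.",
    output="A short, friendly outreach message (max 120 words).",
    constraints=[
        "No salary figures",
        "Mention the specific tech stack",
        "One clear call to action",
    ],
)
# final = draft_and_critique(bp)
```

The point is just that the prompt becomes data you can version, test, and run evals against, rather than a string buried somewhere in application code.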
prompts · 9.9.2025
Building with LLMs feels less like “prompting” and more like system design