There’s no doubt that LLMs are extremely useful, introducing a new search paradigm and a new way of interacting with computers. However, setting aside ethical, environmental, and legal considerations, I think they over-promise, and users tend to overgeneralize their capabilities.
I recently spent quite some time debugging an agent. While I was discussing with friends how heavily prompted most assistants are, one of them made an interesting observation: this form of interaction is coming full circle and starting to resemble programming, with a few caveats:
- You program in something akin to Legalese, which is ambiguous and difficult to debug.
- You literally have to tell the model not to do certain things (whether it does so or not is another question).
- It is not deterministic.
- It doesn’t necessarily become easier as the complexity of the task increases.
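The non-determinism in the list above comes largely from sampling with a temperature above zero. Here is a toy, self-contained sketch (not any vendor's API — the function name and the toy logits are made up for illustration) of temperature-scaled softmax sampling, the mechanism that lets the same prompt produce different outputs from run to run:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index by temperature-scaled softmax sampling.

    Higher temperature flattens the distribution, so repeated calls
    can pick different tokens; a near-zero temperature is effectively
    greedy argmax, i.e. deterministic.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs)[0]

# Toy scores for three candidate next tokens (hypothetical values).
logits = [2.0, 1.5, 0.5]
rng = random.Random(0)

# At temperature 1.0, repeated draws spread across the candidates.
samples = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(50)]
print(sorted(set(samples)))
```

The same function with a temperature close to zero collapses to always returning the highest-scoring token, which is why "temperature 0" settings are often described as (mostly) deterministic.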
This is all fine as long as you understand the usefulness and the limitations of these tools. However, I see a new generation of “prompters” acquiring a completely different skill set while somehow forgetting, or bypassing, nearly a century of cumulative wisdom in computer science and software engineering. What will happen with the critical systems of the future? Only time will tell.