LLMs and prompting
I have been interested in AI for quite some years. In 2018 I did a nano-degree in AI programming and learnt many things about ML, neural networks and other AI topics that were all the trend then. But in the last 12 months, the progress on LLMs has been so deafening that I think it would be irresponsible for anyone working on tech not to learn deeply about them. It seems to have devoured the whole AI field, and everything is now about LLMs. This craze will pass too, but I think it is dangerously naive to ignore that they will eradicate, transform and create whole industries.
When I am trying to learn a new field, I try to be as hands-on as possible and start from the very basics, so I spent a few nights this week doing the Spanish translation of one of the best prompt engineering guides out there: https://www.promptingguide.ai/es. I learned a lot about LLMs and how to use them. Here are some insights.
-
The amount of literature to go through to get hardcore, low-level expertise on the field is mindblowing and growing at hilarious speed (cf. https://www.promptingguide.ai/papers), but the high-level, basic concepts involved in correctly/proficiently dealing with the LLMs can be grasped in a few days.
-
After translating a few sections like few-shot and CoT prompting, I now have the basic vocabulary to understand and recognise the patterns behind recurrent sentences like “Let’s think step by step”, which I have seen in so many screenshots of jaw-dropping experiments.
-
Some techniques are so self-referential and naturally described that the fact that they work at all feels almost like Vudu. Are you getting problems trying to make sense of some logic problem with your LLM? Give it some trivia “knowledge” in other fields, make it generate knowledge about your topic and then feed that very knowledge the LLM has generated to itself to make new, better, more correct predictions. Yes, I know it is all data, stats and tokens, but it DOES feel like Vudu.
-
The number of attacks and abuse these models will suffer in the wild is mindblowing AND much more tricky and challenging to fix than anything I have seen in web security. Just check https://www.promptingguide.ai/risks/adversarial for a quick overview of how twisted and subtle some attacks are. As a complete newbie, my guts are telling me that to prevent this type of attack, some compromise should be made regarding flexibility in the prompt. Some initial thoughts on defending against them resonate with my trad developer self.
-
Selecting the best model (GPT4, LLaMA, etc.) and technique (ReAct, DIrectional Stimulus) combination for your AI-based service will take much curation and, for lack of a better word, taste on what can work best. As we accept less deterministic results in software, we increase the space based on what we feel works better when confronted with a wrong, inexact prediction. That is probably the most revealing thing I have discovered during the translation. As a person used to getting to a point where the test suite just goes green, and you have “made it”, this whole thing looks more like “arrive at something you think is good enough and move on”.
My next step is to start a port of LangChain to Ruby. First and foremost, because the project’s two main principles (connect the LLMs to other data sources and allow the LLM to interact with its environment) make much sense to me. As a guy who has been creating software applications for a few decades, I firmly believe that just calling an API, however good, will not be enough. You need the glue, you need the local, behind-doors knowledge, you need the versatility to add your business-specific microcosm to the cocktail, and to do that, you need something like LangChain.
Secondly, trying to implement the port will teach me how to programmatically repurpose all the black magic the LLMs can do in a product-oriented way, focusing on creating applications and services and not just “chatting” with the model for fun to do some text-based tasks. I have seen some super interesting things following that path and the ReAct technique.
As a developer, all this generative wave can feel very threatening, but, as usually happens in technology, it will just become more and more prevalent in our society so better get at ease with it and its risks as soon as possible.