LangChain
Introduction
Large Language Models (LLMs) are on the rise today because of how accurately and precisely they can generate output from a user's input. They can assist with word auto-completion, coding, and many other tasks. Security is now a concern, as LLM systems expose new attack vectors and exploits.
Communication with LLMs
The most common way to interact with a model is through its web interface (e.g. ChatGPT). The next is calling the model's API endpoints directly, and lastly, using the LangChain library on top of the API for finer control over queries and responses.
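A minimal sketch of the latter two approaches, assuming the `requests` and `langchain-openai` packages are installed and `OPENAI_API_KEY` is set in the environment; the model name and prompts are illustrative:

```python
import os

import requests
from langchain_openai import ChatOpenAI

# Option 1: call the provider's API endpoint directly.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What is LangChain?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])

# Option 2: wrap the same API with LangChain for more control over
# the query/response lifecycle (prompts, callbacks, memory, etc.).
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
print(llm.invoke("What is LangChain?").content)
```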
Importance of LangChain
LLMs are a new ecosystem, much like the rapid rise of blockchains and smart contracts on Ethereum (and Ethereum-based networks). LangChain has become popular tooling for developers to build prompt chaining, logging, callbacks, memory, and connections to different data sources. One of its most important features is the ability to integrate with different LLM providers.
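A minimal sketch of prompt chaining and provider swapping, assuming the `langchain-openai` (and optionally `langchain-anthropic`) packages are installed; model names are illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic

# Chat models from different providers share the same interface,
# so swapping providers is a one-line change.
llm = ChatOpenAI(model="gpt-3.5-turbo")
# llm = ChatAnthropic(model="claude-3-haiku-20240307")

# A simple prompt chain: the template's output feeds the model.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm
print(chain.invoke({"text": "LangChain is a framework for LLM apps."}).content)
```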
Agents in LangChain
Agents use an LLM as a reasoning engine to decide which actions to take, and in what order, to execute a given task. They are especially useful for opening up development options and creativity, depending on what the LLM provider supports, and help automate question answering, querying tabular data, summarization, and evaluation.
Agents can also be extended with additional tools to perform web scraping, information retrieval, data processing, applying machine learning algorithms, and integration with or customization of proprietary systems.
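A minimal agent sketch. The agent API has changed across LangChain versions; this assumes the classic `initialize_agent` interface, with the built-in `llm-math` calculator standing in for the richer tools described above:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# "llm-math" is a built-in calculator tool; web-scraping, retrieval,
# or custom tools can be added to the same list.
tools = load_tools(["llm-math"], llm=llm)

# A ReAct-style agent: the LLM reasons about which tool to call next.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 15% of 2348?")
```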
Memory in LLMs
Ways to retain context across a user's subsequent prompts include passing previous messages as context or using a vector database (embedding text into numerical vectors). LangChain offers a memory module that eases context passing during development and supports different LLMs, simplifying chat history management.
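A minimal sketch of the memory module, assuming the `langchain` and `langchain-openai` packages; the prompts are illustrative:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI

conversation = ConversationChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    memory=ConversationBufferMemory(),  # replays prior turns as context
)
conversation.predict(input="Hi, my name is Alice.")
# The second call can answer because memory supplies the earlier turn.
print(conversation.predict(input="What is my name?"))
```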
ChatGPT rolled out memory on 13 Feb 2024, which helps manage context passing by carrying chats or specific prompts forward as context for the user's next few queries. The benefit of memory is that instead of passing the full chat history as context, only specific messages are passed to retain enough context, ultimately reducing token usage.
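The same token-saving idea can be sketched in LangChain with a windowed memory that keeps only recent turns instead of the full history; `k=2` here is an illustrative choice:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
from langchain_openai import ChatOpenAI

# Keep only the last k exchanges rather than the full chat history,
# retaining enough context while reducing token usage.
conversation = ConversationChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    memory=ConversationBufferWindowMemory(k=2),
)
```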
Interview Questions
What are the various methods to communicate with LLMs?
What are the benefits of using LangChain over LLM provider APIs?
How can context passing be managed when querying LLMs?
Author
References
Future Todos