Today, thanks to a NetworkChuck video, I discovered OpenWebUI and how easy it is to set up a local LLM chat assistant. In particular, the ability to upload documents and use them as context for chats really caught my interest. So now my question is: let's say I've uploaded 10 different documents to OpenWebUI. Is there a way to ask llama3 which of the uploaded documents contains a certain piece of information (without having to explicitly tag all the documents)? And if not, is something like this possible with a different local LLM combination?
Only if your model has a context window large enough to hold all the documents' content would you be able to do something like that.
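As a rough sanity check, you can estimate whether your documents would even fit in one context window. A minimal sketch, assuming plain-text files and the common rule of thumb of roughly 4 characters per token (the real count depends on the model's tokenizer); the 8192-token limit is llama3's default context window, and the file list is hypothetical:

```python
import os

# Rough rule of thumb: ~4 characters per token for English text.
# The exact count depends on the model's tokenizer.
CHARS_PER_TOKEN = 4
CONTEXT_LIMIT = 8192  # llama3's default context window

def estimate_tokens(path: str) -> int:
    """Estimate the token count of a plain-text file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return len(f.read()) // CHARS_PER_TOKEN

# Hypothetical document list -- replace with your own files.
docs = ["doc1.txt", "doc2.txt", "doc3.txt"]
total = sum(estimate_tokens(d) for d in docs)

print(f"Estimated total: {total} tokens")
if total > CONTEXT_LIMIT:
    print("Won't fit in one context window; you'd need retrieval (RAG) instead.")
```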
And where do I find out how much token context my LLM has?
It probably says somewhere on the page where you downloaded the model. It's also in the model's metadata; I forget exactly where it's displayed. Maybe in the terminal window.
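If you're running the model through Ollama, the metadata is queryable. A minimal sketch, assuming a default local Ollama install and its documented `/api/show` endpoint; the exact key names (e.g. `llama.context_length`) vary by model family and Ollama version, so treat them as assumptions:

```python
import json
import urllib.request

# Equivalent CLI: `ollama show llama3` prints model details in the terminal.
# Assumes a local Ollama server on the default port 11434.
req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"model": "llama3"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

# The context window usually appears under model_info as
# "<family>.context_length" -- key names may differ by version.
for key, value in info.get("model_info", {}).items():
    if key.endswith("context_length"):
        print(key, "=", value)
```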
Things you should know:
Llama 3 is probably not the right base model for this task. Maybe try Phi-3 or one of Cohere's models.
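For what the OP actually asked (finding which document contains a piece of information), a retrieval setup sidesteps the context-window limit: you search embeddings of the documents and only feed the best matches to the chat model, which is roughly what OpenWebUI's document feature does internally. A minimal sketch, assuming the sentence-transformers package; the embedding model name and the document contents are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model (an assumption -- any embedder works).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical uploaded documents, keyed by filename.
docs = {
    "report.txt": "Quarterly revenue grew 12% driven by the new product line.",
    "notes.txt": "Meeting notes: migrate the backend to Postgres by June.",
    "manual.txt": "To reset the device, hold the power button for ten seconds.",
}

names = list(docs)
doc_embeddings = embedder.encode([docs[n] for n in names], convert_to_tensor=True)

def which_document(question: str) -> str:
    """Return the filename whose content best matches the question."""
    q = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q, doc_embeddings)[0]
    return names[int(scores.argmax())]

print(which_document("How do I reset the device?"))  # -> manual.txt
```

In practice you'd split each document into chunks and keep the filename as metadata on every chunk; that per-chunk metadata is how RAG pipelines report which source a piece of information came from.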