The Personal Knowledge Management programme teaches learners to build and operate their own language model system, rather than relying on external tools. The focus is practical: by the end of the programme, each participant has a working personal or team LLM that can ingest documents, retrieve relevant information, and support structured workflows.
The system is built from first principles using an open stack. Learners deploy a Mistral 7B model via llama.cpp, create embeddings, and store them in a vector database such as Qdrant. They design ingestion pipelines that transform raw documents into structured, searchable representations, paying close attention to chunking strategy, metadata, and identifier management. From there, they implement retrieval-augmented generation (RAG), constructing prompts that combine retrieved context with user queries to produce grounded responses.
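The ingestion-to-RAG path described above can be sketched in a few functions. The example below is illustrative, not the programme's actual code: it uses a toy hash-based stand-in for the embedding model so it runs without llama.cpp or Qdrant, and the `chunk`, `embed`, `retrieve`, and `build_prompt` names are assumptions. In the real stack, `embed()` would call the model and the chunk store would be a Qdrant collection.

```python
# Sketch of the ingestion pipeline and RAG prompt assembly.
# The embed() function is a toy stand-in; a real system would call
# an embedding model served via llama.cpp and store vectors in Qdrant.
import hashlib
import math

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[dict]:
    """Split text into overlapping character windows, keeping a stable
    identifier and offset metadata for each chunk (the 'chunking
    strategy, metadata, and identifier management' step)."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        body = text[start:start + size]
        chunks.append({
            "id": hashlib.sha1(f"{start}:{body}".encode()).hexdigest()[:12],
            "start": start,
            "text": body,
        })
    return chunks

def embed(text: str, dim: int = 32) -> list[float]:
    """Toy embedding: a deterministic bag-of-words vector built from
    token hashes, normalised to unit length."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, chunks: list[dict], k: int = 2) -> list[dict]:
    """Rank chunks by cosine similarity to the query embedding and
    return the top k."""
    q = embed(query)
    scored = sorted(
        chunks,
        key=lambda c: -sum(a * b for a, b in zip(q, embed(c["text"]))),
    )
    return scored[:k]

def build_prompt(query: str, context: list[dict]) -> str:
    """Combine retrieved context with the user query (the RAG step)."""
    ctx = "\n".join(f"[{c['id']}] {c['text']}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

doc = "Qdrant stores embeddings. llama.cpp runs the model locally. " * 10
query = "Where are embeddings stored?"
prompt = build_prompt(query, retrieve(query, chunk(doc)))
```

The overlap between adjacent chunks is a common design choice: it reduces the chance that a fact is split across a chunk boundary and lost to retrieval.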
Beyond basic question answering, the programme introduces orchestration. Using a lightweight API layer, learners build workflows that combine retrieval with computation or external tools. Extensions such as reranking, knowledge graphs, and simple agent loops are introduced where they add value, alongside methods for evaluating retrieval quality and response accuracy.
Throughout, the emphasis is on understanding system behaviour. Learners examine common failure modes—poor retrieval, hallucination, silent degradation—and develop strategies to diagnose and correct them. All components are treated as replaceable, ensuring the system can evolve as requirements change.
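One concrete diagnostic strategy of the kind described above is measuring retrieval quality with recall@k against a small hand-labelled gold set, so that silent degradation surfaces as a falling score rather than going unnoticed. The gold-set format and retriever signature below are assumptions for illustration.

```python
# Sketch of a recall@k check for detecting poor or degrading retrieval.
from typing import Callable

def recall_at_k(
    retriever: Callable[[str], list[str]],
    gold: dict[str, set[str]],
    k: int = 3,
) -> float:
    """Fraction of gold queries for which at least one expected chunk
    id appears among the top-k retrieved ids."""
    hits = 0
    for query, expected_ids in gold.items():
        retrieved = set(retriever(query)[:k])
        if retrieved & expected_ids:
            hits += 1
    return hits / len(gold)

# Toy retriever: correct on one query, wrong on the other.
def toy_retriever(query: str) -> list[str]:
    return ["c1", "c2"] if "alpha" in query else ["c9"]

gold = {"about alpha": {"c1"}, "about beta": {"c2"}}
score = recall_at_k(toy_retriever, gold, k=2)  # one of two queries hit
```

Tracking this score after every change to the chunking strategy or embedding model turns "silent degradation" into a visible regression.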
The result is a fully owned, extensible knowledge system and a clear understanding of how and why it works.