Our Programmes

Five programmes, each complete in itself.

Each programme is built around a coherent domain of knowledge and a specific set of practical skills. They can be taken individually or in sequence, depending on where a learner is starting from and where they need to go.

Personal Knowledge Management

The Personal Knowledge Management programme teaches learners to build and operate their own language model system, rather than relying on external tools. The focus is practical: by the end of the programme, each participant has a working personal or team LLM that can ingest documents, retrieve relevant information, and support structured workflows.

The system is built from first principles using an open stack. Learners deploy a Mistral 7B model via llama.cpp, create embeddings, and store them in a vector database such as Qdrant. They design ingestion pipelines that transform raw documents into structured, searchable representations, paying close attention to chunking strategy, metadata, and identifier management. From there, they implement retrieval-augmented generation (RAG), constructing prompts that combine retrieved context with user queries to produce grounded responses.
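The retrieval step can be sketched in miniature. This is a toy illustration only: a sparse bag-of-words vector stands in for a real embedding model, and a plain in-memory list stands in for a vector database such as Qdrant; the document IDs and texts are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> dict[str, float]:
    # Toy stand-in for a real embedding model: an L2-normalised sparse
    # bag-of-words, so a dot product gives cosine similarity.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {tok: c / norm for tok, c in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

class VectorStore:
    # In-memory stand-in for a vector database such as Qdrant.
    def __init__(self):
        self.items = []  # (id, vector, original text)

    def upsert(self, doc_id: str, text: str) -> None:
        self.items.append((doc_id, embed(text), text))

    def search(self, query: str, k: int = 2):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [(doc_id, text) for doc_id, _, text in ranked[:k]]

store = VectorStore()
store.upsert("doc-1", "invoices are archived monthly on the finance share")
store.upsert("doc-2", "the staging server restarts every sunday at midnight")

hits = store.search("when does the staging server restart", k=1)
# A RAG prompt combines the retrieved chunk with the user's question.
prompt = ("Answer using only this context:\n" + hits[0][1]
          + "\n\nQuestion: when does the staging server restart?")
```

A production system replaces `embed` with a dense embedding model and `VectorStore` with a real database, but the shape of the flow, embed, store, search, assemble a grounded prompt, is the same.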

Beyond basic question answering, the programme introduces orchestration. Using a lightweight API layer, learners build workflows that combine retrieval with computation or external tools. Extensions such as reranking, knowledge graphs, and simple agent loops are introduced where they add value, alongside methods for evaluating retrieval quality and response accuracy.
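The orchestration idea can be shown with a deliberately tiny dispatch loop. Everything here is hypothetical: the keyword-based `route` function stands in for a model-driven tool choice, and the retrieval step is a stubbed dictionary lookup rather than a vector search.

```python
def calculator(expression: str) -> str:
    # Restricted arithmetic evaluator: digits and basic operators only.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))

def retrieve(query: str) -> str:
    # Stand-in for a vector-store lookup.
    notes = {"backup": "backups run nightly at 02:00"}
    return next((text for key, text in notes.items() if key in query), "no match")

def route(query: str) -> str:
    # Toy router: a real system would let the model decide which tool to call.
    return "calculator" if any(ch.isdigit() for ch in query) else "retrieve"

def run(query: str) -> str:
    return calculator(query) if route(query) == "calculator" else retrieve(query)

print(run("12 * (3 + 4)"))        # dispatched to the calculator
print(run("when do backups run")) # dispatched to retrieval
```

A simple agent loop extends this pattern by letting the model inspect a tool's output and decide whether to call another tool before answering.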

Throughout, the emphasis is on understanding system behaviour. Learners examine common failure modes—poor retrieval, hallucination, silent degradation—and develop strategies to diagnose and correct them. All components are treated as replaceable, ensuring the system can evolve as requirements change.

The result is a fully owned, extensible knowledge system and a clear understanding of how and why it works.

Data Foundations

This programme provides a comprehensive introduction to data, computing, and analytical thinking through the practical medium of spreadsheet software. It is designed to establish the core skills required for business analysis while preparing learners for progression into more advanced scientific and modelling programmes.

The content covers the full lifecycle of working with data: acquisition, structuring, cleaning, manipulation, and reporting. Learners develop fluency in tabular data operations, formula design, and reproducible workflows within their organisation's existing spreadsheet environment. From there, the programme introduces charting and visualisation, focusing on how to represent information clearly and support decision-making.

Teaching is anchored in a corporate finance workflow. Financial statements, cash flow modelling, and performance analysis provide the context through which learners engage with data. This grounds abstract concepts—such as aggregation, transformation, and feature construction—in real business problems. Alongside this, learners are introduced to foundational ideas in computing, including logic, structure, and the disciplined organisation of analytical work.

The programme also introduces basic predictive methods, particularly in the context of financial time series. Learners explore simple forecasting approaches, understand their assumptions, and learn how to evaluate their usefulness in decision-making settings.
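One such simple approach is a moving-average forecast evaluated against a naive baseline. The sketch below uses invented monthly figures; the point is the method and its evaluation, not the numbers.

```python
def moving_average_forecast(series, window):
    # One-step-ahead forecast: the mean of the last `window` observations.
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series))]

def mae(actual, predicted):
    # Mean absolute error: average size of the forecast miss.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

revenue = [100, 102, 101, 105, 107, 110, 108, 112]  # illustrative monthly figures
preds = moving_average_forecast(revenue, window=3)
naive = revenue[2:-1]  # baseline: last observed value carried forward

print(mae(revenue[3:], preds), mae(revenue[3:], naive))
```

Comparing against the naive baseline makes the method's assumption explicit: a moving average only helps when recent history is genuinely informative about the next period.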

Throughout, the emphasis is on principles rather than tools. By the end of the programme, learners are able to work confidently with data, construct meaningful analyses, and support business decisions with clear, well-structured outputs.

Data Science and Data Engineering

This programme provides a rigorous introduction to data science and data engineering, centred on the systematic use of the Python scientific computing ecosystem. It develops the theoretical and practical foundations required to design, implement, and evaluate predictive models within well-structured analytical workflows.

Learners work within a Python environment and engage directly with core libraries used for modelling and data processing. The programme covers the full modelling lifecycle: data ingestion, transformation, and feature engineering; construction of reproducible pipelines; and the disciplined application of training, validation, and testing procedures. Particular emphasis is placed on the correct use of train–test splits, cross-validation, and performance metrics appropriate to the problem context.
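The core discipline of the split can be sketched in a few lines of NumPy. The data here is synthetic, generated with known coefficients so the fit can be checked; the essential point is that the held-out fold never touches the fitting step.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Shuffle, then hold out 25% for testing; the test fold is never used for fitting.
idx = rng.permutation(len(X))
split = int(0.75 * len(X))
train, test = idx[:split], idx[split:]

# Fit ordinary least squares on the training fold only.
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

pred = X[test] @ coef
rmse = float(np.sqrt(np.mean((y[test] - pred) ** 2)))
print(coef.round(2), round(rmse, 3))
```

Cross-validation generalises this by rotating which fold is held out, so every observation is used for both fitting and evaluation, but never at the same time.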

Model development is treated as a process of statistical estimation and generalisation. Learners implement and compare a range of supervised learning methods for both regression and classification, examining their assumptions, limitations, and behaviour under different data conditions. Attention is given to issues such as bias–variance trade-offs, overfitting, data leakage, and the impact of feature construction on model performance.

In parallel, the programme introduces core data engineering concepts required to support analytical systems. Learners design and build pipelines that integrate preprocessing, modelling, and evaluation into coherent, repeatable processes, ensuring consistency between development and deployment settings.

Throughout, libraries are treated as formal implementations of underlying principles rather than as tools to be applied heuristically. By the end of the programme, learners are able to construct robust machine learning workflows, justify methodological choices, and critically evaluate model performance.

Advanced Statistical Modelling

This programme provides an advanced and rigorous treatment of statistical modelling grounded in the Bayesian approach. It is concerned not with the routine application of methods, but with the construction of coherent explanations of real-world processes. The emphasis throughout is on understanding how models encode assumptions, how those assumptions relate to data-generating mechanisms, and how inference supports reasoning under uncertainty.

The programme begins from first principles. Statistical models are introduced as formal representations of hypotheses about how data arises. Learners develop a working understanding of priors, likelihoods, and posterior distributions through simulation and iterative model building. Inference is framed as the comparison of competing models, with attention given to how different assumptions lead to different conclusions.
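The prior–likelihood–posterior mechanics can be demonstrated with a grid approximation for the simplest case: estimating a coin's heads probability. The data (7 heads in 10 tosses) is invented for the example; the analytic answer is known, which makes the approximation checkable.

```python
import numpy as np

# Grid approximation of the posterior for a coin's heads probability p,
# after observing 7 heads in 10 tosses, under a uniform prior.
grid = np.linspace(0, 1, 1001)
prior = np.ones_like(grid)                # flat prior over p
likelihood = grid**7 * (1 - grid)**3      # binomial kernel for 7 heads, 3 tails
unnorm = prior * likelihood
posterior = unnorm / unnorm.sum()         # normalise over the grid

posterior_mean = float((grid * posterior).sum())
print(round(posterior_mean, 3))  # analytic answer: Beta(8, 4) mean = 8/12
```

Changing the prior line and rerunning shows directly how different assumptions lead to different conclusions, which is the habit of mind the programme builds.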

A central component of the programme is causal reasoning. Learners are introduced to directed acyclic graphs (DAGs) as a formal language for representing causal structure. These are used to reason about confounding, mediation, and selection effects, and to distinguish between correlation and causation in a precise and operational way. The use of DAGs is essential for designing statistical investigations, particularly where intervention or decision-making is involved.
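Confounding can be exhibited by simulation from a known DAG. In the synthetic setup below, Z causes both X and Y and there is no direct X → Y effect, so the true causal effect of X on Y is zero; the naive regression is fooled and the adjusted one is not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# DAG: Z -> X, Z -> Y, and no arrow X -> Y (true causal effect is zero).
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

# Naive analysis: regress Y on X alone and read off the slope.
naive = float(np.polyfit(x, y, 1)[0])

# Adjusted analysis: regress Y on both X and the confounder Z.
A = np.column_stack([x, z, np.ones(n)])
adjusted = float(np.linalg.lstsq(A, y, rcond=None)[0][0])

print(round(naive, 2), round(adjusted, 2))  # naive is biased; adjusted is near 0
```

Because the data-generating mechanism is known, the DAG tells us in advance which analysis is valid, which is exactly how DAGs are used to design real investigations.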

Building on this foundation, the programme progresses to more expressive model classes. Learners work with generalised linear models and hierarchical (multilevel) models, examining how partial pooling improves estimation and how structure reflects real-world dependencies. Computational methods for approximating posterior distributions are introduced as necessary tools for working with complex models.
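The effect of partial pooling can be sketched with precision weighting under known variances, a simplification of a full hierarchical fit, with group sizes and variance values invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Three groups sharing a common mean of 10, observed at very different sample sizes.
sizes = [3, 30, 300]
data = [rng.normal(10.0, 2.0, size=n) for n in sizes]
grand_mean = float(np.concatenate(data).mean())

sigma2 = 4.0   # within-group variance (treated as known for this sketch)
tau2 = 1.0     # assumed between-group variance

weights, pooled = [], []
for n, sample in zip(sizes, data):
    raw = float(sample.mean())
    # Precision weighting: small groups are shrunk hard toward the grand mean,
    # large groups keep their own estimate almost unchanged.
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    weights.append(w)
    pooled.append(w * raw + (1 - w) * grand_mean)
    print(n, round(raw, 2), round(pooled[-1], 2), round(w, 2))
```

The noisy three-observation group borrows strength from the others; the three-hundred-observation group barely moves. In a full hierarchical model the variances themselves are estimated rather than fixed, but the shrinkage behaviour is the same.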

Computation is used as a means of thinking. Learners construct synthetic data, perform prior and posterior predictive checks, and use these techniques to diagnose model behaviour and assess adequacy. Modelling is treated as an iterative process of proposal, evaluation, and refinement.

By the end of the programme, learners are able to formulate, implement, and evaluate probabilistic models, reason explicitly about causation, and apply statistical methods in a disciplined and transparent manner.

Nonlinear Systems

This programme provides an advanced study of nonlinear systems, focusing on the behaviour of complex, dynamic processes that cannot be adequately understood through linear models. It addresses the limitations of approaches that assume stability and proportionality, and instead examines systems characterised by feedback, interaction, and emergence.

The programme begins with the foundations of dynamical systems. Learners study differential equations, equilibrium states, and stability, before progressing to phase space analysis as a primary tool for understanding system behaviour. Systems are analysed both geometrically and analytically, with attention given to trajectories and long-term dynamics.
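A standard first example is a damped pendulum in the small-angle regime, integrated numerically so its trajectory through phase space can be followed. The parameter values are chosen for illustration; the fixed-step RK4 integrator below is a common classroom workhorse, not a claim about any particular course implementation.

```python
import math

# Damped pendulum, small-angle: theta'' = -omega0^2 * theta - gamma * theta'.
# State is (theta, angular velocity); integrated with fixed-step RK4.
def deriv(state, omega0=1.0, gamma=0.5):
    theta, v = state
    return (v, -omega0**2 * theta - gamma * v)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv((state[0] + dt/2*k1[0], state[1] + dt/2*k1[1]))
    k3 = deriv((state[0] + dt/2*k2[0], state[1] + dt/2*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state = (1.0, 0.0)        # released from rest at theta = 1 radian
for _ in range(2000):     # integrate 20 time units with dt = 0.01
    state = rk4_step(state, 0.01)

# The phase-space trajectory spirals into the stable equilibrium at the origin.
print(round(math.hypot(*state), 4))
```

Plotting theta against velocity over the run produces the inward spiral that makes "stable equilibrium" a geometric statement rather than an abstract one.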

From this basis, the programme introduces bifurcation theory: how small changes in parameters can lead to qualitative shifts in system behaviour. Learners examine transitions between regimes and develop the ability to recognise these patterns in applied settings. A central concept is extreme sensitivity to initial conditions, where small differences at the outset can lead to radically different outcomes, limiting predictability even in deterministic systems.
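Sensitivity to initial conditions is easy to demonstrate with the logistic map in its chaotic regime. The starting values below are arbitrary; what matters is that a difference of one part in ten billion becomes an order-one difference within a few dozen iterations.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4), iterated from
# two starting points that differ by one part in ten billion.
r = 4.0
x, y = 0.2, 0.2 + 1e-10
max_gap = 0.0
for step in range(50):
    x, y = r * x * (1 - x), r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# The gap grows roughly geometrically until the two trajectories decorrelate.
print(max_gap)
```

The system is fully deterministic, yet any uncertainty in the starting state, however small, destroys long-range predictability, which is precisely the limit described above.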

The programme also explores mechanisms such as feedback loops, self-reinforcement, and network effects. Learners analyse how interactions between components can amplify signals, create lock-in, and generate cascading behaviour across systems. These ideas are essential for understanding real-world phenomena in which outcomes emerge from interconnected processes rather than isolated variables.

Chaos is introduced as a structured form of apparent randomness. Learners study systems that exhibit irregular behaviour and use concepts such as attractors and phase portraits to identify underlying order. Computation and visualisation play a central role, with learners constructing and exploring systems through simulation and numerical methods.
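The Lorenz system is the canonical example of such structured irregularity. The sketch below uses forward Euler with a small step and the standard textbook parameters; a teaching implementation would typically plot the trajectory to reveal the butterfly-shaped attractor.

```python
# Lorenz system: irregular trajectories confined to a bounded strange attractor.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8/3):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 1.0, 1.0, 1.0
peak = 0.0
for _ in range(20000):          # 20 time units of forward Euler
    x, y, z = lorenz_step(x, y, z)
    peak = max(peak, abs(x))

# The motion never repeats, yet |x| stays within a fixed band: order inside chaos.
print(round(peak, 1))
```

That the trajectory is simultaneously unpredictable step-to-step and globally confined is exactly the distinction between randomness and chaos that the programme develops.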

By the end of the programme, learners are able to analyse nonlinear systems, identify regimes of stability and instability, and reason about emergence, feedback, and unpredictability in complex environments.