Array algebra refers to the operations we can perform on rectangular arrays (“tensors”) of numbers, which are the fundamental data structures used in deep learning, and more generally in data science and data-oriented programming. This talk, which pairs with the array algebra practical, gives a whirlwind tour of what arrays are and helps organize the operations we typically perform on them. We’ll look at the natural sequence of n-arrays, where scalars / numbers = 0-arrays, vectors / tuples = 1-arrays, matrices = 2-arrays, etc. We’ll review concrete examples of different n-arrays and what the axes of these arrays mean. We’ll triangulate among several different viewpoints on n-arrays to build intuition for the core array operations: generalized dot products, aggregation, reshaping, transposing, slicing, stacking, mapping, etc. We’ll also try to equip you with helpful ways of thinking about higher-order arrays and their axes, one of the more challenging aspects of coding with complex neural network architectures.
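To make the n-array sequence and the named operations concrete, here is a minimal NumPy sketch (NumPy is an assumption; the talk's own examples may use a different array library). Each line illustrates one of the operations listed above, with the resulting shape noted in a comment.

```python
import numpy as np

# The n-array hierarchy: each step up adds one axis.
scalar = np.array(3.0)                     # 0-array: shape ()
vector = np.array([1.0, 2.0, 3.0])         # 1-array: shape (3,)
matrix = np.arange(6.0).reshape(2, 3)      # 2-array: shape (2, 3)
batch = np.arange(24.0).reshape(4, 2, 3)   # 3-array, e.g. a batch of 4 matrices

# The core operations named in the abstract:
dot = vector @ vector                   # generalized dot product: contracts an axis
row_sums = matrix.sum(axis=1)           # aggregation along axis 1 -> shape (2,)
flat = matrix.reshape(-1)               # reshaping -> shape (6,)
swapped = matrix.T                      # transposing -> shape (3, 2)
row = matrix[0]                         # slicing removes an axis -> shape (3,)
stacked = np.stack([matrix, matrix])    # stacking adds a new axis -> shape (2, 2, 3)
mapped = np.sqrt(matrix)                # mapping a function elementwise -> shape (2, 3)
```

A useful habit the sketch encourages: track how each operation changes the *shape* of its input, since keeping axes straight is exactly the skill the talk aims to build.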