Background Readings

Here are a few resources for background reading, highly recommended for students from both computational and biological backgrounds:

  1. Artificial Intelligence and Brain Research – Neural Networks, Deep Learning and the Future of Cognition
  2. Stanford CS 229: Machine Learning
  3. Molecular Biology for Computer Scientists

More on Large Language Models (LLMs) and Transformer architectures:

  1. Transformers, the tech behind LLMs | Deep Learning Chapter 5
  2. Build a Large Language Model (From Scratch)
  3. Foundations of Large Language Models
    • https://arxiv.org/abs/2501.09223
    • A full book on the foundational concepts of LLMs (with more of the underlying math), with five chapters exploring key areas: pre-training, generative models, prompting, alignment, and inference.
  4. A Mathematical Framework for Transformer Circuits
  5. A few other (blog post) links: https://jalammar.github.io/illustrated-transformer/, https://e2eml.school/transformers.html.
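For computational intuition alongside the readings above, here is a minimal sketch of scaled dot-product attention, the core operation the transformer resources explain. The toy dimensions (4 tokens, width 8) are arbitrary illustrative choices, not from any of the readings; a real transformer adds learned projections, multiple heads, and masking.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scores[i, j]: how much query i attends to key j,
    # scaled by sqrt(d_k) to keep logits in a stable range
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, embedding dim 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # one output vector per token: (4, 8)
```

Each output row is a convex combination of the value rows, with mixing weights determined by query–key similarity; that single idea is what the chapters and blog posts above build on.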

More on Reinforcement Learning (RL):

  1. Reinforcement Learning 101
  2. UC Berkeley CS 285: Deep Reinforcement Learning course, Fall 2023
  3. Reinforcement Learning: An Introduction (textbook by Sutton and Barto)
  4. Key papers in Deep RL
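As a companion to the RL resources above, here is a minimal tabular Q-learning sketch, the update rule introduced early in Sutton and Barto. The environment is a hypothetical five-state chain invented for illustration (reach the rightmost state for reward 1); everything here is a toy, not taken from any of the readings.

```python
import random

N_STATES = 5        # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]  # move left or right along the chain

def step(s, a):
    # deterministic chain dynamics, clipped at the ends
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # step size, discount, exploration rate

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state action
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# the learned greedy policy should move right from every non-terminal state
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

The single line computing `target` is the heart of the method: it blends the observed reward with a discounted estimate of future value, which is the temporal-difference idea the textbook and the CS 285 lectures develop in depth.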

More on Biology, Neuroscience, and Evolution:

  1. Cognitive Biology: Dealing with Information from Bacteria to Minds

CS 591 BAI: Biologically Plausible AI
Email: gribkov2@illinois.edu