Hi, I'm Jordan!

Google Scholar / Twitter / GitHub / LinkedIn

I'm a CS PhD student at Stanford, advised by Azalia Mirhoseini (Scaling Intelligence Lab) and Chris Ré (Hazy Research). My recent research has focused on improving LLM capabilities by scaling test-time compute (e.g. Large Language Monkeys, CodeMonkeys) and designing systems to accelerate these models (e.g. Megakernels, Tokasaurus).

I completed my undergrad at the University of Waterloo, studying software engineering and optimization. As part of UWaterloo's co-op program, I alternated between four months of classes and four months of full-time internships over the course of five years. I've been lucky to work with some really great people across several different areas of AI (listed here in mostly reverse chronological order):

  • I worked in the NVIDIA Toronto AI Lab researching language control for physics-based character animation, advised by Jason Peng and Sanja Fidler.
  • I interned twice at Facebook: once on data pipelines for machine translation, and once on neural image compression, where I helped create an open-source library for compression research.
  • I worked on sparse training algorithms for transformer-based language models at Cerebras.
  • At Groq, I helped build a prototype compiler that converts computational graphs into instructions for the company's ASIC.

Before my undergrad, I spent several summers as a biochemistry student researcher in the Kay lab at the University of Toronto, studying the ClpP protease.

You can email me at: (my first name)(my last name)@gmail.com