Feedback on the SparseTIR Project

DONE Talk @ Google MLIR Team

A collection of useful questions:

  1. TACO can also decompose computations and schedule each of them, so what's the advantage of SparseTIR?
  2. Does SparseTIR support reordering dimensions (such as [1, 0] in the MLIR Sparse Dialect)?
  3. CPU results, and support for 2:4 sparse tensor cores.
  4. Preprocessing overhead: how well does SparseTIR perform when the sparsity pattern is dynamic?

DONE Qualifying Project Presentation @ UW

Feedback from TQ:

Some notes on your talk (for future refs):

  • The three-stage compilation part can be a bit too detailed; it would be useful to have a "map" (outline) that you keep coming back to.
  • Think about high-level punchlines (composability) and return to them throughout your talk.
  • Try to raise your voice on punchlines ("here is the key takeaway! …"); this helps make your talk more animated.
  • Good job bringing examples of transformations.
  • Avoid discussing technical details in the experiments section; focus on the results. Show the results first, then briefly mention what you did.
  • Come back to your key takeaway (composability) in the experiments.

At a high level: think about your overall flow and always try to bring the audience back to it. Try to be animated and emphasize the punchlines.

Q&A:

  • Try to repeat the question from the audience (so you can answer according to your own interpretation, instead of trying to guess what was asked).

Other questions:

  1. Relation to Taichi-like frameworks.

Talk @ Cornell Zhang Group

ASPLOS Rebuttal

Author: expye (Zihao YE)

Email: expye@outlook.com

Date: 2022-07-27 Wed 00:00

Last modified: 2022-08-12 Fri 09:07

Licensed under CC BY-NC 4.0