Helping a friend find a knowledgeable student to do four GPU-related problems

    carmark · 2017-05-09 21:02:01 +08:00 · 455 views

    PTX and the Thread Scheduler

    Assignment 1. Please analyze the GPU PTX, SSE assembly (or NEON assembly), and CPU assembly instruction sequences (and optionally Cambricon [1]) for matrix operations, for instance matrix (vector) addition and multiplication.

    1. Analyze the reasons why a GPU is faster at matrix operations (and, optionally, why Cambricon is more efficient than a GPU for DNN computations).
    2. Figure out which instructions load data and which are SIMD operations, and compare them with traditional x86 instructions (with x86 scalar instructions, matrix operations are usually organized as loops). A minimal starting-point sketch follows this list.
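
    As a concrete starting point for Assignment 1, here is a minimal sketch (not part of the original assignment; the function names vec_add and vec_add_scalar are made up for illustration). Compiling the file with nvcc -ptx should emit the PTX to study, and the ld.global / st.global and add.f32 instructions in it correspond to the data movement and arithmetic asked about in item 2; the scalar CPU loop gives the x86 instruction stream to compare against.

        #include <cstdio>
        #include <cuda_runtime.h>

        // GPU version: one thread per element, no explicit loop.
        // In the PTX, look for ld.global / st.global (data movement)
        // and add.f32 (the arithmetic).
        __global__ void vec_add(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        // CPU reference: a plain loop that the compiler lowers to scalar
        // (or, with SSE/NEON enabled, packed SIMD) load/add/store instructions.
        void vec_add_scalar(const float* a, const float* b, float* c, int n) {
            for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);
            float *a, *b, *c;                     // unified memory keeps the demo short
            cudaMallocManaged(&a, bytes);
            cudaMallocManaged(&b, bytes);
            cudaMallocManaged(&c, bytes);
            for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

            vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
            cudaDeviceSynchronize();
            printf("GPU c[0] = %f\n", c[0]);

            vec_add_scalar(a, b, c, n);
            printf("CPU c[0] = %f\n", c[0]);

            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }

    Disassembling the compiled host loop (for example with objdump -d) next to the PTX makes the scalar-loop-versus-SIMD comparison in item 2 concrete.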

    Assignment 2. Study the thread scheduler of a GPGPU by analyzing the warp scheduler.

    1. Read the relevant warp-scheduler code in GPGPU-Sim and find where the scoreboarding algorithm is implemented. Please chart the algorithm and the warp controller structure (that is, draw the flow diagram and the structure diagram). A toy scoreboard model is sketched after this list.
    2. Illustrate the performance with and without the memory-access latency being hidden by the warp scheduler (the key to this problem is simply to construct enough operations for the scheduler to issue); a timing sketch also follows below. Note: a LaTeX template has been uploaded to overleaf.com at https://www.overleaf.com/read/vkyjvtnzrczh
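
    For item 1, the core of scoreboarding can be summarized with a toy host-side model; the class and method names below are invented for illustration and are not GPGPU-Sim's actual code. Each warp tracks the destination registers of instructions still in flight, and a new instruction may issue only if none of its source or destination registers is still pending.

        #include <cstdio>
        #include <set>
        #include <vector>

        // Toy per-warp scoreboard (hypothetical names, not GPGPU-Sim code).
        // A destination register is reserved at issue and released at
        // write-back; an instruction may issue only if it does not collide.
        struct Instr {
            std::vector<int> dst;   // destination registers
            std::vector<int> src;   // source registers
        };

        class Scoreboard {
            std::vector<std::set<int>> pending_;   // pending writes, one set per warp
        public:
            explicit Scoreboard(int num_warps) : pending_(num_warps) {}

            bool collides(int wid, const Instr& in) const {
                for (int r : in.src) if (pending_[wid].count(r)) return true;  // RAW hazard
                for (int r : in.dst) if (pending_[wid].count(r)) return true;  // WAW hazard
                return false;
            }
            void reserve(int wid, const Instr& in) {
                for (int r : in.dst) pending_[wid].insert(r);
            }
            void release(int wid, const Instr& in) {
                for (int r : in.dst) pending_[wid].erase(r);
            }
        };

        int main() {
            Scoreboard sb(2);
            Instr load{{1}, {2}};      // r1 <- mem[r2]
            Instr add{{3}, {1, 4}};    // r3 <- r1 + r4, depends on the load

            sb.reserve(0, load);       // warp 0 issues the load
            printf("add can issue: %d\n", !sb.collides(0, add));  // 0: blocked on r1
            sb.release(0, load);       // the load writes back
            printf("add can issue: %d\n", !sb.collides(0, add));  // 1: now eligible
            return 0;
        }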
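
    For item 2, a rough way to observe latency hiding on real hardware is a sketch like the one below (the kernel name latency_probe and the ITERS parameter are invented here, not prescribed by the assignment): keep one global load per thread and vary the amount of dependent arithmetic per load. As long as the kernel stays memory-bound and the scheduler has enough ready warps to switch to, adding more arithmetic barely changes the runtime, because it overlaps with loads outstanding in other warps; a similar experiment can also be run under GPGPU-Sim to correlate with the scheduler behaviour.

        #include <cstdio>
        #include <cuda_runtime.h>

        // One global load per thread followed by ITERS dependent FMAs.
        // While a warp waits on its load, the warp scheduler issues the
        // arithmetic of other resident warps, hiding the memory latency.
        template <int ITERS>
        __global__ void latency_probe(const float* in, float* out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float x = in[i];                   // long-latency global load
            for (int k = 0; k < ITERS; ++k)
                x = x * 1.000001f + 0.5f;      // dependent arithmetic chain
            out[i] = x;
        }

        template <int ITERS>
        static float time_kernel(const float* in, float* out, int n,
                                 int blocks, int threads) {
            cudaEvent_t beg, end;
            cudaEventCreate(&beg);
            cudaEventCreate(&end);
            cudaEventRecord(beg);
            latency_probe<ITERS><<<blocks, threads>>>(in, out, n);
            cudaEventRecord(end);
            cudaEventSynchronize(end);
            float ms = 0.0f;
            cudaEventElapsedTime(&ms, beg, end);
            cudaEventDestroy(beg);
            cudaEventDestroy(end);
            return ms;
        }

        int main() {
            const int n = 1 << 22, threads = 256;
            const int blocks = (n + threads - 1) / threads;
            float *in, *out;
            cudaMallocManaged(&in, n * sizeof(float));
            cudaMallocManaged(&out, n * sizeof(float));
            for (int i = 0; i < n; ++i) in[i] = 1.0f;

            time_kernel<1>(in, out, n, blocks, threads);   // warm-up / page migration

            // If the extra arithmetic is hidden, these timings stay close
            // until the kernel finally becomes compute-bound.
            printf("ITERS=1   : %.3f ms\n", time_kernel<1>(in, out, n, blocks, threads));
            printf("ITERS=32  : %.3f ms\n", time_kernel<32>(in, out, n, blocks, threads));
            printf("ITERS=256 : %.3f ms\n", time_kernel<256>(in, out, n, blocks, threads));

            cudaFree(in);
            cudaFree(out);
            return 0;
        }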

    Reference

    [1] S. Liu, Z. Du, J. Tao, D. Han, T. Luo, Y. Xie, Y. Chen, and T. Chen, “Cambricon: An Instruction Set Architecture for Neural Networks,” in 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 393–405.

    No replies so far