Change DAG read offsets every iteration to prevent ASIC optimization #13
added a commit to this pull request on Nov 16, 2018
This is a unique ASIC structure that we had not analyzed before. I don't think that it's a practical system, but it is plausible.
In both Ethash and the 0.9.0 ProgPoW spec a single lane will only ever access the same word(s) within the data that's loaded from the DAG. This means the DAG can be split into 32 (Ethash) or 16 (ProgPoW) chunks that each reside on a single chip. Each chip would require ~128 MB (Ethash) or ~256 MB (ProgPoW) of eDRAM for the system to hold a 4 GB DAG and be viable for a few years. For reference the IBM Power9 has 120 MB of eDRAM, so this is plausible.
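The per-chip sizes quoted above follow directly from splitting a 4 GB DAG across the per-lane chips. A quick back-of-the-envelope check (illustrative Python; the variable names are mine):

```python
DAG_BYTES = 4 * 1024**3  # 4 GB DAG the system must hold to stay viable

# Ethash spreads each load across 32 lanes, ProgPoW 0.9.0 across 16,
# so the DAG partitions into that many fixed chunks, one per chip.
per_chip_ethash = DAG_BYTES // 32
per_chip_progpow = DAG_BYTES // 16

print(per_chip_ethash // 1024**2)   # 128 (MB per chip, Ethash)
print(per_chip_progpow // 1024**2)  # 256 (MB per chip, ProgPoW)
```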
In both algorithms every load address comes from a different lane, so the address needs to be broadcast across lanes. The most reasonable structure would be a central coordinator chip that broadcasts address data to all the per-lane chips. A DAG load across all the chips that consumes 128 bytes (Ethash) or 256 bytes (ProgPoW) requires a 4-byte address to be broadcast.
For ProgPoW, a GPU + DRAM system with 256 GB/sec of memory bandwidth could be replaced at equal performance by 16 eDRAM chips and one central coordinator in a system with 4 GB/sec of broadcast bandwidth. Without a lot more analysis it's unclear what the overall performance, power, or cost of this multi-chip system would be.
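The broadcast-bandwidth figure is just the ratio of address bytes to data bytes per load: each 256-byte ProgPoW load needs one 4-byte address broadcast, so broadcast traffic is 1/64 of DAG data traffic. A trivial sanity check (my own variable names):

```python
data_bw_gb_per_s = 256  # GPU + DRAM memory bandwidth, GB/sec
addr_bytes = 4          # broadcast address per DAG load
load_bytes = 256        # DAG bytes consumed per ProgPoW load

broadcast_bw = data_bw_gb_per_s * addr_bytes / load_bytes
print(broadcast_bw)  # 4.0 GB/sec of broadcast bandwidth
```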
Since it's easy enough to break this architecture we've decided to update the spec. The 0.9.1 version XORs the loop iteration with the lane_id when indexing within the loaded DAG element. Your suggestion to ADD the two would mean lane data could be rotated around a unidirectional ring bus while the DAG data stayed in place. With an XOR, any chip may need to shuffle data to any other chip, requiring a full mesh network or a high-bandwidth switch, which almost certainly makes the system impractical.
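The difference between the two index functions can be seen in a toy model (this is not actual ProgPoW code; the function names and the simplified indexing are mine). With ADD, every lane's target word shifts by the same amount each iteration, so the pattern is a pure rotation; with XOR it is a different, non-rotational permutation per iteration:

```python
LANES = 16  # ProgPoW lane count

def add_index(loop, lane):
    # Suggested ADD variant: lane reads word (loop + lane) % LANES.
    return (loop + lane) % LANES

def xor_index(loop, lane):
    # 0.9.1 spec variant: lane reads word loop ^ lane (both < LANES here).
    return (loop % LANES) ^ lane

# ADD is a uniform rotation of the identity mapping, so the lane data could
# simply circulate on a one-directional ring bus, one hop per iteration.
rotation = [add_index(3, lane) for lane in range(LANES)]

# XOR produces a butterfly-style permutation whose exchange distance changes
# with the loop value, so any chip may need data from any other chip.
butterfly = [xor_index(3, lane) for lane in range(LANES)]

print(rotation)   # consecutive values step by 1 mod 16
print(butterfly)  # a permutation, but not a rotation
```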
Some quick benchmarking shows this change makes no performance difference on either AMD or NVIDIA hardware.