
LRC-2 #6

Closed
KeeJef opened this issue Apr 3, 2019 · 7 comments
Labels
ORC Loki Request for Comment

Comments

@KeeJef
Collaborator

KeeJef commented Apr 3, 2019

LRC 2

Metadata 
LRC Number: 2
Title: Hashing Algorithm Post LIP-3 
Created: 2019-04-03

An LRC is a Loki Request for Comment. These documents should outline an issue that affects the direction of Loki that does not have a clear answer. LRCs are a place for discussion that ideally leads to a broad consensus on how to move forward.

Synopsis of LRC 2

Excluding any major bugs found during implementation and testing, Service Node Checkpointing (LIP-3) will be included in our next major release (Hefty Heimdall). Along with LIP-3 and other important changes, we should also investigate changing Loki's hashing algorithm for the next major release.

@KeeJef KeeJef added the ORC Loki Request for Comment label Apr 3, 2019
@KeeJef
Collaborator Author

KeeJef commented Apr 3, 2019

When deciding on the next hashing algorithm, I think there are a number of things to consider. Primarily, we should recognize the fact that the key requirements for a hashing algorithm have changed.
Post-checkpointing, we are no longer using PoW to establish the security and protection of the chain against 51% attacks. Miners now become useful for two primary reasons:

  1. They aggregate transactions into blocks
  2. They create random data via the block hash, which can be used by the Service Nodes

This means we can, to a certain extent, neglect considerations that were important when selecting hashing algorithms pre-checkpointing, like Nicehash and large coins sharing the same algorithm. The attacks that miners can perform post-checkpointing are limited; examples would be:

  • Intentionally causing large hashrate fluctuations, which would lead to more unpredictable time intervals between blocks
  • Neglecting to include any on-chain transactions in otherwise legitimate blocks
  • Influencing the randomness of swarm selection through control of the block hash (there are some solutions to this)
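On the last point, one common mitigation is to mix the block hash with per-node data (e.g. each node's public key), so a miner grinding the block hash shifts every node's draw at once rather than steering a chosen node into a chosen swarm. A minimal sketch in Python; the function name, the use of BLAKE2b, and the flat modulo mapping are illustrative assumptions, not Loki's actual scheme:

```python
import hashlib

def swarm_for_node(block_hash: bytes, node_pubkey: bytes, num_swarms: int) -> int:
    # Hash the block hash together with the node's public key so the
    # resulting swarm assignment depends on both inputs. The miner only
    # controls the block hash, so it cannot independently pick the
    # outcome for any single target node.
    digest = hashlib.blake2b(block_hash + node_pubkey, digest_size=8).digest()
    return int.from_bytes(digest, "big") % num_swarms

# Example: same inputs always map to the same swarm.
swarm = swarm_for_node(b"\x11" * 32, b"\x22" * 32, 10)
```

A flat modulo is slightly biased when `num_swarms` does not divide 2^64; a real implementation would want an unbiased mapping.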

With this in mind, my personal view is that after checkpointing the two focuses we should have when choosing a hashing algorithm are:

  1. Choosing a hashing function with a low verification time
  2. Choosing a hashing function with a relatively commoditized hardware market, whether that's general-purpose GPUs or ASICs (this should control for hashrate fluctuations, since unknown hashpower is rare)

I'm looking into some options now and will update this LRC as my thoughts become clearer; I encourage others to get involved in the conversation too.

@hashbender

I would propose Blake2b. It was designed to be hardware efficient (https://blake2.net/). This will reduce verification time significantly, which we can see with the Decred project (decred/dcrd#1656 for example). In summary, due to the efficiency of the underlying algorithms, the full Decred blockchain can be synced in less than an hour with an SPV node being initialized and validated in less than 4 seconds on commodity hardware.
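For a rough sense of that verification speed on commodity hardware, BLAKE2b is available in Python's standard `hashlib`. The 80-byte header size and iteration count below are illustrative assumptions, and timings will vary widely by machine:

```python
import hashlib
import time

header = b"\x00" * 80  # placeholder block header; real headers vary by chain

# Time 100k BLAKE2b-256 hashes of the header.
start = time.perf_counter()
for _ in range(100_000):
    hashlib.blake2b(header, digest_size=32).digest()
elapsed = time.perf_counter() - start

print(f"~{elapsed / 100_000 * 1e6:.2f} microseconds per hash")
```

Even in interpreted Python this completes in well under a second on typical hardware, which is the property that makes fast chain sync and SPV validation practical.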

With regard to mining, the Blake2b mining market is extremely commoditized. You mention checkpointing obviating 51% attacks and other mining stressors; a side-effect of switching to Blake2b would be completely eliminating even the potential for that risk. There is 80+ PH/s of hashrate sitting idle that community members could get for very low cost. This is due to Siacoin (https://sia.tech) having bricked all of this hardware in November.

You get both the agility of low verification times as well as the security of a strong mining network.

@jagerman
Member

jagerman commented Apr 3, 2019

How commoditized is Blake2b hardware? Low verification time is an advantage, of course, but it seems like whatever dark mining hardware comes back online would be likely to be considerably more centralized than a CPU/GPU-specific algorithm.

@SomethingGettingWrong

SomethingGettingWrong commented Apr 10, 2019

After reading and understanding the fundamental direction you're wanting to take, I think the best algo for the situation was picked. If you remember my conversation in the Loki channel, I was against it and voted for CN-GPU or a lighter version, Conceal! However, for a quick solution Turtle was a great choice!

The most efficient algo across all GPUs is one with a 256 KB to 512 KB scratchpad. Older GPUs, even the more powerful cores, like the 32-compute-unit R9, are just as powerful per core at the same clock speed as the 32-compute-unit RX 580 (the only inefficiency at the same clock is die-size heat dissipation). This is why they achieve the same hashrate on Turtle, for example around 7,000 hashes each, but only on a 256 or 512 KB scratchpad: the older GPU's workload is handled more efficiently. This has nothing to do with the GPU's memory size!

CN-GPU and CN-R are both good algos as far as Nicehash resistance and ASIC resistance. However, they centralize hashrate efficiency onto the top-end, latest-generation GPUs only. They have as much of an efficiency advantage over older-generation GPUs with the same core power as an FPGA does in relation to them. While CN-GPU has a direct teraflop advantage, most GPU compute cores are 8 or fewer, and on 256 KB-efficient algos older cards can achieve over half the hashrate of a top-end GPU, instead of the quarter dictated by teraflop performance! Forking to CN-GPU wouldn't be as efficient as Conceal. Forking to CN-R would bring issues, since FPGAs always follow Monero. Also, these algos dictate an extremely inefficient time per hash: it might take 100 milliseconds to generate a CN-GPU or CN-R hash versus less than 10 ms on Turtle or Conceal.

The best algo for Loki would be an extremely fast one, since I feel emission should come more from the Service Nodes than from the mining itself. This is why I think Turtle was a good choice!

I do feel that ASICs could be made for the algo, but a simple line change would fix that issue. As far as FPGAs, if we had FP32 math of some sort, like Conceal, it would actually be good: they would keep a Turtle v1 hashrate, but with some FP32 math mixed in.

However, the hash time would increase per GPU clock at whatever efficiency the GPU has relative to a smaller scratchpad without FP32 math.

If it were up to me, I would probably do a simple line change on Turtle to stop the merge mining. However, profitability has become codependent on both networks' difficulty. It is a good thing that all pools can now merge-mine, but you will see a price decrease when the networks separate.

In the end I think we should have our own algo, but without taking away from dev time. I believe TurtleV2, which we are currently on, with a simple line change (maybe a small math calculation) would be the most efficient use of dev time and keep us within the boundary of a fast hash.

Blake2b majority holders would dump to cover hardware costs. Centralization is a bad thing; hashrate is a good thing, but not at the cost of adding ASICs. I for one would sell every Loki I have if ASICs got on it. There's a reason networks dump ASICs: putting that much ASIC hashrate on a small network like Loki would result in every Loki being dumped to cover the electrical costs (plus the hardware costs). GPUs only need to cover the electrical costs if they're already paid for.

For dev time and wallet compatibility I would just stick with a CryptoNote variant. Turtle is already implemented; I would recommend a line change (preferably some math) or a shuffle, then rename it CN-Loki, and assign devs to real work, not repeated work.

@natedagger

I think the best choice would be the one that coincides most with future plans, i.e. barely concern yourself with mining-related power consumption and fairness.

So all of the CPU/GPU and even dirty-ASIC talk is a red herring. The best way to control for mining centralization is to have it neutered, which I believe is your plan anyway (I am not trying to put words in the team's mouth or push an agenda).

It looks like your requirements (quick block & random number generator) leave you with a large playing field. So... what else can you achieve?

I would look for algorithms where putting LOKI on the radar will yield the best chance for some of those miners hearing about the project and investing in it. You could achieve this with a simple spreadsheet comparing total profit per hash of various coins.
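That spreadsheet comparison reduces to one formula: a single hash/s earns its proportional share of a coin's daily emission value. A small Python sketch; the coin names and all numbers below are placeholders for illustration, not real market data:

```python
# Illustrative placeholder data, not real coins or prices:
# coin -> (block_reward, coin_price_usd, network_hashrate_h_per_s, block_time_s)
coins = {
    "CoinA": (25.0, 0.50, 8.0e9, 120),
    "CoinB": (12.0, 1.20, 2.5e9, 60),
}

def usd_per_hash_per_day(reward, price, net_hashrate, block_time):
    # One hash/s earns (your hashrate / network hashrate) of daily emission.
    blocks_per_day = 86_400 / block_time
    daily_emission_usd = blocks_per_day * reward * price
    return daily_emission_usd / net_hashrate

# Rank coins by profit per unit of hashrate, most profitable first.
for name, params in sorted(coins.items(),
                           key=lambda kv: usd_per_hash_per_day(*kv[1]),
                           reverse=True):
    print(f"{name}: {usd_per_hash_per_day(*params):.3e} USD per H/s per day")
```

The comparison only makes sense between coins on the same algorithm (so a hash is a like-for-like unit); across algorithms you would normalize by what one GPU earns per day instead.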

The other thing to consider is that Loki has always been CN-based, and therefore, in simplified terms, it has been best mined with AMD cards. There might be less Venn-diagram crossover if you pick something that hums on Nvidia; it would capture a new, vast audience.

Other valuable audiences are counter-culture type audiences, or those that you think would value this type of project & haven’t yet heard about it.

I am of the opinion that ASICs and, more importantly, FPGAs are not worth fending off. Not trying to hijack this thread, just bringing up the point that, long-term, switching algos every 4 months forever is not worth the team's bandwidth.

@neuroscr

I agree with NateDagger; our ecosystem is more than a technical one, and having a profitable mining algorithm could be seen as a marketing element. It got us a lot of exposure early on. And with the recent price drops and the split coming with Turtle merge mining, I think we need to consider strengthening our miners.

@KeeJef
Collaborator Author

KeeJef commented Sep 23, 2019

Closed, with the inclusion of RandomXL.

@KeeJef KeeJef closed this as completed Sep 23, 2019
6 participants