Merge-Mining Feasibility

To get right into the topic, merge mining is the concept of mining two different chains at once, without splitting the processing power, while only looking for one nonce, because the block templates are the same. The only difference is the difficulty. The main blockchain, in our case Bitcoin, has the mainnet difficulty, and the other chain (the share chain), which will be implemented in an embedded contract, has a lower difficulty. Once in a while, when a miner finds a good nonce (solution / share) for the share chain, the resulting hash will also satisfy the Bitcoin mainnet difficulty; that block will be broadcast and accepted by the network. And so we have mined two blocks for two networks at the same time.

How can this concept be implemented on Zenon Network?
We will need 3 main components: Embedded Contract, Miners, Hauler ( Proxy between them )

  1. Embedded Contract with the following roles:
  • Maintain a list of Bitcoin block headers.
    This is needed so that the hauler can use it as a source of reference when creating the block template.
    Creating a block template requires the following fields:
  1. version - This 4-byte little endian field indicates the version of the Bitcoin protocol under which the node is publishing the block.
  2. hashPrevBlock - This 32-byte little endian field is the double SHA256 hash of the previous block header. This forms the link to the previous block that joins it to the blockchain.
  3. hashMerkleRoot - This 32-byte little endian field represents the Merkle root of the Merkle tree that contains the transactions which are timestamped in the block.
  4. time - This 4-byte field is the Unix epoch timestamp that is applied to all transactions in the block. Current network policy only requires this value to be accurate within 2 hours of the validating nodes’ local timestamp. The timestamp has 1-second precision.
  5. bits - This 4-byte field encodes the difficulty target value of the proof-of-work puzzle, as determined by the network rules.
  6. nonce - This stands for ‘number used once’. The values of this 4-byte field are cycled through to modulate the block header; the hash of the header is then checked against the difficulty target. Once all ~4.3 billion nonce values have been tried, miners modify the input field of the block’s coinbase transaction and recalculate the Merkle root. This changes the serialized block header, giving another 4.3 billion unique nonce values to iterate through in the search for a hash below the difficulty target.
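To make the header structure concrete, here is a rough Go sketch of the 80-byte serialization described above (names and layout are mine, just for illustration; the hash fields are kept in the internal little-endian byte order used on the wire):

```go
package mergemining

import (
	"crypto/sha256"
	"encoding/binary"
)

// BlockHeader mirrors the six fields listed above (80 bytes in total).
type BlockHeader struct {
	Version       uint32
	HashPrevBlock [32]byte
	MerkleRoot    [32]byte
	Time          uint32
	Bits          uint32
	Nonce         uint32
}

// Serialize produces the exact 80 bytes that miners hash.
func (h *BlockHeader) Serialize() []byte {
	buf := make([]byte, 80)
	binary.LittleEndian.PutUint32(buf[0:4], h.Version)
	copy(buf[4:36], h.HashPrevBlock[:])
	copy(buf[36:68], h.MerkleRoot[:])
	binary.LittleEndian.PutUint32(buf[68:72], h.Time)
	binary.LittleEndian.PutUint32(buf[72:76], h.Bits)
	binary.LittleEndian.PutUint32(buf[76:80], h.Nonce)
	return buf
}

// Hash is the double SHA-256 of the serialized header; a share (or a Bitcoin
// block) is valid when this value, read as a 256-bit integer, is below the
// corresponding difficulty target.
func (h *BlockHeader) Hash() [32]byte {
	first := sha256.Sum256(h.Serialize())
	return sha256.Sum256(first[:])
}
```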

    Knowing this, the constant values are version, hashPrevBlock, bits and time, and they can be fetched from the embedded. The variable ones are the nonce (which is submitted when providing the share) and the Merkle root hash. Here it gets a little tricky. The Merkle root is constructed from the hashes of all the transactions included in the block, in a particular order, including the coinbase transaction (which in our case needs to have the TSS address, or whatever we use to hold the coins, as its destination). When a miner submits a share, we need to prove that the Merkle root was built from a list of transactions that contains the correct coinbase transaction (the one paying the TSS address). The only way to prove this is to reconstruct the hashes that lead to the Merkle root. The most trivial approach is to send all transactions along with the share so the embedded can rebuild and check the root. We cannot do that, as it would mean each miner sends roughly 2500 hashes * 32 bytes every few minutes. What we can do instead, since we only care about the coinbase transaction, is to send only the sibling hashes along the coinbase’s Merkle branch and reconstruct the root from the coinbase transaction alone (which must be known by the embedded).



    This way each share would contain log2(n) hashes, where n is the number of transactions in a Bitcoin block; log2(4096) = 12, so let’s upper-bound it at 13. That gives 13 * 32 = 416 bytes per share, which is already a lot less. I will keep thinking about ways to further optimize this in terms of chain space, and I also ask you to come up with new ideas on how to validate that a share (constant values + nonce + Merkle root) contains the correct coinbase transaction. Also, the randomness space comes from the nonce plus the input field of the coinbase transaction (4 more bytes), so we could accept any coinbase transaction as long as the destination address and amount are correct. Maybe we could even have several TSS addresses, giving miners more randomness space, and accept any of them (just an idea). A minimal sketch of the branch check follows.
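Here is the branch check in rough Go form, assuming the share carries the coinbase transaction bytes plus the sibling hashes of its Merkle branch (names are illustrative, not from the actual embedded):

```go
package mergemining

import (
	"bytes"
	"crypto/sha256"
)

func doubleSHA256(b []byte) [32]byte {
	first := sha256.Sum256(b)
	return sha256.Sum256(first[:])
}

// VerifyCoinbaseInclusion checks that merkleRoot commits to the given coinbase
// transaction using only the ~log2(n) sibling hashes of its branch. Because the
// coinbase is always the first transaction (leaf index 0), every sibling in its
// branch sits on the right, so the fold is a simple loop.
func VerifyCoinbaseInclusion(coinbaseTx []byte, branch [][32]byte, merkleRoot [32]byte) bool {
	node := doubleSHA256(coinbaseTx) // txid of the coinbase
	for _, sibling := range branch {
		node = doubleSHA256(append(node[:], sibling[:]...))
	}
	return bytes.Equal(node[:], merkleRoot[:])
}
```

The embedded would additionally check that the coinbase bytes actually pay the TSS address before accepting the share.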

    Here we also need a mechanism to guarantee the correctness of the Bitcoin header chain. We could have a committee where every member sends a transaction specifying the next block header (whenever a new Bitcoin block is mined) and the one with a majority is cemented. This adds space overhead over time. Another option is an algorithm that decides which member should post the next header; if the others believe it is wrong, or the member does not post, they can intervene.
  • Maintain a database of each share submitted for the share chain

    We need to track how many correct shares each miner sent so that we can incentivize them.

  • Maintain a difficulty and block time logic

    The same mechanism as Bitcoin’s, so the share chain has a block time of 2-3 minutes. This is just an example and the topic is open for debate: a higher block time means less space used on our chain, as fewer shares would be sent.

  • Algorithm to pay the miners

    Each miner would be paid based on how many shares they sent in an epoch relative to the total number of shares in that epoch (a minimal payout sketch follows this list). They would be paid in ZNN and (why not?) BTC (as a ZTS or in some other form, once we actually mine a block; just an idea for now). We need to discuss where the ZNN comes from. We could decrease the reward percentages of other contracts (Pillars, Sentinels, LP providers).

  • In the future we could have two share chains: one for small miners (CPUs) and one for ASICs and graphics cards.
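As a reference for the payout discussion above, this is roughly the proportional split I have in mind; the names and the epoch reward value are placeholders, not final design:

```go
package mergemining

// PayoutForEpoch splits epochReward proportionally to the shares each miner
// submitted during the epoch: reward_i = epochReward * shares_i / totalShares.
// Integer division leaves some dust unassigned; a real implementation would
// use big integers and decide what happens to the remainder.
func PayoutForEpoch(epochReward uint64, sharesByMiner map[string]uint64) map[string]uint64 {
	var total uint64
	for _, s := range sharesByMiner {
		total += s
	}
	rewards := make(map[string]uint64, len(sharesByMiner))
	if total == 0 {
		return rewards
	}
	for miner, s := range sharesByMiner {
		rewards[miner] = epochReward * s / total
	}
	return rewards
}
```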

  2. Hauler (I came up with this name because a hauler is a very important piece of equipment in actual physical mining, used to transport almost everything to and from the mine)

    Its role is to gather information about the Bitcoin mempool, the latest block header from the embedded and the coinbase transaction template, to send jobs to the miners, to send shares to the embedded, and to broadcast a valid Bitcoin block if one is found.

    It needs a connection to a Zenon node, and a connection to a Bitcoin full node so it knows the mempool and can construct a valid mining job (using those transactions) for the miner.

    It needs to handle multiple miners and know how to distribute jobs so that no redundant work is done (a nonce is never tried twice by two different miners); see the extranonce sketch after this section.

    A very important note: every hardware owner should run their own hauler, because connecting to someone else’s means the jobs received could be tampered with and the coinbase transaction could be different. (You could effectively be mining for them, even if only a fraction of the jobs are tampered with.)
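For the job-distribution point above, the standard trick is to give every connected miner its own extranonce, which ends up in the coinbase input and therefore in the Merkle root, so two miners can never grind the same header. A rough sketch (a real Hauler would also persist assignments across reconnects):

```go
package hauler

import (
	"encoding/binary"
	"sync"
)

// ExtranonceAllocator hands out a unique 4-byte extranonce1 per miner
// connection; each miner then iterates its own nonce range over a disjoint
// search space, so no work is duplicated.
type ExtranonceAllocator struct {
	mu   sync.Mutex
	next uint32
}

// Next returns a fresh extranonce1 for a newly connected miner.
func (a *ExtranonceAllocator) Next() []byte {
	a.mu.Lock()
	defer a.mu.Unlock()
	buf := make([]byte, 4)
	binary.BigEndian.PutUint32(buf, a.next)
	a.next++
	return buf
}
```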

  3. Miner

    Every miner that is compatible with the Stratum protocol can merge mine! :pick: :hammer_and_pick: :pick:
    It just needs a connection to a Hauler.
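For reference, this is roughly the Stratum v1 job notification a Hauler would push to such miners; the field order follows the usual `mining.notify` parameters, and all values are placeholders:

```go
package hauler

import "encoding/json"

type stratumMessage struct {
	ID     *int          `json:"id"`
	Method string        `json:"method"`
	Params []interface{} `json:"params"`
}

// buildNotify serializes a mining.notify job for connected miners.
func buildNotify(jobID, prevHash, coinbase1, coinbase2 string,
	merkleBranch []string, version, nbits, ntime string, cleanJobs bool) ([]byte, error) {
	msg := stratumMessage{
		ID:     nil, // notifications carry a null id
		Method: "mining.notify",
		Params: []interface{}{
			jobID,        // job identifier chosen by the Hauler
			prevHash,     // previous block hash from the embedded header chain
			coinbase1,    // coinbase prefix, up to the extranonce placeholder
			coinbase2,    // coinbase suffix, containing the TSS output
			merkleBranch, // sibling hashes the miner uses to rebuild the Merkle root
			version, nbits, ntime,
			cleanJobs,
		},
	}
	return json.Marshal(msg)
}
```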

This post shows that implementing merge mining on the Zenon Network is feasible; we still need to discuss the details of the topics I’ve mentioned. Any other ideas, complaints or optimizations about my approach are very welcome! I hope I did not miss any information, but I will edit the post if so, and will gladly answer questions.


Thank you for your post. Can you please clarify these two items?

Does this make finding a BTC block even harder than mining BTC individually? What if the miner finds a solution to the main chain (BTC) but that solution does not work for the share chain (NoM). Or is that not possible because we share a “block template”? Not sure how that works if the difficulty adjustment and time frames are different between BTC and NoM.

Meaning, anyone who wants to Mine, needs to run their own hauler?

Does this make finding a BTC block even harder than mining BTC individually?

It is not harder, it is the same. A better example: let’s say the BTC hash threshold is 10 and the share chain threshold is 100. If a miner finds a nonce that leads to a hash with value 50, it is good for the share chain but not for BTC. If a miner finds a nonce that leads to a hash with value 5, it is good for both, because the chains share the same block template and the same latest block. The only difference is the difficulty.
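In code, the check is literally the same hash compared against two targets; a quick sketch of the idea (target handling simplified):

```go
package mergemining

import "math/big"

// hashToInt reads a header hash as an integer. Internally the 32 bytes are
// little-endian, so they are reversed before building the big-endian integer.
func hashToInt(hash [32]byte) *big.Int {
	rev := make([]byte, 32)
	for i := 0; i < 32; i++ {
		rev[i] = hash[31-i]
	}
	return new(big.Int).SetBytes(rev)
}

// Classify reports whether a header hash meets the share-chain target, the
// (much stricter) Bitcoin target, or neither. A hash that meets the Bitcoin
// target always meets the share-chain target as well.
func Classify(hash [32]byte, shareTarget, btcTarget *big.Int) (isShare, isBTCBlock bool) {
	h := hashToInt(hash)
	return h.Cmp(shareTarget) <= 0, h.Cmp(btcTarget) <= 0
}
```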

What if the miner finds a solution to the main chain (BTC) but that solution does not work for the share chain (NoM)

As long as the Hauler’s source code has not been modified, that is not possible. Otherwise it would simply be mining something else.

Not sure how that works if the difficulty adjustment and time frames are different between BTC and NoM.

Do not think about our consensus and the 10-second delay between momentums; it has nothing to do with this. All the logic I have explained will be implemented in the embedded. The embedded will contain the share chain.

Meaning, anyone who wants to Mine, needs to run their own hauler?

Yes and no. You could connect to anyone, but the Hauler is the one that sends the share, so connecting your miner to someone else’s Hauler means they could send you modified jobs (for example: find the nonce for this template that contains my coinbase tx instead of the one from the embedded, and if you find it, you mine a block for them), or they could submit a correct share-chain share as coming from them instead of you.
If you trust the other party, you can of course connect to their Hauler.

Thanks. That makes sense!

Appreciate the in-depth explanation.

I’m glad 0x asked, this ELI5 explanation helped a lot!

Here’s a few more newbie questions:

  • It seems the randomness space of only 4 bytes is a concern, as you pointed to a few workarounds to increase it. Is this only a concern as difficulty increases? Is the nonce space much larger for BTC? Why is ours limited to 4 bytes?

  • I assume optimizations around increasing chain weight are because data will live forever in the embedded contract, so in every full node moving forward? Couldn’t we prune historical data after certain intervals?

  • Embedded/NoM has no connection to a bitcoin full node. Is the idea that “a committee”, say like the orchestrators now, will be running a bitcoin full node and submitting the required data to the embedded node as NoM transactions? Couldn’t this “agreement” be done off-chain?

Hauler is a great name!

It seems the randomness space of only 4 bytes is a concern, as you pointed to a few workarounds to increase it. Is this only a concern as difficulty increases? Is the nonce space much larger for BTC? Why is ours limited to 4 bytes?

We have the same randomness space as BTC. Randomness comes from the nonce (4 bytes) and from the Merkle root hash (and possibly the timestamp). We can change the Merkle root by changing the coinbase transaction’s input field (4 bytes), by changing the order of any two transactions, or by adding or removing transactions. Each of these actions results in a different Merkle root hash, hence a different block template. There are options; we just need to be efficient.
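To illustrate where that extra randomness enters: rolling the 4-byte field in the coinbase input gives a new coinbase txid, and therefore a new Merkle root, without touching anything else in the block. A sketch, assuming the coinbase is split around the extranonce as in Stratum (coinbase1 / coinbase2 are placeholders):

```go
package mergemining

import (
	"crypto/sha256"
	"encoding/binary"
)

func dsha256(b []byte) [32]byte {
	f := sha256.Sum256(b)
	return sha256.Sum256(f[:])
}

// MerkleRootFor rebuilds the Merkle root for a given extranonce using the same
// sibling hashes a share submission would carry. Every extranonce value gives
// a different coinbase txid, hence a different root and block template.
func MerkleRootFor(coinbase1, coinbase2 []byte, extranonce uint32, branch [][32]byte) [32]byte {
	en := make([]byte, 4)
	binary.LittleEndian.PutUint32(en, extranonce)

	coinbase := append(append(append([]byte{}, coinbase1...), en...), coinbase2...)
	node := dsha256(coinbase)
	for _, sib := range branch {
		node = dsha256(append(node[:], sib[:]...))
	}
	return node
}
```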

I assume optimizations around increasing chain weight are because data will live forever in the embedded contract, so in every full node moving forward? Couldn’t we prune historical data after certain intervals?

We could, say, keep in storage only the headers of the last 100 blocks. What we cannot delete is the data a Hauler sends in an account block for each miner’s share: the data needed to prove that the transaction list behind the stated Merkle root contains the correct coinbase transaction (as of now, the nonce plus at most 13 * 32 bytes, plus whatever else turns out to be needed). For 1000 miners that is roughly 1000 * ~420 bytes ≈ 0.4 MB every 2-3 minutes.

Embedded/NoM has no connection to a bitcoin full node. Is the idea that “a committee”, say like the orchestrators now, will be running a bitcoin full node and submitting the required data to the embedded node as NoM transactions? Couldn’t this “agreement” be done off-chain?

Theoretically yes. We would have a p2p network between Haulers, each would broadcast its shares (nonce + the 13 hashes) to the others, and by some mechanism they could agree on the correct ones and only send a few account blocks specifying who mined how many shares. This seems more complex and comes with other problems to solve: how do you tell that a share (basically nonce + Merkle root hash) comes from a given Hauler once it has been broadcast, who is allowed to send the account block that accounts for the shares, and so on.

Hauler is a great name!

:man_construction_worker:

That’s around 84 GB per year (0.4 MB every 2-3 minutes). If we had 10,000 miners it would be 840 GB per year. Do you think that is a problem? Maybe not initially.

This can be signed via TSS off-chain for agreement and published on-chain.

We can create a mechanism to slash malicious Haulers that, for example, change the TSS coinbase address, by requiring an upfront ZNN collateral.

The main bottleneck that we encounter is chain bloat, more specifically bloating the merge-mining embedded account-chain.

MrK emphasized that we should keep the base layer minimal.

We need to find a solution for efficient on-chain storage (and processing) of valid share-blocks.

Here zero-knowledge proofs are coming to the rescue… more specifically Merkle proofs.

"To generate a proof, we need to provide the hashes of the sibling nodes along the path from this leaf that result in the Merkle Root.

zkSNARKs allows to prove the possession of a valid Merkle Proof without disclosing the candidate element or the Merkle Path.

Merkle proofs enable demonstrating that a certain element is a part of a given set without revealing which element it is. The Merkle Tree stores a list of elements, and the zkSNARK is used to prove knowledge of a path from a leaf element to the root of the Merkle Tree. By executing the Merkle Proof inside a zkSNARK, the candidate element and the Merkle path can be kept private".

Basically, we only need to post the Merkle zk-proof on-chain and verify it.
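To give an idea of what such a circuit could look like, here is a hedged sketch using gnark’s high-level API. It uses MiMC purely because it is SNARK-friendly and keeps the sketch short; Bitcoin’s tree actually uses double SHA-256, which is much heavier in-circuit, and padding for branches shorter than the fixed depth is glossed over:

```go
package mergemining

import (
	"github.com/consensys/gnark/frontend"
	"github.com/consensys/gnark/std/hash/mimc"
)

const depth = 13 // upper bound on the branch length discussed above

// MerkleCircuit proves knowledge of a branch from a (private) leaf to a
// public Merkle root without revealing the leaf or the sibling hashes.
type MerkleCircuit struct {
	Leaf     frontend.Variable        `gnark:",secret"` // hash of the coinbase tx
	Siblings [depth]frontend.Variable `gnark:",secret"` // branch hashes
	Root     frontend.Variable        `gnark:",public"` // Merkle root from the header
}

func (c *MerkleCircuit) Define(api frontend.API) error {
	node := c.Leaf
	for i := 0; i < depth; i++ {
		h, err := mimc.NewMiMC(api)
		if err != nil {
			return err
		}
		// the coinbase sits at leaf index 0, so its sibling is always on the right
		h.Write(node, c.Siblings[i])
		node = h.Sum()
	}
	api.AssertIsEqual(node, c.Root)
	return nil
}
```

On-chain, the embedded would then only store and verify the constant-size proof plus the public root, instead of the full branch.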


I have played with GitHub - hashcloak/merkle_trees_gnark: Merkle Tree functionality using gnark and, using only the sibling hashes described above, we can actually reduce the space from a maximum of 13 hashes of 32 bytes to 4 (at most 5) hashes of 32 bytes, which is a great improvement so far. Very good idea to execute the Merkle proof inside a zkSNARK.


I might need some help implementing the Stratum part. We need to find someone who knows Golang and Stratum protocol and is willing to help with the server part. I can handle the embedded and everything else. If you know anyone willing to help, not for free of course, that would be perfect.


Are there any other specific skills that you are looking for in a person? We might be able to do some outreach in order to find someone.

Will the payment be through A-Z funds?


Here are the contributors on their repo. They must know a thing or two about Stratum.


This guy knows what’s up.


I just need someone who can write a stable Stratum server for our needs, which are actually standard for merge mining. It can also be a different binary and not written in Golang.

The payment should come from AZ; they could just sell for ETH. I could pay him upfront with some starting ZNN and then get it back.


This is something I can work on! Thanks to @sugoibtc for telling me about this platform and this in particular


Update:
I have started to work on the Embedded Contract after a lot of reading about how mining actually works in technical terms and after planning the implementation. You can follow the progress here: git.
So far I have implemented the logic to store a Bitcoin header chain and to add / edit share chains with custom difficulties (maybe with different reward multipliers based on the difficulty), and I copied all the security from the Bridge: administrator, guardians and the TSS address. The tests also follow the same design as the Bridge ones because they seem very well written and are easy to use.

@Shuccck and I are in contact and we have agreed that he will work on the Stratum server under my guidance.

I will now work on the share-acceptance logic with the ZK circuit and leave the rewards part for the end. I am also thinking of creating a custom update method for this contract so that it updates a few momentums before or after the Pillar update, because sometimes that one is very slow. For now, the rewards will be based on how much PoW each address has contributed per epoch (24h).

After having a stable version of embedded I will also start to work on the Hauler and then just integrate the Stratum server because it will be coded as a github repo in Golang so I can just import it as a module.

We must decide where the rewards should come from. Reduce other reward percentages? Mint extra?
Should we increase the reward amount as the hash rate increases, so we continuously attract new miners? Please come up with ideas.


There’s plenty of rewards to go around in the current distribution, but it’s hard to say what an ideal ratio for all participants is without knowing the role Sentinels are going to play.

Aside from that, staking and delegating could easily be cut by 1/4 to 1/3 and still hand out well over 8% APR. Compared to mainstream yields that’s still healthy.

Pillars, imo, do not need a reduction in rewards; they need as much incentive as possible right now.

Then LP rewards could stand to be reduced too, but with other pool options coming up in the short/medium term, like QSR and maybe others, that portion of rewards is likely going to be diluted anyway, so it could be left as is to help compensate for its higher risk.


Even a 1/3 & 1/2 reduction will get you a healthy APR.

Agree. They do the heavy lifting right now: running critical infra including the orchestrator layer and the extension-chain.

Agree here as well. Hopefully we’ll see other chains connected in the near future.

I would also bring Sentinels into the discussion. They aren’t actively participating in the network, and we can redirect that extra value towards the miners.

A beautifully balanced landscape of participants will emerge:

  • Users → feeless transactions, bridge incentives
  • Devs → AZ grants
  • Miners → protocol emission
  • Stakers → protocol emission
  • Delegators → protocol emission
  • Liquidity Providers → protocol emission
  • Sentries (TBA) → protocol emission
  • Sentinels → protocol emission
  • Pillars → protocol emission, extension-chain incentives

We’ll also need a sustainable way to replenish Accelerator-Z for devs.


Update:
The hauler is starting to look good on the ZNN part; there are still some things to implement on the BTC part.
The SDK is also up to date.
The Stratum server is being developed and updated to be compatible with the hauler’s logic.
In the next few days I will put it on a testnet, see how the hauler behaves, and then connect some miners.

I am also thinking that anyone could run a hauler with a ZNN fee (for example, 1 out of 20 shares goes to the owner of the hauler); miners would just need an IP to connect to, and mining would start immediately, without any additional setup.
