Ideas for dynamic plasma

There has been a lot of discussion surrounding dynamic plasma lately, and different ideas have been presented by community members. I’ve spent some time researching the topic, and in this post I attempt to combine some of those ideas and present a possible dynamic plasma implementation.

I didn’t post this in the dynamic plasma thread, since I wanted to create a thread that focuses mainly on the pros/cons of the ideas presented in this post.

Background

Why do we need dynamic plasma

Currently our network has a hardcoded throughput limit of 100 account blocks per momentum. For an account block to be eligible for inclusion in a momentum, it has to have a minimum amount of plasma committed to it. This minimum amount depends on the type of the transaction. For example, calling an embedded contract requires a higher plasma commitment than a normal send transaction.

Since our network utilization rate is so low right now, this mechanism works without issues. There is always space in the momentum so users do not need to compete for their account block to get included in the next momentum.

Once the network utilization rate picks up, we will start running into problems. Since a momentum can fit a finite number of account blocks, there has to be a mechanism that allows users to compete for inclusion in a momentum.

What is plasma

The founding team introduced plasma as “an anti-spam mechanism that acts as the network gas” and “as a third dimension asset similar to the mana concept employed by many popular games”.

Based on this and based on the current implementation of plasma, we can come to the conclusion that plasma has three purposes: mitigating spam, representing a computational cost for transactions, and allowing for transactions to be prioritized (competing for inclusion in a momentum).

The current implementation only fulfills one of these purposes: representing a computational cost associated with a transaction. The anti-spam properties of the current implementation are relatively weak and there is no mechanism that allows for users to compete for inclusion in a momentum.

Plasma as a representation of computational resources

Using “gas” to represent computational resources is a widely utilized concept in other blockchains such as Ethereum. Ethereum transactions require the user to commit a minimum amount of gas to the transaction in order for it to be eligible for inclusion in the next block. The minimum computational cost for a regular ETH transaction is 21,000 gas units. This amount of gas has to be committed to every transaction. The gas is paid for in ETH and the protocol burns this gas when the transaction is processed. This is only a theoretical minimum fee, since in practice the total fee for a transaction is more than this and fluctuates based on network usage.

The reasoning for why the requirement is 21,000 gas units can be found here. Based on Vitalik’s explanation we can see that the gas units required for a transaction are based on the actual computational resources that are needed to process the transaction.

In our network a regular transaction also has a computational cost of 21,000 units, denominated in plasma rather than gas. Whether these computational requirements are based on actual testing and benchmarking or just on estimation is unknown. The computational cost differs depending on the transaction’s complexity. For example, calling an embedded contract to collect rewards has a computational cost of 73,500 plasma, since it requires more computational resources to process.

The maximum account blocks allowed in a momentum is currently 100 and the amount of plasma the most expensive transaction requires is 105,000 (registering a pillar). Based on this we can calculate that the theoretical maximum computational units our network will currently process is 10,500,000 units per momentum.

Plasma as a transaction prioritization mechanism

As stated before, our network doesn’t have any transaction prioritization mechanism currently, but plasma can be used for this purpose, with the basic idea being that committing more plasma to the transaction means higher prioritization of the transaction.

Plasma as an anti-spam mechanism

The third purpose of plasma is to work as an anti-spam mechanism. To me, on-chain spam can be defined as any activity that floods the chain with transactions with the intention to significantly limit the ability of others to use the network. There can be many reasons for conducting a spam attack - for financial gain, for example.

The most common anti-spam mechanism used by networks is to require a transaction’s sender to pay a fee, the idea being that the cost of the fees will be greater than the potential gain from an attack.

I think the anti-spam properties of our current plasma implementation are limited, mainly because of the rate at which plasma is currently “recharged”: it’s too fast to effectively throttle an account’s ability to send transactions.

Ideas for an initial dynamic plasma implementation

Below are some ideas for how to fulfill the goals presented above. Example implementations and pseudo-code examples of the following algorithms can be found here.

Goal 1: Computational resources

This goal is already partially met in the current plasma implementation, but since momentum space is finite, we need to increase the cost of computational resources when demand increases.

To meet the goal, we could utilize a dynamic algorithm that adjusts the “price” of computational resources based on network utilization, readjusting it on every momentum.

The basic idea is that the readjustment algorithm targets half full momentums. This means that if the previous momentum was more than half full, the price of plasma increases. If the previous momentum was less than half full, the price of plasma decreases. If the momentum was exactly half full, the price stays the same. For a transaction to be eligible for inclusion in the next momentum, the account block has to commit at least the base plasma requirement for the transaction multiplied by the minimum price of plasma, the “base price”.

Instead of the current per momentum transaction cap, we would have a per momentum base plasma cap, since different transactions require a different amount of base plasma as explained before.

The price of plasma will increase exponentially if consecutive momentums are more than half full, meaning that it is economically unviable for momentum size to stay high indefinitely.

Example:

| Momentum Height | Momentum Fullness | Base Price | Price Change |
| --- | --- | --- | --- |
| 1 | 50% | 1000 | 0% |
| 2 | 100% | 1000 | 0% |
| 3 | 100% | 1125 | 12.5% |
| 4 | 100% | 1265 | 12.5% |
| 5 | 100% | 1423 | 12.5% |
| 6 | 100% | 1600 | 12.5% |
| 7 | 100% | 1800 | 12.5% |
| 8 | 100% | 2025 | 12.5% |

The main benefit of targeting half full momentums, instead of full momentums, is that the base price for plasma is adjusted algorithmically, reducing the need for a bidding war among users when there is a short term spike in momentum space demand. It is also easier for wallets to predict the minimum plasma price needed for a transaction.

This is similar to how Ethereum’s current fee adjustment algorithm works as described in EIP-1559.
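To make the mechanism concrete, below is a rough Go sketch of one possible readjustment rule that reproduces the 12.5% steps in the table above. The constants and names are illustrative assumptions, not taken from go-zenon.

```go
// Sketch of an EIP-1559-style base price readjustment targeting half full momentums.
const (
	basePlasmaCap    = 10_500_000        // per momentum base plasma cap (assumption)
	targetBasePlasma = basePlasmaCap / 2 // target: half full momentums
	priceChangeDenom = 8                 // 1/8 = 12.5% maximum change per momentum
	minBasePrice     = 1000              // floor for the base price (assumption)
)

// nextBasePrice derives the base price for the next momentum from the previous
// momentum's base price and the amount of base plasma it actually used.
func nextBasePrice(prevPrice, prevUsedBasePlasma uint64) uint64 {
	switch {
	case prevUsedBasePlasma == targetBasePlasma:
		return prevPrice
	case prevUsedBasePlasma > targetBasePlasma:
		// More than half full: raise the price proportionally to the excess.
		delta := prevPrice * (prevUsedBasePlasma - targetBasePlasma) / targetBasePlasma / priceChangeDenom
		if delta == 0 {
			delta = 1
		}
		return prevPrice + delta
	default:
		// Less than half full: lower the price proportionally to the shortfall.
		delta := prevPrice * (targetBasePlasma - prevUsedBasePlasma) / targetBasePlasma / priceChangeDenom
		if prevPrice-delta < minBasePrice {
			return minBasePrice
		}
		return prevPrice - delta
	}
}
```

With a full previous momentum this yields exactly +12.5% per momentum, matching the table above.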

Goal 2: Transaction prioritization

Even with a dynamically adjusting base price for plasma, there still needs to be a way to prioritize transactions in the situation where the momentum cannot fit all the transactions that are eligible. This could be achieved by simply prioritizing account blocks based on the price of plasma used in the account block.

plasmaPrice = ab.UsedPlasma / ab.BasePlasma

To be eligible for inclusion the account block must fulfill: plasmaPrice >= momentum.BasePrice

This allows a user to commit more plasma to their transaction than what is needed, in order to have a better chance of being included in the next momentum if the network is congested.
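As a rough illustration of how a momentum producer could apply this rule, the sketch below filters eligible blocks and orders them by plasma price. The field names follow the formula above; the selection logic and the base plasma cap parameter are illustrative, not how go-zenon currently builds momentums.

```go
import "sort"

// AccountBlock holds only the fields needed for this sketch.
type AccountBlock struct {
	BasePlasma uint64 // minimum plasma required for this transaction type
	UsedPlasma uint64 // plasma actually committed to the block
}

func plasmaPrice(ab AccountBlock) float64 {
	return float64(ab.UsedPlasma) / float64(ab.BasePlasma)
}

// selectForMomentum keeps the blocks that meet the current base price, orders
// them by plasma price (highest first) and fills the momentum up to the per
// momentum base plasma cap.
func selectForMomentum(mempool []AccountBlock, basePrice float64, basePlasmaCap uint64) []AccountBlock {
	eligible := make([]AccountBlock, 0, len(mempool))
	for _, ab := range mempool {
		if plasmaPrice(ab) >= basePrice {
			eligible = append(eligible, ab)
		}
	}
	sort.Slice(eligible, func(i, j int) bool {
		return plasmaPrice(eligible[i]) > plasmaPrice(eligible[j])
	})
	selected := make([]AccountBlock, 0, len(eligible))
	var usedBasePlasma uint64
	for _, ab := range eligible {
		if usedBasePlasma+ab.BasePlasma > basePlasmaCap {
			break
		}
		usedBasePlasma += ab.BasePlasma
		selected = append(selected, ab)
	}
	return selected
}
```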

Note 1: Account blocks sent by embedded contracts have endless plasma and should probably always have the highest priority, so this is something that has to be taken into account when determining the per momentum base plasma cap.

Note 2: Since we currently have no way of “forcing” pillars to order transactions in a specific order, there is no way to guarantee that pillars prioritize transactions as intended. I think the best we can do right now is to minimize the incentives for pillars to alter the ordering of transactions.

Goal 3: Anti-spam

The anti-spam aspect of plasma is probably the trickiest goal to achieve, depending on what the requirements are. The dynamic adjustment algorithm presented in Goal 1 works as an anti-spam mechanism, but it is probably not enough on its own.

PoW Plasma anti-spam

Currently there is a hardcoded PoW-to-Plasma conversion rate, meaning that a given amount of PoW will always be converted to a specific amount of plasma. There is also a hardcoded limit on how much PoW can be converted to plasma for an account block.

If account blocks are prioritized based on plasma, then having a hard coded limit to how much PoW plasma can be generated can make PoW transactions unviable when there’s high demand for momentum space.

But if the hardcoded cap on how much PoW plasma can be generated is removed, then a user with a significant amount of computing power would be able to generate huge amounts of plasma and gain an undue amount of control over total network bandwidth.

To mitigate this problem, we could introduce a dynamic PoW-to-Plasma conversion rate (difficulty-per-plasma), that is dependent on network utilization and how much of the total plasma is being generated by PoW.

In the example algorithm a difficulty-per-plasma value is readjusted on every momentum. The target PoW plasma rate in a momentum is set to 20%: this means that when the momentum is at least half full, the PoW plasma share should be 20%. If it’s less than that, the difficulty-per-plasma will decrease; if it’s more than that, it will increase. If the momentum is less than half full, the PoW plasma target is 100%, meaning that PoW plasma can make up 100% of the momentum without penalty.

The example algorithm’s output when PoW plasma is 100% of the momentum’s plasma and the momentum is equal to or over half full:

| Momentum Height | Momentum Fullness | PoW Plasma | Difficulty-per-plasma | Difficulty-per-plasma Change |
| --- | --- | --- | --- | --- |
| 1 | >=50% | 100% | 1000 | 0% |
| 2 | >=50% | 100% | 1500 | 50% |
| 3 | >=50% | 100% | 2250 | 50% |
| 4 | >=50% | 100% | 3375 | 50% |
| 5 | >=50% | 100% | 5062 | 50% |

The example algorithm’s output when PoW plasma is 10% of the momentum’s plasma and the momentum is equal to or over half full:

| Momentum Height | Momentum Fullness | PoW Plasma | Difficulty-per-plasma | Difficulty-per-plasma Change |
| --- | --- | --- | --- | --- |
| 1 | >=50% | 10% | 1000 | 0% |
| 2 | >=50% | 10% | 937 | -6.3% |
| 3 | >=50% | 10% | 878 | -6.3% |
| 4 | >=50% | 10% | 823 | -6.3% |
| 5 | >=50% | 10% | 771 | -6.3% |

With this algorithm, even a user with a significant amount of computing power would not be able to occupy over 20% of momentum space indefinitely.
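Below is a rough sketch of a readjustment rule that is consistent with the tables above. The 20% target when momentums are at least half full, the 100% target otherwise, and the 50% maximum step come from the example; the exact interpolation formula is my assumption.

```go
// nextDifficultyPerPlasma readjusts the PoW difficulty-per-plasma value based
// on how large a share of the previous momentum's plasma came from PoW.
// momentumFullness and powPlasmaShare are fractions in [0, 1].
func nextDifficultyPerPlasma(prev, momentumFullness, powPlasmaShare float64) float64 {
	const (
		powTargetCongested = 0.2 // target PoW plasma share when the momentum is >= half full
		maxStep            = 0.5 // 50% maximum change per momentum
	)
	// Below half fullness, PoW plasma may fill the whole momentum without penalty.
	target := 1.0
	if momentumFullness >= 0.5 {
		target = powTargetCongested
	}
	// Scale the step by how far the PoW plasma share is from its target.
	change := maxStep * (powPlasmaShare - target) / (1.0 - powTargetCongested)
	next := prev * (1.0 + change)
	if next < 1 {
		next = 1
	}
	return next
}
```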

Fused Plasma anti-spam

Currently, whenever fused plasma is used to send a transaction, that plasma gets recharged once the transaction is confirmed. This means that, if transactions are prioritized based on plasma, a malicious user with a significant amount of QSR could easily outbid others for inclusion in a momentum.

This problem is relevant since we will probably have to continue to have a conservative network max throughput, even with dynamic plasma, at least initially. My view is that we don’t really have any data to justify raising this throughput limit. Raising the throughput limit is generally a centralizing force, so we would need to do benchmarking to find a throughput limit that can be handled by regular consumer hardware. We want users to be able to run their own nodes.

To work around this problem, we could introduce a dynamically adjusted plasma recharge rate. Instead of only requiring one confirmation to recharge all fused plasma, the plasma would be recharged at a per confirmation recharge rate.

This will act as a counter force to the temptation of excessively committing plasma to a transaction. The more plasma committed, the longer it will take to get it recharged, limiting an account’s ability to occupy momentum space and freeing up bandwidth for other users. This recharge rate would be dependent on the current network utilization rate.

Example of the recharge rate algorithm (recharge rate = QSR/confirmation):

| Momentum Height | Momentum Fullness | Momentum Recharge Rate | Target Recharge Rate | Recharge Rate Change |
| --- | --- | --- | --- | --- |
| 1 | 0% | 100 | 100 | 0% |
| 2 | 100% | 50 | 0.01 | -50% |
| 3 | 100% | 25 | 0.01 | -50% |
| 4 | 100% | 12.5 | 0.01 | -50% |
| 5 | 100% | 6.25 | 0.01 | -50% |
| 6 | 100% | 3.12 | 0.01 | -50% |
| 7 | 100% | 1.56 | 0.01 | -50% |
| 8 | 100% | 0.78 | 0.01 | -50% |
| 9 | 100% | 0.39 | 0.01 | -50% |
| 10 | 100% | 0.2 | 0.01 | -50% |
| 11 | 100% | 0.1 | 0.01 | -50% |
| 12 | 100% | 0.05 | 0.01 | -50% |
| 13 | 100% | 0.02 | 0.01 | -50% |
| 14 | 100% | 0.01 | 0.01 | -50% |

Another example:

| Momentum Height | Momentum Fullness | Momentum Recharge Rate | Target Recharge Rate | Recharge Rate Change |
| --- | --- | --- | --- | --- |
| 1 | 100% | 0.01 | 0.1 | 0% |
| 2 | 50% | 0.02 | 1 | 100% |
| 3 | 50% | 0.04 | 1 | 100% |
| 4 | 50% | 0.08 | 1 | 100% |
| 5 | 50% | 0.16 | 1 | 100% |
| 6 | 50% | 0.32 | 1 | 100% |
| 7 | 50% | 0.64 | 1 | 100% |
| 8 | 50% | 1 | 1 | 56% |

From these examples we can see that every momentum has its own plasma recharge rate depending on how full the momentum is and what the previous momentum’s recharge rate was. The account chains that have an account block included in a given momentum will be subject to the plasma recharge rate of that momentum.
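In code, the per momentum readjustment could look roughly like the sketch below. It only captures the shape of the example tables: the rate halves per momentum when it is above its target and at most doubles when it is below it, and the target itself would be derived from momentum fullness (the fuller the momentum, the lower the target). The step sizes and passing the target in explicitly are assumptions.

```go
// nextRechargeRate moves the per confirmation recharge rate towards its
// target, changing by at most -50% or +100% per momentum (assumed step sizes).
// targetRate would be derived from the previous momentum's fullness: a full
// momentum maps to a very low target, an empty one to a high target.
func nextRechargeRate(prevRate, targetRate float64) float64 {
	switch {
	case prevRate > targetRate:
		next := prevRate * 0.5
		if next < targetRate {
			next = targetRate
		}
		return next
	case prevRate < targetRate:
		next := prevRate * 2.0
		if next > targetRate {
			next = targetRate
		}
		return next
	default:
		return prevRate
	}
}
```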

This introduces a “real” penalty for using fused plasma, since when there is high demand for momentum space, it will take a long time for plasma to recharge.

For example: If account chain A included an account block in momentum 5, and committed 10 QSR as fused plasma to that account block, it would take 63 confirmations (~10 minutes) to fully recharge the fused plasma.

Here is a pseudo-code example of how the committed, or “confined” plasma, is calculated so that the current available fused plasma for an account chain can be calculated.

Note: In order to not overload terms used in the existing go-zenon codebase, I’m using the term “confined” to mean the plasma that is committed to account blocks and that cannot be used until it has been recharged.

Fused plasma recharge rate considerations

One downside of this recharge rate idea is that it limits addresses to a low maximum sustained TPS. If the recharge rate is 1 QSR/confirmation when momentums are at target fullness, then every address is limited to a sustained throughput of ~0.01 TPS (assuming no PoW is used): a basic transaction confines roughly 10 QSR worth of fused plasma, so it would take about 10 confirmations (~100 seconds) before the address can transact again. This also assumes the plasma base price is only 1.

For the vast majority of users this would probably not be an issue, but businesses and more complex use cases may well need a higher sustained TPS.

To alleviate this problem, we could also introduce a recharge rate multiplier that would linearly increase the recharge rate of accounts that have more than some threshold amount of QSR fused. This would probably have to be accompanied by some kind of account-level limit on how many transactions can be sent within n momentums.

If the network max throughput can be increased, then the recharge rates could also be increased.

Attack scenarios

Below are a few examples of how the proposed ideas for dynamic plasma would help mitigate the effects of spam attacks.

Scenario 1: Fused Plasma spam

I simulated the following attack scenario with the example code, where the attacker owns 1M QSR and fuses it as plasma evenly across 20,000 addresses (50 QSR per address). His goal is to block users that have less than 50 QSR fused from using the network.

Attack result
------
Able to block users with less than 50 fused QSR for 29.17 minutes
Attack can be repeated in 227.27 minutes

From the result we can see that users with less than 50 QSR would basically have to wait a maximum of ~30 minutes for their transaction to be included in a momentum. They would also be subject to a very slow plasma recharge rate, unless they wait for network demand to decrease. The attacker could repeat the attack in ~3 hours and 47 minutes.

This is of course a highly simplified simulation since there would most likely be users with more than 50 QSR who would still be able to use the network during the attack, which would result in a longer waiting time for someone with only 50 QSR. On the other hand, if the attacker is only using fused QSR, then the PoW difficulty will decrease, making it easier for users to generate plasma via PoW.

Scenario 2: Fused Plasma spam

We can modify the first scenario a bit and say the attacker’s goal is to block users that have less than 1000 QSR fused from using the network.

Attack result
------
Able to block users with less than 1000 fused QSR for 0.67 minutes
Attack can be repeated in 430.50 minutes

We can see that in this scenario the attacker would only be able to block users with less than 1000 QSR for ~40 seconds and would have to wait ~7 hours and 10 minutes to repeat the attack.

Scenario 3: Pre-computed PoW attack

This needs more consideration, but I’m not sure if the dynamic difficulty adjustment is enough to mitigate this attack vector: someone pre-computing a massive amount of PoW over days, months, or years, and then using it to spam the network. Should PoW expire?

What about burning ZNN to generate plasma

The adjustment mechanisms presented in this post can be extended so that new ways of generating plasma can be introduced, if it makes sense from a tokenomic, marketing, and technical standpoint.

Hardcoded targets for different “plasma resources” can be introduced, or the plasma resources can compete directly with each other, letting market forces dictate at what rate different resources are used.

For example, we could add novel ways of generating plasma, such as having a burn of some amount of Bitcoin generate some amount of plasma. This may not actually make sense, but the point is that we could potentially unlock new use cases like this.

Remarks

This post became longer than I initially intended and there is a lot to consider, so I appreciate anyone taking the time to read through it. Since there are many angles to consider when it comes to a dynamic plasma solution, I may have failed to consider something important, so feedback and critique are welcome.

14 Likes

Amazing. Literally amazing.

2 Likes

This is really thoughtful @vilkris. Thank you very much for working on this. I will read a few times and prepare some questions.

1 Like

Definitely need more time to digest it.
Thanks for putting your thoughts in a well formatted / readable manner as always.

My impressions after an initial appetizer read through:

I like the reference to EIP-1559. We will share a lot of the motivations they cite, if not now then later. But we need to think hard about what makes sense for a feeless network. The incentives/psychology around bidding wars are a little different.

Per momentum plasma recharge seems to introduce a lot of complexity.
What kind of data structures/indices will be needed to support it?
What’s the worst case for additional computation needed per momentum on nodes?

PoW to Plasma method, nothing controversial here. Main parameters would be the target %, adjustment period, and adjustment rate/curve.

1 Like

Thank you very much for your thoughtful and well-written post about dynamic plasma.

I wasn’t a fan of dedicated bandwidth/lanes, and I’m glad to see your ideas align with the philosophy of indiscriminate plasma consumption instead of segregating transactions into different queues.

I also appreciate the modeling work you’ve shared with us; this can be used to conduct further experimentation if we choose to adjust the initial variables.

re: increasing throughput limit
I agree that the current throughput is sufficient at this time, in terms of AccountBlocks/Momentum and data/AccountBlock.


I have similar concerns as George.
Will this recharge rate variable be stored in account chains, or indexed elsewhere like the plasma contract?
If an account has a recharge rate of 0.01 and the network then decongests, can they claim a more optimal recharge rate by submitting another transaction?
Or will their initial transaction’s plasma continue to emit at the same rate?

Similar thought here: where would we track transactions-per-address-per-period?

I’m unsure if we should implement global (per momentum) recharge rates.
If we’re considering tracking rates per address, maybe we can only target accounts that are significantly contributing to congestion.
I expect this to be more complex but it could lead to a better experience for low-frequency, low-fuse addresses.


For pre-computed PoW, would the following be feasible?
Let’s say nodes only accept PoW nonces that were generated within the last N momentums.

  • Nodes provide a difficulty to client
  • Client must generate a PoW nonce based on the difficulty and a recent momentum N’s hash
  • Client signs their AccountBlock and broadcasts it
  • Nodes must verify that the PoW nonce meets the difficulty requirement and could only have been generated with N momentum’s hash, potentially having to iteratively calculate until a match is found or all options are exhausted.

For this solution, I’m concerned about the validation load for nodes.


Some other considerations that I mentioned in a previous post:

  1. fused QSR limitations
  2. exceptions, such as guaranteed tx priority for certain tx types
    • “Account blocks sent by embedded contracts have endless plasma”
    • Can we also include calls to Plasma.fuse()? This should improve the onboarding experience.
  3. RBF, but with more plasma
1 Like

Can someone please elaborate on what Vilkris means by ‘minimize the incentives for pillars to alter the ordering of transactions’? Are we talking about future nasty stuff like MEV? We don’t have zApps and it seems like the idea is to keep the base layer lean, so there shouldn’t be much to be gained. What are they going to do, prioritize an AZ vote or a collection of rewards? Additionally, wouldn’t this be a non-issue once pow-links are implemented?

Finally, why wouldn’t we be able to force transaction ordering based on plasma with the same spork that implements dynamic plasma?

1 Like

Currently, calculating an account’s available plasma requires 3 database reads. With the recharge rate, it would require 6 database reads, without increasing the number of indexes. The current “chain plasma” database index would be replaced with a “confined plasma” index.

This is meant to replace/improve the existing concept of account chain “committed” plasma.

With the pseudo code example I provided, they could claim a new recharge rate by submitting a new transaction. This might not be something we would want, so I need to think about it a bit.

I don’t know yet. Maybe in the database.

It has tradeoffs, but I think globally rate limiting fused plasma usage is a reasonable way to distribute the limited network bandwidth amongst different addresses.

I’m not sure how this could be done in a fair manner. A spammer can disguise himself as a “regular user” with endless on-chain identities. As in example attack scenario 1, the attacker is using 20,000 addresses and can send transactions at a very low per-address rate but at a very high combined rate. Ultimately, what the recharge rate does is attempt to distribute bandwidth to as many addresses as possible when there is high network usage.

With the recharge rate, I think the experience would be relatively good for low-frequency addresses in a congested network. If you have enough fused plasma to send a few consecutive transactions at a time, it won’t matter that it takes hours to recharge, as long as you only need to send a few transactions a day.

Can this be done? If so, maybe we could add the account block’s momentumAcknowledged.hash to the PoW hash and require the momentumAcknowledged.height property to be at max N distance from the frontier height. It should have a negligible effect on computational load to validate it.
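As a rough sketch, assuming N is expressed as a maximum height distance between momentumAcknowledged and the frontier (the constant and names are illustrative):

```go
// powAnchorIsFresh rejects PoW anchored to a momentum that is more than
// maxPowAnchorDistance momentums behind the frontier.
const maxPowAnchorDistance = 360 // e.g. roughly one hour of momentums (assumption)

func powAnchorIsFresh(momentumAcknowledgedHeight, frontierHeight uint64) bool {
	if momentumAcknowledgedHeight > frontierHeight {
		return false // cannot acknowledge a momentum ahead of the frontier
	}
	return frontierHeight-momentumAcknowledgedHeight <= maxPowAnchorDistance
}
```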

I don’t think the ideas proposed in my post would require any limitations on fused QSR per account.

How to prevent abuse?

Yes, this needs to be considered.

1 Like

This was just a general comment that we should keep in mind that pillars can decide the ordering of transactions as they wish (by customizing go-zenon). So an initial dynamic plasma solution should take this into account and ideally not introduce new incentives to pillars to alter the ordering. The ideas in my post should not introduce any new incentives in that regard.

In the future we can implement ways to enforce a certain ordering of transactions.

2 Likes

Why 50% and not 75% or some other number less than 100% but greater than 50%? Will a higher percent allow for more block utilization without adjustments?

We need to consider the possibility that someone will launch a game on NoM and they will need to interact more quickly. Maybe we decide that happens at the L2 only.

Maybe. What if there is a legit mining purpose for degens to create PoW plasma farms to be rented out in the future? Something to consider. In fact, can I do that today and store PoW?

From here:

Why limit = target * 2? Why not 4? Or 8?
It could easily be higher than 2. The higher the limit/target ratio the greater the fee market efficiency benefits of EIP 1559. It depends on how severe the short term spikes are that we are willing to accept; 2x is fairly conservative. We could even launch EIP 1559 with a limit/target of 2 to start off, and increase it over time as we see the network functioning okay even under short-term spikes.

So Ethereum’s reasoning for using 50% is that the higher the target, the smaller the fee market efficiency benefits. I’m also assuming they concluded that having 50% of momentum space as a buffer zone to handle short-term spikes in demand is sufficient. Of course, we don’t have any real data to analyze what the optimal target would be for our network, so 50% is a neutral choice.

This is why I think introducing a recharge rate multiplier that is based on the account’s total fused plasma amount is needed. You can get a better sustained TPS but you’ll need to buy more $QSR. If our network max throughput is around 10TPS, then how much of that TPS should a single entity be able to consume at a sustained rate? Use cases that require high sustained TPS aren’t really feasible on our base layer.

Maybe the PoW expiration shouldn’t be too extreme. For example, if it was 24 hours, then it would still allow for some use cases but reduce the effectiveness of a pre-computed PoW attack.

PoW can be pre-computed and stored as a hash, but the PoW is only valid for a specific address, and only for one transaction.

3 Likes

Amazing work @vilkris. Very well written and thought out process. It even has code samples. I’ve been following the discussions and still trying to wrap my head around it. Will study your work.

6 Likes

Very nice read @vilkris, I’ve enjoyed it!

I appreciate you bringing the hybrid approach I’m proposing into the discussion.

Also I would like to bring into the discussion the following:

Goal 4: Storage costs

If we are targeting a minimal L1, we will need an efficient way to store rollups/data for L2s. Long term storage for blockchains is still an open research question.

One of the best strategies to prevent PoW “hoarding” is to make it dependent on memory that is not readily accessible and unpredictable (e.g. on-chain data). Modern CPU-friendly algos like RandomX employ many other different strategies to prevent ASICs from being developed in the first place.

The whole objective of PoW is to be hard to produce and easy to verify using CheckPoWNonce. Nodes must read the on-chain difficulty instead of reading the hardcoded PoWDifficultyPerPlasma = 1500.

In order to prevent the pre-computation, I suggest we rely on a weighted average of MomentumsPerHour to create a base difficulty for the PoW and a validity timestamp.

// ComputeHash hashes every account block field, including MomentumAcknowledged.
func (ab *AccountBlock) ComputeHash() types.Hash {
	return types.NewHash(common.JoinBytes(
		common.Uint64ToBytes(ab.Version),
		common.Uint64ToBytes(ab.ChainIdentifier),
		common.Uint64ToBytes(ab.BlockType),
		ab.PreviousHash.Bytes(),
		common.Uint64ToBytes(ab.Height),
		ab.MomentumAcknowledged.Bytes(),
		ab.Address.Bytes(),
		ab.ToAddress.Bytes(),
		common.BigIntToBytes(ab.Amount),
		ab.TokenStandard.Bytes(),
		ab.FromBlockHash.Bytes(),
		ab.DescendantBlocksHash().Bytes(),
		types.NewHash(ab.Data).Bytes(),
		common.Uint64ToBytes(ab.FusedPlasma),
		common.Uint64ToBytes(ab.Difficulty),
		ab.Nonce.Data[:],
	))
}

More specifically, we can use the TimestampUnix of the MomentumAcknowledged and check if TimestampUnix + 3600 > currentTimestamp.
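A minimal sketch of that check (the one-hour window mirrors the 3600 seconds above; names are illustrative):

```go
// powWithinValidityWindow accepts the PoW only while the acknowledged momentum
// is less than an hour old.
func powWithinValidityWindow(momentumAcknowledgedTimestampUnix, currentTimestampUnix int64) bool {
	const powValidityWindowSeconds = 3600
	return momentumAcknowledgedTimestampUnix+powValidityWindowSeconds > currentTimestampUnix
}
```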

I’ve also proposed rotating between 3 distinct PoW algos: SHA-3, RandomX and Equix, depending on an external piece of information (e.g. the hash of momentum frontierMomentum - 31).

package main

import (
	"fmt"
	"log"
	"math/big"
)

// SHA-3 = 0, RandomX = 1, Equix = 2
// getAlgo maps a 256-bit hash value to one of the three PoW algorithms by
// splitting the hash space into three equally sized buckets.
func getAlgo(hashInt *big.Int) int {
	const numAlgos = 3
	maxVal := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	algoSize := new(big.Int).Div(maxVal, big.NewInt(numAlgos))
	algo := new(big.Int).Div(hashInt, algoSize).Int64()
	return int(algo)
}

func main() {
	hash := "0000009f4b552.." // truncated example hash
	hashInt := new(big.Int)
	if _, ok := hashInt.SetString(hash, 16); !ok {
		log.Fatalf("Invalid hash: %s", hash)
	}

	algo := getAlgo(hashInt)
	fmt.Printf("Hash %s corresponds to %d\n", hash, algo)
}

This program will categorize any given hash into 3 buckets (0, 1 and 2, corresponding to SHA-3, RandomX and Equix) based on its integer value. The categorization is consistent for any hash such that the same hash will fall into the same “bucket”. Due to the nature of cryptographic hashes, this distribution should be fairly even, assuming the hashes are uniformly distributed (a fundamental property of cryptographic hash functions).

Nodes already have this information and only need to switch to the appropriate CheckPoWNonce implementation. Clients also have access to this info in order to compute the proof of work.

4 Likes

I think the hybrid approach you talked about has reasonable arguments in favor of it and that’s why I wanted to leave the door open for that possibility. But I think burning ZNN also introduces potential problems that we will really need to think through. I’m also trying to limit the scope of the initial dynamic plasma implementation so that we don’t introduce too many new concepts and changes.

Could you elaborate a bit more how this goal ties into dynamic plasma, and is this goal something that should be solved with plasma?

As you probably know, additional tx data has a cost of 68 plasma/byte currently. Ethereum lowered their data cost from 68 gas/byte to 16 gas/byte, to better support the use case you mentioned: EIP-2028. Of course just lowering the cost doesn’t exactly contribute to the goal of a minimal L1.

I’m not sure I understand what this means. Does this differ from the ideas presented in my earlier posts?

If RandomX is one of the algorithms, then an attacker will need general purpose CPUs to do a PoW attack. I’d imagine the performance gain is roughly the same for all the algorithms when using a general purpose CPU. So how does adding more algorithms increase the attack cost, since there isn’t really a need for the attacker to have different hardware for the different algorithms?

If we start implementing a solution, it should be the best solution we can come up with.

We can stick with it for the moment.

We can introduce a validity timestamp such that plasma generated via PoW will expire if not used.

The idea is to make the attack prohibitively expensive for an attacker. You can rent hash power from Nicehash or similar platforms. If we only use RandomX, we are exposed.

Exactly: rotating between multiple PoW algos decreases the attack surface and it doesn’t impact the end users at all.

This is why every project should use their own configuration to mitigate this attack, as stated in the RandomX docs:

We recommend each project using RandomX to select a unique configuration to prevent network attacks from hashpower rental services.

Rental services are renting the Monero RandomX variant, which would be useless for our network.

The incentives to rent out PoW in our network are completely different from Monero.

I agree. Just highlighting that even RandomX is Nicehashable :slight_smile:

Another argument behind rotating multiple PoW algos is that we already have a working SHA-3 implementation that is ASIC friendly and it works well enough.

The only con is that clients will need to support 3 different PoW implementations, which adds some complexity.

But then again, some PoW algos are simpler than others to implement and support. I still think the best way forward is to keep SHA-3 and add RandomX plus another one that is simple to implement.

Sure, but so is a PoW algorithm that consists of multiple hash functions, like x11 or x13. If a market were to form for PoW on our network, the sellers would just sell hashpower that computes the triple hash PoW algorithm using a general purpose CPU.

The PoW algos aren’t chained together, they’re independent.

I’m thinking of creating a decentralized Bitcoin mining pool and implementing SHA-256 to merge-mine Bitcoin while computing the PoW.

Food for thought:

  1. P2Pool
  2. Pros and cons
  3. Mining decentralization
  4. P2P Braidpool and specs
2 Likes

I’ve spent some more time thinking about the practical feasibility of the ideas presented in the starting post and I’ve also considered some of the questions that were raised in this thread.

The formulas related to the dynamic plasma ideas are presented here. I hope this format makes it easier to understand the underlying principles. I’ll be using the terminology introduced in that document.

Plasma recharge rate over N account blocks

In order to prevent the situation where an account can claim a more optimal recharge rate by submitting a new account block when the network decongests, I’ve modified the recharge rate algorithm slightly from the initial version.

The modified idea is that a confirmation target is calculated once an account block is included in a momentum. The confirmation target is the amount of confirmations the frontier account block needs in order to recharge all of the account’s confined plasma.

Confined plasma is the plasma that is “locked” in account blocks and it has to get recharged before it can be used again.

Example:

| Momentum Height | Block Height | Block Fused Plasma | Confined Plasma | Recharge Rate (plasma/confirmation) | Confirmation Target |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 21000 | 21000 | 21 | 1000 |
| 100 | 2 | 21000 | 39900 | 2100 | 900+10=910 |
| 1010 | 3 | 21000 | 21000 | 2100 | 10 |

In the second account block’s row, we can see that 100 confirmations have elapsed since the first block was included in a momentum. These confirmations are subtracted from the remaining confirmation target, and the second account block’s own confirmation target is added to it. This ensures that the user has to wait for all of the confirmation targets from previous account blocks to be surpassed in order to recharge all of the plasma.
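A rough sketch of this bookkeeping is shown below, using the same quantities as the table. Field and variable names are illustrative, and the exact confirmation counting convention is an assumption.

```go
// AccountPlasmaState is the per account state needed for the recharge logic.
type AccountPlasmaState struct {
	ConfinedPlasma     uint64 // plasma locked in account blocks, not yet recharged
	ConfirmationTarget uint64 // confirmations needed to recharge all confined plasma
	LastIncludedHeight uint64 // momentum height of the previous frontier block
}

// onBlockIncluded updates the state when a new account block is included in a
// momentum. rechargeRate is the account's current recharge rate in plasma per
// confirmation (the base rate multiplied by the account's multiplier).
func onBlockIncluded(s AccountPlasmaState, momentumHeight, blockFusedPlasma, rechargeRate uint64) AccountPlasmaState {
	// Confirmations elapsed since the previous frontier block was included
	// (whether the inclusion momentum itself counts is a convention detail).
	elapsed := momentumHeight - s.LastIncludedHeight

	// Recharge what the elapsed confirmations allow and carry over the rest.
	var remainingTarget, remainingConfined uint64
	if s.ConfirmationTarget > elapsed {
		remainingTarget = s.ConfirmationTarget - elapsed
		recharged := elapsed * (s.ConfinedPlasma / s.ConfirmationTarget)
		remainingConfined = s.ConfinedPlasma - recharged
	}

	// The new block adds its own confined plasma and confirmation target.
	return AccountPlasmaState{
		ConfinedPlasma:     remainingConfined + blockFusedPlasma,
		ConfirmationTarget: remainingTarget + blockFusedPlasma/rechargeRate,
		LastIncludedHeight: momentumHeight,
	}
}
```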

This also means that every account has its own per confirmation recharge rate that depends on three things:

  1. What the network’s base recharge rate was when an account block was included in a momentum.

  2. What the previous account block’s confirmation target and confined plasma amount was.

  3. How much total fused plasma the account has.

Sustained TPS of an account

In order to allow accounts to have a higher sustained TPS than what the network’s base recharge rate would allow, we could use a recharge rate multiplier that depends on the total fused plasma of an account. The more fused plasma an account has, the bigger the recharge rate multiplier.

Example:

| Account | QSR Fused as Plasma | Current Base Recharge Rate | Multiplier | Account Recharge Rate |
| --- | --- | --- | --- | --- |
| 1 | 50 | 2100 | 0.5x | 1050 |
| 2 | 100 | 2100 | 1x | 2100 |
| 3 | 8000 | 2100 | 80x | 168000 |

From this we can calculate that for an address with 10,000 QSR fused as plasma, the sustained TPS for the account would be 1 TPS when the base recharge rate is 2,100 (meaning that the momentum is exactly half full). If momentum space demand increases, they will not be able to sustain that TPS unless they fuse more plasma, but if our total network max TPS is only 10-20 TPS, then I think these are reasonable numbers. For comparison, it would be very expensive for a user to sustain 1 TPS on other networks such as Bitcoin and Ethereum, which have throughput capabilities similar to our network.

From the example above we can also calculate that the absolute maximum amount of confirmations an address needs to recharge all of its fused plasma is 10,000 confirmations (=27.78 hours), regardless of how much fused plasma the account has. This should be acceptable from a UX point of view.
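A minimal sketch of the linear multiplier implied by the table above, assuming 100 QSR fused corresponds to a 1x multiplier:

```go
// accountRechargeRate scales the network's base recharge rate linearly with
// the account's fused QSR. The 100 QSR reference point for a 1x multiplier is
// an assumption taken from the example table.
func accountRechargeRate(baseRechargeRate, fusedQsr uint64) uint64 {
	const referenceFusedQsr = 100
	return baseRechargeRate * fusedQsr / referenceFusedQsr
}
```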

Transactions per period

I initially thought we would need a transactions-per-address-per-period cap, but I don’t think this is needed anymore, since the recharge rate will effectively rate control an account in a similar manner.

Implementation complexity introduced by the dynamic plasma recharge rate

After looking at how to actually implement the dynamic recharge rate, I’ve come to the conclusion that it would need two new database indexes: one for the confirmation target and one for the confined plasma amount.

These values are computed and stored in the momentum store when the transaction gets included in a momentum. The computations are simple and should have a negligible effect overall. The database has to be read 4 times and written to 2 times. For reference, currently when an account block is stored into the momentum store, it takes 5-7 reads and 3-4 writes (depending on the transaction’s type).

When querying an account’s available fused plasma, it requires 8 database reads. In the current implementation this operation requires 3 database reads.

Looking at some (old) benchmarks for LevelDB, it does not seem like the plasma recharge rate would be a bottleneck, even with a significantly higher amount of network throughput.

I’m open to suggestions if someone has a better understanding or ideas for how to determine “how much additional complexity is too much”.

RBF

I think the already existing RBF implementation is good enough. A transaction that is in the mempool can be replaced by committing a greater amount of plasma to it. We will need to think about client support for this once network utilization starts picking up.

Full vs. half-full momentums

In the initial post I presented the idea for using half-full momentums, mainly as a way to improve end-user UX by making the estimation of transaction plasma requirements easier. I think most of the reasoning Ethereum has stated for using half-full blocks applies to our network as well.

If we were to use full blocks (like on Bitcoin, for example), we would not have an algorithmically adjusted base price for plasma (as presented in the first post). Transactions would instead be included in momentums purely based on the plasma price used for the transaction, and the estimation of what a sufficient plasma price is would have to be done off chain, leading to additional complexity in clients.

Using full momentums would also mean that the base recharge rate for fused plasma would have to be calculated in a different way. For example, the base recharge rate could still be based on how full the previous momentum was. But this might lead to very slow overall recharge rates once momentum space is in high demand and when most, if not all, momentums are full.

10 Likes

I second this. I always prefer data-driven decisions over ideology.