Ideas for dynamic plasma

Bravo, sir! Your presentation is incredibly thorough, meticulously thought through, and well-presented, particularly the pseudo code on Git. I’m still digesting everything and learning myself, so please forgive my ignorance. I am a bit perplexed by the assumption that the recharge rate and sustained TPS of an account should be premised on the network only operating at 10-20 TPS.

I don’t understand why it should continue to be so low. If we employ a block lattice structure like Nano, is it not reasonable to assume that we should be able to achieve a similar throughput? Nano, for instance, claims up to 7000 TPS.

Every single indicator points towards the network being designed for high throughput. While I find your idea for multipliers based on previous momentums and recharge cooldowns truly awesome (I love the pseudo code and equations you posted), I wonder if you could consider a scenario where the network is unthrottled?

In this scenario, let’s take the requirements section from the Zenon GitHub wiki and mandate nodes to have:

  • CPU: At least 4 cores
  • RAM: At least 4 GB
  • Storage: At least 40 GB of free space
  • Network: More than 100 Mbps of dedicated bandwidth
1 Like

Keeping up with the Nano similarities, I’ve been researching their system for insights we could apply.

Interestingly, we both utilize a minimal “Proof of Work” (PoW) and incorporate the concept of “lanes.” In their framework, these lanes are divided into various buckets based on transaction sizes. They also take into account the age of accounts, discouraging attackers from utilizing multiple new accounts.

While I’m not fully versed in the technical specifics of their approach, I believe Vilkris’ suggestion for a plasma recharge rate, leveraging our plasma fee abstraction, presents a more refined strategy than their complex bucket divisions.

There’s potential to learn from their method of distinguishing between “regular” and “spammy” transactions.

Below are some resources that could be informative:

  • Buckets
  • Dynamic PoW
  • Post Update Stress Test Results

1 Like

Increasing network throughput has been outside the scope of the ideas I’ve presented.

The throughput of the network isn’t really relevant to the underlying mechanisms behind the proposed ideas. If we can increase network throughput then the recharge rates and sustained TPS could be increased as well. The constants used in the algorithms can always be reconsidered with future protocol advancements.

At 7000 TPS the recharge rates would be significantly faster.

We don’t really have any benchmarks to help us make an educated guess on what kind of throughput the network can sustain with the current implementation. I think we would need to start with that if we want to consider increasing network throughput. That said, increasing throughput without actual end-user demand for it isn’t that high of a priority in my opinion.

I also did some research on Nano a while ago. They have some similarities and their bucket approach probably has more or less the same end result as the recharge rate.

1 Like

In my understanding, the transaction cap was put in place because there was no dynamic plasma. I thought DP was meant to allow scaling while adding protection?

If you implement DP and keep the transaction cap, then what’s the point of DP? The network functions fine now with the transaction cap in place.

I understand that you can change the values; I just thought it could have advantages to design it with scaling in mind. That changes the calculus of the recharge rate, the cost of plasma, the rate of plasma to permissible transactions, and so on. Additionally, the cap could be removed when DP is implemented.

For argument’s sake, if the network capacity is 5000 tps but we implement it for 10 tps, you are making it 500x more expensive for users to use plasma and wait for recharges. And maybe more importantly, you are going to make the PoW very expensive too.

Why would a user buy lots of QSR to fuse into plasma and deal with long recharge times when they could just pay fractions of a cent for a transaction on another chain?

Furthermore, I disagree that we should only concern ourselves with transaction throughput once there is demand. In simplified terms, there are two main ways users become aware of and use a network: marketing and use cases.

In a competitive market where tps is one of the main metrics, I think it goes without saying that a next-gen blockchain with 10 tps is a bit of a troll.

And in terms of use cases, having 500x more expensive plasma is not appealing to anybody wanting to transact more than once every few minutes.

Besides, being on the back foot and having to correct something once demand sets in is always a much worse option than accommodating it beforehand, if at all possible.

Maybe that involves benchmarking nodes and/or developing DP that can adjust to a variable transaction processing capacity of the network.

To me, the idea of the zero-transaction-fee paradigm is to allow users to transact freely and cheaply while managing adequate network protection and equal access.

4 Likes

I apologize if I sound argumentative.
I just want to reiterate that the mechanics you have presented are really impressive and awesome.

1 Like

Aside from the technical side, an uncapped network gives the marketing team more selling points, and it’ll be easier to onboard projects. Saying we might one day have more than 10 tps isn’t the same as saying that right now they can build with 20k tps in mind.

1 Like

This is the first thing I explained in the initial post. My definition of dynamic plasma doesn’t include increasing network throughput; it’s about the goals I stated: network gas, tx prioritization, and anti-spam. These are needed regardless of what our network throughput is.

Higher TPS would result in lower plasma requirements, but to achieve high throughput we need to lay a foundation for it. We need improvements to IBD (initial block download), we need pruned nodes, and we have to consider what kind of hardware/storage/bandwidth requirements we want to target, since in general higher TPS is a centralizing force and storage requirements can increase fast.

What issues do you see with the ideas I’ve presented, with regard to scaling?

At the current network utilization rate, the plasma requirements/PoW requirements would stay roughly the same and the recharge rates would be fast. Ideally we will be able to increase network throughput alongside the adoption of our network.

I don’t think anyone disagrees. We have limited dev resources though, so we need to prioritize and take things one step at a time.

4 Likes

I’m currently quite pressed for time, so this post will be brief.

I had a chance to look at the formulas for Dynamic Plasma, particularly focusing on the Plasma Base Price Formula.

I believe the issue is that, for scaling, the Plasma table needs to be removed, and the definition of plasma needs to correlate with network resources. Having a plasma table with relational constants is very inflexible.

We should define plasma, for instance, like satoshi/vbyte for storage and wei per unit of computation. This way, we can determine the real cost of a transaction and represent it in plasma. Then, you might be able to simplify the formulas a bit. I haven’t given it as much thought as you, so perhaps the formula works as is with a plasma/byte-esque definition.
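
To make the idea concrete, here is a toy sketch of what pricing directly in resource units could look like. All names and rates below are made up for illustration; they are not from any existing proposal or from go-zenon.

package main

import "fmt"

// Toy illustration of pricing plasma per resource unit.
// The rates here are invented for the example only.
const (
	plasmaPerByte        = 64  // storage: analogous to satoshi/vbyte
	plasmaPerComputeUnit = 100 // computation: analogous to wei per gas unit
)

// txPlasma prices a transaction directly from the resources it consumes.
func txPlasma(storageBytes, computeUnits uint64) uint64 {
	return storageBytes*plasmaPerByte + computeUnits*plasmaPerComputeUnit
}

func main() {
	fmt.Println(txPlasma(200, 50)) // 200*64 + 50*100 = 17800
}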

I wonder about the “c” value: the base price change denominator. How is this one determined?

Regarding a plasma ceiling per momentum (I assume that is what your per-momentum plasma target is for?) and node requirements, my vote would be to go with the recommended requirements the Alphanet developers outlined in the official wiki. We could derive a plasma ceiling from there.

I am against the notion that a node should be able to run on a potato. This philosophy makes sense for a low-throughput network like Bitcoin that aims for decentralization of nodes.
Highly transactional decentralized networks, by their nature, have a computational cost. For a comparison, look at Nano and their distribution of nodes, and compare their recommended specs to ours.
The Sentry (trusted) / Sentinel (trustless) paradigm makes our nodes much better than Nano’s.

You seem to have a good understanding of storage costs, database reads, and all that jazz. It would be awesome if you could define a plasma/byte equivalent, and we could move away from a plasma table.

2 Likes

First off, thanks for taking the time to look at the formulas. I appreciate it.

I’m not sure I understand what you mean by “plasma needs to correlate with network resources”, since I think that’s what it does already. As I explained in the initial post, one purpose of plasma is to represent computational resources, and that’s essentially the purpose of the plasma table. The values in the table represent the computational cost of the operation in question. The required amount of computational resources for a given operation should always be the same, so I don’t see why the values can’t be constants.

We only have embedded smart contracts on our base layer, meaning that we should be able to define the plasma requirements as known values. In Ethereum for example, the gas requirements for smart contract operations are calculated based on conversion rates of low-level-computer-operations-to-gas.

Both use hard-coded “tables”, but ours is highly simplified compared to Ethereum’s.

The Plasma Base Price formula is essentially the same algorithm that Ethereum uses for the same purpose (EIP-1559).

The plasma/resource conversions in our codebase mimic those of Ethereum. For extra data in account blocks we have the following definition: ABByteDataPlasma = 68, which means that every extra byte in an account block costs 68 plasma. The account block base plasma, which is 21,000 plasma, already covers the data/storage cost of a vanilla account block that has no extra data, which is again similar to Ethereum.

So based on this I think the requirement to represent the real cost of a transaction in plasma is already fulfilled in that sense.
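
As a minimal sketch of that conversion, using the values quoted above (the names and numbers come from this discussion, so treat them as illustrative rather than as the authoritative go-zenon definitions):

package main

import "fmt"

// Values as quoted in this thread.
const (
	AccountBlockBasePlasma = 21_000 // covers a vanilla account block with no extra data
	ABByteDataPlasma       = 68     // plasma per extra byte of account block data
)

// accountBlockPlasma returns the plasma required for an account block
// carrying dataLen bytes of extra data.
func accountBlockPlasma(dataLen uint64) uint64 {
	return AccountBlockBasePlasma + ABByteDataPlasma*dataLen
}

func main() {
	fmt.Println(accountBlockPlasma(0))   // 21000
	fmt.Println(accountBlockPlasma(100)) // 21000 + 6800 = 27800
}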

When targeting half-full momentums and when c=8, the maximum base price change between momentums is +/-12.5%. The bigger the c value, the smaller the maximum change between momentums. The reason I chose 8 is that it’s what Ethereum uses, and apparently it has worked well for them. Again, the Base Price formula is basically the same as Ethereum’s.

Also, based on the simulations I’ve done, it appears to provide a decent balance between spam mitigation and UX.
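
For anyone unfamiliar with EIP-1559, here is a rough sketch of that style of update with plasma in place of gas. The identifiers are illustrative only; the exact formula is defined in the initial post.

package main

import "fmt"

// Sketch of an EIP-1559-style base price update, with plasma in place of gas.
const baseChangeDenominator = 8 // the "c" value discussed above

// nextBasePrice moves the base price towards demand: at exactly the target
// base plasma the price is unchanged, a completely full momentum (2x target)
// raises it by 1/c = 12.5%, and an empty momentum lowers it by 12.5%.
func nextBasePrice(current, used, target uint64) uint64 {
	switch {
	case used == target:
		return current
	case used > target:
		return current + current*(used-target)/target/baseChangeDenominator
	default:
		return current - current*(target-used)/target/baseChangeDenominator
	}
}

func main() {
	fmt.Println(nextBasePrice(1000, 2000, 1000)) // 1125 (+12.5%)
	fmt.Println(nextBasePrice(1000, 0, 1000))    // 875  (-12.5%)
	fmt.Println(nextBasePrice(1000, 1000, 1000)) // 1000 (unchanged at the target)
}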

Yes, the per-momentum base plasma cap is a hard ceiling. Those recommended requirements are not enough to run a public node currently. To run a public node smoothly you need at least 32 GB of RAM and more than 4 cores on a dedicated server. Like I said in my previous post, we have issues with the node and our current bottleneck appears to be the P2P layer. In a local network the recommended requirements should be more than enough though.

These node issues are obviously something we need to sort out before we can start considering any significant increase in throughput limits.

The throughput limits can be increased later on, and this is why I previously said that in my opinion increasing throughput isn’t necessarily a high priority right now, taking into account that there is no demand for it. That doesn’t mean that the design decisions we make now shouldn’t carefully take into account how they affect performance and scalability.

6 Likes

That explains a lot; thank you for the detailed response. I also read the post about plasma and the plasma table on the other forum, which has helped me understand it much better.

The plasma table is used to look up the plasma used by a transaction, so it’s definitely necessary (…I’m a dope :laughing:).

When looking at go-zenon/vm/const/plasma.go, I can see the definition of the Plasma table. Interestingly, there is a distinction made for AlphanetPlasmaTable.

So, we have three types of embedded transactions:

  • EmbeddedSimple
  • EmbeddedWithdraw
  • EmbeddedDoubleWithdraw

The plasma cost of an EmbeddedSimple is calculated as:

EmbeddedSimplePlasma = 2.5 * AccountBlockBasePlasma

Essentially, each embedded type has its own multiplier on AccountBlockBasePlasma: 2.5, 3.5, and 4.5. I would assume that these multipliers need to become dynamic for Dynamic Plasma?

Furthermore, we not only have definitions for TxPlasma and TxDataPlasma but also for how these values relate to the fusion units of an account.

MaxFusionUnitsPerAccount limits each account to a maximum of 5000 fusion units.

PlasmaPerFusionUnit = 2100
MaxFusionPlasmaForAccount = 5000 * 2100 = 10,500,000

So, I guess currently a max-fused account could fire off an EmbeddedSimple 200 times at a cost of 52,500 plasma per pop.
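
A quick back-of-the-envelope check of those numbers, using the constants quoted above (values as stated in this thread rather than copied verbatim from the code):

package main

import "fmt"

// Values as quoted in this thread; see go-zenon/vm/const/plasma.go for the
// actual definitions.
const (
	AccountBlockBasePlasma    = 21_000
	EmbeddedSimplePlasma      = 2.5 * AccountBlockBasePlasma // 52,500
	PlasmaPerFusionUnit       = 2_100
	MaxFusionUnitsPerAccount  = 5_000
	MaxFusionPlasmaForAccount = MaxFusionUnitsPerAccount * PlasmaPerFusionUnit // 10,500,000
)

func main() {
	// How many EmbeddedSimple calls a fully fused account can cover:
	fmt.Println(int(MaxFusionPlasmaForAccount / EmbeddedSimplePlasma)) // 200
}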

I’m not quite clear on the MaxFussedAmountforAccount variable. Firstly, there is a typo; it should be MaxFused to remain consistent with naming. Apart from that, it’s a huge number. I assume it could just be the theoretical limit.

Am I correct in my understanding? I think being able to fire off 200 transactions is quite hectic, and it’s no wonder we needed the transaction cap for now.
However, if we could make the multipliers dynamic, that would reduce the number of sustained transactions somebody could perform.

For transactions to non-embedded addresses, the ABByteDataPlasma value could be made dynamic with a multiplier.

Again, the plasma table only represents the computational requirements of an operation, which stay the same regardless of network utilization. These multipliers appear to be ballpark estimates of the computational requirements rather than fine-grained estimates like Ethereum’s. But I think ballpark estimates are fine for us, at least for now.

The BasePrice variable I’ve presented introduces a dynamic price on the computational resources based on demand, and essentially does what you’re suggesting.

Since the rate control mechanisms of the current plasma implementation are minimal, I think the founding devs wanted to have some kind of hard cap that limits the number of transactions an account can send. But as you can see, the rate at which an account can fire transactions leaves the network open to spam attacks, which is why dynamic plasma is a high priority in my opinion.

Yes, and that’s exactly what the ideas I’ve presented are trying to achieve. The plasma recharge rates effectively throttle the number of sustained transactions somebody can send.
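
As a rough illustration of that throttling effect (the actual recharge formula is in the initial post and not reproduced here; this toy model simply assumes a linear recharge, with made-up numbers, to show the principle):

package main

import "fmt"

// Toy model only: spendable plasma refills linearly each momentum, up to the
// amount fused on the account. This is NOT the proposed formula.
const rechargePerMomentum = 10_500 // invented rate: full recharge in 1,000 momentums

type account struct {
	fusedPlasma     uint64 // ceiling on spendable plasma
	availablePlasma uint64 // currently spendable plasma
}

func (a *account) tick() { // one momentum of recharge
	a.availablePlasma += rechargePerMomentum
	if a.availablePlasma > a.fusedPlasma {
		a.availablePlasma = a.fusedPlasma
	}
}

func (a *account) spend(plasma uint64) bool {
	if a.availablePlasma < plasma {
		return false
	}
	a.availablePlasma -= plasma
	return true
}

func main() {
	acc := &account{fusedPlasma: 10_500_000, availablePlasma: 10_500_000}
	sent := 0
	for i := 0; i < 1000; i++ { // try one 52,500-plasma tx per momentum
		if acc.spend(52_500) {
			sent++
		}
		acc.tick()
	}
	fmt.Println(sent) // well below 1,000: the recharge rate caps sustained usage
}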

Plasma recharge rates are an interesting topic that gets to the roots of why QSR exists. I’ve been thinking about QSR quite a bit lately, especially in light of the Gravity wallet. I’ve also been pondering the dynamic plasma implementation, aliencoder’s fee proposal, and other related matters.

I’ve come to the conclusion that one of the main utilities of QSR is tokenized throughput. Obvious, right? We all know that… Yes, but do we really understand it?
Owning QSR is equal to owning a percentage of network throughput. It’s a novel and brilliant concept, one that perhaps requires us to rethink how spam should be defined within that context.

Say, for instance, somebody owns 1% of the QSR supply; this makes them eligible for 1% of the throughput. If that person decides to use their position and ‘spam’ the network with transactions, could that actually benefit the network?

With dynamic plasma, this will make the plasma cost per transaction more expensive for everybody. That means more fused QSR and more QSR bought, causing deflationary pressure and price rises.

Now, consider the price, especially once demand really kicks in. The 1% network ‘spammer’ now sees the value of QSR holdings rise, and they sell some… can you see what’s happening here? A decentralization of holdings and throughput over time.

In my opinion, recharges may not be needed. Despite thinking along the same lines as you and wanting to take advantage of the accounts-based system, I believe in this instance it would involve unnecessary computation. A global dynamic plasma is far more conducive to the system.

I think we should take tokenized throughput quite literally and with that be transaction agnostic as well, and realize that network utilization is designed to drive decentralization.

Yes, I think making these multipliers dynamic instead of hardcoded would be efficient. Basically, the more transactions that occurred in the last momentum, the higher this multiplier goes, making the cost to transact progressively more expensive or cheaper in correlation with the number of transactions processed.

Do you have an opinion on @aliencoder’s fee proposal?

What if they don’t sell because their intention is to attack the network? They can indefinitely keep the cost of transactions so high that almost no one will be able to use the network. A short-selling attack could be profitable, for example.

1 Like

I’ve also proposed a burn mechanism when you unfuse the QSR.

This question comes up with Bitcoin too. What if a state actor attacks Bitcoin’s PoW for non-economic reasons? This is a good question, and I think one “answer” is: yes, that is possible, but they have not done it yet. Try it and let’s see what happens.

@Nostromo raises an interesting question about the purpose of $qsr. Does $qsr represent ownership of network throughput? If so, I think the recharge rate diminishes the value of $qsr in some way. With “recharge” applied, the value of $qsr is derived from the ability to process one TX fast. We are sort of “socializing” the value of $qsr in order to prevent spam. Not sure if that is a correct analogy.

I think we should explore what happens if we don’t impose the recharge rate as a thought experiment. I guess it’s as simple as the amount of plasma needed to process a TX goes up. Is that a bad thing for the value of $qsr?

My guess is a few OGs have 1 to 3 million $qsr. That is a total guess. But from what we can tell, they are honest actors. It will be hard for a “bad actor” to get enough $qsr to attack the network. But… never say never.

I think a more likely scenario is someone gets enough $qsr to manipulate the value of $znn or $qsr. We are too small for a non-economic actor to care (right now). And if someone is trying to manipulate the value, as @Nostromo points out, it will result in the redistribution of $qsr eventually. Is that a bad thing?

I don’t think they can; you would need to own a very large percentage of QSR to do this, and you would need to overcome the daily inflation and actively buy and hoard the circulating supply. At the launch of Alphanet there may have been a risk of this, and one could speculate that this was the reason it did not launch with dynamic plasma.

It could be that in the future there is an application that uses, for argument’s sake, 3% of the network throughput, continuously making transactions. Would you consider that an attack on the network? I would say that is network utilization, and from the point of view of the network dynamics, there is no difference between an application using 3% of the network and a QSR whale with bots making dummy transactions using 3% of the network.

I think it’s superfluous and would break network dynamics to some extent. Sometimes things that are very well designed look too simple, and one may feel tempted to add complexity. Chess is such a good game because of well-balanced simple rules.

Sorry, I am against this too. Being able to choose a beneficiary address when fusing plasma is a simple but powerful feature. A forced burn would make this problematic. I think that QSR is already very deflationary.

3 Likes

Do you have some example calculations? With the current implementation you only need 1,000 QSR to fill up every single momentum indefinitely. If transactions are prioritized by plasma, you need 100,000 QSR to fill up every single momentum at 1,000 QSR worth of plasma per tx, so that anyone with less than 1,000 QSR fused will be blocked from sending vanilla transactions, unless they spend a significant amount of time generating PoW, which isn’t really an option for mobile/browser wallets. To collect delegation rewards a user would need 3,500 QSR in this situation. What good is daily inflation when you can’t use the network anyway?

It should be assumed that accumulating hundreds of thousands of QSR is feasible, or otherwise we won’t have new pillars.

If the argument is that we attempt to raise the throughput to 1,000 TPS or something similar, then that means that with 100,000 QSR you can bloat the node database at 200,000 B/s indefinitely, if a tx is 200 B. That’s roughly 16 GiB per day, which would take a toll on node decentralization, and users would basically have to rely on centralized node provider services, assuming node providers find a way to monetize their services. That money has to come from somewhere, most likely the users.
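
For reference, the arithmetic behind that figure:

1,000 tx/s * 200 B/tx = 200,000 B/s
200,000 B/s * 86,400 s/day ≈ 17.3 GB/day (≈ 16 GiB/day)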

Please define what you mean by 3% of network throughput. Throughput per momentum, or per time window?

I defined what I mean by spam in the initial post. You’re right that the network doesn’t understand what spam is:

Sure, but trying to guess what motives people may have now or in the future is not really in line with the crypto ethos. There are countless addresses that have 100k+ QSR. You can’t prove they all belong, and always will belong, to good actors.

Every blockchain that is worth anything has been attacked by irrational actors and by people “proving a point”. On our network the consequences of a serious spam attack could be detrimental without proper mitigations.

To be precise, QSR doesn’t represent throughput, plasma does. And plasma represents ownership of throughput even with the recharge rates. The rules are the same for everyone. What the recharge rate does is control how fast an account can send transactions within a time window. If there is no recharge time, then the time window becomes one momentum, and it will become impossible for people with 100 QSR fused on their account to claim their minuscule share of the bandwidth if network utilization and plasma usage are maxed out.

@Nostromo Don’t get me wrong, I’m all for a simpler solution that doesn’t need any additional rate-controlling measures, but I haven’t seen the numbers that would allow for this.