It’s well known that some Reddheads run nodes on their RasPi - or ReddPi. I’ve been running one for the last six months - with negligible balance variations - and I’ve noticed that whenever it has to process a new block, the CPU usually stays maxed out for some time.
So I was wondering: as I understand it, the amount of coins you are staking is essentially your “hashing power”, in PoW terms. Under that assumption, as long as you keep more or less the same amount of staking coins in your wallet, your hashing power remains the same. Think of it as if 1k RDD always equalled 1 kH/s of the total network hashing power at the time.
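To make the analogy concrete, here’s a quick back-of-envelope sketch. The linear 1k RDD = 1 kH/s mapping and the 100M network total are assumptions for illustration only - real PoSV weighting also factors in coin age, which this ignores:

```python
# Hypothetical sketch: treat staked coins like hash rate.
# Real PoSV stake weight also depends on coin age; this simplifies that away.

def effective_hashrate_khs(staked_rdd: float) -> float:
    """Assume 1k RDD ~ 1 kH/s, per the analogy above."""
    return staked_rdd / 1000.0

def chance_per_block(my_stake: float, network_stake: float) -> float:
    """If minting chances are proportional to your share of the total
    staking weight, your per-block probability is just that share."""
    return my_stake / network_stake

# e.g. 50k RDD staking against a made-up 100M RDD network-wide
print(effective_hashrate_khs(50_000))         # 50.0 "kH/s"
print(chance_per_block(50_000, 100_000_000))  # 0.0005
```

The point of the analogy: under this model your CPU does a fixed, small amount of hashing per block interval regardless of stake size - the stake only scales your *odds*, not your workload.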
Now my question is whether CPU usage is tied directly to the amount you’re staking (i.e. the more coins you stake, the more CPU cycles it takes), to the current difficulty (i.e. since your “hashing power” is fixed, CPU load stays the same regardless of it), to block size (i.e. the number and value of TXs the block contains), to a combination of them all, or to other factors I’m most likely missing.
I don’t own a crapload of coins and I don’t have a very good grasp of the underlying algorithms at work, but I also have to assume that as long as your CPU stays maxed out, you’ll potentially miss any chance of staking because you’ll be behind the network - basically always catching up. I don’t know if the daemon runs at real-time priority under Linux (which I doubt), but then again this is not exactly my field of expertise.
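For anyone curious, on Linux you can actually check what scheduling class a process runs under. This is a sketch that inspects the calling process as a stand-in - to check the wallet daemon you’d substitute its real PID (from `pidof reddcoind` or similar, which I’m assuming here). A daemon started normally runs under SCHED_OTHER, i.e. no real-time priority:

```python
import os

def describe_scheduling(pid: int = 0) -> str:
    """Report the scheduling policy and nice value of a process.
    pid=0 means the calling process. Linux-only: os.sched_getscheduler
    is not available on all platforms."""
    policy = os.sched_getscheduler(pid)
    names = {
        os.SCHED_OTHER: "SCHED_OTHER (normal time-sharing)",
        os.SCHED_FIFO: "SCHED_FIFO (real-time)",
        os.SCHED_RR: "SCHED_RR (real-time)",
    }
    nice = os.getpriority(os.PRIO_PROCESS, pid)
    return f"{names.get(policy, str(policy))}, nice {nice}"

print(describe_scheduling())
```

If it reports SCHED_OTHER (as I’d expect), the daemon competes for CPU like any other process, which is consistent with the “always catching up” worry above.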
So, in a nutshell, I was wondering whether the ReddPi will become obsolete once/if the RDD network grows substantially in size, or whether it’s still good as long as you’re not running it as a full node and staking some crazy millions. Thanks!
PS: Sorry if the question got a bit jumbled up, but it’s holidays and I’ve done… things.