CORE and CORE Media were born out of the community that formed around the SuperNET project, launched by jl777 in 2014. Now that many aspects of the project are finally coming together in fantastic form and usable products are just around the corner, we couldn’t be more pleased to help spread the word and keep our audience up to date on the latest happenings with this revolutionary project and its technology. Welcome to SuperNET Weekly!
Development of SuperNET’s Iguana codebase continued this past week, with lead developer jl777 pushing forward in his quest to bring cutting-edge technology to Bitcoin and the rest of cryptocurrency.
Over the coming months, jl777 expects to arrive at a fully self-contained Iguana codebase that can operate with very low resource consumption. Anyone interested in supporting his work who has UNIX command-line experience is invited to help with testing.
What needs testing right now in particular is bandwidth consumption and speed, measured with a tool such as bmon. The big variable to determine is what percentage of available bandwidth Iguana uses: right now it is set to go as fast as it can, but this can be throttled down if it turns out to be an issue.
Although bandwidth performance is still choppy in testing, jl777 is seeing bursts above 100MB/s thanks to recent optimizations. That figure, however, is about the same as it was before the optimizations and before the vins and vouts scripts data directories were added. Displeased with the bandwidth lost to the scripts-handling code, jl777 has scrapped that method of streaming transaction data and begun experimenting with alternative ways to optimize performance.
With 1MB blocks every 10 minutes, interleaving can achieve a tenfold increase in transaction capacity by running 10 interleaves in parallel.
“Each interleave is a mostly independent blockchain, but we need to enforce a fixed ordering based on the interleave regardless of when the block is solved for that interleave. This does create the situation where there is a solved block that can’t be confirmed due to a prior interleave not there yet.
This provides the possibility of a rapid delivery feature, where the user can generate 10 different variants of the same tx, so each interleave gets that tx. This would confirm in each interleave, but of course only be valid in the first one that gets confirmed. So a provision needs to exist for charging extra for these redundant tx and to invalidate the ones that are in the unconfirmed later interleaves.
Miners would be able to optimize their revenues by intelligently managing their hashpower based on the state of the 10 interleaves. Also, by having different service levels that come for free, this would create a larger tx fee revenue base via the premium fees and of course from just having 10x the transaction capacity.
10x interleaving works just as well with 1MB blocks, 8MB blocks, subchains, etc.”
With regard to the UTXO set, the approach jl777 has taken with Iguana creates a parallel set of datasets in read-only memory-mapped files. This arguably achieves the ultimate database for the purpose, since queries against read-only memory-mapped files simply can’t be made any faster.
“All things needed for block explorer level queries are precalculated and put into the read-only files.
By having each bundle of 2000 blocks being mostly independent of all other blocks, this not only allows for parallel syncing of the blockchain but also using multiple cores for all queries regarding tx history.
At the 30MB size or smaller, the entire UTXO bitmap will fit into CPU L cache and provide 10x faster performance than RAM. RAM is actually quite slow compared to things that can be done totally inside the CPU.
With everything happening in parallel, it is easy to end up thinking a bundle is ready before it is totally ready. So I had to combine the “1st” field along with just waiting 3 seconds after a bundle is written out. That gives time for avoiding race conditions; I am seeing some cases where writing the data out to file is not completed, even though it is memory-mapped. When dealing with multiple threads, it does take time to get a fully synchronized state across all threads, especially when dealing with buffered files.”
Because he believes the C++ in which Bitcoin Core is written consumes an unnecessary amount of system resources through its use of C++ classes and large libraries of dependencies, jl777 has rewritten Bitcoin Core from scratch in plain C – his programming language of choice – to maximize the efficiency of his Iguana codebase.
“I use only stdio, read-only mapped files, bsd sockets, system clock and a few other standard system functions. Openssl/libsecp256k1 is the only crypto dependency I use. The rest is all in my repo (add-on services utilize curl, but that dependency is outside of the bitcoind equivalent). I use no boost, not even libevent. Everything from the protocol, to the blockchain, scripts, signing, transaction construction, and RPC has been coded from scratch. I started writing Iguana last November when I had already written several iterations of MGW which gave me a good understanding of the blockchain and multisig.”
Because jl777 has written the Iguana codebase with almost no external dependencies, it is very portable and can be run as a standalone binary without being wrapped in a Chrome extension. In fact, compiling the codebase natively is much easier than cross-compiling it into a Chrome app. The build team has builds for a Chrome app, Android, UNIX, OSX, and Win64; some issues remain to be solved with the Win32 build. Emscripten could also be supported for non-Chrome browsers.
Because jl777 has so far been unable to get the size of his validated Bitcoin blockchain dataset under Google’s 10GB Chrome extension limit, the Chrome app will initially run a pruning Bitcoin node by default. It will, however, have no problem running full nodes for most altcoins. One possible way to let the Chrome app run a full Bitcoin node would be to run multiple separate Chrome apps that act as file servers. JL777 recognizes, however, that most users interested in running Iguana via a browser probably favor convenience and efficiency over running a full node, so solving this is not at the top of his priority list at this time.
JL777 also postulated this week in the SuperNET Slack that Iguana’s speed could be increased significantly by creating a CUDA/OpenCL version of Iguana for use with GPUs. This would allow dedicating a core per bundle, among other optimizations.
With almost all of the data being read-only, the biggest challenge in GPU programming – keeping data in sync between cores – is already solved. It might take a few GPU cards to store the complete dataset, but with this method every RPC call, even ones that need to scan the entire blockchain, completes within milliseconds, while the CPU handles the latest bundle to keep up with new blocks.
Because Iguana will already be plenty fast on most decent modern hardware, such a GPU method would be overkill. It does, however, give an idea of what this technology could achieve in the future as demands for speed, efficiency, and security grow.
Once Iguana is ready and running a node is a one-click installation, it likely won’t take long for there to be more Iguana nodes than any other type of node. Because Iguana is a multicoind capable of interacting with many blockchains at the same time, it will appeal to altcoin fans looking to run all their coins on one computer. The inclusion of atomic swap technology and Pangea decentralized poker in the same app will also increase its appeal significantly.
It is remarkable what jl777 and his team have been able to accomplish by taking a holistic view of the entire system, working backwards from the desired result and optimizing along the way. Working this way, it is likely just a matter of time until nearly every obstacle facing Bitcoin and blockchain technology in general is overcome.
For more information on SuperNET and the work of jl777 and his team, please refer to previous SuperNET Weekly articles and our monthly CORE Magazine and be sure to follow the progress via SuperNET Slack or the SuperNET website.