SuperNET Weekly No. 2

CORE and CORE Media were birthed out of the community which formed around the SuperNET project launched by jl777 in 2014. Now that many aspects of the project are finally coming together in fantastic form and usable products are just around the bend, we couldn’t be more pleased to help spread the word and keep our audience up to date on the latest happenings with this revolutionary project and its technology. Welcome to SuperNET Weekly!

This week we’d like to share a series of questions we asked developer jl777 about the latest on his work with SuperNET and his Iguana code, along with the answers he gave us.

In what form will Iguana technology be packaged for use by the public?

The code can run as a Chrome app or in native mode. As a Chrome app, users will be able to run a Bitcoin node from the browser, though a pruning mode would have to be added since Google limits storage to 10GB. The code compiles to a 3MB executable. It is smaller, faster and runs in the browser. The code is structured so it can be further improved to allow for incremental ledgers, which will also help achieve a more scalable Bitcoin.

What would you say is the primary goal you wish to accomplish with Iguana tech?

The goal for Iguana is to create a scalable Bitcoin Core implementation that is backward compatible and a drop-in replacement. The design needs to meet many constraints, primarily to be as small as possible without sacrificing speed and to be able to work in a parallel sync. That means each block or bundle needs to be as self-contained as possible, yet carry external references that can be resolved later. This is similar to object files and linkers. The main things that carry these external references are the vins [transaction inputs, which spend outputs created by earlier transactions].
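
As an illustration of that object-file/linker analogy, here is a minimal sketch of what a self-contained bundle with unresolved external references might look like. The struct and field names below are our own invention for clarity, not Iguana's actual on-disk layout:

```c
/* Hypothetical sketch of a parallel-sync bundle with unresolved external
 * references, analogous to an object file awaiting a linker pass.
 * Names and layout are illustrative, not Iguana's real data structures. */
#include <stdint.h>

#define BUNDLE_BLOCKS 2000          /* blocks per bundle, per the interview */

typedef uint8_t bits256[32];        /* raw 32-byte transaction ID */

typedef struct {
    bits256 txid;                   /* external txid this vin spends from   */
    uint16_t vout;                  /* output index within that transaction */
    int32_t resolved_txidind;       /* -1 until a later "link" pass maps it
                                       to an internal 32-bit tx index       */
} extern_vin_ref;

typedef struct {
    int32_t first_height;           /* first block height in this bundle    */
    int32_t num_extern;             /* vins referencing txids outside it    */
    extern_vin_ref *externs;        /* resolved once neighboring bundles
                                       have finished syncing                */
} bundle_t;
```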

The end goal is an end-to-end, fully integrated end-user solution, which is really all the end users care about. Iguana tech is still young, but it already shows that Bitcoin can easily scale to maximum transaction capacity without causing any issues for typical nodes. After the initial sync, since the vast majority of the data set is in memory-mapped files, it doesn’t need a lot of RAM to run. The code is small enough that it could run on a smartphone and other low-powered devices. Does that make Bitcoin good for IoT?

It’s been said that Iguana technology allows for a parallel sync of the full BTC blockchain in 30 minutes, uses half the space and starts up instantly. How is this so?

The files it creates are read-only and never change after they are created. This allows them to be further compressed into a read-only file system. Without signatures, the data set ends up at around 15GB in the compressed read-only volume and 25GB uncompressed. These files are invariant, so once validated they never need to be verified again, which allows for an “instant-on” after each restart. A layer of data added to the read-only files serves as an in-memory hash table, used directly as memory-mapped files. This means the startup time is close to 3 seconds, and after that it is ready to start syncing new blocks.
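
To illustrate the memory-mapping idea behind that instant-on behavior, here is a small sketch of mapping a finished read-only file straight into memory so its contents can be used in place instead of being re-parsed at startup. This is a simplified illustration on our part (the function name and error handling are hypothetical), not Iguana's actual code:

```c
/* Map a read-only bundle file directly into memory (POSIX). */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_readonly(const char *path, size_t *sizep)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }
    /* PROT_READ + MAP_SHARED: pages are loaded on demand, so little RAM is
     * consumed until the tables inside the file are actually touched. */
    void *ptr = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                      /* the mapping remains valid after close */
    if (ptr == MAP_FAILED)
        return NULL;
    *sizep = (size_t)st.st_size;
    return ptr;
}
```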

The first thing you notice when looking at the raw blockchain is that there is a lot of redundancy. So by mapping each high-entropy 32-byte hash to a 32-bit integer, you get a 28-byte saving for each use. For a transaction ID with N outputs, up to N later vins will refer to that transaction ID, so up to N*28 bytes can be saved.
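
A rough sketch of that compression, for illustration: each vin stores a 4-byte index into a table of unique 32-byte transaction IDs instead of the full hash, saving 32 - 4 = 28 bytes per reference. The names below are hypothetical, not Iguana's real structures:

```c
/* Sketch of the hash-to-index compression described above. */
#include <stdint.h>

typedef uint8_t bits256[32];

typedef struct {
    bits256 *txids;       /* table of unique txids, each stored only once   */
    int32_t num;
} txid_table;

/* A vin stored on disk: 4-byte txid index + 2-byte vout, instead of 32 + 2 */
typedef struct {
    uint32_t txidind;     /* index into txid_table                          */
    uint16_t vout;
} compact_vin;

/* Recover the original 32-byte txid only when it is actually needed */
static const uint8_t *vin_txid(const txid_table *t, const compact_vin *vin)
{
    return t->txids[vin->txidind];
}
```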

However, both endian forms need to be put into the BitTorrent network, since the Iguana files are designed to be memory mapped directly. This allows the serialization/deserialization of every multibyte field to be skipped. Since the files are less than half the size of the raw blockchain, even with the doubling due to the two endian forms, they still consume less data overall.
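
To show why two variants are needed: because the files are used in place via memory mapping, no per-field byte swapping can happen at load time, so a node has to fetch the variant that already matches its CPU's byte order. The sketch below is our own illustration; the function names and file-suffix convention are hypothetical:

```c
/* Pick which endian variant of a memory-mapped bundle file to download. */
#include <stdint.h>

static int host_is_little_endian(void)
{
    uint16_t probe = 1;
    return *(uint8_t *)&probe == 1;   /* low byte stored first => little endian */
}

static const char *bundle_variant_suffix(void)
{
    /* Two copies of every bundle are published, one per byte order */
    return host_is_little_endian() ? ".le" : ".be";
}
```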

A Bitcoin RPC compatibility layer is currently being added so that Iguana will be compatible with existing Bitcoin apps. When that is in place, it can be put through the blocktester and other standard Bitcoin validation tests. In the meantime, a GUI team is creating an all-HTML/JS GUI that will interface via Bitcoin RPC.

What additional functionality will Iguana tech provide?

In addition to the Bitcoin code, there is also an atomic swap protocol integrated as a virtual exchange, so that standard bid/ask orderbooks can be used to do atomic swaps of altcoins. There are also the beginnings of a private chains framework, but it hasn’t been blockchained yet. There will also be a decentralized Texas Hold’em poker protocol and a fiat pegging system that uses delta-neutral portfolio balancing. By combining Iguana with a popular monetized game like poker, we are looking at something that has a chance to go mainstream.

So it adds up to over 50,000 lines of custom C. There is a lot more than just the Bitcoin Core, drawing on my other work over the years, and the entire codebase compiles to around 3MB and is portable to many operating systems, even to Chrome as a pexe inside a Chrome app. The code is in my jl777/SuperNET repo, and docs.supernet.org has API bindings.

How much RAM is required to run Iguana for blockchain syncing?

The more RAM you configure for it the better, but even with 8 cores it doesn’t go above 32GB during sync. At 4GB, it will mostly serialize the processing toward the end, when only the large bundles are left. I haven’t optimized the minimum memory setting, as my target machine is my 3-year-old laptop with 4GB of RAM.

Iguana doesn’t use any DB, and achieving this performance requires constant-time performance for all operations within a bundle. One of the final steps happens as the main chain is validated linearly while the parallel sync is proceeding. The code that does the parallel sync of the blockchain is mostly working: it creates read-only bundle files (of 2000 blocks each) and is able to stream at 500mbps to 1gbps without too many slow spots.
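
One way to picture the constant-time property, as a simplified sketch of our own (the actual bundle layout may differ): with fixed 2000-block bundles, locating the bundle and slot for any block height is plain arithmetic, and fixed-size records inside a memory-mapped bundle can then be indexed directly without a database lookup:

```c
/* Map a block height to its bundle file and slot by arithmetic alone. */
#include <stdint.h>
#include <stdio.h>

#define BUNDLE_BLOCKS 2000

void bundle_for_height(int32_t height, int32_t *bundlei, int32_t *offset)
{
    *bundlei = height / BUNDLE_BLOCKS;   /* which read-only bundle file */
    *offset = height % BUNDLE_BLOCKS;    /* fixed-size slot inside it   */
}

int main(void)
{
    int32_t bundlei, offset;
    bundle_for_height(400123, &bundlei, &offset);
    printf("height 400123 -> bundle %d, slot %d\n", bundlei, offset); /* 200, 123 */
    return 0;
}
```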

Are there any other requirements or limitations a user may have to consider when using Iguana tech?

The speed of sync is limited by bandwidth, so on a typical home user’s 20mbps connection it takes about 6 hours, which is about 6x faster than Bitcoin Core 0.12, but on a fast connection I am seeing sustained speeds of 70MB/sec to 120MB/sec. Yes, that is 1GB per 8 seconds during peak rates. And it is processing all this in close to real time if you have 8 cores. The compression done after the permanent files are created takes about half an hour.
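
To put those figures in rough perspective (a back-of-the-envelope estimate on our part, assuming a raw blockchain of roughly 60GB at the time): 20mbps is about 2.5MB/sec, so downloading around 60GB takes on the order of 6.5 hours, in line with the 6-hour figure, while at the 120MB/sec peak rate 1GB passes in roughly 8.5 seconds.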

Generally speaking, my approach was to write a Bitcoin Core without any external dependencies that can sync data at whatever bandwidth you have available on a sustained basis. I also hate waiting forever to import private keys, as well as the total lack of multisig support at the address level, so Iguana maintains what is effectively a txindex=1 level of data, with additional data structures that allow address-based queries and an “instant-on” where it is ready to go without rescanning previously created data. Since they are read-only files, all that is needed is to make sure they haven’t changed. Yes, that means no DB, but a fixed set of data files that can be used directly as memory-mapped files.
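
As a rough illustration of that “make sure they haven’t changed” step (our own sketch, not Iguana’s actual check), a digest recorded when each read-only file was created can simply be recomputed and compared at startup, skipping any revalidation of the blockchain data itself:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy checksum for illustration only; a real integrity check would use a
 * cryptographic hash of the file contents. */
static uint64_t file_checksum(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return 0;
    uint64_t sum = 0;
    int c;
    while ((c = fgetc(fp)) != EOF)
        sum = sum * 131 + (uint64_t)c;
    fclose(fp);
    return sum;
}

/* At startup, compare against the checksum saved when the read-only file
 * was first created; on a match, the file is used as-is. */
int file_unchanged(const char *path, uint64_t saved_checksum)
{
    return file_checksum(path) == saved_checksum;
}
```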

Do you anticipate Bitcoin Core taking an interest in implementing your code?

Well, I am not too concerned with whether or not the Bitcoin Core team wants to adopt a 10x faster way to sync. End users will follow the path of least resistance and, without any external dependencies, the one-click-install Chrome app won’t have any adoption barrier.

SuperNET Weekly articles can be found on our website as well as in the CORE Magazine.