Blockchain Project Documentation and Timeline
Development methodology
The development methodology is based on Agile, which allows the flexibility and adaptability needed to meet the specific requirements of this project.
Development proceeds at a fast pace and evolves through the collaborative effort of a self-organizing, cross-functional team. Adaptive planning, evolutionary development, early delivery, and continual improvement encourage communication and rapid, flexible change.
Short feedback and adaptation cycles are maintained through daily stand-ups: brief daily or weekly sessions in which team members report progress and define the day's goals.
Custom development with full control of the product allows efficiency: we construct infrastructure that is stable and scalable, with scheduled stress tests and hacking trials.
The token platform will implement scalability with Directed Acyclic Graph (DAG) technology at its core, allowing high transaction speeds and creating an ecosystem that draws on experience with other technologies. Development is focused on a custom platform that will allow the leadership of The investors to reach their goals.
Third-party implementation
We will create a wiki with detailed information for third-party implementation. Foundational development is focused on flexible integration for different industries such as insurance, conditional payments, betting, e-commerce, social media, auction houses, exchanges, multilateral trading facilities, and much more.
Security
Custom encrypted systems combining a DAG with a unique smart contract design will create a hybrid platform. By combining different technologies, it delivers solutions that follow high encryption and security standards, giving users the ability to create contracts without security breaches in an environment that can be trusted.
Ensuring security is one of the key tasks of a successful project and will be enforced through constant testing.
Technology philosophy
Technical requirements follow a “user first” approach that ensures the best functionality for the end user: a custom hybrid chain, developed in-house, that will raise efficiency.
We target real-world issues, with organized plans, in the sectors that work with conditional payments, lotteries, insurance, prediction betting, auctions, e-commerce, shopping chatbots, and social media. By focusing on these needs we can create a custom system that fits those markets and makes them more efficient.
We will develop a decentralized structure with no single authority, where the components of the network cooperate: service is distributed across both networks, connecting each node of the host network to nodes in the other network ranked by effectiveness.
Solutions to analyzed risks
Quality control/transactions per second
Creating a system of servers in Europe to test the project in real-world conditions, rather than on a theoretical LAN or same-network exchange, will ensure efficiency and objective data against which progress can be measured and commitments verified.
Servers in the Netherlands, Spain or Portugal, Germany, Poland, and other locations are required.
Efficiency can be improved by creating different servers that test each node and identify the most effective node for each case (for example, if Portugal S1 works well with Germany S2 and acceptably with S3, the system will prefer GS2, then GS3, then GS1). We will return to this after the first steps are done, as it will allow us to allocate resources correctly, identify the less efficient servers in real time, reconfigure them, improve the code, and so on, keeping the development of the idea continuous.
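As an illustration, here is a minimal Go sketch of this ranking idea; the host names and the TCP-connect probe are placeholders, not the final measurement method.

// Hypothetical sketch: rank peer servers by measured round-trip latency so the
// system can prefer the most effective routes (e.g. GS2 before GS3 before GS1).
package main

import (
	"fmt"
	"net"
	"sort"
	"time"
)

type peer struct {
	name string
	addr string
	rtt  time.Duration
}

// probe measures a TCP connect round trip as a cheap latency estimate.
func probe(p *peer) error {
	start := time.Now()
	conn, err := net.DialTimeout("tcp", p.addr, 3*time.Second)
	if err != nil {
		return err
	}
	p.rtt = time.Since(start)
	return conn.Close()
}

func main() {
	peers := []peer{
		{name: "PT-S1", addr: "pt1.example.net:443"}, // placeholder hosts
		{name: "DE-S2", addr: "de2.example.net:443"},
		{name: "DE-S3", addr: "de3.example.net:443"},
	}
	for i := range peers {
		if err := probe(&peers[i]); err != nil {
			peers[i].rtt = time.Hour // push unreachable peers to the back
		}
	}
	// Most effective (lowest latency) first.
	sort.Slice(peers, func(i, j int) bool { return peers[i].rtt < peers[j].rtt })
	for _, p := range peers {
		fmt.Printf("%s %v\n", p.name, p.rtt)
	}
}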
Creating a communication layer
We will create different backend server topologies (one-to-many, one-to-two, one-to-one). To find the best communication layer we need to test communication between many different clients located in different geographical places.
The next step is testing communication between services using different protocols, for example: TCP-HTTP, TCP-WebSocket, TCP-gRPC (TCP-HTTP/2.0), TCP-ZeroMQ [Rep/Req], and UDP-ZeroMQ [Rep/Req].
The following diagram shows the number of concurrent clients and text messages per second for TCP-gRPC, TCP-ZeroMQ [Rep/Req], and TCP-ZeroMQ [Router].
To choose the best option for communication we need to add more options to the diagram. As you can observe, we included UDP because UDP is still superior in terms of latency and will always be faster, owing to the philosophies of the two protocols, assuming your communication data was designed with UDP or another lossy transport in mind.
TCP creates an abstraction in which all network packets arrive, and arrive in the exact order in which they were sent. To implement such an abstraction on a lossy channel, it must implement retransmissions and timeouts, which take time. If you send two updates over TCP and a packet of the first update is lost, you will not see the second update until:
- The loss of the first update is detected.
- A retransmission of the first update is requested.
- The retransmission has arrived and been processed.
It doesn’t matter how fast this is done in TCP, because with UDP you simply discard the first update and use the second, newer one, right now. Unlike TCP, UDP does not guarantee that all packets arrive and it does not guarantee that they arrive in order.
This requires you to send the right kind of data, and design your communication in such a way that losing data is acceptable.
If you have data where every packet must arrive, and the packets must be processed by your game in the order they were sent, then UDP will not be faster. In fact, using UDP in this case would likely be slower, because you would be reconstructing TCP on top of UDP, in which case you might as well use TCP.
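To ground the comparison before the real cross-region tests, here is a minimal Go sketch that measures TCP and UDP round-trip times against loopback echo servers using only the standard library; the production benchmark would use the gRPC and ZeroMQ stacks listed above across real geographical links (error handling is omitted for brevity).

// A minimal sketch, assuming loopback echo servers are a fair first proxy
// for comparing the raw round-trip cost of the two transports.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// TCP echo server.
	tl, _ := net.Listen("tcp", "127.0.0.1:0")
	go func() {
		c, _ := tl.Accept()
		buf := make([]byte, 64)
		for {
			n, err := c.Read(buf)
			if err != nil {
				return
			}
			c.Write(buf[:n])
		}
	}()
	// UDP echo server.
	ul, _ := net.ListenPacket("udp", "127.0.0.1:0")
	go func() {
		buf := make([]byte, 64)
		for {
			n, addr, err := ul.ReadFrom(buf)
			if err != nil {
				return
			}
			ul.WriteTo(buf[:n], addr)
		}
	}()

	msg := []byte("ping")
	buf := make([]byte, 64)

	tc, _ := net.Dial("tcp", tl.Addr().String())
	start := time.Now()
	for i := 0; i < 1000; i++ { // 1000 request/reply round trips
		tc.Write(msg)
		tc.Read(buf)
	}
	fmt.Println("TCP avg RTT:", time.Since(start)/1000)

	uc, _ := net.Dial("udp", ul.Addr().String())
	start = time.Now()
	for i := 0; i < 1000; i++ {
		uc.Write(msg)
		uc.Read(buf)
	}
	fmt.Println("UDP avg RTT:", time.Since(start)/1000)
}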
Normally, the packet loss rate on Ethernet is very low, but it becomes much higher once WiFi is involved or if the user has an upload or download in progress. Let's assume a perfectly uniform packet loss of 0.01% (one way, not round trip). In a first-person shooter, clients send updates whenever something happens, such as when the mouse turns the player, which happens about 20 times per second. They could also send updates per frame or on a fixed interval, which would be 60–120 updates per second. Since these updates are sent at different times, they will (and should) be sent as one packet per update. In a 16-player game, all 16 players send these 20–120 packets per second to the server, for a total of 320–1920 packets per second. With our packet loss rate of 0.01%, we expect to lose a packet every 5.2–31.25 seconds. In this example, we ignore the packets sent from the server to the players for simplicity.
On every packet we receive after the lost packet, we’ll send a DupAck, and after the 3rd DupAck, the sender will retransmit the lost packet. So the time TCP requires to initiate the retransmit is 3 packets, plus the time it takes for the last DupAck to arrive at the sender. Then we need to wait for the retransmission to arrive, so in total, we wait for 3 packets + 1 round-trip latency. The round-trip latency is typically 0–1 ms on a local network and 50–200 ms on the internet. 3 packets will typically arrive in 25 ms if we send 120 packets per second, and in 150 ms if we send 20 packets per second.
In contrast, with UDP we recover from a lost packet as soon as we get the next packet, so we lose 8.3 ms if we send 120 packets per second, and 50 ms if we send 20 packets per second.
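The arithmetic above can be packed into a few lines of Go; the 100 ms internet round trip used here is an assumed figure within the 50–200 ms range quoted earlier.

// Worked numbers from the text: with 0.01% one-way loss, how often do we lose
// a packet, and how long does recovery take on TCP (3 DupAcks + 1 RTT) versus
// UDP (just wait for the next packet)?
package main

import "fmt"

func main() {
	const lossRate = 0.0001 // 0.01% one-way packet loss
	for _, pps := range []float64{320, 1920} {
		fmt.Printf("at %4.0f pkt/s: one loss every %.2f s\n", pps, 1/(lossRate*pps))
	}
	for _, pps := range []float64{20, 120} {
		interPacket := 1000 / pps  // ms between packets
		rtt := 100.0               // assumed internet round trip, ms
		tcp := 3*interPacket + rtt // 3 packets to trigger the retransmit + 1 RTT
		udp := interPacket         // the next packet supersedes the lost one
		fmt.Printf("at %3.0f pkt/s: TCP recovery ~%.1f ms, UDP ~%.1f ms\n", pps, tcp, udp)
	}
}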
With TCP things get messier if we also need to consider Nagle (if the developer forgets to turn off send coalescing, or can’t disable delayed ACK), network congestion avoidance, or if packet loss is bad enough that we have to account for multiple packet losses (including lost Ack and DupAck). With UDP we can easily write faster code because we quite simply don’t care about being a good network citizen like TCP does.
After we choose the best communication protocol (TCP-ZeroMQ-Rep or UDP-ZeroMQ-Rep), we will introduce the DAG protocol on top of the chosen communication layer.
This will be the first step that will give us critical objective information about efficiency. Without objective data, any decision will be blind.
Security issues in the first layers
Adding certificates to the transaction consensus will solve this issue. This step will be developed later; first, the previous step must be completed.
We will then move the system to its finality and stability point to make it usable.
Smart contracts
We will create a simulation. A wallet is simply a generated request, so key storage, actions and conditions, the UI, etc. can be implemented in parallel. Once the communication system is working, we can run tests with simulated wallet data.
Example: send some data from the wallet, where the coin is only a message (for example, “user buys”).
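A minimal simulation sketch follows, assuming a “coin” is just a signed message as described; the wallet names and the use of Ed25519 are illustrative placeholders.

// WalletMsg simulates the payload a test wallet would push through the
// communication layer, e.g. "user buys".
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/json"
	"fmt"
)

type WalletMsg struct {
	From   string `json:"from"`
	To     string `json:"to"`
	Amount uint64 `json:"amount"`
	Action string `json:"action"` // e.g. "user buys"
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	msg := WalletMsg{From: "WALLET_A", To: "WALLET_B", Amount: 42, Action: "user buys"}
	data, _ := json.Marshal(msg)
	sig := ed25519.Sign(priv, data)
	fmt.Printf("payload=%s\nsignature valid=%v\n", data, ed25519.Verify(pub, data, sig))
}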
Custom Smart contract
Hyperledger runs inside Docker; we will attach an additional container that holds specific Golang code running alongside it, and develop against that machine. Golang has many advantages that allow fast coding and testing.
We will develop the system with this limitation in mind. Costs will be higher on the server side, as the processing and communication will be less efficient, but we gain predictable development time.
Why Golang
Golang has an excellent implementation of goroutines and is strongly typed. There were hardware limitations, but performance increased after manufacturers started adding more cores, hyper-threading, and additional cache. The physical limits of the cache, which make it slower, do not allow us to scale indefinitely, so we cannot rely on hardware improvements alone and need to choose the most efficient language. This limitation only affects massive backend services like WhatsApp, Telegram, or Facebook Messenger; a cryptocurrency service does not need to handle a billion messages.
Go was released in 2009 with goroutines designed with concurrency in mind. Go has goroutines instead of threads; each consumes only about 2 KB of heap memory to start, so you can spin up millions of goroutines at any time (see the sketch after the list below).
Benefits
- Growable segmented stacks (more memory is used only when needed)
- Faster startup time than threads
- Built-in primitives (channels) to communicate safely between goroutines
- Avoids having to resort to mutex locking when sharing data structures
- Goroutines are multiplexed onto a small number of OS threads, and a single goroutine may migrate between threads
- Runs directly on the underlying hardware
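The following short example illustrates the points above: goroutines are cheap to spawn, and channels replace mutex locking as the way to share results.

// Spawn 100 goroutines (millions would still be cheap) and collect their
// results over a channel; the channel is the only synchronization point.
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make(chan int, 100)
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n // no mutex needed: sends are safe
		}(i)
	}
	wg.Wait()
	close(results)
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares:", sum)
}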
Like C/C++, Go is compiled, not interpreted: the processor receives native binaries, without the delay of human-readable code first being transcribed to bytecode. Processors only understand binaries. When an application written in a VM-based language is compiled, the human-readable code is translated into bytecode for a virtual machine and only then into binaries the processor can work with. C/C++ (and Go) do not execute on VMs: they compile human-readable code directly to binaries, which removes one step from the execution cycle and increases performance.
Additionally, Go has no classes; code is divided into packages only, and structs take the place of classes. It does not support inheritance (which keeps code easy to modify), and it has no constructors, no annotations, no generics, and no exceptions.
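A small example of what this looks like in practice: reuse comes from embedding (composition) rather than inheritance. The type names here are illustrative.

// PaymentUnit embeds Unit instead of inheriting from it; the embedded
// method Describe is promoted onto PaymentUnit automatically.
package main

import "fmt"

type Unit struct {
	Hash    string
	Parents []string
}

type PaymentUnit struct {
	Unit   // embedding, not inheritance
	Amount uint64
}

func (u Unit) Describe() string { return "unit " + u.Hash }

func main() {
	p := PaymentUnit{Unit: Unit{Hash: "abc", Parents: []string{"p1", "p2"}}, Amount: 100}
	fmt.Println(p.Describe(), "amount:", p.Amount)
}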
Introduction to The investors
Blocks: The investors has no blocks as in Bitcoin; transactions are their own blocks, and they do not need to be connected into a single chain. In Bitcoin, because blocks are linked linearly, their time intervals and size are tuned for “near” synchronization between nodes, so that nodes can share a new block with each other much faster than it normally takes to generate one. This makes it likely that everyone sees the same block as the latest block, and orphaning is minimized. As Bitcoin grows, blocks become heavier: they are either limited in size, in which case growth is also limited, or they take too long to distribute. Instead, a transaction can reference several previous transactions, so the whole set of transactions is not a linear list but a DAG.
Calculation cost: Bitcoin transactions are safe because it is too expensive to redo all the PoW included in the blocks created after the transaction. But it also means someone has to pay to build a legitimate PoW strong enough to repel attacks; this payment is spent on the electricity required to construct the PoW. In The investors there is no PoW; instead, we use another consensus algorithm based on an old idea known long before Bitcoin.
Finality. The finality of a transaction in Bitcoin is probabilistic. There are no strict, simple criteria for saying that a transaction will never be reversed; rather, the probability that a transaction will be reversed decays exponentially as more blocks are added. To complicate things further, finality in practice also depends on the amount. If the amount is small, you can be reasonably sure that no one will try to double-spend against you. However, if the amount at stake is more than the block reward (100 BTC at the time of writing), you should assume the payer could temporarily rent hash power to mine an alternative chain of blocks that does not contain the transaction paying you. You therefore need to wait for several confirmations before being sure that a high-value transaction is final. In The investors, there are deterministic criteria for a transaction to be considered final, no matter how large it is.
Privacy. All Bitcoin transactions and the balances of all addresses are visible on the blockchain. Although there are ways to obfuscate your transactions and balances, this is not what people expect from a currency. Transactions in bytes (the base currency) in The investors are equally visible.
Compliance. Bitcoin was designed as an anonymous currency where people have absolute control over their money. It achieved that goal, but this made Bitcoin incompatible with existing regulations and therefore inappropriate for use in the financial industry. In The investors, you can issue assets with any rules governing their transferability: from no restrictions at all, as with Bitcoin, to requiring that each transfer be cosigned by the issuer (for example, a bank) or limited to a whitelisted set of users.
Implementation of regulatory compliance using regulated assets: regulated institutions can issue assets that are compatible with KYC/AML requirements. Every transfer of such an asset must be cosigned by the issuer, and if anything contradicts the regulations, the issuer will not cosign.
Creating an indivisible asset: an indivisible asset is based on black bytes (untraceable bytes).
Database & Database structure
Choosing a database is one of the most important decisions in building web applications. Byteball uses the MySQL database; below, we compare it with MariaDB.
Who Uses These Databases?
MariaDB: MariaDB is being used by many large corporations, Linux distributions, and more. Some organizations that use MariaDB include Google, Craigslist, Wikipedia, Archlinux, RedHat, CentOS, and Fedora.
MySQL: MySQL has built a strong following since it launched in 1995. Some organizations that use MySQL include GitHub, US Navy, NASA, Tesla, Netflix, WeChat, Facebook, Zendesk, Twitter, Zappos, YouTube, Spotify, etc.
What About Database Structure?
MariaDB: Since MariaDB is a fork of MySQL, the database structure and indexes of MariaDB are the same as MySQL. This allows you to switch from MySQL to MariaDB without having to alter your applications since the data and data structures will not need to change.
MySQL: MySQL is an open-source relational database management system (RDBMS). Just like all other relational databases, MySQL uses tables, constraints, triggers, roles, stored procedures, and views as the core components that you work with. A table consists of rows, and each row contains the same set of columns. MySQL uses primary keys to uniquely identify each row (a.k.a record) in a table, and foreign keys to assure the referential integrity between two related tables.
Why should you migrate from MySQL to MariaDB?
First and foremost, MariaDB offers more and better storage engines. NoSQL support, provided by Cassandra, allows you to run SQL and NoSQL in a single database system. MariaDB also supports TokuDB, which can handle big data for large organizations and corporate users.
MySQL’s usual (and slow) database engines MyISAM and InnoDB are replaced in MariaDB by Aria and XtraDB respectively. Aria offers better caching, which makes a difference when it comes to disk-intensive operations. Temporary tables also use Aria, which speeds up complex queries, such as those involving GROUP BY and DISTINCT. XtraDB gets rid of all of the InnoDB problems with slow performance and stability, especially in high-load environments.
Additional, unmatched features in MariaDB provide better monitoring through the introduction of microsecond precision and extended user statistics. MariaDB also enhances the KILL command to allow you to kill all queries for a user (KILL USER username) or to kill a query ID (KILL QUERY ID query_id). MariaDB also switched to Perl-compatible regular expressions (PCRE), which offer more powerful and precise queries than standard MySQL regex support.
In addition to more features, MariaDB has also applied a number of query optimizations for queries connected with disk access, join operations, subqueries, derived tables and views, execution control, and even explain statements. To see what these mean for database performance, visit the MariaDB optimizer benchmark page.
In addition, in the latest Red Hat 7 release, MariaDB replaces MySQL in the default software repository. This means automatic migration to MariaDB for most people who use the default distribution packages. Corporate users’ migration to MariaDB will be facilitated with additional support from Red Hat.
Database structure
When a user wants to add data to the database, he creates a new storage unit and broadcasts it to his peers. The storage unit includes, among other things (a Go sketch follows the list):
- The data to be stored. A unit can include more than one data packet, called a message. There are many different types of messages, each of which has its own structure. One type of message is a payment, used to send bytes or other assets to peers.
- Signature(s) of one or more users who created the unit. Users are identified by their addresses; an individual user may have several addresses, as in Bitcoin. In the simplest case, an address is derived from a public key, again similar to Bitcoin.
- References to one or more previous units (parents), identified by their hashes.
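As the sketch promised above, the unit might look as follows in Go; the field names follow the Byteball convention, and the hashing shown is illustrative rather than the final wire format.

// StorageUnit mirrors the description above: messages, author signatures,
// and references to parent units by hash.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

type Message struct {
	App     string          `json:"app"`     // e.g. "payment", "text"
	Payload json.RawMessage `json:"payload"` // structure depends on App
}

type StorageUnit struct {
	Messages    []Message `json:"messages"`
	Authors     []string  `json:"authors"` // addresses of the signers
	Signatures  []string  `json:"authentifiers"`
	ParentUnits []string  `json:"parent_units"` // hashes of parent units
}

// Hash identifies the unit; children reference parents by this value.
func (u StorageUnit) Hash() string {
	b, _ := json.Marshal(u)
	h := sha256.Sum256(b)
	return base64.StdEncoding.EncodeToString(h[:])
}

func main() {
	u := StorageUnit{
		Messages:    []Message{{App: "text", Payload: json.RawMessage(`"hello"`)}},
		Authors:     []string{"ADDRESS1"},
		ParentUnits: []string{"PARENT_HASH_1", "PARENT_HASH_2"},
	}
	fmt.Println("unit hash:", u.Hash())
}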
References to parents are what establishes the (partial) order of units and generalizes the blockchain structure. Because units are not confined to one-parent, one-child relationships, following the parent-child connections forward we notice many forks, where the same unit is referenced by several later units, and many merges, where one unit references several parents. This structure is known in graph theory as a directed acyclic graph (DAG): units are the vertices, and the parent-child references are the edges of the graph.
As in a blockchain, where each new block confirms all previous blocks (and the transactions therein), every new child unit in the DAG confirms its parents, the parents of its parents, and so on. If one tries to edit a unit, its hash changes, which inevitably breaks all the children that refer to it by its hash, because both the signatures and the hashes of the children depend on the parent hashes. It is therefore impossible to revise a unit without cooperating with all of its children or stealing their private keys. The children, in turn, cannot revise their own units without cooperating with their children (the grandchildren of the original unit), and so on. Once a unit is broadcast to the network and other users start building on top of it (referencing it as a parent), the number of secondary revisions needed to edit the unit keeps growing.
Unlike a blockchain, the system has no two-tier structure of ordinary users and miners. Instead, users help each other: by adding a new unit, its author also confirms all previous units.
Unlike Bitcoin, where an attempt to revise a past transaction requires a large computational effort, an attempt to revise a past record in The investors requires coordination with a large and growing number of other users, most of whom are anonymous strangers. The immutability of past records is therefore based on the sheer complexity of coordinating with such a large number of strangers, who are difficult to reach, have no interest in cooperation, and where every single one of them can veto the revision.
By referencing its parents, a unit includes them: it does not include the parents' full content, but depends on their data via the parents' hashes. Likewise, the unit indirectly depends on, and therefore includes, the parents of its parents, their parents, and so on, so each unit ultimately includes the genesis unit. There is a protocol rule that forbids redundant parents: parents where one parent already includes the other. For example, if unit B references unit A, then unit C cannot reference both A and B, because A is already, in a sense, contained inside B. This rule removes unnecessary links that add no new useful connectivity to the graph.
Native currency: bytes
Let's introduce some friction to protect the database from being spammed with useless messages. The entry hurdle should reflect both the utility for the user and the storage cost for the network; the simplest measure of both is the size of the unit. Thus, in order to store your data in the global decentralized database, you pay a fee (similar to Byteball), and the amount you pay equals the size of the data you are going to store (including all headers, signatures, etc.).
To keep incentives aligned with the interests of the network, there is one exception in the size-calculation rules: for the purpose of calculating the size of a unit, we assume it has exactly two parents, no matter how many it actually has. The size of two parent hashes is therefore always counted in the unit size. This exception ensures that users will not try to include only one parent, because the cost would be the same.
We need to encourage users to include as many parents as possible (as mentioned earlier, this does not increase the fee) and, above all, to include the most recent units as parents.
Bytes cannot be used for other services. To send a payment, the user creates a new unit that includes a payment message such as the following.
The message contains:
- An array of outputs: one or more addresses that receive the bytes and the amounts they receive.
- An array of inputs: one or more references to previous outputs that are used to fund the transfer. These are outputs that were sent to the author address(es) in the past and are not yet spent.
The sum of inputs should be equal to the sum of outputs plus commissions (input amounts are read from previous outputs and are not explicitly indicated when spending). The unit is signed with the author’s private keys.
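Since the referenced payment example is not reproduced here, the following is a hedged reconstruction in Go, modeled on Byteball's format; field names and values are placeholders.

// A payment message: inputs reference previously received, unspent outputs;
// outputs name the recipients and amounts.
package main

import (
	"encoding/json"
	"fmt"
)

type Input struct {
	Unit         string `json:"unit"` // unit where the coin was produced
	MessageIndex int    `json:"message_index"`
	OutputIndex  int    `json:"output_index"`
}

type Output struct {
	Address string `json:"address"`
	Amount  uint64 `json:"amount"`
}

type Payment struct {
	Inputs  []Input  `json:"inputs"`
	Outputs []Output `json:"outputs"`
}

func main() {
	p := Payment{
		Inputs:  []Input{{Unit: "PREV_UNIT_HASH", MessageIndex: 0, OutputIndex: 1}},
		Outputs: []Output{{Address: "RECIPIENT", Amount: 5000}, {Address: "CHANGE", Amount: 950}},
	}
	b, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(b)) // inputs must cover outputs plus commissions
}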
The total number of bytes in circulation is 10^19 (versus Byteball's 10^15), and this number is constant. All bytes are issued in the genesis unit and then transferred from user to user. Fees are collected by other users who help keep the network healthy (more details on that later), so they stay in circulation. The number 10^19 was selected as the largest round number that fits into a 64-bit unsigned integer in Golang; Byteball is implemented on Node.js, so its largest number had to be 10^15. Amounts can only be unsigned integers. Larger units of the currency are derived by applying standard prefixes: 1 kilobyte (KB) is 1,000 bytes, 1 megabyte (MB) is 1 million bytes, etc.
Double payment
If a user tries to spend the same output twice, there are two possible situations:
- There is partial order between the two units that try to spend the same output, i.e. one of the units (directly or indirectly) includes the other unit and therefore comes after it. In this case, it is obvious that we can safely reject the later unit.
- There is no partial order between them. In this case, we accept both. We establish a total order between the units later on when they are buried deep enough under newer units (see below how we do it). The one that appears earlier on the total order is deemed valid, while the other is deemed invalid.
There is one more protocol rule that simplifies the definition of the total order. We require that if the same address posts more than one unit, it should include (directly or indirectly) all its previous units in every subsequent unit, i.e. there should be partial order between consecutive units from the same address. In other words, all units from the same author should be serial.
If someone breaks this rule and posts two units such that there is no partial order between them (nonserial units), the two units are treated like double-spends even if they don’t try to spend the same output. Such non-serials are handled as described in situation 2 above.
If a user follows this rule but still tries to spend the same output twice, the double-spends become unambiguously ordered and we can safely reject the later one as in situation 1 above. The double-spends that are not non-serials at the same time are hence easily filtered out.
This rule is in fact quite natural. When a user composes a new unit, he selects the most recent other units as parents of his unit. By putting them on his parent list, he declares his picture of the world, which implies that he has seen these units. He has therefore seen all parents of parents, parents of parents of parents, and so on, up to the genesis unit. This huge set should obviously include everything he himself has produced, and has therefore seen.
By not including a unit (even indirectly, through parents), the user denies having seen it. If a user denies having seen his own previous unit by not including it, that is odd; something fishy is going on. We discourage such behavior.
In the case of double-spend, the version that comes earlier on the main chain wins. Therefore, if your node is well-connected and you see a few other transactions piling up on top of the new unconfirmed transaction, and the time since its arrival is significantly larger than the typical network latency, then you can be reasonably sure that even if a double-spend appears later it will be sorted later, hence voided.
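The two cases can be condensed into a small Go sketch; the reachability set and main-chain index (MCI) are assumed to be precomputed by the node.

// resolve returns the winning spend of the same output: if one spend
// includes the other, the included (earlier) one wins immediately;
// otherwise the one earlier on the main chain wins once both are stable.
package main

import "fmt"

type Spend struct {
	Unit     string
	Includes map[string]bool // units reachable through parents
	MCI      int             // total-order position once stable
}

func resolve(a, b Spend) Spend {
	switch {
	case a.Includes[b.Unit]: // a comes after b: reject a
		return b
	case b.Includes[a.Unit]: // b comes after a: reject b
		return a
	case a.MCI <= b.MCI: // no partial order: earlier on the MC wins
		return a
	default:
		return b
	}
}

func main() {
	a := Spend{Unit: "A", Includes: map[string]bool{}, MCI: 7}
	b := Spend{Unit: "B", Includes: map[string]bool{"A": true}, MCI: 9}
	fmt.Println("valid spend:", resolve(a, b).Unit) // B includes A, so A wins
}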
Avoiding double spends without blocks
A raw DAG structure cannot detect malicious double-spends. Transactions are guaranteed to come after their parents, but the order between two transactions that are siblings (or descended from siblings) is undefined. This is critically important, because the only way to resolve a double-spend is to decide which transaction came first!
This ambiguity increases as you add more and more branching items to the DAG. To make things worse, anyone can deliberately hide a branch from the network (including some double-spends) then broadcast it later as part of an attack. If that branch is given priority over honest transactions by the network, fake transactions will be honored over honest ones.
To avoid fake branches/items, current strategies invoke “witnesses”. A witness creates items that can be used as clear reference posts. For each item posted by a witness, we can unambiguously say that every other item either comes before or after that point. This doesn’t perfectly order every individual item but it does give us incremental “checkpoints” to protect against hidden branches. The checkpoint functions similarly to a block in a blockchain but:
- has no fixed size
- no fixed schedule
- doesn’t require the witness to be aware of any items other than those at the end of each branch (provided we can delegate pre-validating the rest of the branch)
- new items aren’t floating in a “mempool” backlog, they are continuously self-organizing into pre-validated branches to be included in an upcoming checkpoint
These differences between witnessing and block producing provide additional flexibility in how DAG networks can operate.
Because no branch or item is confirmed until a witness has seen it, hidden branches cannot reverse already-seen items.
There are a few things to note at this point:
- As there are no “blocks” the blockchain scaling equation doesn’t strictly apply (there are network limits, but we have more wiggle room)
- Without blocks with size limits, any item can hold any amount of data
- We still cannot differentiate between siblings within a witness level, so we need a fair tie-breaker rule (e.g. lowest hash wins; see the sketch after this list)
- Without witnesses we must prove a negative (impossible) “no attacker is hiding a branch” but with them, we can simply show “my transaction has been seen”
- Items that haven’t yet been seen by a witness have no order and are not confirmed, but can be pre-validated and included in new branches immediately (no amorphous mempool)
- We rely on witnesses to not collude with an attacker to award a malicious branch a high “seen” score
- Choosing different items to witness a DAG reorders the DAG — the choice of witness defines the order of a DAG, in a sense the witnesses items define time in the DAG
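The tie-breaker mentioned in the list above, as a tiny Go sketch: siblings at the same witness level are ordered deterministically by hash, lowest first.

// orderSiblings gives siblings a deterministic order regardless of the
// order in which they arrived at a node.
package main

import (
	"fmt"
	"sort"
)

func orderSiblings(hashes []string) []string {
	sorted := append([]string(nil), hashes...)
	sort.Strings(sorted) // lowest hash wins
	return sorted
}

func main() {
	fmt.Println(orderSiblings([]string{"f3a9", "0b77", "9c12"}))
}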
In real-world systems, we cannot simply rely on one witness (total centralization). We cannot randomly assign witnesses because an attacker can create millions of fake accounts to outnumber honest accounts, then very quickly take control (Sybil attack).
Mandatory requirements to sign transactions (80 hours)
Transaction signing
In this chapter, we introduce transaction signing. The main idea is to sign every single transaction to avoid double-spending; it should be a MUST condition in the protocol.
Transaction signing is available only to full nodes. If a user has a light version of the wallet, all signing procedures are performed inside the chosen full node.
If a user wants to create a transaction, he initiates the signing procedure inside the trust wallet. Each transaction should include information about the user's coins.
The trust wallet should be an obfuscated binary executable. Signing every transaction helps The investors stay fast while avoiding double-spending.
The main chain
The Byteball DAG is a well-behaved DAG: under normal use, people connect their new units to slightly less recent units, which means the DAG grows in only one direction. It can be thought of as a thick cord with many alternating wires inside. This property lets us select one chain within the DAG and relate all units to this chain: every unit will either lie directly on this chain, which we call the main chain, or be reachable from it by a relatively small number of hops along the edges of the graph. It is like a highway with connecting side roads.
One way to build the main chain is to develop an algorithm that, considering all the parents of the unit, selects one of them as the “best parent”.
The selection algorithm should be based only on the knowledge available to the unit itself, i.e. on the data contained in the unit and all its ancestors. Starting from any childless unit of the DAG, we then travel back through history along the best-parent links. In this way we build the main chain and ultimately arrive at the genesis unit. Note that the main chain built from a specific unit will never change as new units are added.
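A sketch of this walk in Go follows; the selection rule used here (highest witnessed level, ties broken by lowest hash) mirrors the Byteball approach and is an assumption about the final algorithm.

// mainChain walks best-parent links from a childless tip back to genesis.
package main

import "fmt"

type DUnit struct {
	Hash           string
	WitnessedLevel int
	Parents        []*DUnit
}

func bestParent(u *DUnit) *DUnit {
	var best *DUnit
	for _, p := range u.Parents {
		if best == nil || p.WitnessedLevel > best.WitnessedLevel ||
			(p.WitnessedLevel == best.WitnessedLevel && p.Hash < best.Hash) {
			best = p
		}
	}
	return best
}

func mainChain(tip *DUnit) []string {
	var mc []string
	for u := tip; u != nil; u = bestParent(u) {
		mc = append(mc, u.Hash)
	}
	return mc // tip ... genesis
}

func main() {
	genesis := &DUnit{Hash: "G", WitnessedLevel: 0}
	a := &DUnit{Hash: "A", WitnessedLevel: 1, Parents: []*DUnit{genesis}}
	b := &DUnit{Hash: "B", WitnessedLevel: 2, Parents: []*DUnit{genesis}}
	tip := &DUnit{Hash: "T", WitnessedLevel: 3, Parents: []*DUnit{a, b}}
	fmt.Println(mainChain(tip)) // [T B G]
}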
Once we have the main chain (MC), we can establish a total order between two conflicting nonserial units.
Recognizing that many (or even all) parent units may be created by an attacker, and remembering that choosing the best parent is in fact choosing among versions of history, we must require that our best-parent selection algorithm favor histories that are “real” from the point of view of the child unit. We therefore need to develop a “reality test” that the algorithm runs against all MC candidates to choose the one that passes.
The algorithm selects the MC that gravitates toward units created by witnesses, since witnesses are taken to represent reality. If, for example, an attacker splits off from the honest part of the network and secretly builds a long chain of his own units (a shadow chain), one of which contains a double-spend, and then merges his fork into the honest DAG, the best-parent selection algorithm at the merge point will choose the parent that continues the MC within the honest DAG, since that is where the witnesses are.
Witnesses
The witnesses could not have posted on the shadow chain, simply because they did not see it before the merger. The choice of MC thus reflects the order of events as seen by the witnesses and by the user who appointed them. After the attack is over, the entire shadow chain lands on the MC at some point, and the double-spend contained in the shadow chain is deemed invalid because its legitimate counterpart comes earlier, before the merge point. This example shows why only a majority of witnesses need to be trusted to post in a serial fashion: the majority must not collude with the attacker and post on the shadow chain. When an attacker rejoins his shadow DAG to the lit DAG, his units lose the competition to become the best parent, as the choice favors the paths that carry more witnesses (marked with w).
Finality
As new units arrive, each user keeps track of his current MC, constructed as if he were going to issue a new unit right now based on all existing childless units. The current MC may differ from user to user. Note that the current MC is constructed without taking the witness list into account, only the parent choices; this means that if two users see the same set of childless units, their current MCs will be identical even if their witness lists differ. The current MC will constantly change as new units arrive. However, as we show below, a part of the current MC eventually becomes stable.
We expect witnesses (or at least a majority of them) to behave honestly and, in particular, to include their own previous unit in each next one. This means that when a witness composes a new unit, only sufficiently recent units can be selected as parents. Therefore, we can expect all future current MCs to converge when traveled back in time; indeed, the genesis unit is a natural initial point of stability. Suppose we built the current MC based on the current set of childless units, and on this MC there is a point that was earlier deemed stable, i.e. all future current MCs are believed to converge at or before this point (again, traveling backward in time) and then follow the same route. We then need a way to advance this stability point forward.
Note that if we forget about all parents except the best parent, our DAG reduces to a tree. Obviously, all MCs travel along the branches of this tree. Positions on the MC are numbered by the main chain index (MCI); we need to consider the following cases.
First, suppose the tree does not branch after the current stability point. Then we need to consider whether a new branch could appear and, with witness support, outgrow the existing branch. Another possibility is that witnesses waver back and forth between branches; let us estimate the chances of the latter. Remember that the best parent is selected as the parent with the highest witnessed level. Travel back in time along the current MC from the tip until we have met a majority of the witnesses. If at least one of them lies before the current stability point, we do not try to advance the stability point; all these witnesses are already “invested” in the current MC. Among these witnesses, find the minimum witnessed level, min_wl. When any of these witnesses posts a new unit, that unit will have a witnessed level of at least min_wl, and since the witness must include his own previous unit, the best-parent path leading toward the current MC will have a witnessed level of at least min_wl. The witnessed level of any parent leading to the alternative branch, however, can never exceed the level at the current stability point, even if all the remaining (minority) witnesses flock to the alternative branch. Therefore, the alternative branch has lost all chances of winning against the current MC, and the stability point can advance to the next MCI.
Thus, there is a point on the current MC before which the MC will never change (assuming a majority of witnesses do not post nonserial units). This gives us a total order: if two conflicting nonserials are both behind the stability point, our decision regarding which one of them is valid is also final. If a new nonserial appears that conflicts with something already on the stable MC, the new nonserial is ordered after the old instance and is therefore invalidated. Thus, any payment inside a unit included in the stable MC is already irreversible. Each user builds his own (subjective) current MC based on the units he sees. Since the distribution of new units is not instantaneous, and they can arrive in different orders for different users, the current MCs of different users may temporarily differ, but they eventually converge.
We will use 17 witnesses for every 1,000 nodes to find the main chain. This guarantees low latency and fast transaction times. Witnesses can be changed, but at any one point there can be only one change in the witness list. Witnesses do not hold any special authority; they are only used to find the main chain. To include witnesses in the system we need to implement the best-performing communication layer.
Scalability
At first glance, the DAG (Directed Acyclic Graph) design used by distributed ledgers like IOTA and Byteball looks like an amazing innovation. Instead of the bulky blocks used as transaction containers in existing blockchain designs, a DAG builds a graph of transactions that reference older transactions, and can thus confirm transactions immediately when a node receives them, instead of waiting for the next block.
Anyone who tries to use a blockchain will quickly appreciate that fast confirmation times of transactions are an advantage over having to wait for transactions to be grouped into a block before being able to rely on their state or balance changes.
And indeed, a DAG confirms transactions quickly as long as the node already knows about the two transactions approved by the incoming transaction. But synchronizing state between nodes seems to be a major issue for existing DAG implementations: IOTA currently relies on a single coordinator node, while The investors relies on 17 witness nodes, all controlled by the developer himself, to checkpoint the state of the DAG.
But why is the synchronization of a DAG more difficult than the synchronization of a blockchain? Simply put, the blockchain state is modified by every block while the DAG state is modified by every transaction.
As someone who spent endless hours working on blockchain load testing and scaling, I can testify that decentralized networks operating under heavy load will naturally generate forks even when all participants are honest and work by the protocol rules. I can also testify that switching to a better fork is one of the heaviest operations a node needs to perform since it has to undo changes made by existing transactions in the popped-off blocks and apply the changes made by the transactions included in the blocks of the better fork.
However, I claim that an advantage of a blockchain over a DAG is that a blockchain is sensitive only to the order of blocks, not to the order of transactions, and this fact makes it no less scalable than a DAG.
In a distributed network, different nodes see transactions at different times and in different orders, and the higher the transaction rate, the more frequent these ordering differences become.
For a blockchain, this is not a serious problem since block generation is made artificially difficult thus limiting average block generation frequency to 15 seconds on Ethereum, 1 minute on NXT/Ardor, and 10 minutes on Bitcoin. This means that even under heavy load, forks caused by conflicting blocks are infrequent and when they do occur, nodes have time to perform the large processing required to switch to a better fork in order to synchronize the state with another node and before yet more blocks are generated.
In a DAG implementation, however, the latency of transaction propagation and the resulting ordering differences cause several problems as transaction throughput increases: nodes start receiving transactions submitted by other nodes for which one or both of the approved transactions are not yet known to them, and which therefore cannot be added to the DAG.
DAG advocates state that you need a large number of simultaneous transactions in order to remove the requirement for coordinator and witness nodes. In response, I would like to point out several weaknesses in the DAG design which I predict will surface on a DAG network that accepts 1,000 transactions per second, and perhaps much less.
In a 1,000 TPS network, when a remote DAG node receives transactions, say, a second after they were submitted (recall that the speed of light is finite and that the internet operates much slower than the speed of light), this remote node will already lag 1,000 transactions behind the more central nodes in the network. This causes several problems:
- The node is almost certain to be unable to process some transactions immediately, because approved transactions that the submitting node has seen are still propagating through the network; it will have to keep such transactions in a possibly ever-growing unconfirmed pool, which will drain the node's resources.
- When all the approved ancestors of a transaction's sub-tangle finally arrive at the node, these transactions may approve transactions that are already buried many levels of nesting behind the current tips of the DAG. When the load reaches some tipping point, the DAG will start a cloud-like expansion in all directions, with an ever-growing number of tips representing an ever-growing number of transactions without approval.
- Assuming a transaction is executed when it is added to the DAG, the execution order will differ significantly between nodes, rendering the DAG useless for applications that require guarantees about execution order, such as votes in polls that must arrive before the poll ends.
- This will also prevent pruning of the DAG, since there won't be a stable state that nodes can snapshot (in the absence of a single coordinator node) in order to prune all prior transaction history.
To summarize, I suspect that the ability of a DAG to confirm transactions as they arrive causes it to become more sensitive to transaction propagation latency and the order in which transactions arrive at a node, which under load may cause accumulation of unconfirmed transactions and growth of the DAG into a cloud-like shape where acceptance of transactions referring to old tips is not an exception but the norm and ever-increasing number of tips remain unapproved.
Storage of non-serial units
When we decide that a unit is nonserial, we still need to keep it; however, some of its data is replaced by a hash of that data. This rule serves two purposes. First, it reduces the amount of storage consumed by a unit that nobody paid for (all the content of a nonserial unit is considered invalid, including its commission payment). Second, it reduces the usefulness of the nonserial unit to the user who sent it, because the hash replaces all the useful data the author wanted to store (for free). This prevents attackers from using nonserial units as a way to store large amounts of data for free.
A hash stored in place of the full content still has some usefulness for an attacker, because he can store the data himself and use the hash to prove that the data existed. But remember that:
- He still has to pay for one unit that is considered valid.
- If an attacker already stores, internally, the metadata necessary to interpret his data, he can just as well merge all of his data into a Merkle tree and use The investors to store only his Merkle root, for the cost of one small unit.
With this design, there is no incentive to try to send nonserials.
We still try to avoid nonserials where possible: the best-parent selection algorithm excludes them while they are still childless. For this reason, it is desirable that peers learn about nonserials as soon as possible.
The investor's Balls
After a unit becomes stable (i.e. it is included in the stable part of the MC), we create a new structure based on this unit, which we call a ball:
The investors_ball: {
    "unit": "hash of unit",
    "parent_balls": [array of hashes of balls based on parent units],
    "is_nonserial": true, // this field is included only if the unit is nonserial
    "skiplist_balls": [array of earlier balls used to build the skip list]
}
The investor's ball includes information about all its ancestor balls (via its parents), so the amount of information it depends on grows like a snowball. We also have a flag in the ball that tells us whether the unit ended up being invalid (nonserial), and we have references to older balls that we will later use to build proofs for light clients.
We can only build a ball when the corresponding unit becomes stable and we know for certain whether it is serial. Since the current MCs as viewed by different users are eventually consistent, they will all build exactly the same ball based on the same unit.
Last The investor's Ball
To protect The investor's balls (most importantly, the is_nonserial flag) from modification, we require that each new unit include the hash of the last ball its author knows about, i.e. the ball built from the last stable unit lying on the MC. The last ball is thus protected by the author's signature. Later, the new unit will itself be (directly or indirectly) included by witnesses.
If someone who does not have the entire The investors database wants to know whether a particular unit is serial, he provides us with a list of witnesses he trusts to behave honestly; we then build a chain of recent units that includes a majority of those witnesses, read the last ball from the oldest unit of that chain, and use The investor's balls to build a hash tree with the last ball at its top and the requested unit somewhere below. This hash tree looks like a very tall Merkle tree, with additional data fed into each node. The tree can be optimized with the help of the skip list.
The reference to the last ball also lets users see what their peers think about the stability point of the MC and compare it with their own vision.
We also require that the last ball of a unit lie no earlier than the last ball of each of its parents. This ensures that the last ball either advances forward along the MC or stays in the same position, but never retreats.
To further limit an attacker's options, we add one more requirement: the witness list of a unit must be compatible with the witness list of every unit lying on the stretch of the MC between this unit and its last ball unit. This requirement ensures that all changes to the witness list happen gradually, one step at a time. Otherwise, an attacker could post a completely changed witness list on the MC and stop publishing from the addresses of the old witnesses; in such a case, the prevailing view of reality would be the one taken by the attacker's witnesses.
The requirement that witness lists of all contemporary units are mostly similar means that all users have mostly similar views about who can be trusted to serve as lighthouses for the community at the current time.
Skip list
Some balls contain a skip-list array that enables faster construction of proofs for light clients (see below). Only balls that lie directly on the MC and whose MC index (MCI) is divisible by 10 have a skip list. The skip list references the nearest previous MC balls whose index has the same or a smaller number of trailing zeros. For example, the ball at MCI 190 has a skip list that references the ball at MCI 180, while the ball at MCI 3000 references the balls at MCIs 2990, 2900, and 2000.
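The rule can be written as a small Go function; the reference MCIs for 190 and 3000 reproduce the examples above.

// skiplistMCIs lists the MCIs a ball's skip list points to: for each power
// of ten dividing the MCI, the nearest previous multiple of that power.
package main

import "fmt"

func skiplistMCIs(mci int) []int {
	if mci <= 0 || mci%10 != 0 {
		return nil // only MCIs divisible by 10 carry a skip list
	}
	var refs []int
	for step := 10; mci%step == 0 && mci-step >= 0; step *= 10 {
		refs = append(refs, mci-step)
	}
	return refs
}

func main() {
	fmt.Println(skiplistMCIs(190))  // [180]
	fmt.Println(skiplistMCIs(3000)) // [2990 2900 2000]
}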
Witness list unit
It is expected that many users will want to use exactly the same witness list. In that case, to save space, they do not list the addresses of all 17 witnesses; rather, they reference an earlier unit that listed those witnesses explicitly. The witness list unit must be stable from the point of view of the referencing unit, i.e. it must be included in its last ball unit.
Unit example
- version is the protocol version number.
- alt is an identifier of alternative currency.
- messages is an array of one or more messages that contain the actual data;
- app is the type of message, e.g. ‘payment’ for payments, ‘text’ for arbitrary text messages, etc;
- payload_location says where to find the message payload. It can be ‘inline’ if the payload is included in the message, ‘URI’ if the payload is available at an internet address, ‘none’ if the payload is not published at all, is stored and/or shared privately, and payload_hash serves to prove it existed at a specific time;
- payload_hash is a hash of the payload in base64 encoding;
- payload is the actual payload (since payload_location is ‘inline’ in this example). The payload structure is app-specific. Payments are described as follows:
- inputs is an array of input coins consumed by the payment. All owners of the input coins must be among the signers (authors) of the unit;
- unit is the hash of the unit where the coin was produced. To be spendable, the unit must be included in last_ball_unit;
- message_index is an index into the messages array of the input unit. It indicates the message where the coin was produced;
- output_index is an index into the outputs array of the message_index’th message of the input unit. It indicates the output where the coin was produced;
- outputs is an array of outputs that say who receives the money;
- address is The investor's address of the recipient;
- amount is the amount he receives;
- authors is an array of the authors who created and signed this unit. All input coins must belong to the authors;
- address is the author's address in The investors;
- authentifiers is a data structure that proves the author’s authenticity. Most commonly these are ECDSA signatures;
- parent_units is an array of hashes of parent units. It must be sorted alphabetically;
- last_ball and last_ball_unit are hashes of the last ball and its unit, respectively;
- witness_list_unit is a hash of the unit where one can find the witness list.
All hashes are in base64 encoding.
Note that there is no timestamp field in the unit structure. There are no protocol rules that rely on clock time. It’s simply not needed, as it is enough to rely on the order of events alone.
The timestamp is still added to units when they are forwarded from node to node. However, this is only advisory and used by light clients to show in wallets the approximate time when a unit was produced, which may significantly differ from the time it was received as light clients may go offline for extended periods of time.
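Assembling the fields described above into one hypothetical instance (the values are placeholders and the exact wire format may differ):

// A hedged reconstruction of a full unit, printed as JSON.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	unit := map[string]any{
		"version": "1.0",
		"alt":     "1",
		"messages": []map[string]any{{
			"app":              "payment",
			"payload_location": "inline",
			"payload_hash":     "BASE64_PAYLOAD_HASH",
			"payload": map[string]any{
				"inputs":  []map[string]any{{"unit": "INPUT_UNIT_HASH", "message_index": 0, "output_index": 1}},
				"outputs": []map[string]any{{"address": "RECIPIENT_ADDRESS", "amount": 5000}},
			},
		}},
		"authors":           []map[string]any{{"address": "AUTHOR_ADDRESS", "authentifiers": map[string]string{"r": "ECDSA_SIGNATURE"}}},
		"parent_units":      []string{"PARENT_HASH_A", "PARENT_HASH_B"}, // sorted alphabetically
		"last_ball":         "LAST_BALL_HASH",
		"last_ball_unit":    "LAST_BALL_UNIT_HASH",
		"witness_list_unit": "WITNESS_LIST_UNIT_HASH",
	}
	b, _ := json.MarshalIndent(unit, "", "  ")
	fmt.Println(string(b))
}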
Commissions
As mentioned earlier, the cost of storing a unit is its size in bytes. The commission is divided into two parts: a headers commission and a payload commission. The payload commission equals the size of the messages; the headers commission is the size of everything else. The two types of commission are distributed in different ways.
The headers commission goes to one of the future units that selects the paying unit as a parent. The receiver is selected only after the MCI of the payer and the subsequent MCI become stable. To determine the recipient, we take those children whose MCI equals the payer's MCI or exceeds it by at most 1. The hash of each such child is combined with the hash of a unit lying on the next MCI (relative to the payer), and the child with the lowest combined hash value (in hex) wins the headers commission. Hashing in the next MC unit introduces unpredictability (the next MC unit is not known in advance) and makes it useless to try to improve one's chances of winning the commission by playing with one's own unit hash. At the same time, restricting candidates to those whose MCI exceeds the payer's MCI by no more than 1 encourages selecting the most recent units as parents, which helps keep the DAG as narrow as possible.
We pay only the headers commission, and not the entire commission, to those who are quick to pick our unit as a parent, for the following reason. If we did pay the entire commission, we would incentivize abusive behavior: split one's data into several chunks and build a long chain of one's own units storing one chunk per unit. All the commissions paid in a previous unit would then be immediately collected by the same user in the next unit. As we pay only the headers commission, such behavior is not profitable, because producing each additional element of the chain costs an additional headers commission, roughly the same as one earns. We use the remaining (payload) commission to incentivize others whose activity is important for keeping the network healthy.
The payload commission goes to witnesses. To incentivize witnesses to post frequently enough, we split the payload commission equally among all witnesses who are quick enough to post within 100 MC indexes after the paying unit (the faster they post, the faster this unit becomes stable). If all 17 witnesses have posted within this interval, each receives 1/17 of the payload commission. If only one witness has posted, he receives the entire payload commission. In the special case that no witness has posted within this interval, they all receive 1/17 of the payload commission. If the division produces a fractional number, it is rounded according to the usual mathematical rules. Because of this rounding, the total commission paid out to witnesses may not equal the total payload commission received from the unit's author(s), so the total money supply will change slightly as well. Obviously, the distribution happens only after MCI+100 becomes stable, where MCI is that of the paying unit.
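A sketch of the payload-commission split in Go follows; plain integer division stands in for the exact rounding rule, and the witness names are placeholders.

// splitPayloadCommission shares the payload commission among the witnesses
// who posted within the 100-MCI window; if none did, all 17 share.
package main

import "fmt"

const totalWitnesses = 17

func splitPayloadCommission(commission uint64, fastWitnesses []string) map[string]uint64 {
	payout := map[string]uint64{}
	if len(fastWitnesses) == 0 {
		// special case: nobody posted within the interval, all receive 1/17
		share := commission / totalWitnesses
		for i := 0; i < totalWitnesses; i++ {
			payout[fmt.Sprintf("witness%d", i)] = share // placeholder names
		}
		return payout
	}
	share := commission / uint64(len(fastWitnesses))
	for _, w := range fastWitnesses {
		payout[w] = share
	}
	return payout
}

func main() {
	fmt.Println(splitPayloadCommission(1000, []string{"w1", "w2", "w3"}))
}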
To spend the earned headers commissions or witnessing commissions, the following input is used:
Such inputs sweep all headers or witnessing commissions earned by the author from commission paying units that were issued between main chain indexes from_main_chain_index and to_main_chain_index. Naturally, to_main_chain_index must be stable.
When a unit signed by more than one author earns a headers commission, there is ambiguity as to how the commission is split among the authors. To remove the ambiguity, each unit signed by more than one author must include a data structure that describes the proportions of revenue sharing:
The addresses that receive the commissions need not be the same as the author addresses; the commission can be sent to any address. Even a unit signed by a single author can include this field to redirect headers commissions elsewhere.
Confirmation time
Confirmation time is the time from a unit entering the database to it reaching stability. It depends on how often the witnesses post, since to reach stability we need to accumulate enough witness-authored units on the MC after the newly added unit. To minimize the confirmation period, witnesses should post frequently enough (which they are already incentivized to do via the commission distribution rules) but not too frequently. If two or more witnesses issue their units nearly simultaneously (faster than it typically takes to propagate a new unit to other witnesses), this may cause unnecessary branching of the tree composed of best-parent links, which would delay stability. For this reason, the best confirmation times are reached when the witnesses are well connected and run on fast machines, so that they can quickly validate new units. We estimate the best confirmation times to be around 20 seconds (less than Byteball's, because we use a different communication layer); this is only reachable if the flow of new units is large enough that witnesses earn more from witnessing commissions than they spend posting their own units.
Although full confirmation takes rather long, a node that trusts its peers to deliver all new units without filtering may be reasonably sure that once a unit has been included by at least one witness, plus a typical latency has elapsed (the time it takes a new unit to travel from peer to peer), the unit will most likely reach finality and be deemed valid. Even if a double-spend appears later, it will likely be ordered after this unit.
Transaction serial numbering
Attaching a serial number to each transaction helps avoid ordering disputes in the network. It creates an environment where the DLT (distributed ledger technology) is maximally autonomous and less reliant on human judgment, which is vulnerable to deviation.
Each wallet will issue a serial number linked to the transaction at the moment it is sent. The same serial number will be sent to the recipient wallet and to the rest of the network simultaneously. Each wallet balance will be computed by the computational node wallet, and the record will be sent to the rest of the DLT at the same time. Because each wallet attaches serial numbers to each transaction, and wallet balances are calculated at the same time and communicated to the rest of the DLT, the network is both secure and flexible.
The witness protocol will be used as a second layer of protection in case any discrepancies appear in the network.
Partitioning risk
The network of The investors nodes can never be partitioned into two parts that would both continue operating without noticing. Even in the event of a global network disruption, such as a cut of the transatlantic cables that connect Europe and America, at least one of the sides of the split will notice that it has lost the majority of witnesses, meaning that it cannot advance the stability point and nobody can spend outputs stuck in the unstable part of the MC. Even if someone tries to send a double-spend, it will remain unstable (and therefore unrecognized) until the connection is restored. The other part of the split, where the majority of witnesses happen to be, will continue as normal.
Censorship
By design, it is already impossible to modify or erase any past records in The investors. It is also quite hard to stop any particular type of data from entering the database.
First, the data itself can be concealed and only its hash actually posted to the database to prove that the data existed. The data may then be revealed only after the hash is stored and its unit has been included by other units, so that it has become unrevisable.
Second, even when the data is open, the decision to include or not include it in the database is delegated to numerous anonymous users who might (and in fact are incentivized to) take the new unit as a parent. Someone who tries to censor undesirable units would have to avoid including them not only directly (as parents) but also indirectly, through other units. (This is different from Bitcoin, where miners or mining pools can and do filter individual transactions directly. Besides, Bitcoin users have no say in who is to become a miner.) As the number of units that include the "offending" unit snowballs, any attempt to avoid it would entail censoring oneself. Only the majority of witnesses can effectively impose forbidden content rules, and only if users choose such witnesses.
Choosing witnesses
Reliance on witnesses is what makes The investors rooted in the real world. At the same time, it makes the system highly dependent on human decisions. The health of the system depends on users responsibly setting the lists of witnesses they trust. This process cannot be safely automated. For example, if most users start auto-updating their witness lists to match the lists of the most recently observed units, just to be compatible, this can be easily exploited by an attacker who floods the network with his own units that gradually change the predominant witness list to something of the attacker's choosing.
The maximalist recommendation would be to only edit witness lists manually, which is too burdensome for most users. A more practical approach to witness list management is tracking and somehow averaging the witness lists of a few "captains of industry" who either have an interest in caring for the network's health or have earned a good reputation in activities not necessarily connected with The investors. Some of them may be acting witnesses themselves. Unlike witness lists, the lists of captains of industry don't have to be compatible, and failing to update the list frequently enough has no immediate negative implications such as being unable to find compatible parents and post a new unit. We expect that most users will use one of a relatively small number of popular wallets, and such wallets will be set up by default to follow the witness list of the wallet vendor, who in turn likely watches the witness lists of other prominent players.
Witnesses also have their witness lists, and it is recommended that users select those witnesses who they trust to keep their witness list representative of ordinary users’ beliefs. This is very important because no change to the predominant witness list can pass without the approval of the majority of the current witnesses. It is recommended that witnesses and would-be witnesses publicly declare their witness list policy (such as following and averaging witness lists of other reputable users) and that users evaluate their fitness for the job based on this policy, among other factors. Any breach of the declared policy will be immediately visible and will likely trigger a witness replacement campaign. The same is true for an unjustified amendment to the policy. The policy binds the witness and makes him follow public opinion, even when it turns against the witness himself or his friends.
As mentioned before, our protocol rules require that:
- the best parent is selected only among parents whose witness list has no more than 1 mutation;
- there should be no more than 1 mutation relative to the witness list of the last ball unit;
- there should be no more than 1 mutation relative to the witness lists of all the unstable MC units up to the last ball unit;
- the stability point advances only when the current witnesses (as defined in the current stability point) post enough units after the current stability point.
These rules are designed to protect against malicious and accidental forks. At the same time, they imply that any changes to the predominant witness list have to be gradual, and each step has to be approved by the majority of the current witnesses. A one-position change has to first reach stability and recognition of the majority of old witnesses before another change can be undertaken. If the community decides abruptly that two witnesses need to be replaced immediately, then after one change makes its way onto the MC, the second change will be blocked by rule 3 above until the first change reaches stability.
Despite all the recommendations above, it is still possible that, due to the negligence of industry leaders, witnesses are elected who later form a cartel and collectively block all attempts to replace any one of them, in order to keep the profits they are earning from witnessing commissions. If they do behave this way, it will be evident to everybody, because their witness lists will remain unchanged while the witness lists of most other industry leaders will differ by one mutation (the maximum allowed to remain compatible). If the old witnesses do not give in despite such evident pressure, the only recourse of the pro-change users is a "revolution": to start a new coin that inherits all the balances, user addresses, etc. from the old coin at some point, but starts with a new witness list and adds a special protocol rule to handle this incompatible change at the moment of the schism. To distinguish the new coin from the old one, they would then assign a new value to the 'alt' field (this is what 'alt' is for) and use it in all units issued under the new coin. As a result, users will hold two coins (the old alt="1" and the new, e.g. alt="2") and will be able to spend both independently. If the split was justified, the old coin will probably be abandoned, but all the data accumulated prior to the schism will be available as normal in the new coin. Since the protocol is almost identical (except for the rule that handles the schism and the change of alt), it will be easy to update the software installed on all user and merchant devices. If someone just wants to start a new coin to experiment with another set of protocol rules, he can also use the 'alt' field to inherit everything from the old coin, make the switch comfortable for users, and have a large set of users with balances from day one.
Light clients
Light clients do not store the full The investors database. Instead, they download only a subset of data they are interested in, such as only the transactions where any of the user's addresses are spending or being funded.
Light clients connect to full nodes to download the units they are interested in. The light client tells the full node the list of witnesses it trusts (not necessarily the same witnesses it uses to create new units) and the list of its own addresses. The full node searches for units the light client is interested in and constructs a proof chain for each unit in the following way:
- Walk back in time along the MC until the majority of the requested witnesses are met. Collect all these MC units.
- From the last unit in this set (which is also the earliest in time), read the last ball.
- Starting from this last ball, walk back in time along the MC until any ball with a skiplist is met. Collect all these balls.
- Using the skiplist, jump to an earlier ball referenced from the skiplist. This ball also has a skiplist; jump again. When there are several balls in the skiplist array, always jump by the largest distance possible, so we accelerate: first by 10 indexes, then by 100, then by 1,000, and so on.
- If the next jump by the skiplist would overshoot the target ball, decelerate by jumping a smaller distance. Ultimately, leave the skiplist and walk along the MC one index at a time using just parent links.
This chain has witness-authored units at the beginning, making it trustworthy from the light client's point of view. All the elements of the chain are linked either by parent unit links (while accumulating the witnesses), or by the last ball reference, or by parent ball links, or by skiplist links. At the end of the chain, we have the unit whose existence was to be proved.
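A minimal sketch of the skiplist walk (the last two steps above), assuming a simplified Ball record; the field names are illustrative, not the production data model:

interface Ball {
  mci: number;          // main chain index of this ball
  parent: Ball | null;  // the ball one MC index earlier
  skiplist: Ball[];     // earlier balls at distances 10, 100, 1000, ...
}

// Walk from `start` back to the ball at `targetMci`, collecting the
// balls visited; jump by the largest skiplist distance that does not
// overshoot the target, then decelerate to single parent steps.
function collectProofBalls(start: Ball, targetMci: number): Ball[] {
  const proof: Ball[] = [];
  let cur: Ball = start;
  while (cur.mci > targetMci) {
    proof.push(cur);
    const jump = [...cur.skiplist]
      .sort((a, b) => a.mci - b.mci)    // farthest jumps first
      .find((b) => b.mci >= targetMci); // largest jump that stays at or after the target
    cur = jump ?? cur.parent!;          // no usable jump: step one index back
  }
  proof.push(cur);                      // the target ball itself
  return proof;
}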
Multilateral signing
A unit can be signed by multiple parties. In such instances, the authors array in the unit has two or more elements.
This can be useful, for example, if two or more parties want to sign a contract (a plain old dumb contract, not a smart one). They would both sign the same unit that contains a text message (app='text'). They don't have to store the full text of the contract in the public database and pay for it; a hash would suffice (payload_location='none'), and the parties themselves can store the text privately.
Another application of multilateral signing is an exchange of assets. Assume user A wants to send asset X to user B in exchange for asset Y (the native currency 'bytes' is also an asset, the base asset). They would compose a unit that contains two payment messages: one payment sends asset X from A to B, the other sends asset Y from B to A. They both sign the dual-authored unit and publish it. The exchange is atomic: either both payments execute at the same time, or both fail. If one of the payments appears to be a double-spend, the entire unit is rendered invalid and the other payment is also deemed void.
This simple construction allows users to exchange assets directly, without trusting their money to any centralized exchange.
Addresses
Users are identified by their addresses, and transaction outputs are sent to addresses. As in Bitcoin, it is recommended that users have multiple addresses and avoid reusing them. In some circumstances, however, reuse is normal. For example, witnesses are expected to repeatedly post from the same address.
An address represents a definition, which is a Boolean expression (remotely similar to Bitcoin script). When a user signs a unit, he also provides a set of authentifiers (usually ECDSA signatures) which, when applied to the definition, must evaluate to true in order to prove that this user had the right to sign this unit. We write definitions in JSON. For example, this is the definition for an address that requires one ECDSA signature to sign: ["sig", {"pubkey": "Ald9tkgiUZQQ1djpZgv2ez7xf1ZvYAsTLhudhvn0931w"}]
The definition indicates that the owner of the address has a private key whose public counterpart is given in the definition (in base64 encoding), and he will sign all units with this private key. The above definition evaluates to true if the signature given in the corresponding authentifier is valid, and to false otherwise. The signature is calculated over all data of the unit except the authentifiers.
Given a definition object, the corresponding address is just a hash of the initial definition object plus a checksum. The checksum is added to guard against typing errors. Unlike usual checksum designs, however, the checksum bits are not simply appended to the end of the unchecksummed data. Rather, they are inserted into multiple locations inside the data. This design makes it hard to insert long strings of illegal data in fields where an address is expected. The address is written in base32 encoding. The above definition corresponds to the address:
A2WWHN7755YZVMXCBLMFWRSLKSZJN3FU
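For illustration, here is a sketch of the general construction in TypeScript; the hash choices, checksum length, and insertion positions below are assumptions, not the production algorithm:

import { createHash } from "crypto";

// Illustrative only: hash the definition, derive a short checksum,
// and interleave the checksum bytes inside the hash (not appended).
function addressFromDefinition(definition: unknown): string {
  const json = JSON.stringify(definition);
  const hash = createHash("sha256").update(json).digest();
  const body = hash.subarray(0, 20); // 160-bit address body (assumed size)
  const checksum = createHash("sha256").update(body).digest().subarray(0, 4);
  // Insert checksum bytes at fixed positions inside the body, so crafted
  // strings fail validation wherever an address is expected.
  const mixed = Buffer.concat([
    checksum.subarray(0, 1), body.subarray(0, 7),
    checksum.subarray(1, 2), body.subarray(7, 14),
    checksum.subarray(2, 4), body.subarray(14, 20),
  ]);
  return mixed.toString("base64"); // the real encoding is base32
}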
When an address is funded, the sender of the payment knows and specifies only the address (the checksummed hash of the definition) in the payment output. The definition is not revealed and it remains unknown to anyone but the owner until the output is spent.
When a user sends his first unit from an address, he must reveal the address definition (so as to make signature verification possible) in the authors array.
If the user sends a second unit from the same address, he must omit the definition (it is already known on The investors). He can send the second unit only after the definition becomes stable, i.e. the unit where the definition was revealed must be included in the last ball unit of the second unit.
Users can update the definitions of their addresses while keeping the old address. For example, to rotate the private key linked to an address, the user needs to post a unit that contains an address_definition_change message such as the sketch below.
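A hypothetical shape of such a message (the app name and field spelling are assumptions based on the surrounding text):

{
  app: "address_definition_change",
  payload: { definition_chash: "CHECKSUMMED HASH OF THE NEW DEFINITION" }
}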
Here, definition_chash indicates the checksummed hash of the new address definition (which is not revealed until later), and the unit itself must be signed by the old private keys. The next unit from this address must:
- include this address_definition_change unit in its last ball unit, i.e. it must already be stable;
- reveal the new definition in the authors array in the same way as for the first message from an address.
After the change, the address is no longer equal to the checksummed hash of its current definition. Rather, it remains equal to the checksummed hash of its initial definition.
The definition change is useful if the user wants to change the key(s) (e.g. when migrating to a new device) while keeping the old address, for example if this address already participates in other long-lived definitions (see below).
Logical operators
A definition can include “and” conditions, for example:
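A sketch with placeholder keys, following the JSON definition syntax used above:

["and", [
  ["sig", {"pubkey": "laptop pubkey in base64"}],
  ["sig", {"pubkey": "smartphone pubkey in base64"}]
]]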
Such a condition is useful when, in order to sign transactions, signatures from two independent devices are required, for example from a laptop and from a smartphone.
"Or" conditions, such as this:
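Again a sketch with placeholder keys:

["or", [
  ["sig", {"pubkey": "laptop pubkey in base64"}],
  ["sig", {"pubkey": "smartphone pubkey in base64"}]
]]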
Such conditions are useful when a user wants to use the same address from any of his devices. The conditions can be nested.
A definition can require a minimum number of conditions to be true out of a larger set, for example, a 2-of-3 signature:
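A sketch, assuming an "r of set" operator (consistent with the "r" mentioned below); the keys are placeholders:

["r of set", {
  required: 2,
  set: [
    ["sig", {"pubkey": "pubkey1 in base64"}],
    ["sig", {"pubkey": "pubkey2 in base64"}],
    ["sig", {"pubkey": "pubkey3 in base64"}]
  ]
}]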
("r" stands for "required") which features both the security of two mandatory signatures and reliability: if one of the keys is lost, the address is still usable and can be used to change its definition and replace the lost third key with a new one.
Also, different conditions can be given different weight, of which a minimum is required:
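A sketch, assuming a "weighted and" operator (an assumption on our part); here any combination of conditions whose weights sum to at least 50 unlocks the address:

["weighted and", {
  required: 50,
  set: [
    {weight: 40, value: ["sig", {"pubkey": "pubkey1 in base64"}]},
    {weight: 40, value: ["sig", {"pubkey": "pubkey2 in base64"}]},
    {weight: 20, value: ["sig", {"pubkey": "pubkey3 in base64"}]}
  ]
}]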
An address can contain a reference to another address:
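Such a reference might look like this ("address" also appears in the list of condition names later in this section):

["address", "ANOTHER ADDRESS IN BASE32"]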
which delegates signing to another address and is useful for building shared control addresses (addresses controlled by several users). This syntax gives users the flexibility to change the definitions of their own component addresses whenever they like, without bothering the other user.
In most cases, a definition will include at least one signature (directly or indirectly):
["sig", {"pubkey": "pubkey in base64"}]
Instead of a signature, a definition may require a preimage for a hash to be provided:
["hash", {"hash": "value of sha256 hash in base64"}]
which can be useful for cross-chain exchange algorithms [7]. In this case, the hash preimage is entered as one of the authentifiers.
The default signature algorithm is ECDSA on curve secp256k1 (the same as Bitcoin). Initially, it is the only supported algorithm. If other algorithms are added in the future, an algorithm identifier will be used in the corresponding part of the definition, for example for the quantum-secure NTRU algorithm:
["sig", {"algo": "ntru", "pubkey": "NTRU public key in base64"}]
Multisignature definitions allow one to safely experiment with unproven signature schemes when they are combined with more conventional signatures.
The authentifiers object in unit headers contains signatures or other data (such as hash preimages) keyed by the path of the authentifier-requiring sub-definition within the address definition. For a single-sig address such as
["sig", {"pubkey": "pubkey in base64"}]
the path is simply "r" ("r" stands for root). If the authentifier-requiring sub-definition is included within another definition (such as and/or), the path is extended by the index into the array where this sub-definition is included, and path components are delimited by a dot. For example, for this address definition:
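(a sketch with placeholder keys, matching the paths named below)

["or", [
  ["and", [
    ["sig", {"pubkey": "pubkey1 in base64"}],
    ["sig", {"pubkey": "pubkey2 in base64"}]
  ]],
  ["sig", {"pubkey": "pubkey3 in base64"}]
]]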
the paths are “r.0.0”, “r.0.1”, and “r.1”. When there are optional signatures, such as 2-of-3, the paths tell us which keys were actually used.
A definition can also reference a definition template:
["definition template", [
  "hash of unit where the template was defined",
  { "param1": "value1", "param2": "value2" }
]]
The parameters specify the values of variables to be replaced in the template. The template needs to be saved beforehand (and, as usual, be stable before use) with a special message type app='definition_template'; the template itself goes in the message payload. The template looks like a normal definition but may include references to variables using the syntax @param1, @param2. Definition templates enable code reuse. They may in turn reference other templates.
A sub-definition may require that the unit be cosigned by another address:
["cosigned by", "ANOTHER ADDRESS IN BASE32"]
Another possible requirement for a sub-definition is that an address was seen as an author in at least one unit included in the last ball unit:
["seen address", "ANOTHER ADDRESS IN BASE32"]
One very useful condition allows making queries about data previously stored in The investors:
["in data feed", [
  ["ADDRESS1", "ADDRESS2", …],
  "data feed name", "=", "expected value"
]]
This condition evaluates to true if there is at least one message that has "data feed name" equal to "expected value" among the data feed messages authored by the listed addresses "ADDRESS1", "ADDRESS2", … (the oracles).
Data feeds can be used to design definitions that involve oracles. If two or more parties trust a particular entity (the oracle) to provide true data, they can set up a shared control address that gives the parties different rights depending on the data posted by the oracle(s).
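A hypothetical definition of such a shared address, consistent with the EUR/USD scenario described below (the addresses and feed names are placeholders):

["or", [
  ["and", [
    ["address", "ADDRESS OF PARTY 1"],
    ["in data feed", [["EXCHANGE ADDRESS"], "EUR_USD", ">", "1.1500"]]
  ]],
  ["and", [
    ["address", "ADDRESS OF PARTY 2"],
    ["in data feed", [["TIMESTAMPER ADDRESS"], "datetime", ">", "2018-10-01"]]
  ]]
]]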
Initially, the two parties fund the address defined by this definition (to remove any trust requirements, they use multilateral signing and send their stakes in a single unit signed by both parties). Then, if the EUR/USD exchange rate published by the exchange address ever exceeds 1.1500, the first party can sweep the funds. If this doesn't happen before Oct 1, 2018, and the timestamping oracle posts any later date, the second party can sweep all funds stored on this address. If both conditions are true and the address balance is still non-empty, both parties can try to take the money from it at the same time, and the double-spend will be resolved as usual.
The comparison operators can be "=", "!=", ">", ">=", "<", and "<=". The data feed message must come before the last ball unit, as usual. To reduce the risk of any single oracle suddenly going offline, several feed provider addresses can be listed.
Another example would be a customer who buys goods from a merchant but does not quite trust that merchant and wants his money back in case the goods are not delivered. The customer pays to a shared address defined by:
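(a hypothetical sketch: the addresses, feed names, and the refund deadline branch are assumptions based on the surrounding description)

["or", [
  ["and", [
    ["address", "CUSTOMER ADDRESS"],
    ["in data feed", [["TIMESTAMPER ADDRESS"], "datetime", ">", "refund date"]]
  ]],
  ["and", [
    ["address", "MERCHANT ADDRESS"],
    ["in merkle", [["ADDRESS1", "ADDRESS2"], "shipments", "hash of expected value"]]
  ]]
]]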
which evaluates to true if the specified hash of the expected value is included in any of the Merkle roots posted in the data feed on addresses "ADDRESS1", "ADDRESS2", … Using this syntax, FedEx would only need to periodically post Merkle roots of all shipments completed since the previous posting. To spend from this address, the merchant would have to provide the Merkle path that proves that the specified value is indeed included in the corresponding Merkle tree. The Merkle path is supplied as one of the authentifiers.
Queries
A definition can also include queries about the unit itself. This sub-definition:
["has", {
  what: "input" | "output",
  asset: "asset ID in base64, or 'base' for bytes",
  type: "transfer" | "issue",
  own_funds: true,
  amount_at_least: 123,
  amount_at_most: 123,
  amount: 123,
  address: "INPUT OR OUTPUT ADDRESS IN BASE32"
}]
evaluates to true if the unit has at least one input or output (depending on the 'what' field) that passes all the specified filters, with all filters being optional.
A similar condition, 'has one', requires that there be exactly one input or output that passes the filters.
The ‘has’ condition can be used to organize a decentralized exchange. Previously, we discussed the use of multilateral signing to exchange assets.
However, multilateral signing alone doesn't include any mechanism for price negotiation. Assume that a user wants to buy 1,200 units of another asset, for which he is willing to pay no more than 1,000 bytes. Also, he is not willing to stay online all the time while he waits for a seller. He would rather just post an order at an exchange and let it execute when a matching seller comes along. He can do this by moving his bytes to an address with a definition like the sketch below.
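(a hypothetical sketch; the addresses and the asset ID are placeholders)

["or", [
  ["address", "USER ADDRESS"],
  ["and", [
    ["address", "EXCHANGE ADDRESS"],
    ["has", {
      what: "output",
      asset: "ID of the asset being bought",
      amount_at_least: 1200,
      address: "USER ADDRESS"
    }]
  ]]
]]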
The first or-alternative lets the user take back his bytes whenever he likes, thus canceling the order. The second alternative delegates to the exchange the right to spend the funds, provided that another output on the same unit pays at least 1,200 units of the other asset to the user's address. The exchange would publicly list the order, a seller would find it, compose a unit that exchanges the assets, and multilaterally sign it together with the exchange.
One can also use the 'has' condition for collateralized lending. Assume a borrower holds some illiquid asset and needs bytes (or another liquid asset). The borrower and a lender can then multilaterally sign a unit. One part of the unit sends the bytes the borrower needs, and the other part locks the illiquid asset into an address defined by:
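(a hypothetical sketch matching the three alternatives explained below; addresses, feed name, and due date are placeholders)

["or", [
  ["and", [
    ["address", "LENDER ADDRESS"],
    ["in data feed", [["TIMESTAMPER ADDRESS"], "datetime", ">", "due date"]]
  ]],
  ["and", [
    ["address", "BORROWER ADDRESS"],
    ["has", {
      what: "output",
      asset: "base",
      amount_at_least: 10000,
      address: "LENDER ADDRESS"
    }]
  ]],
  ["and", [
    ["address", "LENDER ADDRESS"],
    ["address", "BORROWER ADDRESS"]
  ]]
]]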
The first or-alternative allows the lender to seize the collateral if the loan is not paid back in time. The second alternative allows the borrower to take back the collateral if he also makes a payment of 10,000 bytes (the agreed loan size, including interest) to the lender. The third alternative allows the parties to amend the terms if they both agree.
A related condition, 'has equal', evaluates to true if there is at least one pair of inputs or outputs that satisfy the search criteria (the first element of the pair is searched by the first set of filters; the second, by the second set) and some of their fields are equal.
A similar condition, 'has one equal', requires that there be exactly one such pair.
Another sub-definition may compare the sum of inputs or outputs, filtered according to certain criteria, to a target value or values:
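Such a condition might look like the following sketch (the operator name and the filter layout are assumptions on our part):

["sum", {
  filter: { what: "output", asset: "base", address: "SOME ADDRESS IN BASE32" },
  at_least: 10000
}]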
Any condition that does not include "sig", "hash", "address", "cosigned by", or "in merkle" can be negated:
["not", ["in data feed", [["NOAA ADDRESS"], "wind_speed", ">", "200"]]]
Since it is legal to select very old parents (that didn’t see the newer data feed posts), one usually combines negative conditions such as the above with the requirement that the timestamp is after a certain date.
Profiles
Users can store their profiles on The investors if they want. They do so with a message like this:
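A hypothetical shape of such a message (the field names are placeholders):

{
  app: "profile",
  payload: { name: "Joe Average", email: "joe@example.org" }
}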
The amount of data users disclose about themselves, as well as its veracity, is up to the users themselves. To be assured that any particular piece of information about a user is true, one has to look for attestations.
Voting
Anyone can set up a poll by sending a message with app='poll':
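A hypothetical shape of a poll message (the field names are placeholders):

{
  app: "poll",
  payload: {
    question: "Should the witness list be changed?",
    choices: ["yes", "no"]
  }
}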
Determining which votes qualify is up to the organizer of the poll. The investors doesn't enforce anything except the stipulation that the choices are within the allowed set. For example, the organizer might accept only votes from attested users, or votes from a predetermined whitelist of users. Unqualified votes would still be recorded, but should be excluded by the organizer when he counts the votes.
Weighting the votes and interpreting the results is also up to the organizer of the poll. If users vote by their balances, one should remember that they can move a balance to another address and vote again; the organizer should account for such repeat votes when counting.
Smart Contract
A real-world contract, simply stated, is an agreement governing outcomes for actions, given a set of inputs. A contract can range from formal legal contracts (e.g., a financial transaction) to something as simple as the “rules” of a game. Typical actions can be things such as fund transfers (in the case of a financial contract) or game moves (in the case of a game contract).
The investors Smart Contract is software registered on the blockchain and executed on The investors oracle nodes; it implements the semantics of a "contract" whose ledger of action requests is stored on the blockchain. The Smart Contract defines the interface (actions, parameters, data structures) and the code that implements the interface. The code is compiled into a canonical bytecode format that nodes can retrieve and execute. The blockchain stores the transactions (e.g., legal transfers, game moves) of the contract. Each Smart Contract must be accompanied by a Ricardian Contract that defines the legally binding terms and conditions of the contract.
The investors Smart Contracts consist of a set of action and type definitions. Action definitions specify and implement the behaviors of the contract. Type definitions specify the required content and structures. The investors actions operate primarily within a message-based communication architecture. A client invokes actions by sending (pushing) messages to nodes. This can be done using the command-line client, or using one of The investors send methods (e.g., The investors_send). Node Patcher dispatches action requests to the WASM code that implements a contract. That code runs in its entirety, and then processing continues to the next action.
The investors Smart Contracts can communicate with each other, e.g., to have another contract perform some operation pertinent to the completion of the current transaction, or to trigger a future transaction outside of the scope of the current transaction.
The communication takes the form of requesting other actions that need to be executed as part of the calling action. Actions operate with the same scopes and authorities as the original transaction and are guaranteed to execute within the current transaction. These can effectively be thought of as nested transactions within the calling transaction. If any part of the transaction fails, the inline actions unwind with the rest of the transaction. Calling an inline action generates no notification outside the scope of the transaction, regardless of success or failure.
Assets
We have designed a database that allows immutable storage of any data. Of all classes of data, the most interesting for storage in a common database are those that have social value, i.e. data that is valuable for more than one or two users. One such class is assets. Assets can be owned by anybody among a large number of people, and the properties of immutability and total ordering of events that we have in The investors are very important for establishing the validity of long chains of ownership transfers. Assets in The investors can be issued, transferred, and exchanged, and they behave similarly to the native currency 'bytes'. They can represent anything that has value, for example debt, shares, loyalty points, airtime minutes, commodities, and other fiat or cryptocurrencies.
To define a new asset, the defining user sends a message like this:
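A hypothetical example, consistent with the fields explained below (the cap value is a placeholder; denominations are omitted, as noted below):

{
  app: "asset",
  payload: {
    cap: 1000000,
    is_private: false,
    is_transferrable: true,
    auto_destroy: false,
    fixed_denominations: false,
    issued_by_definer_only: true,
    cosigned_by_definer: false,
    spender_attested: false
  }
}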
- cap is the maximum amount that can be issued. For comparison, the cap of the predefined native currency bytes is 10^19;
- is_private indicates if the asset is transferred privately or publicly (see below). Bytes are public;
- is_transferrable indicates if the asset can be transferred between third parties without passing through the definer of the asset. If not transferrable, the definer must always be either the only sender or the only receiver of every transfer. Bytes are transferable;
- auto_destroy indicates if the asset is destroyed when it is sent to the definer. Bytes are not auto-destroyed;
- fixed_denominations indicates if the asset can be sent in any integer amount (arbitrary amounts) or only in fixed denominations (e.g. 1, 2, 5, 10, 20, etc), which is the case for paper currency and coins. Bytes are in arbitrary amounts;
- issued_by_definer_only indicates if the asset can be issued by the definer only. For bytes, the entire money supply is issued in the genesis unit;
- cosigned_by_definer indicates if every transfer must be cosigned by the definer of the asset. This is useful for regulated assets. Transfers in bytes needn’t be cosigned by anybody;
- spender_attested indicates if the spender has to be attested in order to spend. If he happened to receive the asset but is not yet attested, he has to pass attestation with one of the attestors listed under the definition, in order to be able to spend. This requirement is also useful for regulated assets. Bytes do not require attestation;
- attestors is the list of attestor addresses recognized by the asset definer (only if spender_attested is true). The list can be later amended by the definer by sending an ‘asset_attestors’ message that replaces the list of attestors;
- denominations (not shown in this example and used only for fixed_denominations assets) lists all allowed denominations and the total number of coins of each denomination that can be issued;
- transfer_condition is a definition of a condition under which the asset is allowed to be transferred. The definition is in the same language as address definitions, except that it cannot reference anything that requires authentifiers, such as "sig". By default, there are no restrictions apart from those already defined by other fields;
- issue_condition is the same as transfer_condition but for issue transactions.
Before it can be transferred, an asset is created when a user sends an issue transaction. Issue transactions have a slightly different format for inputs:
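A hypothetical issue input, consistent with the serial-number rule described below (the amount is a placeholder):

{
  type: "issue",
  serial_number: 1,
  amount: 1000000
}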
The entire supply of capped arbitrary-amounts assets must be issued in a single transaction. In particular, all bytes are issued in the genesis unit. If the asset is capped, the serial number of the issue must be 1. If it is not capped, the serial numbers of different issues by the same address must be unique.
An asset is defined only once and cannot be amended later; only the list of attestors can be amended.
It is up to the definer what the asset represents. If it is the issuer's debt, it is reasonable to expect that the issuer is attested or waives his anonymity in order to earn the trust of creditors.
While end-users are free to use or not to use an asset, asset definers can impose any requirements on transactions involving the asset.
By combining various asset properties, the definer can devise assets that satisfy a wide range of requirements, including those that regulated financial institutions have to follow. For example, by requiring that each transfer be cosigned by the definer, financial institutions can effectively veto all payments that contradict any regulatory or contractual rules. Before cosigning each payment, the financial institution (which is also the definer and the issuer) would check that the user is indeed its client, that the recipient of the funds is also a client, that both clients have passed all Know Your Client (KYC) procedures, that the funds are not frozen by court order, and would carry out any other checks required by the constantly changing laws, regulations, and internal rules, including those introduced after the asset was defined.
Bank-issued asset
Having the security of being fully compliant (and also being assured of the familiar deterministic finality of all funds transfers), banks can issue assets that are pegged to national currencies and backed by the bank's assets (which are properly audited and monitored by the central banks). The legal nature of any operations with such assets is exactly the same as with all other bank money and is familiar to everybody. The only novelty is that the balances and transfers are tracked in The investors database instead of the bank's internal database. Being tracked in The investors database has two consequences:
- all operations are public, which is familiar from Bitcoin and mitigated by using multiple semi-anonymous addresses, with only the bank knowing the real persons behind them. Another, more robust way to preserve privacy is private payments, which we'll discuss later;
- the bank-issued asset can be exchanged for bytes or other assets on-chain, in a peer-to-peer manner, without having to trust any third parties such as exchanges.
The banks here are similar to Ripple gateways.
In the exchange scenario above, one leg of the exchange is a payment from one user to another user in a bank-issued asset. If both users are clients of the same bank, this process is straightforward. When the users hold accounts at different banks, the banks may facilitate interbank transfers by opening correspondent accounts at each other. Assume user U1 wants to transfer money to user U2, where U1 holds an account at bank B1 and U2 holds an account at bank B2. Bank B2 also opens an account at B1. U1 then transfers the money to B2's account at B1 (an internal bank transfer within B1, which is cosigned by B1). At the same time, B2 (which has just increased its assets at B1) issues new money to its user U2. All this must be atomic. All three participants (U1, B1, and B2) must therefore sign a single unit that both transfers B1's money from U1 to B2 and issues B2's money to U2.
The net result is that U1 has decreased his balance at B1, U2 has increased his balance at B2, and B2 has increased its balance at B1. Bank B1 will also have a correspondent account at B2, whose balance will grow as reverse payments are processed from users of B2 to users of B1. The mutual obligations (B1 at B2 and B2 at B1) can be partially canceled by the banks mutually signing a transaction that sends equal amounts to the respective issuer (it is convenient to have the money auto-destroyed by sending it to the issuer). What is not canceled can be periodically settled through traditional interbank payments. To trigger the settlement, the bank with a positive net balance sends its balance to the issuer bank, and since there is no reverse transfer in the same transaction, this triggers a traditional payment in fiat money from the issuer to the holding bank.
When there are many banks, setting up direct correspondent relations with each peer bank can be cumbersome. In such instances, the banks agree on a central counterparty C (a large member bank or a new institution), pass all payments exclusively through this central counterparty, and settle only with it. The same transfer from U1 to U2 will then consist of 3 transactions:
Non-financial assets
Other applications that are not necessarily financial can use The investors assets internally. For example, loyalty programs may issue loyalty points as assets and use The investors existing infrastructure to let people transact in these points, including peer-to-peer (if allowed by the program's rules). The same is true for game developers, who can track game assets on The investors.
Bonds
Businesses can issue bonds to investors. The legal structure of the issue is the same as for conventional bonds, the only difference being that the depository will now track bond ownership using The investors rather than an internal database (similar to banks above). Having bonds in The investors enables their holders to trade directly, without a centralized exchange. When bank money is also on The investors, an instant delivery versus payment (a fiat payment in this context) becomes possible, without counterparty risk and without any central institution. The title to the bond and payment are exchanged simultaneously as the parties sign the same unit that performs both transfers.
Bonds, if liquid enough, can also be used by third parties as a means of payment.
When a bond is issued, the issuer and the investor would multilaterally sign a common unit that sends the newly issued bonds to the investor and at the same time sends bytes (or another asset used to purchase the bonds, such as a bank-issued fiat-pegged asset) from the investor to the borrower. When the bond is redeemed, they sign another multilateral unit that reverses the exchange (most likely, at a different exchange rate). The price of the bond paid at redemption is its face value, while the price it is sold for when issued must be lower than the face value to reflect interest (assuming a zero-coupon bond for simplicity). During its lifetime, the secondary market price of the bond stays below face value and gradually approaches it.
In a growing economy where there are many projects to finance, bonds and other debt issued on The investors to finance investment will be issued more often than they are redeemed. When the economy slows down, the total supply of bonds shrinks, as there are fewer projects to finance. Thus, the total supply of bonds self-regulates, which is important if they are actively used as a means of payment.
If two businesses transact on net-30 terms, both buyer and seller have the option to securitize the trade credit during the 30-day period. For example, the buyer can issue 30-day bonds and use them to pay the seller immediately. The seller can then either wait for the 30 days to pass and redeem the bonds, or use the bonds as a means of payment to its own suppliers. In the latter case, it will be the suppliers who redeem the bonds when they mature.
Funds
For individual users, it might be difficult to track the huge number of bonds available on the market. Instead, they would rather invest in funds that are professionally managed and hold a large, diversified portfolio of bonds. A fund would issue its own asset that tracks the aggregate value of the fund's portfolio. Every time an investor buys a newly issued asset of the fund, the fund uses the proceeds to buy bonds. When a user exits, the fund sells some of the bonds it holds and destroys the fund-issued assets returned by the user. The fund's asset is not capped; its total supply varies as investors enter and exit. Its value is easily auditable, as all the bonds held by the fund are visible on The investors. Being more liquid than the underlying bonds, the fund's asset has a higher chance of becoming a means of payment.
Settlements
A group of banks can use assets for interbank settlements. Some of the larger banks issue fiat-pegged assets that can only be used by attested users, and only group members can be attested. The asset is backed by the issuing bank's reserves. When a smaller bank wants to settle with another smaller bank, it just sends the asset. The receiving bank can use the asset in the same way to settle with other banks, or redeem it for fiat currency with the issuing bank. The banks can also exchange USD-pegged assets for EUR-pegged assets, and so on. All such transfers and trades settle immediately; they are final and irrevocable. In SWIFT, banks exchange only information about payments, while the actual transfer of money is a separate step. In The investors, information is money.
Node
Imagine a fishing net: the nodes would be the knots holding the lines of rope together. Every device in The investors network is technically a node, whether a light client/platform, a full platform, a relay, or a hub. Informally, "node" is used to mean a full platform. See the wiki article Node for the different roles.
Roles of different types of nodes
Abbreviations are used so that the roles table (see the wiki article) is viewable on most mobile screens:
- e2ee = end-to-end encrypted
- W = wallet
HUB
This is a node in The investors network that serves as a relay and also facilitates the exchange of messages among devices connected to The investors network. Since all messages are encrypted with the recipient's key, the hub cannot read them. The hub does not hold any private keys and cannot send payments itself.
The messages are used for the following purposes:
Private-payment information
Conveying private-payment (such as black bytes) information from payer to payee.
Multisig address
Exchanging partially-signed transactions when sending from a multisig address. One of the devices initiates a transaction and signs it with its private key, then sends the partially-signed transaction to the other devices that participate in the multisig address. The user(s) confirm the transaction on the other devices, which sign it and return the signatures to the initiator.
Multilateral signing
Multilateral signing, when several addresses sign the same unit, e.g. when exchanging one asset for another, or when signing a contract. The exchange of messages is similar to the multisig scenario above.
Chat between users
Plain text chat between users; in particular, users can send each other newly generated addresses for receiving payments.
Chat with bots
Plain text chat with bots that offer a service and can receive or send payments. A faucet is an example of such a bot. The hub helps deliver such messages when the recipient is temporarily offline or behind NAT. If the recipient is connected, the message is delivered immediately; otherwise, it is stored and delivered as soon as the recipient connects to the hub. Once delivered, the message is deleted from the hub.
Live API
Requirements for the LIVE API. The LIVE API should have a valid HTTPS certificate. The investors SDK provides a request library that supports HTTPS requests with a certificate pinning feature. HTTP Public Key Pinning (HPKP) is an Internet security mechanism delivered via an HTTP header that allows an HTTPS API to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates. In order to do so, it delivers a set of public keys to the client (browser), which should be the only ones trusted for connections to this domain. The server communicates the HPKP policy to the user agent via an HTTP response header field named Public-Key-Pins (or Public-Key-Pins-Report-Only for reporting-only purposes).
The HPKP policy specifies hashes of the subject public key info of one of the certificates in the API's authentic X.509 public-key certificate chain (and at least one backup key) in pin-sha256 directives, a period of time during which the user agent shall enforce public key pinning in the max-age directive, an optional includeSubDomains directive to include all subdomains (of the domain that sent the header) in the pinning policy, and an optional report-uri directive with the URL to which pinning violation reports are sent. At least one of the public keys of the certificates in the certificate chain must match a pinned public key for the chain to be considered valid by the user agent.
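An illustrative Public-Key-Pins header (the hashes and the report URL are placeholders):

Public-Key-Pins: pin-sha256="base64PrimaryKeyHash=";
  pin-sha256="base64BackupKeyHash=";
  max-age=5184000; includeSubDomains;
  report-uri="https://example.net/pkp-report"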
At the time of publishing, RFC 7469 only allowed the SHA-256 hash algorithm. Hashes for HPKP policy can be generated by shell commands mentioned in Appendix A. of RFC 7469 or third-party tools.
A LIVE API operator can choose either to pin the root certificate public key of a particular root certificate authority, allowing only that certificate authority (and all intermediate authorities signed by its key) to issue valid certificates for the API's domain, and/or to pin the key(s) of one or more intermediate issuing certificates, or to pin the end-entity public key.
At least one backup key must be pinned, in case the current pinned key needs to be replaced. The HPKP is not valid without this backup key (a backup key is defined as a public key not present in the current certificate chain).
Job Scope
Milestone I
Implementation of the foundational layer with a custom DAG, focused on encryption and total security for the wallet, with the option to store the wallet offline.
- Implementation of a foundational layer (custom DAG) for new coin
- Ensure security with SHA-256 hashing
- Creation of decentralized smart contract framework with conditional payments for:
- Lottery
- Insurance
- Prediction betting
- Auctions
- E-commerce
- Creation of an automated communication network for devices (3rd DAG layer)
- Layers will work simultaneously
- Token system will be used for payments
- Communication layer for confirming transactions
Technical Requirements
- Transaction speed > 500 tx/s
- Settlement: 1 second
- Identity (KYC/AML)
- Turing complete smart contract
- Data Oracle
- Random Number Generator
- Multi-signature transactions
- Enforceable ownership
Server configuration, connection configuration, and testing of internet limitations by installing communication and sending messages, creating the architecture for the network. We will implement the first layer using ZeroMQ, as it has shown high performance (see the sketch after this list).
- ZeroMQ protobuf, Golang, C++
- Buy 4 predefined instances from Amazon and Digital Ocean
- Install communication, send messages to create the architecture for the network
- Two instances will work jointly to test ping and message load under stress
- Implement the first layer using the high-performance ZeroMQ library
- Reimplementation and planning of the next sprints
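A minimal sketch of the ping test, assuming a Node.js environment with the zeromq npm package (v6 API); the endpoint is a placeholder, and the peer instance is assumed to run a matching Reply socket:

import { Request } from "zeromq";

// Measure the round-trip time to one test instance.
async function pingInstance(endpoint: string): Promise<number> {
  const sock = new Request();
  sock.connect(endpoint);      // e.g. "tcp://instance1.example.net:5555"
  const start = Date.now();
  await sock.send("ping");     // the peer replies from a Reply socket
  await sock.receive();        // wait for the pong
  sock.close();
  return Date.now() - start;   // round-trip time in milliseconds
}

pingInstance("tcp://instance1.example.net:5555")
  .then((rtt) => console.log(`round-trip ${rtt} ms`));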
Testing nodes from different countries to evaluate speed, load, and ping. These tests will allow us to implement an intelligent system that prioritizes the fastest working node for each transaction, and will also provide the objective data that identifies where the development can be improved.
Implementation of the DAG will be performed after the testing nodes have given us positive feedback. Each sprint will be tested, where possible simulated on a local network at an operative test level.
Stability point for finality, where we position specific units in the DAG. Between the genesis unit and the stability point, all nodes have a consistent view of the ledger. If a node has not yet received all the fresh units, then that node still has an "older" version of the stability point, one that is closer to the genesis unit.
Development of the smart contract and wallet security system. These tasks are developed in parallel, concentrating on hash security using SHA-256.
Smart Contract
- WebAssembly is cross-platform; compiled code can run at near-native speed
- Calculation of the number of operations
- Calculation of message sending
- Ping stress test
- Distributed ledger based on DAG: a scalable system
Wallet security system
- Hash Security: SHA256
- Develop the cold wallet
- Qt framework
- Windows development
- Mac development
- Encrypted SQLite database
- KeePassX
Milestone II
Implementation of a decentralized smart contract framework and creation of Dapps for the different services (conditional payment, lottery, insurance, prediction betting, auctions, e-commerce, shopping chatbot, and social media). The investors token will be used as a payment method for services and products.
The estimation and planning of the second part of the project will be defined by the results of the first milestone; therefore, detailed planning will be performed after the finalization of Milestone I.
Conditional payment
Payment from one token account to another within the same network. Payment delivery is only completed after fulfillment of conditions predefined by the sender.
The payment will expire, returning the tokens to the payer, if the conditions aren't met.
Lottery
A transparent system with distributed ownership and decentralized agents, without overhead, with low margin fees, and with the lottery paying out the maximum amount of prizes.
Insurance
Instantaneous and automatic resolution of claims. Triggers can be configured to make automatic payouts when conditions are met; smart contract payments are executed automatically.
Auction House
Sellers can create auctions and define the ownership of the asset/service. Participants submit bids, and bid amounts remain available for withdrawal. At the end of the auction, if the reserve price is reached, the winner receives ownership of the asset/service. If not, the asset is returned to the seller.
E-commerce
Payments, a decentralized marketplace, supply chains, a secure data and management system, invoice generation, accounting, and auditing. A decentralized platform that facilitates B2B, B2C, and C2C transactions with high security standards.
Shopping Chatbot
A smart bot where the consumer can see the history of successfully confirmed transactions, increasing the consumer's willingness to cooperate with an automated system. The system will show a rating score to establish the trust rating of the various chatbots.
Social Media
Social interaction on a decentralized social platform will reduce incidents of misconduct. A social platform where users own their own data creates a total-control system using blockchain, granting or revoking access through a PKI architecture.
Conclusions
Custom development will allow total control over the outcome. Many crypto solutions lack efficiency because they were designed as general-purpose flexible systems, focused on theoretical options rather than real use cases that fit the real world. This has turned them into large systems that suffer heavy server issues, a lack of dynamic solutions, and, above all, security breaches.
A focused custom system, built from a real business perspective, ensures adaptability to real issues and gives specific, real solutions. Market value and needs determine the development, making it flexible in real scenarios and making it best for the investors and businessmen who are ready to pay for and create 3rd-party model applications.
Thinking about and looking at the needs of the market makes for a strong custom solution with usability, scalability, and security.