How bitcoin works - Bitcoin Wiki

StableCoin

This subreddit is dedicated to informing and discussing the revolutionary cryptocurrency StableCoin.
[link]

Cool explanation of the SHA-256 Algorithm and how it's used to mine Bitcoin

submitted by Ditochi to videos [link] [comments]

How Bitcoin mining REALLY works - an in-depth technical explanation of the proof-of-work algorithm that makes Bitcoin the most secure currency in the world

submitted by SimilarAdvantage to BitcoinAll [link] [comments]

Looking for a good explanation of the public key algorithm. /r/Bitcoin

submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Interactive explanation of Public-Key Encryption by RSA Algorithm (Bitcoin uses ECDSA instead of RSA)

submitted by f00000000 to CryptocurrencySA [link] [comments]

Why Bitcoin is a long shot: an algorithmic explanation

Boolean willBitcoinWork() {
    if (!is_outlawed)
     if (is_convenient)
      if (is_safe)
       if (is_accepted_by_merchants)
        if (!is_over_regulated)
         if (!is_51_percent_takeover)
          if (!is_diluted_by_altcoins)
           if (!is_replaced_by_FED)
            if (is_scalable)
             if (!is_hacked)
              if (!is_created_by_NSA)
               if (!is_other_fatal_flaw)
                return BITCOIN_SUCCESS;
    return BITCOIN_FAIL;
}
That's a lot of nested IFs. In probabilistic terms, the outcome of these dependencies is not the sum but the product of the individual events. As an example, if we assign each of these conditions a probability of 0.5, the chance that Bitcoin will ultimately work is only 1 in 2^12, or 1 in 4096.
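The arithmetic can be checked directly; a quick sketch (Python, just multiplying the assumed probabilities):

```python
# Toy check of the post's arithmetic: 12 independent conditions,
# each assumed to hold with probability 0.5.
p_each = 0.5
n_conditions = 12
p_success = p_each ** n_conditions

print(p_success)      # 0.000244140625
print(1 / p_success)  # 4096.0
```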
submitted by wonderkindel to Bitcoin [link] [comments]

Kleiman's Response to Wright's Sanctions Appeal

submitted by Zectro to bsv [link] [comments]

Review and Prospect of Crypto Economy-Development and Evolution of Consensus Mechanism (1)

Table 1 Classification of consensus mechanisms
Source: Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm"

Figure 4 Evolution of consensus algorithms
Source: Network data

Foreword
The consensus mechanism is one of the key elements of a blockchain and the core rule for the normal operation of a distributed ledger. It is mainly used to solve the trust problem between people and to determine who is responsible for generating new blocks and maintaining the effective unity of the blockchain system. It has thus become an enduring research topic in blockchain.
This article starts with the concept and role of the consensus mechanism, giving the reader a preliminary overall understanding. Then, starting from the two armies problem and the Byzantine generals problem, it introduces the evolution of consensus mechanisms in the order in which they were proposed. Next, it briefly introduces the current mainstream consensus mechanisms in terms of concept, working principle, and representative projects, and compares their advantages and disadvantages. Finally, it gives suggestions on how to choose a consensus mechanism for a blockchain project and points out possible directions for the future development of the consensus mechanism.
Contents
First, concept and function of the consensus mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
1.3 Mainstream model of consensus algorithm
Second, the origin of the consensus mechanism
2.1 The two armies and the Byzantine generals
2.1.1 The two armies problem
2.1.2 The Byzantine generals problem
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
2.2.2 Development frontier of consensus mechanism
Third, Common Consensus System
Fourth, Selection of consensus mechanism and summary of current situation
4.1 How to choose a consensus mechanism that suits you
4.1.1 Determine whether the final result is important
4.1.2 Determine how fast the application process needs to be
4.1.3 Determine the degree of decentralization the application requires
4.1.4 Determine whether the system can be terminated
4.1.5 Select a suitable consensus algorithm after weighing the advantages and disadvantages
4.2 Future development of consensus mechanism
Chapter 1 Concept and Function of Consensus Mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
Since most cryptocurrencies use a decentralized blockchain design, with nodes scattered and running in parallel everywhere, a system must be designed to maintain the order and fairness of operation, unify the version of the blockchain, reward the users who maintain the blockchain, and punish malicious actors. Such a system must rely on some way to prove who has obtained the packaging rights (or accounting rights) for a block and may claim the reward for packaging it, or who has attempted harm and will receive a penalty. Such a system is the consensus mechanism.
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
The reason the consensus mechanism sits at the core of blockchain technology is that it formulates a set of rules grounded in cryptographic techniques such as asymmetric encryption and timestamping. All participants must comply with these rules, which are transparent and cannot be modified arbitrarily. Therefore, without the endorsement of a third-party authority, it can still mobilize nodes across the network to jointly monitor and record all transactions and publish them in the form of code, effectively achieving valuable information transfer and solving, or more precisely greatly reducing, the trust problem between two unrelated strangers who do not trust each other. After all, trusting objective technology is less risky than trusting a subjective individual.
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
On the other hand, in a blockchain system, due to the high network latency of the peer-to-peer network, the sequence of transactions observed by each node differs. To solve this, the consensus mechanism is used to reach agreement on transaction order within a short period of time, to decide who is responsible for generating new blocks, and to maintain the effective unity of the blockchain.
1.3 The mainstream model of consensus algorithm
The blockchain system is built on a P2P network; the set of all nodes is denoted as P. Nodes are generally divided into ordinary nodes that produce data or transactions, and "miner" nodes (denoted as M) responsible for mining operations such as verifying, packaging, and updating the data or transactions generated by ordinary nodes. The functions of the two types of nodes may overlap. Miner nodes usually participate in the consensus competition directly, but in certain algorithms they select representative nodes to participate in the consensus process and compete for accounting rights on their behalf; the set of these representative nodes is denoted as D. The accounting nodes selected through the consensus process are denoted as A. The consensus process repeats in rounds, and each round generally reselects the round's accounting node. The core of the consensus process consists of two parts: "leader election" and "accounting". In operation, each round can be divided into four stages: leader election, block generation, data validation, and chain updating (namely accounting). As shown in Figure 1, the input of the consensus process is the transactions or data generated and verified by the data nodes, and the output is the encapsulated data block and the updated blockchain. The four stages execute repeatedly, and each round generates a new block.
Stage 1: Leader election
Leader election is the core of the consensus process, that is, the process of selecting the accounting node set A from the set of all miner nodes M. We can use the formula f(M) → A to represent the election, where the function f represents the specific implementation of the consensus algorithm. Generally speaking, |A| = 1, that is, a single miner node is finally selected to keep accounts.
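The selection function f differs per algorithm (hash puzzles for PoW, stake-weighted lotteries for PoS, rotation for PBFT-style protocols). A minimal deterministic sketch of f(M) → A with |A| = 1 (illustrative Python, not any production consensus algorithm):

```python
import hashlib

def elect_leader(miners: list[str], round_no: int) -> str:
    """Illustrative f(M) -> A with |A| = 1: derive a deterministic
    index from the round number so every node elects the same leader."""
    seed = hashlib.sha256(f"round-{round_no}".encode()).digest()
    return miners[int.from_bytes(seed, "big") % len(miners)]

miners = ["node-a", "node-b", "node-c", "node-d"]
print([elect_leader(miners, r) for r in range(3)])
```

Because the seed depends only on the round number, all honest nodes compute the same leader without exchanging messages, which is the property a real election scheme must provide as well.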
Stage 2: Block generation
The accounting node selected in the first stage packages the transactions or data generated by all nodes P in the current time period into a block according to a specific strategy, and broadcasts the new block to all miner nodes M or their representative nodes D. Transactions or data are usually sorted by factors such as block capacity, transaction fees, and transaction waiting time, and then packaged into the new block in sequence. The block generation strategy is a key factor in blockchain system performance, and it also invites strategic behavior by miners, such as greedy transaction packaging and selfish mining.
Stage 3: Verification
After receiving the broadcast new block, the miner nodes M or the representative nodes D verify the correctness and rationality of the transactions or data encapsulated in the block. If the new block is approved by most verification/representative nodes, it is appended to the blockchain as the next block.
Stage 4: On-Chain
The accounting node adds the new block to the main chain, forming a complete, longer chain from the genesis block to the latest block. If the main chain has multiple forks, the judging criteria of the consensus algorithm are used to choose the appropriate fork as the main chain.
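For Bitcoin the judging criterion is, roughly, the fork with the most accumulated proof-of-work, commonly simplified to the "longest chain" rule; a toy sketch (illustrative Python):

```python
def choose_main_chain(forks: list[list[str]]) -> list[str]:
    """Toy fork choice: pick the longest fork (ties go to the first seen).
    Real Bitcoin compares accumulated proof-of-work, not raw chain length."""
    return max(forks, key=len)

forks = [
    ["genesis", "b1", "b2"],
    ["genesis", "b1", "b2'", "b3'"],  # longer fork wins
]
print(choose_main_chain(forks))
```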
Chapter 2 The Origin of Consensus Mechanism
2.1 The two armies problem and the Byzantine generals problem
2.1.1 The two armies problem


Figure 2 Schematic diagram of the two armies problem
Selected from Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm", Journal of Automation, 2018, 44(11): 2011-2022
As shown in the figure, the 1st and 2nd units of the Blue Army are stationed on two sides of the slope and cannot communicate remotely, while the White Army is stationed between them. Suppose the White Army is stronger than either Blue Army unit alone but weaker than the two combined. If the two Blue Army units want to attack the White Army jointly and simultaneously, they need to communicate with each other, but the White Army sits between them: neither unit can confirm that its messenger has delivered the attack signal to the other, let alone rule out tampering with the messages. Because they cannot fully confirm with each other, ultimately no effective consensus can be reached between the two Blue Army units; this is the "paradox of the two armies".
2.1.2 The Byzantine generals problem


Figure 3 Diagram of the Byzantine generals' problem
Due to the vast territory of the Byzantine Empire at that time, troops were scattered around the empire for better defense; each army was far from the others, and only messengers could deliver messages. In wartime, all generals had to reach an agreement, deciding whether to attack the enemy based on the majority principle. However, since this depended entirely on people, if a general rebelled or a messenger delivered the wrong message, how could the loyal generals reach agreement without being misled by the rebels? This problem is called the Byzantine generals problem.
The two armies problem and the Byzantine generals problem both describe the same difficulty: when information exchange is unreliable, it is very hard to reach consensus and coordinate action. The Byzantine generals problem can be seen as a generalization of the two armies paradox.
From the perspective of computer networking, the two armies problem and the Byzantine generals problem are standard course material: direct communication between two nodes on the network may fail, so the TCP protocol cannot completely guarantee consistency between the two endpoints. However, a consensus mechanism can use economic incentives and other methods to reduce this uncertainty to a level acceptable to most people.
It is precisely because of the two armies problem and the Byzantine generals problem that the consensus mechanism began to show its value.
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
Because different types of blockchain projects have different requirements for information recording and block generation, and because consensus mechanisms keep improving as blockchain technology develops, there are currently more than 30 consensus mechanisms. They can be divided into two categories by their Byzantine fault tolerance: Byzantine fault tolerant systems and non-Byzantine fault tolerant systems.

Table 1 Classification of consensus mechanisms
Source: Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm"
2.2.2 Development frontier of consensus mechanism
-Development of consensus algorithm
Ordering consensus algorithms by the time they were proposed shows the development of the field relatively clearly.
Source: Network data

Figure 4 Development frontier of consensus algorithm

Figure 5 Historical evolution of blockchain consensus algorithm
Source: Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm"
Consensus algorithms laid the foundation for the blockchain consensus mechanism. Initially, research on consensus algorithms was mainly conducted by computer scientists and professors to mitigate spam or for academic discussion.
For example, in 1993 the American computer scientist and Harvard professor Cynthia Dwork (with Moni Naor) first proposed the idea of proof of work to combat spam; in 1997 the British cryptographer Adam Back independently proposed fighting spam with the proof-of-work scheme Hashcash, formally published in 2002; and in 1999 Markus Jakobsson formally coined the term "proof of work", laying the foundation for Satoshi Nakamoto's later design of the Bitcoin consensus mechanism.
Next lecture: Chapter 3 Detailed Explanation of Consensus Mechanism Technology
CelesOS
As the first DPoW financial blockchain operating system, CelesOS adopts consensus mechanism 3.0 to break through the "impossible triangle", providing both high TPS and decentralization. It is committed to creating a financial blockchain operating system that embraces regulation, serving financial institutions and application development on a regulated chain, and developing a role- and consensus-based ecosystem agreement at the regulatory level.
The CelesOS team is committed to building a bridge between blockchain and regulatory agencies and the finance industry. We believe that only blockchain technology that cooperates with regulators has a bright future, and we strive to achieve this goal.
📷Website
https://www.celesos.com/
📷 Telegram
https://t.me/celeschain
📷 Twitter
https://twitter.com/CelesChain
📷 Reddit
https://www.reddit.com/useCelesOS
📷 Medium
https://medium.com/@celesos
📷 Facebook
https://www.facebook.com/CelesOS1
📷 Youtube
https://www.youtube.com/channel/UC1Xsd8wU957D-R8RQVZPfGA
submitted by CelesOS to u/CelesOS [link] [comments]

Questions Regarding BTC Mining

I have been wondering about some of the details related to bitcoin mining but couldn't find an answer. I'd bet the answer could be found were I capable of looking up the mining algorithms, but I'm not that savvy (not yet at least), so here it goes.
I understand that during mining, miners take the hash calculated from a given block, append a nonce to it, and calculate SHA-256 over the whole expression; if the hash value is larger than the limit set by the mining difficulty, the miner must attempt the SHA-256 calculation again with a different nonce, repeating until a hash smaller than the limit is found.
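The search loop described above can be sketched as follows (illustrative Python; real Bitcoin double-hashes an 80-byte block header with the nonce in a fixed 4-byte field, rather than appending the nonce to a previous hash):

```python
import hashlib

def mine(block_data: bytes, target: int, max_nonce: int = 2**32):
    """Search nonces until the double-SHA256 digest is below `target`.
    Returns (nonce, hex digest) on success, or None if the space is exhausted."""
    for nonce in range(max_nonce):
        payload = block_data + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# A very easy toy target (top 4 bits zero) so the search finishes quickly;
# real difficulty targets require vastly more leading zero bits.
easy_target = 2**252
print(mine(b"example block", easy_target))
```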
What I wanted to ask is the following:
1) Is my understanding above correct? If not, then please disregard the questions below since they would most likely be garbage (correcting the fault lines in my understanding would be more than enough).
2) How are the appended nonces chosen? Are they chosen randomly at every attempt, or changed sequentially, by adding 1 for example?
3) Does the bitcoin blockchain enforce the use of a specific algorithm for generating nonces, or is it left to miners to concoct their own algorithms as they see fit? (If enforced by the bitcoin blockchain, I'd appreciate an explanation why.)
4) If the choice is left to miners to generate nonces as they see fit, what is the best available approach to generating them?
5) In mining pools where many ASICs are hashing together, is there any coordination at the pool level, or at least at the individual ASIC miner level, to ensure no two ASIC chips are calculating the hash for the same nonce while trying to find the block? If not, what difficulties prevent such an implementation?
Thanks in advance, and if there are any useful resources addressing these questions please share them, especially ones describing the nonce-generating part of the mining algorithm.
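On question 5: pools commonly avoid duplicate work by giving each worker a distinct extranonce, which makes their search spaces disjoint by construction. An alternative, explicit nonce-range split can be sketched like this (toy Python, not the Stratum protocol):

```python
def nonce_ranges(total: int, workers: int) -> list[tuple[int, int]]:
    """Split the half-open range [0, total) into near-equal disjoint
    sub-ranges, one per worker, so no two workers try the same nonce."""
    step = total // workers
    ranges = [(i * step, (i + 1) * step) for i in range(workers)]
    ranges[-1] = (ranges[-1][0], total)  # last worker absorbs any remainder
    return ranges

# Four workers splitting the 32-bit nonce space.
print(nonce_ranges(2**32, 4))
```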
submitted by BitcoinAsks to BitcoinMining [link] [comments]

This is just a theory. What do you guys think?

Just a theory: what if Satoshi wrote the creator's name on the 256th card of a puzzle game from 14 years ago? The card has "find me" written in Japanese on its side. Just from this picture, is it possible to find this gentleman on the internet, given that the location in the picture has been identified as Kaysersberg, Alsace, France? It would be a great coincidence if the owner of the 256th card, in a ranking of 256 cards, were really Satoshi; 256 is an important figure for Bitcoin. People here might ask why, so to explain: this puzzle is complex, and his card is the 256th card with a value of 256. What if the answer relates to SHA-256, where SHA stands for Secure Hash Algorithm, which Bitcoin uses for mining and address generation? This hash is a high-security cryptographic function with a fixed-length output, which might contribute to the harmony between these blocks.
1.) For example, word would be "squanch" with SHA256 encryption -> “5bfdd901369fbb2ae5052ab5307c74f97651e09bd83e80cf3153952bb81cc7b8”.
2.) satoshi -> DA2876B3EB31EDB4436FA4650673FC6F01F90DE2F1793C4EC332B2387B09726F
3.) Satoshi -> 002688CC350A5333A87FA622EACEC626C3D1C0EBF9F3793DE3885FA254D7E393
** you can play around with it => https://passwordsgenerator.net/sha256-hash-generato **
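The same experiment works locally with any SHA-256 implementation; for example, with Python's standard library:

```python
import hashlib

# Hash each word's UTF-8 bytes, as the online generator does.
for word in ["squanch", "satoshi", "Satoshi"]:
    print(word, "->", hashlib.sha256(word.encode("utf-8")).hexdigest())

# Note: changing a single character (even just case, "satoshi" vs
# "Satoshi") yields a completely unrelated digest.
```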
A SHA-256 digest is 256 bits long, i.e. 32 bytes or 64 hex digits, so we should not be too far from solving this puzzle somehow if value is the method. Also, the puzzle in this game begins in what is called "The city of Perplex". The game has an original concept and promises a $200,000 reward when all the puzzles on the cards are solved. As you can imagine, the 256th card, which is "Satoshi", has not been solved; the game has stalled at card number 238. The hint our card gives everyone is "My name is Satoshi ...". Needless to say, the game had been on the market for 1-2 years before Bitcoin and crypto got started. Although I'm also thinking the man might not be Satoshi, since he's just a player, so looking for a similar appearance and style is the only hope.
submitted by LeftSubstance to FindSatoshi [link] [comments]

CYPHERIUM ENHANCES BLOCKCHAIN TECHNOLOGY

OVERVIEW
Rarely has a technology attracted the public and the media the way blockchain has. Institutions designed to catalyze the fourth industrial revolution are experimenting with the technology, and investors have poured hundreds of millions of dollars into blockchain companies. This is a low-risk, experimental environment with room for error. Innovation is a combination of creativity and implementation. Ideas often must go through an evolutionary or cyclical phase before they are ready for commercialization. In fact, the cycle is so long, and so expensive and inefficient in time and money, that most ideas almost never reach commercial value; this is one reason almost 99% of such ventures fail.
A fast-growing technology that has come to enhance blockchain is CYPHERIUM.

CHALLENGES FACING THE BLOCKCHAIN TECHNOLOGY
The Bitcoin system is one of the most notable applications of blockchain technology in distributed transaction-based systems. In Bitcoin, each network node competes for the privilege of storing a set of one or more transactions in a new block of the blockchain by solving a computationally hard puzzle, sometimes referred to as mining proof-of-work (PoW). Under current conditions, a set of transactions is typically stored in a new block of the Bitcoin blockchain roughly every ten minutes, and each block has an approximate size of one megabyte (MB). Accordingly, the Bitcoin system faces a looming scalability problem: only 3 to 7 transactions can be processed per second, far below the throughput of other transaction-based systems, such as the roughly 30,000 transactions per second of the Visa network. The most significant drawback of Nakamoto consensus is its lack of finality. Finality means that once a transaction or action is performed on the blockchain, it is permanently recorded and hard to reverse. This is essential to the safety of financial settlement systems, as transactions must not be reversed once made. In Bitcoin's case, malicious actors can rewrite transaction history given enough hash power, mounting a double-spending attack, provided there is sufficient incentive and financial feasibility. Given that mining-rig rental and botnets are currently prevalent worldwide, such an attack has become feasible.
Because of this lack of finality, Nakamoto consensus must rely on additional measures, such as proof-of-work, to deter malicious activity. This hinders Nakamoto consensus's ability to scale, because a transaction must wait for multiple confirmations before reaching "probabilistic finality".
Thus, safety is not guaranteed by Nakamoto consensus, and to secure the network, each transaction must take extra time to process. In Bitcoin's case, a transaction is not considered final until at least six confirmations. Since Bitcoin can only process a few transactions per second, the transaction cost is prohibitively high, making it impractical for small payments like grocery shopping or restaurant dining. This greatly hinders Bitcoin's use as a payment method in the real world.

CYPHERIUM SOLUTIONS
Cypherium's proprietary algorithm, CypherBFT, overcomes drawbacks of the prior art by providing a distributed transaction system that includes a group of validator nodes known to each other but indistinguishable from the other nodes in the network. As used here, the group of validator nodes may be referred to as a "Committee" of validator nodes. In some embodiments, the system reconfigures one or more validator nodes in the Committee based on the results of proof-of-work (PoW) challenges. According to some disclosed embodiments, a network node that is not currently a validator in the Committee may be added to it if it successfully completes a PoW challenge. In that event, the node may become a new validator in the Committee, replacing an existing validator. In alternative embodiments, a node may become a new validator in the Committee based on a proof-of-stake (PoS) consensus; in yet another embodiment, based on a proof-of-authority (PoA) consensus; and in other alternative embodiments, based on a combination of any of PoW, PoA, and PoS consensus.

In some disclosed embodiments, the new validator node replaces a validator node in the Committee. The replacement may be based on a predetermined rule known by all nodes in the network. For example, the new validator may replace the oldest validator in the Committee. In another example, the new validator may replace one that has been determined to have gone offline, become compromised (e.g., hacked), failed (e.g., due to a hardware malfunction), or is otherwise unavailable or no longer trusted. In the exemplary embodiments, the distributed system assumes that to tolerate f faulty nodes, the Committee includes at least 3f + 1 validator nodes.
Since the validator nodes in the Committee may be frequently replaced, for instance depending on the time required to complete the PoW challenges, it is hard for malicious outsiders to identify the complete set of validators in the Committee at any given time.
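The 3f + 1 requirement, together with the 2f + 1 voting quorum it implies (the quorum figure is the classical BFT result, not stated in the text above), can be written out directly (illustrative Python):

```python
def committee_size(f: int) -> int:
    """Minimum validators needed to tolerate f Byzantine nodes (n >= 3f + 1)."""
    return 3 * f + 1

def quorum_size(f: int) -> int:
    """Votes per decision (2f + 1), so any two quorums overlap in at
    least f + 1 nodes, of which at least one is honest."""
    return 2 * f + 1

for f in range(1, 4):
    print(f"f={f}: committee={committee_size(f)}, quorum={quorum_size(f)}")
```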

BENEFITS OF CYPHERIUM BLOCKCHAIN TECHNOLOGY
Cypherium runs its proprietary CypherBFT consensus, anchored by the HotStuff algorithm, and can genuinely offer instant finality to its network's users. With its HotStuff-based structure, a CypherBFT round lasts just 20-30 milliseconds (ms). A few confirmations are all that is required to permanently accept a proposed block into the blockchain, and it takes only 90 ms for these confirmations to arrive, making the process significantly faster than the two minutes required by EOS.
Cypherium's CypherBFT, which also uses HotStuff, does not have to choose between responsiveness and linearity. Cypherium's dual-blockchain structure approaches the speed of a DAG, but auditing is much simpler and faster for its users, which improves the availability of data and makes it more decentralized.
According to some disclosed embodiments, the validator nodes in the Committee may receive transaction requests from other network nodes, for instance in a P2P network. The Committee may include at least one validator node that serves as the "Leader"; the other validators may be referred to as "Associate" validator nodes. The Leader may be changed periodically, on demand, or occasionally by the members of the Committee. When any validator node receives a new transaction request from a non-validator node, the request may be forwarded to all validator nodes in the Committee. Further to the disclosed embodiments, the Leader coordinates with the Associate validators to reach consensus on a disposition (e.g., accept or reject) for a transaction block containing the request, and broadcasts the consensus to the entire P2P network. If the consensus is to accept or otherwise approve the request, the transaction may be included in a new block of a blockchain known to at least some of the network's nodes.
In conclusion, Cypherium's distributed smart-contract blockchain is well suited to a good number of use cases, which include (but are not limited to):
Finance
Messaging
Voting
Notarization
Digital Agreements (Contracts)
Secure data storage
A.I (Artificial Intelligence)
IoT (Internet of Things)
To know more about CYPHERIUM kindly visit the following links:
WEBSITE: https://cypherium.io/
GITHUB: https://github.com/cypherium
WHITEPAPER: https://github.com/cypherium/patent/blob/maste15224.0003%20-%20FINAL%20Draft%20Application%20(originally%200003%20invention%201)%20single%20chain%20in%20pipeline.pdf
TELEGRAM: https://t.me/cypherium_supergroup
TWITTER: http://twitter.com/cypheriumchain
FACEBOOK: https://www.facebook.com/CypheriumChain/
AUTHOR: Nwali Jennifer
submitted by iphygurl to BlockchainStartups [link] [comments]

Dive Into Tendermint Consensus Protocol (I)

Dive Into Tendermint Consensus Protocol (I)
This article is written by the CoinEx Chain lab. CoinEx Chain is the world’s first public chain exclusively designed for DEX, and will also include a Smart Chain supporting smart contracts and a Privacy Chain protecting users’ privacy.
longcpp @ 20200618
This is Part 1 of the serialized articles aimed to explain the Tendermint consensus protocol in detail.
Part 1. Preliminary of the consensus protocol: security model and PBFT protocol
Part 2. Tendermint consensus protocol illustrated: two-phase voting protocol and the locking and unlocking mechanism
Part 3. Weighted round-robin proposer selection algorithm used in Tendermint project
Any consensus ultimately reached is a general agreement, that is, the majority opinion. The consensus protocol on which a blockchain system operates is no exception. As a distributed system, the blockchain system aims to maintain the validity of the system. Intuitively, validity has two meanings: first, there is no ambiguity; second, the system can process requests to update its status. The former corresponds to the safety requirement of distributed systems, while the latter corresponds to liveness. Validity is maintained mainly by the consensus protocol, and designing one is challenging because the multiple nodes and the network communication involved in such systems may be unstable.

The semi-synchronous network model and Byzantine fault tolerance

Researchers of distributed systems characterize the problems that may occur in nodes and network communications using node failure models and network models. The fail-stop failure in node failure models refers to the situation where a node stops running due to configuration errors or other reasons, and thus cannot continue with the consensus protocol. This type of failure has no side effects on other parts of the distributed system beyond the node itself stopping. However, for distributed systems such as public blockchains, a consensus protocol must also consider deliberate misbehavior by nodes, not just their failure. These incidents are all covered by the Byzantine failure model, which includes every unexpected situation that may occur at a node, for example passive downtime failures and any intended deviation from the consensus protocol. For clarity, downtime failure refers to a node's passive halt, and Byzantine failure to any arbitrary deviation of a node from the consensus protocol.
Compared with the node failure model, which can be roughly divided into passive and active variants, modeling network communication is more difficult. The network itself suffers from instability and communication delays. Moreover, since all network communication is ultimately carried out by nodes, which may themselves suffer downtime or Byzantine failures, it is usually hard to tell, when a node does not receive another node's message, whether the failure lies in the node or in the network. Although network communication is affected by many factors, researchers found that network models can be classified by communication delay. For example, a node may fail to send data packets due to a fail-stop failure, in which case the corresponding communication delay is unknown and can take any value. Based on communication delay, network models can be divided into the following three categories:
  • The synchronous network model: There is a fixed, known upper bound of delay $\Delta$ in network communication. Under this model, the maximum delay of network communication between two nodes in the network is $\Delta$. Even if there is a malicious node, the communication delay arising therefrom does not exceed $\Delta$.
  • The asynchronous network model: There is an unknown delay in network communication, with no known upper bound on the delay, but every message is still eventually delivered. Under this model, the network communication delay between two nodes can take any possible value; that is, a malicious node, if any, can arbitrarily extend the communication delay.
  • The semi-synchronous network model: Assume there is a Global Stabilization Time (GST), before which the network behaves as an asynchronous model and after which as a synchronous model; in other words, there is a fixed, known upper bound of delay $\Delta$ in network communication. A malicious node can delay the GST arbitrarily, and there is no notification when GST occurs. Under this model, a message sent at time $T$ is delivered by time $\Delta + max(T, GST)$.
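As a small illustration, the delivery bound of the semi-synchronous model can be written as a one-line helper (a hypothetical sketch; the function name and time units are mine, not from any protocol):

```python
def delivery_deadline(send_time, gst, delta):
    """Latest delivery time for a message sent at `send_time` in the
    semi-synchronous model: delivery happens within `delta` after the
    later of the send time and the Global Stabilization Time (GST)."""
    return max(send_time, gst) + delta

# A message sent before GST may be stalled until GST + delta;
# a message sent after GST behaves synchronously: send_time + delta.
```

Before GST the adversary can stretch delays arbitrarily, which is exactly why a message sent at time 5 with GST = 10 is only guaranteed by 10 + Δ, not 5 + Δ.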
The synchronous network model is the most ideal network environment. Every message sent through the network is received within a predictable time, but this model cannot reflect real network conditions: in a real network, failures are inevitable from time to time, breaking the assumption of the synchronous model. The asynchronous network model goes to the other extreme and cannot reflect the real network situation either. Moreover, according to the FLP (Fischer-Lynch-Paterson) impossibility result, under this model no deterministic consensus protocol can guarantee to reach consensus in finite time if even a single node may fail. In contrast, the semi-synchronous network model better describes real-world network communication: the network is usually synchronous, or returns to normal after a short time. The experience is familiar to everyone: a web page that usually loads quickly occasionally opens slowly, and you have to retry before you know the network is back to normal, since there is usually no notification. The peer-to-peer (P2P) communication widely used in blockchain projects also allows a node to send and receive information over multiple network channels, making it unrealistic to block a node's network traffic for a long time. Therefore, all the discussion below assumes the semi-synchronous network model.
The design and selection of consensus protocols for public chain networks that allow nodes to dynamically join and leave need to consider possible Byzantine failures. Therefore, the consensus protocol of a public chain network is designed to guarantee the security and liveness of the network under the semi-synchronous network model on the premise of possible Byzantine failure. Researchers of distributed systems point out that to ensure the security and liveness of the system, the consensus protocol itself needs to meet three requirements:
  • Validity: Any value decided by honest nodes must have been proposed by one of them
  • Agreement: All honest nodes must reach consensus on the same value
  • Termination: The honest nodes must eventually reach consensus on a certain value
Validity and agreement can guarantee the security of the distributed system, that is, the honest nodes will never reach a consensus on a random value, and once the consensus is reached, all honest nodes agree on this value. Termination guarantees the liveness of distributed systems. A distributed system unable to reach consensus is useless.
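The two safety requirements can be checked mechanically over the outcome of a single consensus instance; a minimal sketch with hypothetical names (it looks only at the honest nodes' proposals and decisions):

```python
def check_safety(proposals, decisions):
    """Check validity and agreement for one consensus instance.
    `proposals`: the set/list of values proposed by honest nodes.
    `decisions`: mapping of honest node -> value it decided."""
    decided = set(decisions.values())
    agreement = len(decided) <= 1          # all honest nodes decided the same value
    validity = decided <= set(proposals)   # any decided value was actually proposed
    return agreement and validity
```

Termination cannot be checked this way, since it is a property of the execution over time rather than of a finished outcome.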

The CAP theorem and Byzantine Generals Problem

In a semi-synchronous network, is it possible to design a Byzantine fault-tolerant consensus protocol that satisfies validity, agreement, and termination? How many Byzantine nodes can such a system tolerate? The CAP theorem and the Byzantine Generals Problem answer these two questions and have thus become the basic guidelines for the design of Byzantine fault-tolerant consensus protocols.
Lamport, Shostak, and Pease abstracted the design of the consensus mechanism in the distributed system in 1982 as the Byzantine Generals Problem, which refers to such a situation as described below: several generals each lead the army to fight in the war, and their troops are stationed in different places. The generals must formulate a unified action plan for the victory. However, since the camps are far away from each other, they can only communicate with each other through the communication soldiers, or, in other words, they cannot appear on the same occasion at the same time to reach a consensus. Unfortunately, among the generals, there is a traitor or two who intend to undermine the unified actions of the loyal generals by sending the wrong information, and the communication soldiers cannot send the message to the destination by themselves. It is assumed that each communication soldier can prove the information he has brought comes from a certain general, just as in the case of a real BFT consensus protocol, each node has its public and private keys to establish an encrypted communication channel for each other to ensure that its messages will not be tampered with in the network communication, and the message receiver can also verify the sender of the message based thereon. As already mentioned, any consensus agreement ultimately reached represents the consensus of the majority. In the process of generals communicating with each other for an offensive or retreat, a general also makes decisions based on the majority opinion from the information collected by himself.
According to the research of Lamport et al., if 1/3 or more of the nodes are traitors, the generals cannot reach a unified decision. For example, in the following figure, assume there are 3 generals and only 1 traitor. In the figure on the left, suppose General C is the traitor while A and B are loyal. If A wants to launch an attack and informs B and C of this intention, the traitor C can send B a message claiming that what he received from A was a retreat. B then cannot decide: he does not know who the traitor is, and the information he has received is insufficient. In the figure on the right, if A is the traitor, he can send different messages to B and C, and C faithfully reports to B the information he received; facing conflicting information, B again cannot make any decision. In both cases, even if B had received consistent information, he could not spot the traitor between A and C. Thus, in both situations shown in the figure below, the honest General B cannot make a choice.
According to this conclusion, with $n$ generals and at most $f$ traitors, the generals cannot reach a consensus if $n \leq 3f$; with $n > 3f$, a consensus can be reached. In other words, when the number of Byzantine nodes $f$ reaches one third of the total number of nodes $n$ ($f \geq n/3$), no consensus protocol can reach agreement among all honest nodes; agreement is possible only when $f < n/3$. Without loss of generality, the subsequent discussion of consensus protocols assumes $n \geq 3f + 1$ by default.
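In code, the tolerance threshold and the corresponding quorum size work out as follows (a sketch; the helper names are mine):

```python
def max_faulty(n):
    """Largest f satisfying n >= 3f + 1, i.e. the most Byzantine
    nodes an n-node system can tolerate."""
    return (n - 1) // 3

def quorum(n):
    """Number of matching messages needed so that, even after
    discounting f possibly-Byzantine senders, more than half of the
    2f + 1 honest nodes are represented: 2f + 1."""
    return 2 * max_faulty(n) + 1
```

For instance, a 4-node system tolerates 1 Byzantine node with a quorum of 3, while a 3-node system tolerates none, matching the 3-general example above.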
The conclusion reached by Lamport et al. on the Byzantine Generals Problem draws a line between the possible and the impossible in the design of the Byzantine fault tolerance consensus protocol. Within the possible range, how will the consensus protocol be designed? Can both the security and liveness of distributed systems be fully guaranteed? Brewer provided the answer in his CAP theorem in 2000. It indicated that a distributed system requires the following three basic attributes, but any distributed system can only meet two of the three at the same time.
  1. Consistency: When any node responds to the request, it must either provide the latest status information or provide no status information
  2. Availability: Any node in the system must be able to continue reading and writing
  3. Partition Tolerance: The system can tolerate the loss of any number of messages between two nodes and still function normally

https://preview.redd.it/1ozfwk7u7m851.png?width=1400&format=png&auto=webp&s=fdee6318de2cf1c021e636654766a7a0fe7b38b4
A distributed system aims to provide consistent services. Therefore, the consistency attribute requires that the two nodes in the system cannot provide conflicting status information or expired information, which can ensure the security of the distributed system. The availability attribute is to ensure that the system can continuously update its status and guarantee the availability of distributed systems. The partition tolerance attribute is related to the network communication delay, and, under the semi-synchronous network model, it can be the status before GST when the network is in an asynchronous status with an unknown delay in the network communication. In this condition, communicating nodes may not receive information from each other, and the network is thus considered to be in a partitioned status. Partition tolerance requires the distributed system to function normally even in network partitions.
The proof of the CAP theorem can be demonstrated with the following diagram. The curve represents the network partition, and each network has four nodes, distinguished by the numbers 1, 2, 3, and 4. The distributed system stores color information, and all the status information stored by all nodes is blue at first.
  1. Partition tolerance and availability mean the loss of consistency: In the leftmost image, when node 1 receives a new request, its status changes to red; the status transition information is passed to node 3, which also updates its status to red. However, since node 2 and node 4 did not receive the corresponding information due to the network partition, their status information is still blue. At this moment, if the status is queried through node 2, the blue returned by node 2 is not the latest status of the system, and consistency is lost.
  2. Partition tolerance and consistency mean the loss of availability: In the middle figure, the initial status of all nodes is blue. When node 1 and node 3 update their status to red, node 2 and node 4 keep the outdated blue status due to the network partition. When the status is queried through node 2, node 2, to preserve consistency, must first ask the other nodes to confirm that it holds the latest status before responding; but because of the network partition, node 2 cannot receive any information from node 1 or node 3. Unable to determine whether it is up to date, node 2 chooses to return nothing, thus depriving the system of availability.
  3. Consistency and availability mean the loss of the partition tolerance: In the right-most figure, the system does not have a network partition at first, and both status updates and queries can go smoothly. However, once a network partition occurs, it degenerates into one of the previous two conditions. It is thus proved that any distributed system cannot have consistency, availability, and partition tolerance all at the same time.
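The second case above can be sketched as a toy node that either answers with a possibly stale value (choosing availability) or refuses to answer during a partition (choosing consistency); all names here are hypothetical:

```python
class Node:
    """Toy replica holding one piece of status (a color in the example)."""

    def __init__(self, value):
        self.value = value
        self.partitioned = False  # True when cut off from the other replicas

    def read(self, mode):
        if mode == "available":
            # Always answer, even though the value may be stale.
            return self.value
        # mode == "consistent": refuse to answer when the partition
        # prevents confirming the value is current.
        if self.partitioned:
            return None
        return self.value
```

A partitioned node 2 returning "blue" under the available mode is exactly the lost-consistency case; returning `None` under the consistent mode is the lost-availability case.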

https://preview.redd.it/456x2blv7m851.png?width=1400&format=png&auto=webp&s=550797373145b8fc1471bdde68ed5f8d45adb52b
The discovery of the CAP theorem seems to declare that the aforementioned goals of the consensus protocol are impossible. However, on closer inspection, the cases above are all extremes, such as network partitions that completely block information transmission, which are rare, especially in a P2P network. In the second case, a real system rarely behaves exactly like node 2: the usual practice is to query the other nodes and, after a while, return what it believes to be the latest status, regardless of whether replies from the other nodes have arrived. Therefore, although the CAP theorem points out that no distributed system can satisfy all three attributes at once, it is not a binary choice: the designer of a consensus protocol can weigh the three attributes according to the needs of the system. Since communication delay is always present in a distributed system, however, one must always choose between availability and consistency while ensuring a certain degree of partition tolerance. Concretely, in the second case, the question is what node 2 should return: a possibly outdated value, or no value at all. Returning a possibly outdated value may violate consistency but preserves availability; returning no value sacrifices availability but preserves consistency. The Tendermint consensus protocol, to be introduced later, chooses consistency in this trade-off; in other words, it loses availability in some cases.
The genius of Satoshi Nakamoto is that, within the constraints of the CAP theorem, he managed to reach a reliable Byzantine consensus in a distributed network by combining the PoW mechanism, the Nakamoto consensus, and economic incentives with an appropriate parameter configuration. Whether Bitcoin's mechanism design solves the Byzantine Generals Problem has remained a matter of dispute among academics. Garay, Kiayias, and Leonardos analyzed the link between Bitcoin's mechanism design and Byzantine consensus in detail in their paper The Bitcoin Backbone Protocol: Analysis and Applications. In simple terms, the Nakamoto consensus is a probabilistic Byzantine fault-tolerant consensus protocol whose guarantees depend on conditions such as the network communication environment and the proportion of hashrate held by malicious nodes. When malicious nodes control less than 1/2 of the hashrate in a good network communication environment, the Nakamoto consensus can reliably solve the Byzantine consensus problem in a distributed setting. However, when the environment deteriorates, even with the malicious proportion below 1/2, the Nakamoto consensus may still fail to reach a reliable conclusion. It is worth noting that the quality of the network environment is relative to Bitcoin's block interval: the 10-minute block interval ensures that the system is in a good network communication environment most of the time, given that broadcasting a block across the distributed network usually takes only a few seconds. In addition, economic incentives motivate most nodes to actively comply with the protocol. It is thus considered that, with the current network parameter configuration and mechanism design, Bitcoin has reliably solved the Byzantine consensus problem in the current network environment.

Practical Byzantine Fault Tolerance, PBFT

It is no easy task to design a Byzantine fault-tolerant consensus protocol for a semi-synchronous network. The first practically usable one is the Practical Byzantine Fault Tolerance (PBFT) protocol designed by Castro and Liskov in 1999, the first of its kind with polynomial complexity: for a distributed system with $n$ nodes, the communication complexity is $O(n^2)$. Castro and Liskov showed in the paper that, by transforming a centralized file system into a distributed one using the PBFT protocol, overall performance was slowed down by only 3%. In this section we briefly introduce the PBFT protocol, paving the way for a detailed explanation of the Tendermint protocol and its improvements over PBFT.
The PBFT protocol with $n = 3f + 1$ nodes can tolerate up to $f$ Byzantine nodes. The original PBFT paper requires full connection among all $n$ nodes, that is, every pair of nodes must be connected. All nodes jointly maintain the system status through network communication. Whereas in the Bitcoin network a node can join or leave the consensus process at any time through hashrate mining, in the PBFT protocol the set of participating nodes is fixed, typically managed by an administrator, and must be determined before the protocol starts. All nodes in the PBFT protocol are divided into two categories, master nodes and slave nodes. There is only one master node at any time, and all nodes take turns being the master node. The nodes run in rotating rounds called views, in each of which the master node is re-elected. The master node selection algorithm in PBFT is very simple: all nodes become the master node in turn by index number. In each view, all nodes try to reach a consensus on the system status. It is worth mentioning that in the PBFT protocol each node has its own digital signature key pair, and all sent messages (including request messages from the client) must be signed to ensure the integrity of each message in the network and its traceability (the digital signature identifies who sent a message).
The following figure shows the basic flow of the PBFT consensus protocol. Assume that the current view's master node is node 0. Client C initiates a request to master node 0. After the master node receives the request, it broadcasts it to all slave nodes; the nodes process the request of client C and each return their result to the client. After the client receives $f + 1$ identical results from different nodes (identified by their signatures), that result can be taken as the final result of the entire operation. Since the system can have at most $f$ Byzantine nodes, at least one of the $f + 1$ results received by the client comes from an honest node, and the security of the consensus protocol guarantees that all honest nodes reach consensus on the same status. So feedback from one honest node is enough to confirm that the request has been processed by the system.
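The client-side rule of waiting for $f + 1$ matching replies can be sketched as follows (a hypothetical helper; a real client would also verify each reply's signature):

```python
from collections import Counter

def accept_result(replies, f):
    """Accept a result once f + 1 replies from distinct nodes agree:
    with at most f Byzantine nodes, at least one of those replies must
    come from an honest node.
    `replies`: mapping of node id -> result that node returned."""
    counts = Counter(replies.values())
    for result, votes in counts.items():
        if votes >= f + 1:
            return result
    return None  # not enough agreement yet; keep waiting
```

With $f = 1$, two matching replies suffice even if a third node answers differently.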

https://preview.redd.it/sz8so5ly7m851.png?width=1400&format=png&auto=webp&s=d472810e76bbc202e91a25ef29a51e109a576554
For the status synchronization of all honest nodes, the PBFT protocol has two constraints on each node: on one hand, all nodes must start from the same status, and on the other, the status transition of all nodes must be definite, that is, given the same status and request, the results after the operation must be the same. Under these two constraints, as long as the entire system agrees on the processing order of all transactions, the status of all honest nodes will be consistent. This is also the main purpose of the PBFT protocol: to reach a consensus on the order of transactions between all nodes, thereby ensuring the security of the entire distributed system. In terms of availability, the PBFT consensus protocol relies on a timeout mechanism to find anomalies in the consensus process and start the View Change protocol in time to try to reach a consensus again.
The figure above shows a simplified workflow of the PBFT protocol. Where C is the client, 0, 1, 2, and 3 represent 4 nodes respectively. Specifically, 0 is the master node of the current view, 1, 2, 3 are slave nodes, and node 3 is faulty. Under normal circumstances, the PBFT consensus protocol reaches consensus on the order of transactions between nodes through a three-phase protocol. These three phases are respectively: Pre-Prepare, Prepare, and Commit:
  • Pre-prepare: the master node assigns a sequence number to the received client request and broadcasts a PRE-PREPARE message to the slave nodes. The message contains the hash value $d$ of the client request, the current view number $v$, the sequence number $n$ assigned by the master node to the request, and the master node's signature $sig$. The PBFT scheme separates request transmission from request sequencing, and request transmission is not discussed here. A slave node that receives the message accepts it after confirming it is legitimate and enters the prepare phase. The checks in this step cover the signature, the hash value, the current view, and, most importantly, whether the master node has already assigned the same sequence number to another request from the client in the current view.
  • Prepare: the slave node broadcasts a PREPARE message to all nodes (including itself), indicating that it assigns the sequence number $n$ to the client request with hash value $d$ under the current view $v$, with its signature $sig$ as proof. A node receiving the message checks the correctness of the signature, the matching of the view and sequence number, and so on, and accepts the legitimate message. When the PRE-PREPARE message about a client request (from the master node) received by a node matches the PREPARE messages from $2f$ slave nodes, the system has agreed on the sequence number of that client request in the current view. This means that $2f + 1$ nodes in the current view agree with the request sequence number. Since these may include messages from at most $f$ malicious nodes, at least $f + 1$ honest nodes have agreed with the allocation of the request sequence number. With $f$ malicious nodes, there are $2f + 1$ honest nodes in total, so $f + 1$ represents the majority of the honest nodes, which is the consensus of the majority mentioned before.
  • Commit: after a node (whether master or slave) has received the PRE-PREPARE message for a client request and $2f$ matching PREPARE messages, it broadcasts a COMMIT message across the network and enters the commit phase. This message indicates that the node has observed that the whole network has reached a consensus on the sequence number allocated to the client's request. When the node receives $2f + 1$ COMMIT messages, at least $f + 1$ honest nodes, that is, a majority of the honest nodes, have observed this network-wide consensus on the arrangement of sequence numbers of the client's request. The node can then execute the client request and return the result to the client.
Roughly speaking, in the pre-preparation phase, the master node assigns a sequence number to all new client requests. During preparation, all nodes reach consensus on the client request sequence number in this view, while in submission the consistency of the request sequence number of the client in different views is to be guaranteed. In addition, the design of the PBFT protocol itself does not require the request message to be submitted by the assigned sequence number, but out of order. That can improve the efficiency of the implementation of the consensus protocol. Yet, the messages are still processed by the sequence number assigned by the consensus protocol for the consistency of the distributed system.
In the three-phase protocol execution of the PBFT protocol, in addition to maintaining the status information of the distributed system, each node also logs all kinds of consensus information it receives. The gradual accumulation of these logs consumes considerable system resources, so the PBFT protocol additionally defines checkpoints to help nodes with garbage collection. A checkpoint can be set every 100 or 1000 sequence numbers. After the client request at the checkpoint is executed, the node broadcasts a CHECKPOINT message throughout the network, indicating that after executing the client request with sequence number $n$ the hash value of the system status is $d$, vouched for by its own signature $sig$. Once $2f + 1$ matching CHECKPOINT messages (one of which can come from the node itself) are received, most of the honest nodes in the entire network have reached a consensus on the system status after the execution of the client request with sequence number $n$, and the node can then clear all log records for client requests with sequence numbers less than $n$. The node keeps these $2f + 1$ CHECKPOINT messages as proof of the legitimate status at this point, and the corresponding checkpoint is called a stable checkpoint.
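The garbage collection triggered by a stable checkpoint amounts to discarding log entries below the checkpoint's sequence number; a one-line sketch with hypothetical names:

```python
def prune_log(log, stable_checkpoint_seq):
    """After a stable checkpoint at sequence number n, drop all log
    records for client requests with sequence numbers less than n."""
    return {seq: entry for seq, entry in log.items()
            if seq >= stable_checkpoint_seq}
```

In a real implementation the $2f + 1$ CHECKPOINT messages themselves are retained alongside the pruned log as proof of the stable checkpoint.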
The three-phase protocol of PBFT ensures a consistent processing order of client requests, and the checkpoint mechanism helps nodes perform garbage collection while further ensuring the status consistency of the distributed system; together they guarantee the security of the distributed system as described above. How, then, is the availability of the distributed system guaranteed? In the semi-synchronous network model, a timeout mechanism is usually introduced, tied to the delays of the network environment: the network delay is assumed to have a known upper bound after GST. An initial timeout value is usually set according to the network conditions where the system is deployed. When a timeout event occurs, besides triggering the corresponding processing flow, additional mechanisms readjust the waiting time; for example, an algorithm like TCP's exponential backoff can be adopted to increase the waiting time after each timeout.
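An exponential backoff of the waiting period, as used in TCP and commonly adopted by BFT protocols, might look like this (the cap and factor values are purely illustrative):

```python
def next_timeout(timeout, factor=2.0, cap=60.0):
    """Exponential backoff: multiply the waiting period after each
    timeout event, up to a cap, so that the protocol's patience
    eventually exceeds the unknown pre-GST network delays."""
    return min(timeout * factor, cap)
```

Starting from 1 second, repeated timeouts yield 2, 4, 8, ... seconds until the cap is reached; once the network stabilizes after GST, the enlarged window lets the consensus rounds complete.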
To ensure the availability of the system, the PBFT protocol also introduces a timeout mechanism. In addition, since the master node itself may suffer a Byzantine failure, the PBFT protocol must preserve the security and availability of the system in that case as well. When the master node fails in a Byzantine way, for example when a slave node does not receive the PRE-PREPARE message from the master node within the time window, or the PRE-PREPARE message it receives is determined to be illegitimate, the slave node can broadcast a VIEWCHANGE message to the entire network, indicating that it requests a switch to the new view with sequence number $v + 1$. In the message, $n$ indicates the request sequence number corresponding to the node's latest local stable checkpoint, and $C$ proves that stable checkpoint with the $2f + 1$ legitimate CHECKPOINT messages mentioned above. After the latest stable checkpoint and before the VIEWCHANGE message is initiated, the system may already have reached consensus on the sequence numbers of some request messages in the previous view. To preserve these request sequence numbers across the view switch, the VIEWCHANGE message must carry this information into the new view, which is the purpose of the $P$ field in the message: $P$ contains, for every client request collected at the node with a sequence number greater than $n$, the proof that consensus on its sequence number was reached, namely the legitimate PRE-PREPARE message of the request and $2f$ matching PREPARE messages. When the master node of view $v + 1$ collects $2f + 1$ VIEWCHANGE messages, it can broadcast a NEW-VIEW message and take the entire system into the new view. To preserve the security of the system in combination with the three-phase protocol, the construction rules of the NEW-VIEW message are quite complicated; refer to the original PBFT paper for details.

https://preview.redd.it/x5efdc908m851.png?width=1400&format=png&auto=webp&s=97b4fd879d0ec668ee0990ea4cadf476167a2948
The VIEWCHANGE message contains a lot of information: for example, $C$ contains $2f + 1$ signatures, and $P$ contains several signature sets, each with $2f + 1$ signatures. At least $2f + 1$ nodes need to send VIEWCHANGE messages before the system can enter the next view, which means that, in addition to the complex logic for constructing the VIEWCHANGE and NEW-VIEW messages, the communication complexity of the view-change protocol is $O(n^2)$. Such complexity limits PBFT to supporting only a small number of nodes; with around 100 nodes, PBFT is usually too costly to deploy in practice. It is worth noting that some materials inappropriately attribute the communication complexity of PBFT to the full connection between the $n$ nodes. By changing the fully connected topology to the P2P topology based on distributed hash tables commonly used in blockchain projects, the high communication cost caused by full connection can easily be removed, yet it remains difficult to improve the communication complexity of the view-change process itself. In recent years, researchers have proposed reducing the communication in this step with aggregate signature schemes; with this technology, $2f + 1$ signatures can be compressed into one, reducing the communication volume during view change.

Proof of Authority

https://preview.redd.it/hiu3umys1j451.png?width=560&format=png&auto=webp&s=a918610c070d00bce65edc4dea52ca2d22b3aabe
The Blockchain industry is continuously progressing since its inception. The consensus mechanism is the core of a decentralized ecosystem that helps it to achieve consensus in the network. Till now, many consensus methods have been invented and implemented to achieve consensus within a blockchain system. I am writing a series of articles on different consensus mechanisms with a detailed explanation of their advantages and disadvantages over each other. I have already covered PoW and PoS, so here in this article, I will focus on PoA.
The PoW consensus algorithm used by Bitcoin is considered reliable and secure, but it scales poorly, which restricts the performance of the Bitcoin network along with its transaction speed. Its major disadvantage is the high energy and system resource consumption required to solve the complex mathematical puzzles.
Proof of Stake then came into existence with additional features, offering better performance than PoW. Several PoS projects are still under development, so what new features PoS can offer, and how far it can address the drawbacks of existing consensus mechanisms, depends on the success of those future projects.
Then there is another consensus mechanism called Proof of Authority, an enhanced version of PoS. It supports better performance by allowing more transactions per second. Now let's discuss it in detail.
What is Proof of Authority?
The Proof-Of-Authority (PoA) is a consensus method where a group of validators is already chosen as the authority. Their task is to check and validate all the newly added identities, validate transactions, and blocks to add to the network. To ensure efficiency and security in the network the validator group is usually kept small (~25 or less).
Proof of Authority (PoA) is an enhanced version of Proof of Stake (PoS) where the validator’s identity is used as a stake in the network.
A node must complete a mandatory process to authenticate itself and receive the right to generate new blocks. Validators register themselves in a public notary database using government-issued documents, under the same identity they use on the platform. Blocks and transactions are thus verified by participants whose identities are already confirmed and who act as authorities of the system.
Because power rests with a limited number of users, PoA consensus is better suited to private networks than to public blockchains.
PoA was proposed in March 2017 by a group of developers (the term was coined by Gavin Wood) as a blockchain based on the Ethereum protocol. It was developed to solve the problem of spam attacks on Ethereum's Ropsten test network. The new network was named Kovan, a test network that Ethereum users still use today.
Pre-Requisites for Proof of Authority Consensus
The PoA consensus algorithm is usually based upon the following criteria:
· Validators must disclose and confirm their identities by providing government-issued documents.
· There must be a standard procedure for verifying the identity of validators.
· The criteria for becoming a validator must be strict and robust, so that validators put their reputation at stake and commit to a long-term alliance.
Advantages of PoA consensus
As compared to other consensus methods, PoA offers the following advantages:
· High transaction rate.
· No high-performance hardware required.
· PoA networks are very scalable compared to PoW blockchains.
· Less power-intensive.
· Low transaction fees.
· Sequential block generation at a fixed time interval by authorized network nodes, which increases transaction validation speed.
· No extra communication is required between nodes to reach consensus.
· Network operation is independent of the number of available genuine nodes.
· A node's chance of becoming a block producer depends on both its stake and its overall holding.
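The sequential, fixed-interval block generation listed above can be sketched as a round-robin schedule over a fixed authority set. This is a toy illustration, not any specific PoA implementation; the validator names and the 5-second slot length are assumptions:

```python
# Toy sketch of PoA round-robin block production: a fixed, pre-approved
# validator set takes turns proposing blocks at a fixed time interval.
# Validator names and the 5-second step are illustrative assumptions.

VALIDATORS = ["val-A", "val-B", "val-C"]  # pre-approved authorities
STEP_SECONDS = 5                          # fixed block interval

def proposer_at(unix_time: int) -> str:
    """Return which authority is entitled to seal the block for this slot."""
    slot = unix_time // STEP_SECONDS
    return VALIDATORS[slot % len(VALIDATORS)]

def is_authorized(unix_time: int, signer: str) -> bool:
    """A block is valid only if sealed by the scheduled authority."""
    return signer == proposer_at(unix_time)

if __name__ == "__main__":
    for t in (0, 5, 10, 15):
        print(t, proposer_at(t))  # authorities rotate: A, B, C, A, ...
```

Because the schedule is deterministic, no extra communication is needed to agree on the next proposer, which is exactly why PoA can sustain a high transaction rate on modest hardware.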
Drawback
· Proof-of-Authority-based networks lack decentralization.
· PoA validators' identities are visible in the network.
· PoA does not guarantee censorship resistance.
Practical Implementation
The PoA consensus algorithm can be applied in various fields and industries that need high throughput, from supply chains to banking. PoA is considered an effective and reasonable solution with cost-saving benefits.
Below is a list of projects that have adopted PoA:
· Ethereum's Kovan test net, built on Parity's PoA protocol
· PoA Network by the Proof of Authority, LLC. (an Ethereum sidechain)
· The VeChainThor platform.
Conclusion
Every consensus method, be it PoW, PoS, or PoA, has its own set of advantages and disadvantages. PoA in particular compromises on decentralization to achieve scalability and throughput.
Proof of Authority can therefore be treated as a better option for a centralized solution because of its efficiency and low power consumption.

[Part 1] KAVA Historical AMA Tracker! (Questions & Answers)

ATTN: These AMA questions are from Autumn 2019, before the official launch of the Kava mainnet and its fungible KAVA token.
These questions may no longer be relevant to the current Kava landscape; however, they provide important historical background on the early origins of Kava Labs.
Please note that there are several repeat questions/answers.

Q1:

Kava is a decentralized DeFi project, so why did you implement country restrictions on running a node? Will those restrictions still exist by the time of the mainnet?

Q2:

According to the project description, the staking reward (in KAVA tokens) varies from 3 to 20% per annum. But how will you fight inflation?

We all know how altcoin prices are falling, and their bottom is not visible. So in fact, we may get an increase in the number of tokens from staking, but not an increase in the price of the token itself, even as long-term investors.

  • Answer: Kava is both inflationary with block rewards and deflationary when we burn CDP fees. Only stakers who bond their KAVA receive inflationary rewards; users and traders on exchanges do not. In this way, rewards are inflated but given to stakers, removing value from the traders who are speculating, like a tax. The deflationary structure of fees should help counterbalance any price drops from inflation. In the long term, as more CDPs are used, Kava should be a deflationary asset by design if all goes well.
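The inflation/deflation balance described in this answer can be sketched numerically. All figures below are made-up parameters for illustration, not Kava's actual emission schedule or fee volumes:

```python
# Toy model of net supply change for a token that mints block rewards
# (inflation, paid to bonded stakers) and burns CDP stability fees
# (deflation). All numbers are illustrative assumptions.

def net_supply_change(supply: float, staking_apr: float, fees_burned: float) -> float:
    minted = supply * staking_apr      # yearly block rewards to stakers
    return minted - fees_burned        # positive = net inflation

SUPPLY = 100_000_000

low_usage = net_supply_change(SUPPLY, 0.07, 0)           # no CDP fees burned
high_usage = net_supply_change(SUPPLY, 0.07, 9_000_000)  # heavy CDP usage

print(f"{low_usage:,.0f}")   # net inflation when nothing is burned
print(f"{high_usage:,.0f}")  # burns outpace rewards: net deflation
```

With these made-up numbers, low CDP usage yields 7,000,000 net new tokens per year, while heavy usage burns more than is minted, shrinking supply by 2,000,000, which is the "deflationary by design" scenario the answer describes.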

Q3:

In your allocation it is indicated that 28.48% of the tokens are in the "Token treasury" - where will these tokens be directed?

  • Answer: Investors in financing rounds prior to the IEO have entered into long-term lock-up agreements, in line with their belief in Kava’s exciting long-term growth potential and to allow the project's token price to find stability. Following the IEO, the only tokens in circulation will be those sold through the IEO on Binance and the initial Treasury tokens released.
  • No private sale investor tokens are in circulation until the initial release at the end of Q1 2020, and then gradually over the [36] months. The initial Treasury tokens in circulation will be used for a mixture of ecosystem grants, the expenses associated with the IEO, and initial market-making requirements, as is typical with a listing of this size. Kava remains well financed to execute our roadmap following the IEO, and we do not envisage any need for material financings or token sales for the foreseeable future.

Q4:

Such a platform (with loans and stable coins) is just the beginning since these aspects are a small part of many Defi components. Will your team have a plan to implement other functions, such as derivatives, the dex platform once the platform is successfully launched?

  • Answer: We believe Kava is the foundation for many future DeFi products. We need stablecoins, oracles, and the other infrastructure that Kava provides first. Once we have that, we can apply these to derivatives and other synthetics more easily. For example, we can use the price feeds and USDX to enable users to place 100x leveraged bets with each other. If they both lock funds into payment channels, then they can use a smart contract based on the price feed to do the 100x trade/bet automatically without counterparty risk. In this way, Kava can expand its financial product offerings far beyond loans and stablecoins in the future.

Q5:

There are several options for using USDX on the KAVA platform, one of which is margin trading / leverage. Is this optional or compulsory? Wondering, since there are some investors who don't like margin. What is the level of leverage, and how does a CDP auction work?

  • Answer: This is a good question. Kava simply provides loans to users in USDX stablecoins. What the users do with them is completely up to them. They can use the loans for everyday payments if they like. Leverage and hedging are just the main use cases we foresee; there are many ways people can use the CDP platform and USDX.

Q6:

Most credit platforms do not work well in the current market. What will you do to attract more people to use your platform and the services you provide? Thank you

  • Answer: Most credit platforms do not work well in the current market? I think that isn't correct, at least for DeFi. Even in the bear market, MakerDao and Compound saw good user growth. Regardless, our efforts at Kava to build the market are fairly product- and BD-focused. 1) We build more integrations of assets and expand financial services to attract new communities and users. 2) We focus on building partnerships with high-quality teams to promote and build Kava's core user base. Kava is just the developer. Our great partners like Ripple, Stakewith.Us, P2P, and Binance have the real users that demand Kava. They are like our system integrators that package Kava up nicely and present it to their users. In order to grow, we need to deepen our partnerships and bring in new ones around the world.

Q7:

KAVA functions as a reserve currency in situations where the system is undercollateralized. In such cases new KAVA is minted and used to buy USDX off the market until USDX becomes safely overcollateralized.

Meaning, there will be no max supply of KAVA?

  • Answer: Yes, there is no max supply of Kava.

Q8:

Why Kava?

  • Answer: ...because people are long BTC and the best way to go long BTC without giving up custody is Kava's platform. Because it is MakerDao for bitcoin. Bitcoin has a 10x market cap of ETH and Maker is 10x the size of Kava. I think we're pretty undervalued right now.

Q9:

How do you plan to make liquidity in Kava?

  • Answer: Working with Binance for the IEO and as the first exchange for KAVA to trade on will be a huge boost in increasing the liquidity of trading KAVA.

Q10:

Most crypto investors or crypto users prefer easy transaction and low fees, what can we expect from KAVA about this?

  • Answer: Transaction fees are very low, and transactions confirm in seconds. The user experience is quite good on Tendermint-based blockchains.

Q11:

How do I become a node validator on Kava?

Q12:

It is great to know that KAVA is the first DeFi project supported by Binance Launchpad. Do you think this is the message CZ is sending: opening the DeFi era? As a leader, how do you feel about it?

  • Answer: We are the first DeFi platform that Launchpad has supported. We are a very strategic blockchain for major crypto like BNB. Kava's platform will bring more utility to the users of BNB and the Binance DEX. It feels good of course to have validation from the biggest players in the space like Cosmos, Ripple, CZ/Binance, etc.

Q13:

Since decentralized finance applications are already dominating, how do you intend to surpass those leading the market?

  • Answer: The leaders are only addressing Ethereum. BTC, XRP, BNB, and ATOM are a much larger set to go after that current players cannot.

Q14:

What role does Ripple play in Kava's ecosystem? Ripple is a top-tier company, and it's impressive that you are partnered with them.

  • Answer: Ripple is an equity investor in Kava and a big supporter of our work in cross-chain settlement research and implementations. Ripple's XRP is a great asset in terms of users and liquidity that the Kava platform can use. In addition, Ripple's money service business customers are asking for a stablecoin for remittances to avoid the currency hedging risk that XRP presents. Ripple will not use USDC or other stablecoins, but they are open to using USDX as it can be XRP-backed.

Q15:

Considering the connectivity, Libra could be the biggest competitor if KAVA leverages interchain for efficiency.

  • Answer: With regard to USDX, it is important to understand the users interacting with the Kava blockchain have no counterparty that people could go after for legal actions. A user getting a USDX loan has no counterparty. The software holds the collateral and creates the loan. The only laws that would apply are to the very users that are using the system.

Q16:

Wonder how KAVA will compete with the tech giants

  • Answer: Libra is running into extreme issues with the US Senate and regulators. Even the G7-G20 groups are worried. It's important to understand that Libra is effectively a permissioned system. Only big companies that lawmakers can go after are able to run nodes. In Kava, nodes can be run by anyone, and our nodes are based all over the world. It's incredibly hard for a lawmaker to take down Kava, because they would need to find and legally force hundreds of businesses in different jurisdictions to comply. We have an advantage in this way over the larger projects like Libra or Clayton.

Q17:

In long-term, what's the strategy that KAVA has for covering the traditional finance users as well? Especially regarding the "stability"

  • Answer: Technical risk is unavoidable for DeFi. Only time will tell if a system is trustworthy, and it's never 100% certain that it will not fail or be hacked. This is true of banks and other financial systems as well. I think for DeFi, the technical risk needs to be priced into the expected returns to compensate the market. DeFi does have a better user experience, requiring no credit score, identity, or KYC, compared to centralized solutions.
  • With our multi-collateral CDP system, even though it is overcollateralized, people can get up to 3x leverage on assets. Take $100 in BTC, get a loan for 66 USDX, then buy $66 of BTC and take out another loan; you can do this with a program to get 3x leverage with the same risk profile. This is enough for most people.
  • However, it will be possible once we have Kava's CDP platform to extend it into products that offer undercollateralized financial products. For example, if USER 1 + USER 2 use payment channels to lock up their USDX, they can use Kava's price feeds to place bets between each other using their locked assets. They can bet that for every $1 BTC/USD moves, the other party owes 3x. In this way we can even do 100x leverage or 1000x leverage and create very fun products for people to trade with. Importantly, even in places where margin trading is regulated and forbidden, Kava's platform will remain open access and available.
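The ~3x figure from the looping strategy above follows from a geometric series: each round you re-borrow about 66% of the collateral you just bought, so total exposure converges to 1/(1 - 0.66) ≈ 2.94x. A minimal sketch, assuming the 66% loan-to-value ratio from the answer (the function name and structure are illustrative):

```python
def looped_leverage(ltv: float, rounds: int) -> float:
    """Total exposure per unit of starting collateral after `rounds`
    of borrow-and-rebuy at loan-to-value ratio `ltv`."""
    exposure, borrowable = 0.0, 1.0
    for _ in range(rounds):
        exposure += borrowable  # buy collateral with this round's funds
        borrowable *= ltv       # next loan is ltv of what was just bought
    return exposure

print(round(looped_leverage(0.66, 1), 2))   # 1.0: no loop, just hold
print(round(looped_leverage(0.66, 20), 2))  # 2.94: the limit 1/(1 - 0.66)
```

In practice each extra round adds less exposure (0.66, then 0.44, then 0.29 units, ...), so a handful of iterations gets close to the 3x ceiling.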

Q18:

In long-term, what's the strategy that KAVA has for covering the traditional finance users as well? Especially regarding the "stability"

  • Answer: Kava believes that stable coins should be backed not just by crypto or fiat, but any widely used, highly liquid asset. We think in the future the best stablecoin would be backed by a basket of very stable currencies that include crypto and fiat or whatever the market demands.

Q19:

Compound and Maker are trying to increase their size via competitive interest rates. Though this shows a good growth rate, it is still short-term. Other than the financial advantage, does KAVA have more for users' needs?

  • Answer: Robert, the CEO of Compound is an investor and advisor to Kava. We think what Compound does with money markets is amazing and hope to integrate when they support more than just Ethereum assets. Kava's advantage vs others is to provide basic DeFi services like returns on crypto and stable coins today when no other platform offers that. Many platforms support ETH, but no platform can support BTC, XRP, BNB, and ATOM in a decentralized way without requiring centralized custody of these assets.

Q20:

The vast majority of the cryptocurrency community's priority is the token price. When prices rise, the community rejoices and grows. When they fall, many people turn negative. How will KAVA handle the negativity when the price goes down? What is your plan to strengthen and develop the community, and to persuade more people to look at the product rather than the price?

  • Answer: We believe price is an important factor for faith in the market. One of Kava's key initiatives was selecting only long-term partners that are willing to work with kava for 2 years. That is why even after 6 months, 0 private investor or kava team tokens will be liquid on the market.
  • We believe not in fast pumps and then dumps that destroy faith, but rather we try and operate the best we can for long-term sustainable growth over time. It's always hard to control factors in the market, and some factors are out of our control such as BTC price correlations, etc - however, we treat this like a public company stock - we want long-term growth of Kava and try to make sure our whole community of Kava holders is aligned with that the best we can.

Q21:

Do you have any plans to attract non-crypto investors to Kava and how? What are the measures to increase awareness of kava in non-crypto space?

  • Answer: We are 100% focused on crypto, not the general market. We solve the problems of crypto traders and investors - not the average grandma who needs a payment solution. Kava is geared for decentralized leverage and hedging.

Q22:

Adoption is crucial for all projects and crypto companies, what strategy are you gonna use/follow or u are now following to get Kava adopted and used by many people all over the world?

Revenue is an important aspect for all projects in order to survive and keep the project/company up and running for long term, what are the ways that Kava generates profits/revenue and what is its revenue model?

  • Answer: We have already partnered with several large exchanges, long-term VCs, and large projects like Ripple and Cosmos. These are key ways for us to grow our community. As we build support for more assets, we plan to promote Kava's services to those new communities of traders.
  • Kava generates revenue as more people use the platform. As the platform is used, KAVA tokens are burned when users pay stability fees. This deflates the total supply of KAVA and should in most cases support the value of KAVA, like a stock buyback in the public markets.

Q23:

In order for a cryptocurrency loan project to succeed, I think marketing is very important so that people use the service without any registration. What is your main marketing strategy?

  • Answer: Our main strategy is to build a great experience and offer products that are not available to communities with demand. Currently no DeFi products can serve BTC users for example. Centralized exchanges can, but nothing truly trustless. Kava's platform can finally give the vast audiences of BTC, BNB, and ATOM holders access to core DeFi services they cannot get on their own due to the smart contract limitations of those platforms.

Q24:

Currently, some projects have ambassador policies to generate contributions and attract recognition for the project. Does the KAVA team plan to implement policies and incentives for KAVA ambassadors?

  • Answer: Yes, we will be creating a KAVA ambassador program and releasing that soon. Please follow our social media channels to learn about it in the coming weeks.

Q25:

Currently there are many KAVA tokens being sold on exchanges. Why is this happening while KAVA is about to IEO on Binance? Are those KAVA tokens fake or not?

  • Answer: For everyone's safety, please understand Kava tokens do not exist yet and they will only exist starting with the Binance IEO. Any other token listings or offerings of Kava are not supported by Kava Labs and I highly discourage you all from trying to get them there. It is most likely a big scam. Please only trust Binance for this.

Q26:

KAVA has two tokens: the first is called KAVA, a governance and staking token; the second is called USDX, an algorithmically managed, crypto-backed stablecoin. What are the advantages of USDX compared to other stablecoins such as USDT, USDC, TUSD, GUSD, ...?

  • Answer: USDX is one of the few stablecoins to be fully backed by crypto-assets. This means that we do not deal with fiat to back the value, and thus we don't have some of the issues when it comes to storing fiat funds with banks and custodians. This also makes our product fully digital and built for the future of crypto growth.

Q27:

As a CEO, does your background in Esports and Gaming industry help anything to your management and development of KAVA Labs?

  • Answer: Esports, no. But having been a multi-time venture-backed founder/CEO and having gone through the start-up phase before has made creating and running a second company easier. Right now Kava is still small; Fnatic had over 80 employees and operated at a larger scale. I would say developing software is much more than doing the hardware at fnaticgear.com.

Q28:

Why did Kava choose to launch its IEO on Binance and not other exchanges like KuCoin, Huobi, or Gate?

  • Answer: Kava had a lot of interest from exchanges to partner with for IEO. We decided based on a lot of factors such as userbase, diverse exposure across multiple regions and countries, and an amazing team that provides so much insight into so many communities such as this one. Binance has been a tremendous partner and we also look forward to continuing our partnership far into the future.

Q29:

Currently, a search on CoinMarketCap shows 3 stablecoins bearing the USDX symbol (though with no information on them). So what will KAVA do to let users know that Kava's USDX is a different stablecoin?

  • Answer: All these USDX have no volume or listings. We will be on Binance. I am not worried.

Q30:

In addition to the Token Allocation for Binance Launchpad, what is the Token Treasury in the Initial Circulating Supply?

  • Answer: This is controlled by Kava Labs, but with the big cash we have saved from fundraising, we see no reason why these tokens would be sold on the market. The treasury tokens are for use in grants, ecosystem growth initiatives, development, and other incentive programs to drive adoption of the platform.

Q31:

How will you compete with your competitors? I don't see many currently, but how will you maintain this consistency in the future? It is no doubt a great and unique project; what is the main problem that #KAVA is currently facing?

  • Answer: Because our industry is just starting out, I don't like to think of them as our direct competitors. We are all working to grow the size of the pie rather than get a larger slice from a small pie. The one thing that we believe will allow us to stand apart is the community we are building. Being able to utilize our own community along with Cosmos and our other partners like Binance for the IEO, we have a strong footing to get a lot of early users onto our platform. Also, we are also focusing on growing Kava internationally particularly Asia. We hope to build our platform for an even larger userbase than just the west.

Q32:

How do you explain your project to a random person who has never heard of your project?

  • Answer: non-crypto = Kava is a lending platform for users of cryptocurrencies.
  • crypto = Kava is a cross-chain DeFi platform for loans and stablecoins backed by BTC, BNB, XRP, ATOM and other major cryptocurrencies.

Q33:

Does the KAVA team plan to implement a DAO module on your platform, given its efficiency in autonomy, decentralization, and transparency?

  • Answer: All voting is already transparent on the Kava blockchain. We approved a number of proposals on our test net.

Q34:

How will the USDX token be used: only on your platform, or do you plan to use USDX for payments?

  • Answer: Payments is a nice use case, but demand for crypto payments is still small. We may choose to focus here later if demand for crypto payments increases. Currently it is quite small with the bulk of use remaining in trading and speculative use cases.

Q35:

Do you have plans to spread KAVA ecosystem across other continents. if yes, what are the strategies and how can I as a community member contribute to making it possible?

  • Answer: We are already across many continents - I don't think we are in Antarctica yet. Africa might be light on nodes as well. I think as we grow on major exchanges like Binance, new node operators will get interested and help decentralize Kava further.

Q36:

Maker's CDP lending system is on top in this market, and its dominance currently sits at 64.90%. How will Kava compete with Maker and Compound?

  • Answer: By adding assets like Bitcoin, which have more value and more users than ETH. It's a bigger market that Maker cannot compete with Kava in.

Q37:

Currently, the community is too concerned about the price. As prices rise, the community rejoices and grows; when they fall, many people turn negative. So what is KAVA's solution for getting people to focus on the project rather than the price of the token?

What is your plan to strengthen and grow the community to persuade more individuals to look at the product than the price?

  • Answer: We also share similar concerns as price and price direction is always a huge factor in the crypto industry. A lot of people of course are very short-term focused on flipping for bigger profits. One of the solutions, and what Kava has done, is to make sure that everything structured is for the long-term. So that makes sure that our investors and employees are all focused on long-term gains and growth. Locking vesting periods are part of that alignment. Another thing is that we at Kava are very transparent in our progress and development. We will be regularly posting updates within our own communities to allow our users and followers to keep up with everything we're up to. Please follow us or look at our github if you're interested!

Q38:

How did Kava get on Piexgo?

  • Answer: We did not work with Piexgo. We have not distributed tokens to any exchange other than Binance. I cannot speak to what is going on there, but I would be very wary of what is happening there.

Q39:

Why was the first-round price so much lower than the current price?

  • Answer: It is natural to worry that early investors got better pricing and could dump on the market. I can assure you that our investors are in this for the long-term. All private sale rounds signed 2-year contracts to run validators - and if they don't, they forfeit their tokens. You can compare our release schedule to any other project. We have one of the most restricted circulating supply schedules of any project EVER, and it's because all our investors are committing to the long-term success of the project and believe in Kava.
  • About the pricing itself - it is always a function of traction like for any start-up. When we made our public announcement about the project in June, we were only a 4 man team with just some github code. We could basically run a network with a single node, our own. Which is relatively worthless. I think our pricing of Kava at this time was justified. We were effectively a seed-stage company without a product or working network.
  • By July we had made significant progress on the development side and the business side. We successfully launched our first test net with the help of over 70 validator business partners around the world. We had a worldwide network of hundreds of people supporting us with people and resources at this point, and the risk that we would fail to launch a working product was much lower. At this point, the Kava project was valued at $25M, and we had many VCs and investors asking for Kava tokens that we turned away. We only accepted validators that would help us launch the network. It was our one and only goal.
  • Fast forward to today: the IEO price simply reflects the traction and market demand for Kava. Our ecosystem is much larger than it was even a month ago. We have support from Ripple, Cosmos, and Binance, amongst other large crypto projects. We have 100+ validators securing our network with very sophisticated high-availability set-ups. In addition, our ecosystem partners have built products for Kava - such as block explorers - and others are working on native integrations with wallets and exchanges. Launchpad will be very big for us. Kava is a system designed to cater to crypto traders and investors, and in a matter of days we distributed it via Binance Launchpad, putting it in the hands of tens of thousands of users in 130+ countries overnight. It doesn't get more DeFi than that.

Q40:

What is the treasury used for?

  • Answer: Kava's treasury is for ecosystem growth activities.
  • Investors in financing rounds prior to the IEO have entered into long-term lock-up agreements, in line with their belief in Kava’s exciting long-term growth potential and to allow the project's token price to find stability. Following the IEO, the only tokens in circulation will be those sold through the IEO on Binance and the initial Treasury tokens released. No private sale investor tokens are in circulation until the initial release at the end of Q1 2020, and then gradually over the [36] months. The initial Treasury tokens in circulation will be used for a mixture of ecosystem grants, the expenses associated with the IEO, and initial market-making requirements, as is typical with a listing of this size. Kava remains well financed to execute our roadmap following the IEO, and we do not envisage any need for material financings or token sales for the foreseeable future.

Q41:

Everyone have heard about the KAVA token, and read about it. But it would be great to hear your explanation about it. What is the Kava token, what is it's utility? :)

  • Answer: The Kava token plays many roles. KAVA is the native staking token of the Kava blockchain and is used for securing the network. KAVA is delegated to validators, basically professional node operators that run highly-available servers to secure the Kava blockchain. The top 100 validators by weight of staked KAVA earn block rewards that range from 3-20% APR based on the total amount staked in the network. These rewards are split between the validators and the KAVA holders.
  • When users of the platform repay their loans, they must pay a stability fee (a percentage of the loan) in KAVA tokens. These tokens are burned by the system, effectively deflating the total supply over time as more users use the CDP system.
  • KAVA is also the primary token used in governance of the platform. KAVA token holders can vote on key system parameter changes and upgrades such as what assets to support, how much USDX in total can be loaned by the system, what the debt-to-collateral ratio needs to be, the stability fees, etc. KAVA holders have a very important responsibility to govern the system well.
  • Lastly, KAVA functions as a "Lender of Last Resort": if USDX ever becomes undercollateralized because the underlying asset prices drop suddenly and the system manages it poorly, KAVA is inflated in these emergency situations and used to purchase USDX off the market until USDX returns to being overcollateralized. KAVA holders thus have an incentive to support only high-quality assets, so that the risk of the system is managed responsibly.
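The reward split between validators and delegators mentioned in this answer can be sketched as a proportional distribution after a validator commission. The 10% commission, stake amounts, and function below are illustrative assumptions, not Kava's actual parameters:

```python
# Toy sketch: a block reward is split between a validator and its
# delegators in proportion to bonded stake, after the validator takes
# a commission. The 10% commission and all amounts are assumptions.

def split_reward(block_reward: float, validator_stake: float,
                 delegations: dict, commission: float = 0.10):
    """Return (validator_share, {delegator: share})."""
    commission_cut = block_reward * commission
    distributable = block_reward - commission_cut
    total = validator_stake + sum(delegations.values())
    validator_share = commission_cut + distributable * validator_stake / total
    delegator_shares = {
        name: distributable * stake / total
        for name, stake in delegations.items()
    }
    return validator_share, delegator_shares

v, d = split_reward(100.0, 1_000, {"alice": 3_000, "bob": 1_000})
print(round(v, 2))           # 10 commission + 90 * 1000/5000 = 28.0
print(round(d["alice"], 2))  # 90 * 3000/5000 = 54.0
```

The shares always sum back to the full block reward, so the sketch only reallocates value; it does not mint or burn anything.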

Q42:

No matter how perfect and technically thought-out a DeFi protocol is, it cannot be completely protected from any unplanned situations (such as extreme market fluctuations, some legal issues, etc.)

Ecosystem members, and in particular the validators to whom KAVA entrusts fundamental decision-making rights, should be prepared in advance for any "critical" scenario. Considering that, unlike single-collateral MakerDAO, KAVA will be a multi-collateral CDP system, this point is probably even more relevant here.

In this regard, please answer the following question: Does KAVA have a clear risk management model or strategy and how decentralized is / will it be?

  • Answer: Similar to other CDP systems and MakerDAO, we do have a system-freeze function: in cases of extreme issues, we can stop the auction mechanisms and return all collateral.

Q43:

Did you know that "Kava" is translated into Ukrainian like "Coffee"? I personally do love drinking coffee. I plunge into the fantasy world. Why did you name your project "Kava" What is the story behind it? What idea / fantasy did your project originate from, which inspired you to create it?

  • Answer: Kava is coffee to you.
  • Kava is Hippopotamus to Japanese.
  • Cava is a region in Spain
  • Kava is also a root that is used in tea which makes your mouth numb.
  • Kava is also crow in Hindi.
  • Kava last but not least is a DeFi platform launching on Binance :)
  • We liked the sound of Kava; it was as simple as that. It doesn't have much meaning in the USA, where I am from. But it's short and sweet, and when we were just starting, Kava.io was available for a reasonable price.

Q44:

What incentives does a lender get if a person chooses to pay with KAVA? Is there a discount on interest rates on the loan amount if you pay with KAVA? Do I have to pass the KYC procedure to apply for a small loan?

  • Answer: There is no KYC for Kava. It's an open blockchain software platform where anyone with a computer can connect to it and use it.

Q45:

Let's say I decided to bond my cryptocurrency and got USDX stablecoins. For now, it's a little-known stablecoin (let's be honest). Do you plan to list USDX on other well-known exchanges? Also, you have spoken about USDX staking and said the percentage would be higher than for other stablecoins. Please be so kind as to tell us the average annual interest rate and the conditions of staking.

  • Answer: Yes we have several large exchanges willing to support USDX from the start. Binance/Binance-DEX is one you should all know ;)
  • The average annual rates for USDX will depend on market conditions. The rate is actually provided by the CDP fees users pay. The system reallocates a portion of those fees to USDX users. In times when USDX use needs to grow, the rates will be higher to incentivize use. When demand is strong, we can reduce the rates.
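To make the fee-reallocation idea concrete, here is a minimal sketch. All numbers and the `reallocation_share` parameter are hypothetical; in Kava the actual split is set by governance and moves with market conditions.

```python
def usdx_savings_rate(annual_cdp_fees, reallocation_share, usdx_locked):
    """Annualized rate paid to USDX holders, funded by the share of CDP
    fees the system reallocates to them. Illustrative only; the real
    parameters are governed on-chain."""
    if usdx_locked <= 0:
        return 0.0
    return annual_cdp_fees * reallocation_share / usdx_locked
```

E.g. 50,000 in annual CDP fees with 60% reallocated across 1,000,000 locked USDX yields a 3% rate; raising the share raises the rate when USDX use needs to grow.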

Q46:

Why should I use Kava's loans if I can use similar margin trading on Binance?

  • Answer: If margin is available to you and you trust the exchange then you should do whatever is cheaper. For a US citizen and others, margin is often not available and if it is, only for a few asset types as collateral. Kava aims to address this and offer this to everyone.

Q47:

The IEO price is $0.46 while the price of the first private sale was $0.075. Don't you think that such a price gap can negatively affect the liquidity of the token and reduce the desire to buy the token on the exchange?

  • Answer: It is natural to worry that early investors got better pricing and could dump on the market. I can assure you that our investors are in this for the long term. All private sale rounds signed 2-year contracts to run validators - and if they don't, they forfeit their tokens. You can compare our release schedule to any other project. We have one of the most restricted circulating supply schedules of any project EVER, and it's because all our investors are committing to the long-term success of the project and believe in Kava.
  • About the pricing itself - it is always a function of traction like for any start-up. When we made our public announcement about the project in June, we were only a 4 man team with just some github code. We could basically run a network with a single node, our own. Which is relatively worthless. I think our pricing of Kava at this time was justified. We were effectively a seed-stage company without a product or working network.
  • By July we had made significant progress on the development side and the business side. We successfully launched our first testnet with the help of over 70 validator business partners around the world. We had a worldwide network of hundreds of people supporting us with people and resources, and the risk that we would fail in launching a working product was much lower. At this point, the Kava project was valued at $25M. We had many VCs and investors asking for Kava tokens that we turned away. We only accepted validators that would help us launch the network. It was our one and only goal.
  • Fast forward to today: the IEO price simply reflects the traction and market demand for Kava. Our ecosystem is much larger than it was even a month ago. We have support from Ripple, Cosmos, and Binance amongst other large crypto projects. We have 100+ validators securing our network with very sophisticated high-availability set-ups. In addition, our ecosystem partners have built products for Kava - such as block explorers - and others are working on native integrations with wallets and exchanges. Launchpad will be very big for us. Kava is a system designed to cater to crypto traders and investors, and in a matter of days we distributed it via Binance Launchpad and put it in the hands of tens of thousands of users across 130+ countries overnight. It doesn't get more DeFi than that.
  • TLDR - I think KAVA is undervalued, and the liquid supply of tokens is primarily from the IEO, so it's a safer bet than other IEOs. If the price drops, it will be from overall market conditions or fellow IEO users, not from private-sale investors or team sell-offs.

Q48:

Can you share some information about KAVA's deflationary fee structure? With the burning mechanism, does it mean KAVA will never reach its max supply?

  • Answer: When loans are repaid, users pay a fee in KAVA. This is burned. However, KAVA does not have a max supply. It has a starting supply of 100M. It inflates for block rewards at 3-20% APR, AND it inflates when the system is at risk of undercollateralization. At that time, more KAVA is minted and used to purchase USDX off the market until the system reaches full collateralization again.
  • TLDR: If things go well, and governance is good, Kava deflates and hopefully appreciates in value. If things go wrong, Kava holders get inflated.
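The burn-versus-inflation dynamic in this answer can be sketched as simple supply accounting. This is illustrative arithmetic only, not the chain's actual emission schedule, and the fee figure below is made up.

```python
def kava_supply_after_one_year(supply, block_reward_apr, fees_burned, emergency_mint=0.0):
    """One year of supply accounting: block-reward inflation (3-20% APR per
    the answer above), minus burned loan fees, plus any emergency mint.
    All inputs here are hypothetical."""
    return supply * (1.0 + block_reward_apr) - fees_burned + emergency_mint
```

With a 100M starting supply, 3% block rewards, and (hypothetically) 4M of fees burned, supply ends the year around 99M: net deflation despite there being no hard cap.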

Q49:

In your opinion what are advantage of decentralized finance over centralized?

  • Answer: One of the main advantages is not needing to pay the costs of regulation and compliance. Open financial software that is usable by anyone removes middleman fees and lowers the barrier for new entrants to enter and build new products. DeFi also has an edge in terms of onboarding: to get a bank account or an exchange account you need to do lots of KYC and hand over private info. That takes time and is troublesome. With DeFi you just load up your funds and transact. Very fast user flows.

Q50:

How does KAVA plan to raise capital? Kava is supported by more than 100 business entities around the world, including major cryptocurrency projects and investors like Ripple and Cosmos. What did Kava do to convince investors to join the project?

  • Answer: We have been doing crypto research and development for years. Ripple and Cosmos were partners before we even started this blockchain with Kava Labs. When we announced Kava, the DeFi platform, they already knew us for doing good work, and they liked the idea, so they supported us.
submitted by Kava_Mod to KavaUSDX [link] [comments]

Climate Change is real, but its manufactured. Weather is the new battlefield

(GlobalIntelHub.com New York, NY) — 2/12/2020 — The US Military has become an entity with a mind of its own; despite efforts to curtail its expansion by activists and politicians, it continues to grow. Years ago a small problem arose that posed an existential threat to the system: the enemies had all been defeated. The real hero of the Cold War is Richard Nixon, and his mentor Henry Kissinger, who created a financial system whereby the US Dollar was backed by bombs only, which allowed the USD to expand its balance sheet with no accountability. This ultimately allowed the US to outspend the Soviets and other enemies into oblivion, and the strategy finally worked. With the elimination of real enemies, the strategy planners inside the DOD knew they needed to create more enemies, and thus the 'terrorist' was born. Now that terrorists have been defeated to the point of irrelevancy, we need new and modern enemies to fight.
Enter where we are now, an age of weaponized weather and biowarfare, cyberwar and other forms of information war. Let’s first discuss Bioweapons, and how it pertains to Coronavirus. The US Military has been spending billions on Bioweapons (both offensive and defensive) for the last 20 years. We’re not going to quote numbers as part of the budget is likely part of the ‘black budget’ but some estimates have it as high as $100 Billion over a period of 20 years. There are thousands of scientists working on various forms of biowarfare. So the question remains, if they are spending all this money, what are they doing with it?
For those who understand the US Government budget policy in general, there is a ‘use it or lose it’ ethos which means if you don’t buy new computers every year your budget will be cut. If you don’t spend allocated funds they will be cut. So they spend them to the max, often some of the funds go towards ‘justification research’ to justify, perhaps in front of Congress or in a public report, why the spending is ‘vital to national security.’
But with Coronavirus spreading around Asia, a more deadly and more disruptive force is being overlooked: Weather modification. Climate Change is real but it’s not due to factors that are commonly believed (factories, traffic, cow farts). Climate Change is the scapegoat for what’s really going on: Weather Wars, weather modification, terraforming, and manipulation of the entire planet on a biological and chemical level. As you will see, this is intertwined with Coronavirus in ways you wouldn’t at first imagine.
If you believe in fairy tales including what’s on TV, you can stop reading now as this will only damage your brain and may cause you to seek medical help. WARNING: PARENTAL DISCRETION ADVISED
The US Military spends billions on R&D through front-end organizations like DARPA, In-Q-Tel, and Navy Research (ONR), just to name a few. You may be surprised to learn that they not only develop technologies, but also patent them, sell them at a profit, and even participate in Venture Capital. Inside Silicon Valley there has been a program since World War 2 that drip-leaks next-generation technology to Silicon Valley after it no longer has Military use (or when it's no longer a strategic advantage, such as the internet). One of the most well-known operations is PARC, currently owned by Xerox.
Technologies leaked to corporate America include the microprocessor, kevlar, lasers, fiber optics, the ‘mouse’ GUI system for personal computers, and many more.
The 90’s was a success not because of Bill Clinton, it was because of a number of global geopolitical factors such as the falling of the Soviet Union, and the deregulation of the internet and proliferation for civilian use.
Let’s look at some notable patents held by the US Military apparatus. The NSA has patented thousands of encryption technologies, but the most ironic patent held is SHA 256 encryption algorithm, the technology that is behind Bitcoin.
Or perhaps it’s not so ironic, perhaps the NSA was funding Bitcoin as a surveillance mechanism all along, as it has recently been revealed the CIA owned one of the world’s most well known encrypted communications services based in Switzerland, Crypto AG.
What other interesting patents are held by the US Military? To the point of this article, 20100072297 is a “Method for Controlling Hurricanes.” You can see a long list of weather related patents held by the USG at the end of this article. But if you want evidence of weather manipulation, just look up in the sky.
If you believe this is ‘contrails’ from a plane, here’s an explanation from an expert:
For the record, all US military jet air tankers and all commercial jet carriers are equipped with “Hi Bypass Turbofan” jet engines which are by design nearly incapable of producing condensation trails except under rare and extreme conditions. The trails we increasingly see in our skies are the result of sprayed dispersions related to climate engineering, not condensation.
https://globalintelhub.com/climate-change-real-manufactured-weather-battlefield/
submitted by preiposwap to conspiracy [link] [comments]

For devs and advanced users that are still in the dark: Read this to get redpilled about why Bitcoin (SV) is the real Bitcoin

This post by cryptorebel is a great intro for newbies. Here is a continuation for a technical audience. I'll be making edits for readability and maybe even add more content.
The short explanation of why BSV is the real Bitcoin is that it implements the original L1 scripting language, and removes hacks like p2sh. It also removes the block size limit, and yes that leads to a small number of huge nodes. It might not be the system you wanted. Nodes are miners.
The key thing to understand about the UTXO architecture is that it is maximally "sharded" by default. Logically dependent transactions may require linear span to construct, but they can be validated in sublinear span (actually polylogarithmic expected span). Constructing dependent transactions happens out-of-band in any case.
The fact that transactions in a block are merkelized is an obvious sign that Bitcoin was designed for big blocks. But merkle trees are only half the story. UTXOs are essentially hash-addressed stateful continuation snapshots which can also be "merged" (validated) in a tree.
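For readers who haven't seen it, here is a minimal sketch of the Bitcoin-style merkle root the paragraph refers to: txids are the leaves, a level with an odd count duplicates its last hash, and double SHA-256 combines each pair (this follows Bitcoin's documented convention, simplified for illustration).

```python
import hashlib

def dsha(data):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Bitcoin-style merkle root over a block's txids: pair up hashes,
    duplicating the last one when a level has an odd count, until a
    single 32-byte root remains."""
    if not txids:
        raise ValueError("a block has at least a coinbase transaction")
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]
```

Because each subtree commits to its leaves independently, subtrees can be hashed and checked in parallel, which is the "validated in a tree" property the post leans on.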
I won't even bother talking about how broken Lightning Network is. Of all the L2 scaling solutions that could have been used with small block sizes, it's almost unbelievable how many bad choices they've made. We should be kind to them and assume it was deliberate sabotage rather than insulting their intelligence.
Segwit is also outside the scope of this post.
However I will briefly hate on p2sh. Imagine seeing a stunted L1 script language, and deciding that the best way to implement multisigs was a soft-fork patch in the form of p2sh. If the intent was truly backwards-compatibility with old clients, then by that logic all segwit and p2sh addresses are supposed to only be protected by transient rules outside of the protocol. Explain that to your custody clients.
As far as Bitcoin Cash goes, I was in the camp of "there's still time to save BCH" until not too long ago. Unfortunately the galaxy brains behind BCH have doubled down on their mistakes. Again, it is kinder to assume deliberate sabotage. (As an aside, the fact that they didn't embrace the name "bcash" when it was used to attack them shows how unprepared they are when the real psyops start to hit. Or, again, that the saboteurs controlled the entire back-and-forth.)
The one useful thing that came out of BCH is some progress on L1 apps based on covenants, but the issue is that they are not taking care to ensure every change maintains the asymptotic validation complexity of bitcoin's UTXO.
Besides that, The BCH devs missed something big. So did I.
It's possible to load the entire transaction onto the stack without adding any new opcodes. Read this post for a quick intro on how transaction meta-evaluation leads to stateful smart contract capabilities. Note that it was written before I understood how it was possible in Bitcoin, but the concept is the same. I've switched to developing a language that abstracts this behavior and compiles to bitcoin's L1. (Please don't "told you so" at me if you just blindly trusted nChain but still can't explain how it's done.)
It is true that this does not allow exactly the same class of L1 applications as Ethereum. It only allows those that can be made parallel, those that can delegate synchronization to "userspace". It forces you to be scalable, to process bottlenecks out-of-band at a per-application level.
Now, some of the more diehard supporters might say that Satoshi knew this was possible and meant for it to be this way, but honestly I don't believe that. nChain says they discovered the technique 'several years ago'. OP_PUSH_TX would have been a very simple opcode to include, and it does not change any aspect of validation in any way. The entire transaction is already in the L1 evaluation context for the purpose of checksig, it truly changes nothing.
But here's the thing: it doesn't matter if this was a happy accident. What matters is that it works. It is far more important to keep the continuity of the original protocol spec than to keep making optimizations at the protocol level. In a concatenative language like bitcoin script, optimized clients can recognize "checksig trick phrases" regardless of their location in the script, and treat them like a simple opcode. Script size is not a constraint when you allow the protocol to scale as designed. Think of it as precompiles in EVM.
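The "recognize a phrase and treat it like an opcode" idea can be sketched as a peephole pass over a token stream. The opcode names below are hypothetical stand-ins, not the actual checksig-trick phrase.

```python
def fold_phrases(script, macros):
    """Peephole pass: wherever a known multi-opcode phrase appears in a
    script, collapse it into a single macro token that an optimized
    interpreter can dispatch as one opcode. Illustrative only."""
    out, i = [], 0
    while i < len(script):
        for name, phrase in macros.items():
            if script[i:i + len(phrase)] == phrase:
                out.append(name)
                i += len(phrase)
                break
        else:
            out.append(script[i])
            i += 1
    return out
```

So a script `["OP_X", "OP_A", "OP_B", "OP_C", "OP_Y"]`, with the (made-up) phrase `["OP_A", "OP_B", "OP_C"]` registered as `"MACRO_PUSH_TX"`, folds to `["OP_X", "MACRO_PUSH_TX", "OP_Y"]` regardless of where the phrase sits in the script, which is the point the paragraph makes about concatenative languages.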
Now let's address Ethereum. V. Buterin recently wrote a great piece about the concept of credible neutrality. The only way for a blockchain system to achieve credible neutrality and long-term decentralization of power is to lock down the protocol rules. The thing that caused Ethereum to succeed was the yellow paper. Ethereum has outperformed every other smart contract platform because the EVM has clear semantics with many implementations, so people can invest time and resources into applications built on it. The EVM is apolitical, the EVM spec (fixed at any particular version) is truly decentralized. Team Ethereum can plausibly maintain credibility and neutrality as long as they make progress towards the "Serenity" vision they outlined years ago. Unfortunately they have already placed themselves in a precarious position by picking and choosing which catastrophes they intervene on at the protocol level.
But those are social and political issues. The major technical issue facing the EVM is that it is inherently sequential. It does not have the key property that transactions that occur "later" in the block can be validated before the transactions they depend on are validated. Sharding will hit a wall faster than you can say "O(n/64) is O(n)". Ethereum will get a lot of mileage out of L2, but the fundamental overhead of synchronization in L1 will never go away. The best case scaling scenario for ETH is an L2 system with sublinear validation properties like UTXO. If the economic activity on that L2 system grows larger than that of the L1 chain, the system loses key security properties. Ethereum is sequential by default with parallelism enabled by L2, while Bitcoin is parallel by default with synchronization forced into L2.
Finally, what about CSW? I expect soon we will see a lot of people shouting, "it doesn't matter who Satoshi is!", and they're right. The blockchain doesn't care if CSW is Satoshi or not. It really seems like many people's mental model is "Bitcoin (BSV) scales and has smart contracts if CSW==Satoshi". Sorry, but UTXO scales either way. The checksig trick works either way.
Coin Woke.
submitted by -mr-word- to bitcoincashSV [link] [comments]

[Discussion] CS.DEALS now has a real money marketplace with 1% selling fees

I want to start this off with something personal, which is exceptional because I never talk about my personal life on the internet or in any way mix it with my work online. For the past 16 months my life has consisted of nothing but misfortunes, playing CS and working on the marketplace, and that's why this is a big day for me and why I want to share my little story behind it. I'm not looking for pity; my story is just something I want to share, as I feel it's tied to my creation. Originally I had even bigger plans and created an incredible trading engine and more for cs.deals over that time period, but it's to be determined if those plans will ever be realised. The ultimate plan will also remain a secret.
When I started KeyVendor over three years ago I've since become a fairly well known name in this scene, but people don't really know who I am, they only recognize my name and might know what I do online. I don't have social media accounts that people could follow and I don't mention to anyone what I'm up to or how I'm feeling. I'm really honored to see that people recognize me here and care about me and my goings-on, like when I got falsely VAC banned in January and people - some of whom I don't even know - supported me and defended my side without even knowing me. Valve was quick to remove the ban and that was that.
In 2017 I was happy with how I was doing with my trading sites and KeyVendor, and everything was good in general. However, on the first day of 2018 everything started tumbling down; it turned out to be the worst year of my life, possibly for the rest of my life. Without warning I suddenly lost my wonderful mother to a brain aneurysm. I watched from the side as she was passing out and was rushed to the hospital with little to no hope. We share the loss with my siblings, step father and my relatives, and I feel the burden is also shared, which is why it wasn't as painful to me as it might've been to someone else.

Then in April all my businesses went down due to the trade lock update. It was an understandable action from Valve, and I know skins are a risky business; as I said, I'm not asking for pity, this is life. There's much more I could add but I want to keep this short. After the update I quickly got KeyVendor back up, and then I had to start working on cs.deals and my other trade site to bring them back for the loyal users who were patiently waiting to trade out their balance (we also did manual withdrawals for those who asked) and to continue using the services they loved. While coding features for the trade-locked items, I started to get a vision about the big plans that I mentioned, which has now led to this marketplace.

The trade sites were coming back together slowly but steadily, when in July my alcoholic father passed away from liver failure. He wasn't the best person, we hadn't really been in touch for a few years since moving away from him, and it wasn't a huge deal, unlike my mother's case. I was left with all the paperwork related to his death, and this somewhat postponed the launch of the trading sites. All these misfortunes have driven me to create this marketplace to make things better again.
Finally in October I had CS.DEALS and my other site back up and running, but it turns out they only have expenses and don't make any profit at all. Ever since then I have been even more determined to execute the big plan to get back to where I was in 2017: to have something sustainable that I can constantly work on and improve. I hope this marketplace will be that thing. I'm determined to make this the best thing the scene has seen since OPSkins, if not better, even if it won't be as appreciated as it would've been in the OPSkins, non-trade-lock era. From now on we will have a bigger team working on things and I won't be a one-man army.

Anyway, that's the story. Let's get to the marketplace:

You can think of the service as a trade bot website and a real-money skin marketplace combined. You can sell your skins for trade balance, like you would on an advanced trade bot, or you can sell them for real money and buy them like you would on a marketplace. You can also top up trade balance by converting real money to trade balance and vice versa. Currently all this is a bit confusing, but we are looking into ways to make it simpler for the user, and your feedback would be useful there.
Initially we will support the sale of the same items as the tradebot has supported thus far: CSGO, TF2, H1Z1, Dota2 and Rust. More games will be supported as the site picks up pace. The plan is to keep all fees very low to make this THE place to sell skins. There will be a constant flow of new features and improvements based on you guys' feedback and whatever is needed. Right now the feature set is pretty lacking and the whole service might feel crude, but everything will improve very quickly.
More deposit and withdrawal options are coming in the next week or so, but right now only Bitcoin deposits and withdrawals are supported. The most notable deposit options will be G2A PAY and SEPA transfers. Cashouts will work via SEPA transfer, Paypal and of course Bitcoin. The cashouts will be in Euro but we will also offer occasional USD cashouts via Paypal and bank transfer.
What I feel is really unique and good about the marketplace is the PriceDecay™ pricing method. Here the explanation for it from the website and an image to go with it:
With PriceDecay™ you can cash out your skins more easily than ever, for better prices than ever. It automatically lowers the price of your item over time using an advanced algorithm. That means you won't have to keep updating the price of your item, and you don't need to know the exact price of your item to sell it for good value. The item will get sold when it reaches the optimal price point, without much effort at all.
https://imgur.com/a/TQzeRE4
If you have any feedback or confusion about the service, I would very much appreciate if you could let us know about them so it can be improved. I will be happy to answer any questions you have about anything.
I asked the mods for permission to make this post, as cs.money always gets a post made about them whenever they add a new pixel on their site. However, I never got a response, so let's see how this goes.
https://www.youtube.com/watch?v=JJSsuP_5Ld4
submitted by Jambozx to GlobalOffensiveTrade [link] [comments]

Best General RenVM Questions of January 2020

Best General RenVM Questions of January 2020

‌*These questions are sourced directly from Telegram
Q: When you say RenVM is Trustless, Permissionless, and Decentralized, what does that actually mean?
A: Trustless = RenVM is a virtual machine (a network of nodes, that do computations), this means if you ask RenVM to trade an asset via smart contract logic, it will. No trusted intermediary that holds assets or that you need to rely on. Because RenVM is a decentralized network and computes verified information in a secure environment, no single party can prevent users from sending funds in, withdrawing deposited funds, or computing information needed for updating outside ledgers. RenVM is an agnostic and autonomous virtual broker that holds your digital assets as they move between blockchains.
Permissionless = RenVM is an open protocol; meaning anyone can use RenVM and any project can build with RenVM. You don't need anyone's permission, just plug RenVM into your dApp and you have interoperability.
Decentralized = The nodes that power RenVM (Darknodes) are scattered throughout the world. RenVM has a peak capacity of up to 10,000 Darknodes (due to REN's token economics). Realistically, there will probably be 100 - 500 Darknodes run in the initial Mainnet phases, which is amply decentralized nonetheless.

Q: Okay, so how can you prove this?
A: The publication of our audit results will help prove the trustlessness piece; permissionless and decentralized can be proven today.
Permissionless = https://github.com/renproject/ren-js
Decentralized = https://chaosnet.renproject.io/

Q: How does Ren sMPC work? Shamir's secret sharing? TSS?
A: There is some confusion here that keeps arising, so I will do my best to clarify. TL;DR: *SSS is just data. It's what you do with the data that matters. RenVM uses sMPC on SSS to create TSS for ECDSA keys.* SSS and TSS aren't fundamentally different things. It's kind of like asking: do you use numbers, or equations? Equations often (but not always) use numbers or at some point involve numbers.
SSS by itself is just a way of representing secret data (like numbers). sMPC is how to generate and work with that data (like equations). One of the things you can do with that work is produce a form of TSS (this is what RenVM does).
However, TSS is slightly different because it can also be done *without* SSS and sMPC. For example, BLS signatures don’t use SSS or sMPC but they are still a form of TSS.
So, we say that RenVM uses SSS+sMPC because this is more specific than just saying TSS (and you can also do more with SSS+sMPC than just TSS). Specifically, all viable forms of turning ECDSA (a scheme that isn’t naturally threshold based) into a TSS needs SSS+sMPC.
People often get confused about RenVM and claim “SSS can’t be used to sign transactions without making the private key whole again”. That’s a strange statement and shows a fundamental misunderstanding about what SSS is.
To come back to our analogy, it’s like saying “numbers can’t be used to write a book”. That’s kind of true in a direct sense, but there are plenty of ways to encode a book as numbers and then it’s up to how you interpret (how you *use*) those numbers. This is exactly how this text I’m writing is appearing on your screen right now.
SSS is just secret data. It doesn’t make sense to say that SSS *functions*. RenVM is what does the functioning. RenVM *uses* the SSSs to represent private keys. But these are generated and used and destroyed as part of sMPC. The keys are never whole at any point.
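To make "SSS is just data" concrete, here is a toy Shamir split over a small prime field. The `reconstruct` function is included only to show that the shares really encode the secret; as the answer stresses, an sMPC network like RenVM computes *with* shares and never performs this reassembly. Field size and all values are toy choices.

```python
import random

P = 2**61 - 1  # toy prime field; real systems use far larger parameters

def split(secret, n, k):
    """Shamir: hide `secret` as the constant term of a random degree-(k-1)
    polynomial over GF(P) and hand out n point evaluations as shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from any k shares.
    Demonstration only; sMPC never makes the secret whole like this."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k of the n shares interpolate back to the same secret, while k-1 shares reveal nothing, which is what makes a share "just data" until a protocol does something with it.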

Q: Thanks for the explanation. Based on my understanding of SSS, a trusted dealer does need to briefly put the key together. Is this not the case?
A: Remember, SSS is just the representation of a secret. How you get from the secret to its representation is something else. There are many ways to do it. The simplest way is to have a "dealer" that knows the secret and gives out the shares. But, there are other ways. For example: we all act as dealers, and all give each other shares of our individual secret. If there are N of us, we now each have N shares (one from every person). Then we all individually add up the shares that we have. We now each have a share of a "global" secret that no one actually knows. We know this global secret is the sum of everyone's individual secrets, but unless you know every individual's secret you cannot know the global secret (even though we have all just collectively generated shares for it). This is an example of an sMPC generation of a random number with collusion resistance against all-but-one adversaries.
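The dealer-free procedure described in that answer translates almost line-for-line into code, here with additive sharing over a toy modulus (illustrative only; function names are mine, not RenVM's):

```python
import random

P = 2**61 - 1  # toy modulus for the share arithmetic

def deal(secret, n):
    """Additively share `secret` into n random pieces summing to it mod P."""
    pieces = [random.randrange(P) for _ in range(n - 1)]
    pieces.append((secret - sum(pieces)) % P)
    return pieces

def dealerless_shares(individual_secrets):
    """Each of the n parties deals shares of its own secret to everyone;
    each party then sums what it received. The per-party sums are additive
    shares of the global secret (the sum of all inputs), which no single
    party ever learns."""
    n = len(individual_secrets)
    dealt = [deal(s, n) for s in individual_secrets]  # dealt[i][j]: from party i to party j
    return [sum(dealt[i][j] for i in range(n)) % P for j in range(n)]
```

Summing all the per-party shares (which no honest party would ever do) gives back the sum of the individual secrets, confirming the construction while showing why any single party's view stays uninformative.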

Q: If you borrow REN, you can profit from a falling REN price. That means you could profit both from breaking the network and from the falling REN price that breaking it would cause (a lower amount would need to be repaid when the bond gets slashed).
A: Yes, this is why it's important that there is a large number of Darknodes before moving to full decentralisation (large-scale borrowing becomes harder). We're exploring a few other options too, that should help prevent these kinds of issues.

Q: What are RenVM’s Security and Liveliness parameters?
A: These are discussed in detail in our Wiki, please check it out here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#analysis

Q: What are the next blockchain under consideration for RenVM?
A: These can be found here: https://github.com/renproject/ren/wiki/Supported-Blockchains

Q: I've just read that Aztec is going to be live this month and currently tests txs with third parties. Are you going to participate in early access or you just more focused on bringing Ren to Subzero stage?
A: At this stage, our entire focus is on Mainnet SubZero. But, we will definitely be following up on integrating with AZTEC once everything is out and stable.

Q: So how does RenVM compare to tBTC, Thorchain, WBTC, etc..?
A: An easy way to think about it is..RenVM’s functionality is a combination of tBTC (+ WBTC by extension), and Thorchain’s (proposed) capabilities... All wrapped into one. Just depends on what the end-user application wants to do with it.

Q1: What are the core technical/security differences between RenVM and tBTC? A1: The algorithm used by tBTC faults if even one node goes offline at the wrong moment (and the whole "keep" of nodes can be penalised for this). RenVM can survive 1/3rd going offline at any point, at any time. The advantage for tBTC is that collusion is harder; the disadvantage is obviously that availability and permissionlessness are lower.
tBTC can only mint/burn in lots of 1 BTC and requires an on-Ethereum SPV relay for Bitcoin headers (and for any other chain it adds). No real advantage trade-off IMO.
tBTC has a liquidation mechanism that means nodes can have their bond liquidated because of the ETH/BTC price ratio. The advantage is that users can get 1 BTC worth of ETH. The disadvantage is it means tBTC is kind of a synthetic: it needs a price feed, needs liquid markets for liquidation, users must accept exposure to ETH even if they only hold tBTC, and nodes must stay collateralized or lose lots of ETH. RenVM doesn't have this, and instead uses fees to prevent becoming under-collateralized. This requires a mature market, and assumes Darknodes will value their REN bonds fairly (based on revenue, not necessarily what they can sell them for at the current, potentially manipulated, market value). That can be an advantage or disadvantage depending on how you feel.
tBTC focuses more on the idea of a tokenized version of BTC that feels like an ERC20 to the user (and is). RenVM focuses more on letting the user interact with DeFi and use real BTC and real Bitcoin transactions to do so (still an ERC20 under the hood, but the UX is more fluid and integrated). Advantage of tBTC is that it’s probably easier to understand and that might mean better overall experience, disadvantage really comes back to that 1 BTC limit and the need for a more clunky minting/burning experience that might mean worse overall experience. Too early to tell, different projects taking different bets.
tBTC supports BTC (I think they have ZEC these days too). RenVM supports BTC, BCH, and ZEC (docs discuss Matic, XRP, and LTC).
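The liquidation mechanics described in the answer above can be sketched as follows; the threshold and prices are made-up illustrative numbers, not tBTC’s actual parameters:

```python
def collateral_ratio(eth_bonded: float, eth_btc_price: float, btc_backed: float) -> float:
    """Value of the ETH bond relative to the BTC it secures."""
    return (eth_bonded * eth_btc_price) / btc_backed

def is_liquidatable(eth_bonded, eth_btc_price, btc_backed, threshold=1.1):
    # If ETH falls against BTC the ratio drops; below the threshold the
    # bond can be seized and the tBTC holder paid out in ETH instead.
    return collateral_ratio(eth_bonded, eth_btc_price, btc_backed) < threshold

# A node bonds 50 ETH at 0.03 BTC/ETH to back 1 BTC: ratio 1.5, safe.
assert not is_liquidatable(50, 0.03, 1.0)
# ETH slides to 0.02 BTC/ETH: ratio 1.0 < 1.1, the bond is liquidatable.
assert is_liquidatable(50, 0.02, 1.0)
```

RenVM, as described above, skips this entire code path: with no price feed and no liquidation, the only lever is the fee model.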
Q2: These are my assumed differences between tBTC and RenVM; are they correct? Some key comparisons:
-Both are vulnerable to oracle attacks
-REN federation failure results in loss or theft of all funds
-tBTC failures tend to result in frothy markets, but holders of tBTC are made whole
-REN quorum rotation is new crypto, and relies on honest deletion of old key shares
-tBTC rotates micro-quorums regularly without relying on honest deletion
-tBTC relies on an SPV relay
-REN relies on federation honesty to fill the relay's purpose
-Both are brittle to deep reorgs, so expanding to weaker chains like ZEC is not clearly a good idea
-REN may see total system failure as the result of a deep reorg, as it changes federation incentives significantly
-tBTC may accidentally punish some honest micro-federations as the result of a deep reorg
-REN generally has much more interaction between incentive models, as everything is mixed into the same pot.
-tBTC is a large collection of small incentive models, while REN is a single complex incentive model
A2: To correct some points:
The oracle situation is different with RenVM, because the fee model is what determines the value of REN with respect to the cross-chain asset. That is, the cross-chain asset itself is used to pay the fee, so no external pricing is needed (because you only care about the ratio between REN and the cross-chain asset).
RenVM does rotate quorums regularly, in fact more regularly than in tBTC (although there are micro-quorums, each deposit doesn’t get rotated as far as I know and sticks around for up to 6 months). This rotation involves rotations of the keys too, so it does not rely on honest deletion of key shares.
Federated views of blockchains are easier to extend to support deep re-orgs (just get the nodes to wait for more blocks on that chain). SPV requires longer proofs, which scales more poorly.
Not sure what you mean by “one big pot”, but there are multiple quorums so the failure of one is isolated from the failures of others. For example, if there are 10 shards supporting BTC and one of them fails, then this is equivalent to a sudden 10% fee being applied. Harsh, yes, but not total failure of the whole system (and doesn’t affect other assets).
Would be interesting what RenVM would look like with lots more shards that are smaller. Failure becomes much more isolated and affects the overall network less.
Further, the amount of tBTC you can mint is dependent on people who are long ETH and prefer locking it up in Keep for earning a smallish fee instead of putting it in Compound or leveraging with dydx. tBTC is competing for liquidity while RenVM isn't.
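The shard-isolation point above (“10 shards, one fails, equivalent to a sudden 10% fee”) is just this arithmetic:

```python
def loss_as_fee(num_shards: int, shards_failed: int) -> float:
    """Fraction of deposits forfeited when shards_failed of num_shards equal shards fail."""
    return shards_failed / num_shards

assert loss_as_fee(10, 1) == 0.10   # one of ten shards fails: a sudden 10% "fee"
assert loss_as_fee(100, 1) == 0.01  # smaller, more numerous shards isolate failure further
```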

Q: Do I understand correctly that RenVM (sMPC) can reach up to a 50% security threshold? Can you tell me more?
A: The best you can theoretically do with sMPC is 50-67% of the total value of REN used to bond Darknodes (RenVM will eventually work up to 50%, and won’t go for 67%, because we care about liveliness just as much as safety). As an example, if there’s $1M of REN currently locked up in bonded Darknodes, you could have up to $500K of tokens shifted through RenVM at any one specific moment. You could do more than that in daily volume, but at any one moment this is the limit. Beyond this limit, you can still remain secure, but you cannot assume that players are going to act to maximize their profit. Under this limit, a colluding group of adversaries has no incentive to subvert safety/liveliness properties, because the cost to attack roughly outweighs the gain. Beyond this limit, you need to assume that players are behaving out of commitment to the network (not necessarily a bad assumption, but definitely weaker than the profit-maximizing assumption).
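A minimal sketch of this economic bound, assuming the 50% threshold described above (the function names are mine, not RenVM’s):

```python
def max_value_in_flight(total_ren_bond_usd: float, threshold: float = 0.5) -> float:
    """Ceiling on value passing through RenVM at any one moment."""
    return total_ren_bond_usd * threshold

def attack_is_profitable(value_in_flight: float, total_ren_bond_usd: float) -> bool:
    # Below the ceiling, a colluding quorum gains less from stealing the
    # value in flight than it forfeits in (devalued) REN bonds.
    return value_in_flight > max_value_in_flight(total_ren_bond_usd)

assert max_value_in_flight(1_000_000) == 500_000  # $1M bonded -> $500K in flight
assert not attack_is_profitable(400_000, 1_000_000)
assert attack_is_profitable(600_000, 1_000_000)
```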

Q: Why is using ETH as collateral for RenVM a bad idea?
A: Using ETH as collateral in this kind of system (like having to deposit say 20 ETH for a bond) would not make any sense because the collateral value would then fluctuate independently of what kind of value RenVM is providing. The REN token on the other hand directly correlates with the usage of RenVM which makes bonding with REN much more appropriate. DAI as a bond would not work as well because then you can't limit attackers with enough funds to launch as many darknodes as they want until they can attack the network. REN is limited in supply and therefore makes it harder to get enough of it without the price shooting up (making it much more expensive to attack as they would lose their bonds as well).
A major advantage of Ren's specific usage of sMPC is that security can be regulated economically. All value (that's being interopped at least) passing through RenVM has explicit value. The network can self-regulate to ensure an attack is never worth it.

Q: Given the fee model proposal/ceiling, might there be a liquidity issue with renBTC? More demand than possible supply?
A: I don’t think so. As renBTC is minted, the fees being earned by Darknodes go up, and therefore the value of REN goes up. Imagine that demand is so great that the amount of renBTC is pushing close to 100% of the limit. This is a very loud and clear message to the Darknodes that they’re going to be earning good fees and that demand is high. Almost by definition, this means REN is worth more.
Profits of the Darknodes, and therefore the security of the network, are based solely on use of the network (this is what you want, because then your network does not make or break on things outside the system’s control). In a system like tBTC there are liquidity issues, because you need to convince ETH holders to bond ETH, and this is an external problem. Maybe ETH is pumping irrespective of tBTC use and people begin leaving tBTC to sell their ETH. Or ETH is dumping, and so tBTC nodes are either liquidated or all their profits are eaten by the fact that they have to be long ETH (and tBTC holders cannot get their BTC back in this case). Feels real bad, man.
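As a rough illustration of the feedback loop described above, here is a hypothetical minting-fee curve that rises with utilization of the limit. The shape and parameters are assumptions for illustration only, not the actual proposed fee model:

```python
def mint_fee(utilization: float, base: float = 0.001, slope: float = 0.05) -> float:
    """Hypothetical minting fee as a function of how close renBTC is to its cap."""
    assert 0.0 <= utilization <= 1.0
    return base + slope * utilization ** 2  # gentle at first, steep near the cap

assert mint_fee(0.0) == 0.001                   # near-zero utilization: base fee only
assert mint_fee(0.0) < mint_fee(0.5) < mint_fee(1.0)
assert mint_fee(1.0) > 10 * mint_fee(0.1)       # fees climb sharply as the cap nears
```

Higher fees mean higher Darknode revenue, which (per the answer above) feeds back into the value of bonded REN and thus raises the limit itself.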

Q: I’m still wondering which asset people will choose: tBTC or renBTC? I’m assuming the fact that all tBTC is backed by ETH + BTC might make some people more comfortable with it.
A: Maybe :) personally I’d rather know that my renBTC can always be turned back into BTC, and that my transactions will always go through. I also think there are many BTC holders that would rather not have to “believe in ETH” as an externality just to maximize use of their BTC.

Q: How does the liquidation mechanism work? Can any party, including non-nodes act as liquidators? There needs to be a price feed for liquidation and to determine the minting fee - where does this price feed come from?
A: RenVM does not have a liquidation mechanism.
Q: I don’t understand how the price feeds for minting fees make sense. You are saying that the inputs for the fee curve depend on the amount of fees derived by the system. Isn’t this circular, in a sense?
A: You value REN based on the income you can earn from bonding it and working. The only thing that drives REN’s value is the fact that REN can be bonded to allow work to be done to earn revenue. So any price feed (however you define it) is ultimately rooted in the fees earned.

Q: Who’s doing RenVM’s Security Audit?
A: ChainSecurity | https://chainsecurity.com/

Q: Can you explain RenVM’s proposed fee model?
A: The proposed fee model can be found here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#fees

Q: Can you explain in more detail the difference between “execution” and “powering the P2P network”? I think these functions somehow overlap. Can you define in more detail what “execution” and “powering the P2P network” mean? You also said that at later stages the semi-core might still exist “as a secondary signature on everything (this can mathematically only increase security, because the fully decentralised signature is still needed)”. What power will this secondary signature have?
A: By execution we specifically mean signing things with the secret ECDSA keys. The P2P network is how every node communicates with every other node. The semi-core doesn’t have any “special powers”. If it stays, it would literally just be a second signature required (as opposed to the one signature required right now).
This cannot affect safety, because the first signature is still required. Any attack you wanted to do would still have to succeed against the “normal” part of the network. This can affect liveliness, because the semi-core could decide not to sign. However, the semi-core follows the same rules as normal shards. The signature is tolerant to 1/3rd for both safety/liveliness. So, 1/3rd+ would have to decide to not sign.
Members of the semi-core would be there under governance from the rest of our ecosystem. The idea is that members would be chosen for their external value. We’ve discussed in-depth the idea of L<3. But, if RenVM is used in MakerDAO, Compound, dYdX, Kyber, etc. it would be desirable to capture the value of these ecosystems too, not just the value of REN bonded. The semi-core as a second signature is a way to do this.
Imagine the members were from those projects: those projects would want to help secure renBTC, because it’s used in their ecosystems. There is a very strong incentive for them to behave honestly. To attack RenVM you first have to attack the Darknodes “as per usual” (the current design), and then somehow convince 1/3rd of these projects to act dishonestly and collapse their own ecosystems and reputations. This is a very difficult thing to do.
Worth reminding: the draft for this proposal isn’t finished. It would be great for everyone to give us their thoughts on GitHub when it is proposed, so we can keep a persistent record.
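The safety argument above (both signatures required, each a threshold signature tolerant to 1/3rd) can be sketched as follows; the node counts are illustrative:

```python
def quorum_signs_dishonestly(malicious: int, total: int) -> bool:
    """A threshold signature tolerant to 1/3rd is forged only if >1/3 collude."""
    return malicious * 3 > total

def attack_succeeds(bad_darknodes: int, total_darknodes: int,
                    bad_core: int, total_core: int) -> bool:
    # A fraudulent mint needs BOTH the Darknode shard's signature and the
    # semi-core's signature, so both quorums must be broken independently.
    return (quorum_signs_dishonestly(bad_darknodes, total_darknodes)
            and quorum_signs_dishonestly(bad_core, total_core))

# Compromising the Darknode shard alone is no longer enough:
assert not attack_succeeds(40, 100, 2, 12)  # 40% of shard, 2 of 12 semi-core members
assert attack_succeeds(40, 100, 5, 12)      # both 1/3rd thresholds crossed
```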

Q: Which method or equation is used to calculate REN’s value based on fees? I’m also interested in how REN’s value is calculated to maintain the L < 3 ratio.
A: We haven’t finalized this yet. But, at this stage, the plan is to have a smart contract that is controlled by the Darknodes. We want to wait to see how SubZero and Zero go before committing to a specific formulation, as this will give us a chance to bootstrap the network and field inputs from the Darknodes owners after the earnings they can make have become more apparent.
submitted by RENProtocol to RenProject [link] [comments]

Bitcoin uses the same strategy to compare all transactions on the blockchain, and it can do this very quickly in software. The process of comparing transactions this way is known as Bitcoin’s consensus algorithm.

SHA-256 is a member of the SHA-2 family of cryptographic hash functions designed by the NSA; SHA stands for Secure Hash Algorithm. Cryptographic hash functions are mathematical operations run on digital data; by comparing the computed hash (the output of the algorithm) to a known and expected hash value, a person can determine the data’s integrity. When computers solve these complex math problems on the Bitcoin network, they produce new bitcoin.

In Bitcoin, integrity, block-chaining, and the hashcash cost function all use SHA-256 as the underlying cryptographic hash function. A cryptographic hash function takes input data of practically any size and transforms it, in a way that is effectively impossible to reverse or to predict, into a relatively compact string.

Transactions and private keys: a transaction is a transfer of value between Bitcoin wallets that gets included in the block chain. Bitcoin wallets keep a secret piece of data called a private key or seed, which is used to sign transactions, providing mathematical proof that they have come from the owner of the wallet. The signature also prevents the transaction from being altered by anybody.
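The SHA-256 properties described above (fixed-size output, integrity checking by recomputing and comparing digests) can be checked directly with Python’s standard library:

```python
import hashlib

digest = hashlib.sha256(b"hello").hexdigest()
assert len(digest) == 64  # SHA-256 always outputs 256 bits = 64 hex characters

# Any change to the input yields a completely different, unpredictable digest:
assert hashlib.sha256(b"hello!").hexdigest() != digest

# Recomputing over the same data and comparing the digests verifies integrity:
assert hashlib.sha256(b"hello").hexdigest() == digest
```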
