Bitcoin Cash Review (LATEST 2019): The Complete Guide To ...

Much of the cryptocurrency ecosystem doesn’t realize what amazing strides Bitcoin Cash has made in the last two years.

A few examples are:
1. SLP tokens.
2. On-chain dividend payments to token holders.
3. Schnorr signatures.
4. A 32 MB maximum block size limit, enabling roughly 200 transactions per second.
5. The second most used cryptocurrency in the world for payments.
6. The cryptocurrency accepted by the most physical merchants in the world.
7. One of the most active developer and social communities.
8. Amazing new opcodes like OP_CheckDataSig.
9. On-chain privacy tools that can provide privacy on par with Monero.
10. Listed on nearly every exchange around the world, with many using it as a base pair.

What other developments are you excited about that we should help get the word out about?
submitted by MemoryDealers to btc [link] [comments]

Move over Ethereum: New functionality for Bitcoin Cash makes it a Smart Contract Contender

submitted by RogueSploit to btc [link] [comments]

Upcoming Updates to Bitcoin Consensus

Price and Libra posts are shit boring, so let's focus on a technical topic for a change.
Let me start by presenting a few of the upcoming Bitcoin consensus changes.
(As these are consensus changes and not P2P changes, this list does not include Erlay or Dandelion.)
Let's hope the community strongly supports these upcoming updates!

Schnorr

The sexy new signing algo.
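As a quick refresher (standard elliptic-curve Schnorr, not specific to any one proposal; notation mine): with private key x, public key P = x*G, a fresh nonce r and R = r*G, the signature is (R, s) where

    s = r + H(R || P || m) * x
    verify: s*G == R + H(R || P || m) * P

The equations are linear in x and r, which is exactly what makes aggregation tricks like MuSig (below) possible.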

MuSig

A provably-secure way for a group of n participants to form an aggregate pubkey and signature. Creating their group pubkey does not require their coordination other than getting individual pubkeys from each participant, but creating their signature does require all participants to be online near-simultaneously.
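A sketch of the key-aggregation step from the MuSig paper (notation mine): given individual pubkeys X_1 .. X_n and L = H(X_1 || ... || X_n),

    a_i = H(L || X_i)
    X_agg = a_1*X_1 + a_2*X_2 + ... + a_n*X_n

The per-key coefficients a_i are what prevent rogue-key attacks; producing the signature then takes the interactive round mentioned above, where every participant contributes a share of a single Schnorr signature valid under X_agg.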

Taproot

Hiding a Bitcoin SCRIPT inside a pubkey, letting you sign with the pubkey without revealing the SCRIPT, or reveal the SCRIPT without signing with the pubkey.
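A minimal sketch of the construction (following the original Taproot proposal; notation mine): with internal pubkey P and script S, the on-chain pubkey is

    Q = P + H(P || S) * G

A cooperative spend signs with the tweaked private key p + H(P || S) and reveals nothing about S; an uncooperative spend reveals P and S, and verifiers check the equation above before executing S.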

MAST

Encode each possible branch of a Bitcoin contract separately, and only require revelation of the exact branch taken, without revealing any of the other branches. One of the Taproot script versions will be used to denote a MAST construction. If the contract has only one branch then MAST does not add more overhead.
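A toy Python sketch of the branch-hiding idea (the scripts and two-leaf tree shape are made up for illustration):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Two possible spending branches of a toy contract.
    scripts = [b"branch A: 2-of-2 multisig", b"branch B: refund after timeout"]
    leaf_a, leaf_b = h(scripts[0]), h(scripts[1])
    root = h(leaf_a + leaf_b)  # only this commitment goes on-chain

    # Spending via branch A reveals that script plus the sibling hash;
    # branch B itself stays hidden.
    revealed_script, sibling = scripts[0], leaf_b
    assert h(h(revealed_script) + sibling) == root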

submitted by almkglor to Bitcoin [link] [comments]

IOTA, and When to Expect the COO to be Removed

Hello All,
This post is meant to address the elephant in the room, and the #1 criticism that IOTA gets: the existence of the Coordinator node.
The Coordinator, or COO for short, is a special piece of software operated by the IOTA Foundation. Its function is to drop "milestone" transactions onto the Tangle that help in the ordering of transactions.
As this wonderful post on reddit highlights (https://www.reddit.com/Iota/comments/7c3qu8/coordinator_explained/)
When you want to know if a transaction is verified, you find the newest milestone and you see if it indirectly verifies your transaction (i.e., it verifies your transaction, or it verifies a transaction that verifies your transaction, or it verifies a transaction that verifies a transaction that verifies your transaction, etc.). The reason that the milestones exist is because if you just picked any random transaction, there's the possibility that the node you're connected to is malicious and is trying to trick you into verifying its transactions. The people who operate nodes can't fake the signatures on milestones, so you know you can trust the milestones to be legit.
The COO protects the network, that is great right?
No, it is not.
The Coordinator represents a centralized entity that draws the ire of the cryptocurrency community in general, and it is the reason behind a lot of FUD.
Here is where things get dicey. If you ask the IOTA Foundation, the last official response I heard was
We are running supercomputer simulations with the University of St. Petersburg to determine when that could be a possibility.
This answer didn't satisfy me, so I've spent the last few weeks thinking about the problem. I think I can explain the challenges that the IOTA Foundation is up against, what they expect to model with the supercomputer simulations, and ultimately what my intuition (backed up by some back-of-the-napkin mathematics) tells me the outcome will be.
In order to understand the bounds of the problem, we first need to understand what our measuring stick is.
Our measuring stick provides measurements in hashes per second. A hash is a mathematical operation that blockchain (and DAG) based applications require before accepting your transaction. This is generally thought of as an anti-spam measure used to protect a blockchain network.
IOTA and Bitcoin share some things in common, and one of those things is that they both require Proof of Work in order to interact with the blockchain.
In IOTA, a single hash is completed for each transaction that you submit. You complete this PoW at the time of submitting your transaction, and you never revisit it again.
In Bitcoin, hashes are guessed at by millions of computers (miners) competing to be the first to solve the correct hash and ultimately mint a new block.
Because of the competitive nature of the bitcoin mining mechanism, the bitcoin hashrate is a sustained hashrate, while the IOTA hashrate is "bursty", going through peaks and valleys as new transactions are submitted.
Essentially, IOTA's performance is a function of the current throughput of the network, while Bitcoin's performance is a delicate balance between the collective miners and the hashing difficulty, with the goal of pegging the block time to 10 minutes.
With all that said, I hope it is clear that we can come to the following conclusion.
The amount of CPU time required to compute 1 Bitcoin hash is much, much greater than the amount of CPU time required to compute 1 IOTA hash.
T(BtcHash) >> T(IotaHash)
After all, low powered IOT devices are supposed to be able to execute the IOTA hashing function in order to submit their own transactions.
A "hash" has to be looked at as an amount of work that needs to be completed. A Bitcoin hash takes a lot more work to solve than an IOTA hash.
When we want to measure IOTA, we usually look at "Transactions Per Second". Since each Transaction requires a single Hash to be completed, we can translate this measurement into "Hashes Per Second" that the entire network supports.
IOTA has seen Transactions Per Second on the order of magnitude of <100. That means, that at current adoption levels the IOTA network is supported and secured by 100 IOTA hashes per second (on a very good day).
Bitcoin hashes are much more difficult to solve. The Bitcoin network is secured by 1 Bitcoin hash every 10 minutes (and it adjusts its difficulty over time to remain pegged at 10 minutes). (More details on Bitcoin mining: https://www.coindesk.com/information/how-bitcoin-mining-works/)
Without the COO's protection, IOTA would be a juicy target to destroy. With only 100 IOTA hashes per second securing the network, an individual would only need to maintain a sustained 34 hashes per second in order to completely take over the network.
Personally, my relatively moderate gaming PC takes about 60 seconds to solve IOTA Proof of Work before my transaction will be submitted to the Tangle. This is not a beastly machine, nor does it utilize specialized hardware to solve my Proof of Work. This gaming PC cost about $1000 to build, and it provides me 0.0166 hashes per second.
Using this figure, we can derive that consumer electronics provide hashing efficiency of roughly $60,000 USD per hash per second on the IOTA network.
Given that the Tx/second of IOTA is around 100 on a good day, and it requires $60,000 USD to acquire 1 hash/second of computing power, we would need 34 * $60,000 to attack the IOTA network.
The total amount of money required to 34%-attack the IOTA project is $2,040,000.
This is a very small number. Not only that, but the hash rate required to conduct such an attack already exists, and it is likely that this attack has already been attempted.
The simple truth is that, due to the economic incentive of mining, the hash rate required to attack IOTA is already centralized, and its owners are foaming at the mouth to attack IOTA. This is why the Coordinator exists, and why it will not be going anywhere anytime soon.
The most important thing that needs to occur to remove the COO is that the native measurement of transactions per second (which ultimately also measures the hashes per second) needs to go up drastically, by orders of magnitude.
If the IOTA transaction volume were to increase to 1000 transactions per second, then it would require 340 transactions per second from a malicious actor to compromise the network. In order to complete 340 transactions per second, the attacker would now need the economic power of 340 * $60,000 to 34%-attack the IOTA network.
In this hypothetical scenario, the cost of attacking the IOTA network is $20,400,000. This number is still pretty small, but at least you can see the pattern. IOTA will likely need to hit many-thousand transactions per second before it can be considered secure.
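Putting the post's own figures into a few lines of Python (every constant here is the author's back-of-the-napkin assumption, not a measurement):

    # $60,000 buys roughly 1 hash/second of IOTA PoW on consumer hardware.
    USD_PER_HASH_PER_SEC = 60_000

    def attack_cost(network_tps, attack_fraction=0.34):
        """Cost to sustain attack_fraction of the network's hash rate
        (1 transaction == 1 hash in this model)."""
        return network_tps * attack_fraction * USD_PER_HASH_PER_SEC

    print(attack_cost(100))    # ~$2,040,000 at today's ~100 tps
    print(attack_cost(1000))   # ~$20,400,000 at a hypothetical 1,000 tps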
What we have to keep in mind here is that IOTA has an ace up its sleeve, and that ace is JINN Labs and the ternary processor that they are working on.
Ultimately, JINN is the end-game for the IOTA project that will make the removal of the COO a reality.
In order to understand what JINN is, we need to understand a little bit about computer architecture and the nature of computational instruction in general.
A "processor" is a piece of hardware that performs micro calculations. These micro calculations are usually very simple, such as adding two numbers, subtracting two numbers, incrementing, decrementing, and the like. The operation that is completed (addition, subtraction) is called the opcode while the numbers being operated on are called the operands.
Traditional processors, like the ones you find in my "regular gaming PC" are binary processors where both the opcode and operands are expected to be binary numbers (or a collection of 0s and 1s).
The JINN processor provides the same functionality, namely a hardware implementation of micro instructions. However, it expects the opcodes and operands to be ternary numbers (or a collection of 0s, 1s, and 2s).
I won't get into the computational data density of base 2 vs. base 3 processors, nor will I get into the energy efficiency of those processors. What I will be getting into, however, is how certain tasks are simpler to solve in certain number systems.
Depending on what operations are being executed upon the operands, performing the calculation in a different base will actually reduce the amount of steps required, and thus the execution time of the calculation. For an example, see how base 12 has been argued to be superior to base 10 (https://io9.gizmodo.com/5977095/why-we-should-switch-to-a-base-12-counting-system)
I want to be clear here. I am not saying that any one number system is superior to any other number system for all types of operations. I am simply saying that there exist certain types of calculations that are easier to perform in base 2 than they are in base 10. Likewise, there are calculations that are vastly simpler in base 3 than they are in base 2.
The IOTA PoW, and the algorithm required to solve it, is one of these calculations. The IOTA PoW was designed to be ternary in nature, and I suggest that this is the reason right here. The data density and electricity savings that JINN provides are great, but the real driver of the base 3 design decision is that they can now manufacture hardware that is superior at solving their own PoW calculations.
Binary emulation is when a binary processor is asked to perform ternary operations. A binary processor is completely able to solve ternary hashes, but in order to do so it will need to emulate the ternary micro instructions at a higher level in the application stack, away from the hardware.
If you had access to a base 3 processor and needed to perform a base 3 addition operation, you could easily ask your processor to natively perform that calculation.
If all you have access to is a base 2 processor, you would need to emulate a base 3 number system in software. This ultimately results in a higher number of instructions passing through your processor, more electricity being utilized, and more time to complete.
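A toy illustration of that overhead (illustrative Python, not IOTA's actual PoW code): an addition that native ternary hardware could perform as a single instruction becomes a digit-by-digit, carry-propagating loop when emulated in software:

    def ternary_add(a, b):
        """Add two numbers given as little-endian base-3 digit lists."""
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % 3)
            carry = s // 3
        if carry:
            out.append(carry)
        return out

    # 5 (= [2,1] in base 3) + 7 (= [1,2]) = 12 (= [0,1,1])
    assert ternary_add([2, 1], [1, 2]) == [0, 1, 1]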
Finally, let's review these figures.
It costs roughly $60k to acquire 1 hash per second with base 2 consumer electronics. It costs roughly $2M to acquire enough base 2 hash rate to 34%-attack the IOTA network.
JINN will be specifically manufactured hardware that solves base 3 hashes natively. What this likely means is that $1 spent on JINN will be much more effective at acquiring base 3 hash rate than $1 spent on base 2 hash rate.
Finally, with Bitcoin and traditional blockchain applications, there lies an economic incentive to amass mining hardware.
It starts out with a miner earning income from his mining rig. He then reinvests those profits in additional hardware to increase his income.
Eventually, this spirals into an arms race where the players that are left in the game have increasingly available resources, up until the point that there are only a handful of players left.
This economic incentive, creates a mass centralization of computing resources capable of being misused in a coordinated effort to attack a cryptocurrency.
IOTA aims to break this economic incentive, and the centralization that it causes. However, over the short term, the fact that the centralization of such resources does exist is an existential peril to IOTA, and the COO is an inconvenient truth that we all have to live with.
Due to all the above, I think we can come to the following conclusions:
  1. IOTA will not be able to remove the COO until the transactions per second (and ultimately hashrate) increase by orders of magnitude.
  2. The performance of JINN processors, and their advantage of computing natively on ternary operands and opcodes, will be important for the $USD per hash rate ratio on the IOTA network.
  3. Existing mining hardware is at a fundamental disadvantage in computing base 3 hashes when compared to a JINN processor designed specifically for that function.
  4. Attrition of centralized base 2 hash power will occur if the practice of mining, and the income related to it, can be defeated. Then the incentive to amass a huge amount of centralized computing power will be reduced.
  5. JINN processors, and their adoption in consumer electronics (like cell phones and cars), hold the key to being able to provide enough "bursty" hash rate to defend the network from 34% attacks without the help of the COO.
  6. What are the supercomputer simulations? I think they are simulating a few things. They are modeling tip selection algorithms to reduce the amount of unverified transactions; however, I think they may also be performing some simulations regarding the above calculations. JINN processors have not been released yet, so the performance benchmarks, manufacturing costs, retail costs, and adoption rates are all variables that I cannot account for. The IF probably has much better insight into all of those figures, which will allow them to better understand when the techno-economic environment would be conducive to disabling the COO.
  7. The COO will likely be decentralized before it is removed. With all this taken into account, the date that the COO will be removed is years off if I were forced to guess. This means that decentralizing the COO itself would be a sufficient stop-gap for the centralized COO that we see today.
submitted by localhost87 to Iota [link] [comments]

What's the f*****ng benefit of the reactivated OP_Codes?

Nobody has explained what we can do with the soon-to-be-reactivated OP_Codes for Bitcoin Cash, and nobody has explained why we need them. It's a fact that there are risks associated with them, and there is no sufficient testing of these risks by independent developers, nor is there a sufficient explanation of why they carry no risk. BitcoinABC developers, explain yourselves, please.
Edit: Instead of calling me a troll, please answer the question. If not, ask someone else.
Edit Edit: tomtomtom7 provided a refreshing answer to the question:
https://www.reddit.com/btc/comments/7z3ly4/to_the_people_who_thing_we_urgently_need_to_add/dulkmnf/
The OP_Codes were disabled because bugs were found, and worry existed that more bugs could exist.
They are now being re-enabled with these bugs fixed, with sufficient test cases, and they will be put through thorough review.
These are missing pieces in the language for which various use cases have been proposed over the years.
The reason to include these is that all developers from various implementations have agreed that this is a good idea. No objections were raised.
Note that this does not mean that all these OP_Codes will make it into the next hardfork. This is obviously uncertain while testing and reviewing are still being done.
This is not yet the case for OP_GROUP. Some objections and questions have been raised, which take time to discuss and time to come to agreement on. IMO this is a very healthy process.
Another good comment is here
https://www.reddit.com/btc/comments/7z49at/whats_the_fng_benefit_of_the_reactivated_op_codes/dullcek/
One precise thing: allowing more bitwise logical operators can (and will) yield smaller scripts; this saves data on the blockchain, as the hex code gets smaller.
Here is a detailed answer. I have not gone through it to check whether it is satisfying, but at least it is a very good start. Thank you, silverjustice.
But further, if you want specific advantages for some of these, then I recommend you check out the below from the scaling Bitcoin conference:
These opcodes are very useful. For example, with CAT you can do tree signatures: even if you have a very complicated multisig design, using CAT you could reduce that size to log(n). It would be much more compact. Or with XOR we could do some kind of deterministic random number generator by combining secret values from different parties so that nobody could cheat. They could combine and generate a new random number. If people think-- ... we could use LEFT to make a weaker hash. These opcodes were re-enabled in the sidechain Elements project. It's a sidechain from Bitcoin Core. We can reintroduce these functions to bitcoin.
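A minimal Python sketch of that XOR randomness idea as a commit-reveal scheme (the off-chain framing is mine; on Bitcoin the verification would live in script once CAT/XOR are available):

    import hashlib, secrets

    # Each party first publishes sha256(secret); nobody can change
    # their secret after seeing the others'.
    party_secrets = [secrets.token_bytes(32) for _ in range(3)]
    commitments = [hashlib.sha256(s).digest() for s in party_secrets]

    # After all commitments are exchanged, parties reveal and verify:
    for s, c in zip(party_secrets, commitments):
        assert hashlib.sha256(s).digest() == c

    # XOR the secrets: the result is uniformly random as long as
    # at least one party was honest.
    combined = bytes(x ^ y ^ z for x, y, z in zip(*party_secrets))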
The other problem is the ... numeric operations which were disabled by Satoshi. There's another problem, which is that the range of values accepted by script is limited and confusing, because the CScript.. is processed at ..... bit integers internally, but for these opcodes it's only 32 bits at most. So it's quite confusing. The other problem is that we need values up to about 2^51 to encode, calculate, or manipulate these numbers, so we need at least 52 bits, but right now it is only 32 bits. So the proposal is to expand the valid input range to 7 bytes, which would allow 56 bits. And it limits the maximum size to 7 bytes, so we could have the same size for inputs and outputs. For these operations, we could re-enable them within these safe limits. It would be safe for us to have these functions again.
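To make the bit-width argument concrete (my arithmetic, not from the talk): the money supply tops out at 21 million BTC, and

    21,000,000 BTC * 10^8 satoshi/BTC = 2.1 * 10^15 ~ 2^51

so scripts that manipulate amounts need at least 51-52 bits, while the current numeric opcodes only handle 32-bit values; 7-byte (56-bit) inputs cover the full range with room to spare.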
The other problem is that we currently cannot commit to additional scripts. In the original design of bitcoin, we could have script operations inside of the signature. But the problem is that the signature is not covered by the signature itself, so any script in the scriptSig is modifiable by any third party in the network. For example, if we tried to do a CHECKSIG operation in the signature, people could simply replace it with an OP_0 and invalidate the transaction. This is a bypass of the.. signature check in the scriptSig. But actually this function is really useful; for example, we can do... delegation: people could add additional scripts to a new UTXO without first spending it. So people could do something like letting their son spend their coin within a year if it is not first spent otherwise. Also, people talk about replay protection. So we could have some other new opcode, like pushing the blockhash to the stack; with this function we could have replay protection to make sure the transaction is valid only in a specified blockchain.
So the proposal is that in the future, CHECKSIG should have the ability to sign additional scripts and to execute these scripts. And finally, the other problem is that script has limited access to different parts of the transaction. There is only one type of operation that is allowed to investigate different parts of the transaction, which is CHECKSIG and CHECKMULTISIG. But it is very limited. There are sighash limitations here... there are only 6 types of sighash. The advantage of doing it this way is that it's very compact and uses only one byte to indicate which components to sign. But the problem is that it's inflexible. The meaning of a sighash is set at the beginning and you can't change it; you need a new witness version to have another checksig. And the other problem is that sighash can be complex and people might make mistakes. Satoshi made such mistakes in the sighash design, such as the well-known bug in validation time and also the SIGHASH_SINGLE bug. It's not easy to prevent.
The proposal is that we might have the next generation of sighash (sighashv2) to expand to two bytes, allow it to cover different parts of the transaction and allow people to choose which components they would like to sign. This would allow more flexibility and hopefully not overly complicated. But still this is probably not enough for more flexible design.
Another proposal is OP_PUSHTXDATA, which pushes the value of different components of a transaction to the stack. It's easy to implement; for example, we could just push the scriptpubkey of the second output to the stack. So it is actually easier to implement. We could do something more than just equality checks: with sighash, we can only check whether something is equal to a specified value. But if we could push a value, like the value of an output, to the stack, then we could use other operations like greater-than or less-than, and then we could do something like checking that the value of output x is at least y bitcoin, where y is a fixed value.
There are some other useful functions like MAST which would allow for more compact scripts by hiding the other unexecuted branches. There's also aggregation that would allow n-of-n multisig to be reduced to a single signature and so on. In the elements project, they implemented CHECKSIGFROMSTACK where they don't check the transaction structure but instead they verify a message on the stack. So it could be some message like not bitcoin maybe, perhaps cross-chain swap, or another bitcoin UTXO. And also we might have some elliptic curve point addition and operations which are also useful in lightning network design.
Here are some related works in progress. If you are interested in this topic, I would like to encourage you to join our discussions because it's a very active topic: jl2012's BIP114 MAST, maaku's MBV, luke-jr's version-1 witness program, Simplicity, etc.
So you have your script template, the amount value, and there is a block impactor, because we have the sha chain which allows you to have the hashes.. we can have that error rate constant because you need the HTLC hashes to properly revoke the prior states, and if you can't do that then you can't construct the redeem script. Right now it needs a signature for every state, you need all the HTLCs, it needs the network verification state, and there's another cool thing you can do, which is like trap door verification: you can include it in the transaction itself, and there can be a clause where there is some margin for it.. which makes it powerful, and then you can make it more private with these constructs. We only have a few minutes left, we can cover this.
One further thing is that in the transformation we have a privacy issue, because we need to keep going forward, we need to have the private state; there's a history of this in the ages past, and the current one uses revocations, which was one of the cool things about lightning. We used to have Decker signatures: we had a sequence value of like 30 days, we did an update, we had to switch sides, then we make it 29, then 27, etc. You can only broadcast the most recent state, because otherwise the other party can transact the other transaction. If you start with 30 days then you can only do about 30 bidirectional switches. Then there were cdecker's payment channels where you have a root tree, and every time you need to-- you had two payment channels, you had to rebalance them, and then on your part of the channel you can reset the channel state. You can do 30 this way, you have another tree, you can do it that way, and then there's a new version of it with indefinite lifetime... by keeping the transaction in CSV. The drawback of that approach is that you have a large validation tree; in the worst case you have 8 or 10 on the tree, and then you need the prior state, and then you do the 12 per day, and every time you make a state you have to revoke the preimage from the prior state. This is cool because if they ever broadcast the entire state, each one has the clause so that you can draw the entire money in the event of a violation. There are some limitations for doing more complex verifications, and you have this log(n) state that you have to deal with.
We're going to do the key power on the stack to limit key verifications on this main contract. This is all composable. You can do discreet log contracts. You can now check signatures on arbitrary messages. You can sign a message, and then we can enforce structure on the messages themselves. Right now you need to have sequence numbers. So each state we are going to increment the sequence numbers. So you give me a sequence number on that state. On the outputs we have a commitment to the sequence number and the value r. So people on chain will know how many places we did in that itself. The cool part about this is that because we have a seq number, I have the one if it's high enough. Then I am opening that commitment to say this is state 5, and I present to you a new signed commitment and open that as well; that's in a validation state. The cool thing is that you only need one of those. So we have some auxiliary state, and each time I have a new state I can drop the old state. I have a signed commitment to revoke the prior state. This is a big deal because the state is much smaller. Currently we require you to use a state machine on state 2, and it also has implications for verifications and watchtowers.
So on lightning, there's this technique itself-- it's the timelock CSV value, and if you can't react within that value then you can't go to court and enforce judgement on this attacker. So the watchtower is a requirement; you delegate the state watching to the watchtower. They know which channels you're watching. You send some initial points, like a script template. For every one you send the signature and the verification state. They can use the verification state that collapses into a log(n) tree; you can basically use state where you send half the txids, and you can decrypt this in... some time.
submitted by Der_Bergmann to btc [link] [comments]

Bitcoin Cash trading is being temporarily suspended on the subreddit until further notice because of a brewing hard fork war on November 15th

You might recall that one time Bitcoin trading was temporarily suspended around August last year because two sides of development (Bitcoin Core, and a bunch of people that disagreed with Bitcoin Core) were at an impasse in regards to scaling, so those that did not agree with Core's vision forked off and created Bitcoin Cash.
Since Cash's creation, it's gone through two planned hard forks to conduct network upgrades like per-block difficulty calculation. This third fork is a bit different, as there are two competing proposals. The first, by the Bitcoin ABC development group, introduces a new opcode that validates messages sent through the blockchain from outside sources. The second, by the Bitcoin "Satoshi's Vision" group (AKA Bitcoin SV), opposes this and instead introduces other opcodes and a block size increase all the way to 128 MB (up from 32 MB). Both proposals are incompatible with the other, and a chain split of some sort will occur.
Of particular concern amongst the Bitcoin Cash community is the leadership of Bitcoin SV, whose prominent figurehead (Craig Wright) has pledged to do various things with Bitcoin SV that many argue are exactly what the Bitcoin & cryptocurrency concepts stand against, like scooping dormant and "lost" coins from "inactive" wallets for redistribution.
As it stands right now, per coin.dance (which keeps track of all sorts of things about the Bitcoin and Bitcoin Cash ecosystems), a majority of companies are either prepared to support or have explicitly stated support for Bitcoin ABC. But a bunch of mining power right now is backing Bitcoin SV.
While we'd normally let the two warring factions play it out, Coinbase has stated its intention to freeze Bitcoin Cash activity until a point where they deem that a network is healthy enough to resume trading. BitPay (which Valve used as their Bitcoin payment processor before they dropped Bitcoin payments last December) is recommending its users to cut off BCH activity at least two hours before the fork.

As a hard fork war is imminent and because Coinbase is freezing its BCH activity in anticipation of a chain split, /GlobalOffensiveTrade will be suspending all Bitcoin Cash trading tonight at 6 PM CST (0000 UTC) until further notice.

Posts involving Bitcoin Cash trades will be removed by AutoModerator after this time. You should also be aware that exchanges like Coinbase may experience increased load from people taking action and offloading coins into wallets they directly control in advance of a potential fork, which has the potential to result in processing delays.
If you use Bitcoin Cash, chances are that you may have already heard about this upcoming war and know what to do and have already plotted your course of action. But for those that haven't, we recommend avoiding non-essential Bitcoin Cash trades through the next few days. If you wish to get out of Bitcoin Cash by way of liquidation to a fiat currency or an alternative cryptocurrency of choice, or if you want to remain with Bitcoin Cash and prepare to place your vote on what the wider ecosystem will back, now is the time to do so.
"OG" Bitcoin is unaffected and will not be suspended, so you will be able to continue to post BTC trades without interruption (as long as your trade doesn't also list BCH). And as a reminder, our bait-and-switch policies involving Bitcoin are as follows: if you do not explicitly specify which variant of Bitcoin you want, it will be assumed that you want original Bitcoin and not Bitcoin Cash (including if you say just BTC). If you want Bitcoin Cash instead of original Bitcoin, you must explicitly say so, either by stating "Bitcoin Cash", "BCC", or "BCH".
As the hard fork war progresses, the health of the Bitcoin Cash network(s) will be evaluated periodically. Chances are that if Coinbase comes to a decision, we will make ours as well and resume when that happens.
submitted by wickedplayer494 to GlobalOffensiveTrade [link] [comments]

A Bitcoin Cash geek contest lasting 3 days (6/1-6/3) has ended in a satisfactory way. Here are the top 3 teams' works for BCH applications.

Top 1: mempool team
Top 2: ViaBTC team & GON
Top 3: 西安哈希战队, 比特币运输船, 神雕侠侣
mempool: a URI-payment browser plug-in and server system based on Bitcoin Cash. By extending the HTTP/1.1 status code 402 (Payment Required) protocol, the browser plug-in can automatically or manually pay for a URI when the server requires payment.
ViaBTC: a BCH-based authenticator designed to enhance the software experience, now deployed on iOS and Android. This authenticator can permanently store data, synchronize across multiple devices, and recover data quickly, and it does not require a server. It is tamper-resistant. Its working principle is acquiring the GA information, encrypting the private key, sharding, adding markers, and broadcasting the generated transactions.
GON: TokenDice, a fully on-chain, decentralized 1v1 wagering application based on BCH. Its implementation relies on the opcode OP_DATASIGNVERIFY; the RPCs signmessage, createrawtransaction, buildscript, and signtx; and lock and unlock scripts (contract code): the player enters a random value, takes it modulo n, and calculates the outcome.
神雕侠侣: IFPassword password management software (ifpass.cash) is designed to allow more people to benefit from the blockchain. The application stems from the fact that in real life there are many passwords, weak password security levels, duplicated passwords across platforms, passwords people cannot remember, and a lack of trust in third parties such as iCloud. The application enables users to manage passwords (for traditional websites), manage private keys (for new websites), and log in with one click (via browser plug-ins), with permanent on-chain storage, import at any time, and use through browsers and light applets.
比特币运输船: Shell Books was developed by a team of sophomore students from the Computer College of Chongqing University of Posts and Telecommunications. This is a product based on blockchain technology developed for target groups aged 15-29. You can record your own fading diary, read interesting diaries of others, write diaries with interesting people, and create memories over time.
西安哈希战队: a BCH investment ecosystem that aims to use BCH to invest in the world, including traditional financial markets. The gameplay is simple and direct: information is put on-chain and cannot be changed, users can go long or short, buy in with BCH, settle against a complete chain of contracts, and have their order information recorded on-chain.
If you'd like to know more of the contest's details, you can search "bchgeek" on the Internet, though the language is Chinese. It's great to know that people from all over the world are making efforts for BCH.
submitted by didang to btc [link] [comments]

Had a quick (1.5 hours) look at Counterparty github code (master branch)

I spent 1.5 hours looking at their code on GitHub (master branch), and also read a bit of their documentation. Here is what I found:
So all in all, I do not see why there was such a fuss about Counterparty today - nothing major happened there. And, as mentioned before, having a 10-minute block time for smart contracts could be quite inconvenient and has the potential of reducing the applicability.
submitted by ledgerwatch to ethtrader [link] [comments]

The second time to start again, my blockchain startup road

My PPLive/PPTV startup experience
I am a person who is very eager to pursue technology and geek spirit. I always want to do things that change the world.
In 2004, when I was still in college, suddenly one day Bill approached me and said that there was no way to watch NBA basketball games smoothly via the Internet on campus. Could we build real-time live video software using P2P transmission technology together? This idea was similar to the then-popular BitTorrent download software. Looking at the idea, I could not only solve my own need to watch the NBA; most importantly, I could not resist the temptation of the technical challenge, so I quickly agreed. In this way, we started our business together, and the software soon became very popular in China. PPLive was later rebranded as PPTV. Bill became the Founder and CEO of PPTV, and I became the Chief Architect of PPTV and began to focus on P2P transmission technology.
The first thing PPLive achieved was real-time streaming via peer-to-peer transmission technology. What is real-time streaming P2P transmission technology? It is uploading and downloading data simultaneously, which makes live streaming possible. Simply put: I am for everyone, and everyone is for me.
In the beginning, I led the team to build a live broadcast platform with P2P. Because the characteristic of live broadcast is that many people share the same data, we could achieve a bandwidth saving ratio of more than 99.9% when many users watched the most popular content at the same time. With only 10 Mb of release bandwidth, it could support up to 10 million users watching TV simultaneously. Even so, we achieved excellent QoS (Quality of Service): the average time to start playing was 1.2 seconds; interruptions averaged 1.6 seconds per half hour; and the delay from the broadcast source was at most 90 seconds.
After the P2P live broadcast was completed and became sophisticated, we began to develop P2P VOD (video on demand), because the demand for VOD was getting bigger and bigger. P2P VOD and P2P live broadcast differ from each other: VOD needs hard disk storage and a memory cache, which is more difficult than live broadcast. Besides, there is a difference between popular and unpopular content in a VOD system, and the P2P effect for unpopular content is not as good as for popular content. However, our P2P VOD also achieved excellent QoS: a 90% bandwidth saving ratio; an average time to start playing of 1.5 seconds; interruptions averaging 2.2 seconds per half hour; and an average seek time of 0.9 seconds.
Subsequently, I led the team to port the P2P kernel to embedded systems. We streamlined the P2P transmission protocol and made a number of performance optimizations. The storage in embedded devices is not a regular mechanical hard disk, so we added a good number of protection measures, which enabled the P2P core to run on embedded devices. We then quickly launched an iOS client, an Android mobile phone client, and finally an Android set-top box client, all supporting P2P.
Soon after, amid the growing popularity of smartphones, video creation became easier than ever. Smartphone users became more enthusiastic about producing their own content, which helped video content increase dramatically. I led the team to start a cloud broadcast service, giving each user a certain amount of free storage space. On this platform users could freely upload, download, and share video content; in fact, it was very similar to network drives such as Google Drive. The difficulty of a network drive with sharing is similar to that of VOD, but the number of videos is far greater than in VOD. The content head effect is more obvious, and a large amount of content sits in the tail. We put a great amount of work into optimizing this particular area.
In 2013, PPTV was sold to a listed Chinese company, Suning Yunshang, for US$420 million. A year later, I waved goodbye to video and P2P technology, which I had been working on for over ten years. To pursue a bigger dream, I started down a new startup path.
My startup experience at JDO (https://www.jidouauto.com/en)
After the success of the PPTV startup, I kept pursuing new technology and the geek spirit, so I did not choose the video and P2P fields again; instead, I devoted myself to intelligent hardware and artificial intelligence. In 2014, I founded JDO with Cloud Wong, a company which produces cars that implement AI technology and connect to the Internet. In the past four years, I served as CTO of JDO and accumulated in-depth technical experience in intelligent hardware, embedded software, artificial intelligence, and machine learning.
The automotive industry is a very closed industry. We needed a long business negotiation cycle for everything we did, and the product development cycle was limited by the iteration cycle of cars. In this environment of only catering to businesses, I could not entirely follow my own ideas and innovate. JDO was doing very well in the automotive field, receiving orders from many internationally renowned car manufacturers. However, in order to pursue a bigger dream and to change the world, I decided to leave JDO.
The thought of sharing storage
Because I was doing artificial intelligence every day, I had to do a lot of neural network training, and I established cooperation and communication with many other AI companies. Neural network training requires high-end NVIDIA GPUs; such graphics cards are costly, and most of the time they sit idle. I pondered: is there a chance to build a shared GPU platform which encourages everyone to rent out unused GPU resources? Meaning: when I need it, I pay to use other people's unused GPU resources to compute; when others need it, they pay to use my spare GPU resources. For startups, you then don't need to buy so many GPUs at once: costs are reduced, and companies that use less can make money.
I took the idea of a shared GPU platform, built a simple MVP (Minimum Viable Product) to verify my thoughts, and let several cooperating AI companies use it. However, I found that they did not use it. After I looked into the situation, I realized that although AI companies like the lower price of a shared GPU platform, they are more concerned about the security of their data than about price. An AI company's data collection often requires a high cost to gather enough data. If they put this data on shared computers, they are afraid of having it stolen by competitors. More importantly, a neural network cannot easily be split across multiple computers with the data kept encrypted: the data must be decrypted before entering the neural network, which makes it easy to steal. Technically, there is currently no good way to decentralize a neural network model and no good way to feed encrypted data directly into neural network calculations. Due to this obstacle, plus other reasons, I finally gave up on this idea.
At this time, P2P storage suddenly came to my mind. I had worked on P2P video for 10 years. If you do not share the GPU, but only the hard disk and bandwidth, could this be feasible? After careful consideration: as we enter the Internet of Things era, a large number of households have idle computers that are not fully used, a large number of households purchase bandwidth on a monthly basis but do not fully use it, and a large number of IoT devices have storage capabilities. If these vast idle resources can be fully utilized, it will be an excellent cause for human society.
More importantly, what we did at PPTV relied on users sharing content voluntarily, without any incentives. What is needed now is to encourage users to share by giving them incentives. That is, some users use the service and pay fees, and other users provide services and earn income, thus establishing a shared storage network, just like Uber and Airbnb: passengers and guests use the service and pay fees, while drivers and hosts provide services and earn income.
Therefore, I came up with this shared storage idea. The main problems it can solve are:
  1. The cost is lower, because for miners this is the reuse of idle resources; in terms of cost structure, it is nearly zero cost.
  2. It is faster. P2P can speed up transmission, and PPTV's success is the best proof. With shared storage, because storage nodes are everywhere on the network and data is stored on the nearest nodes, transmission can also be the fastest.
  3. Decentralized storage is more private, because big companies do not necessarily guarantee privacy: almost all big companies have had data leakage incidents, and today's big companies are very dependent on big data and might use user data for AI training. Decentralized storage double-encrypts data with the user's key and the developer's key, then slices it into multiple segments, so that different segments are stored on different computers (a minimal sketch follows this list). Moreover, the sharing platform itself does not store data, which completely prevents leakage risk: only the user's key can open the data, and the sharing operator has no way to do so. Also, for hackers, hacking into one computer is useless, because the data is stored across many different computers. This significantly increases the difficulty of stealing data.
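A minimal sketch of that double-encrypt-then-shard step, assuming the Python cryptography package (keys, segment count, and helper names are illustrative, not PPIO's actual API):

    from cryptography.fernet import Fernet

    user_key, developer_key = Fernet.generate_key(), Fernet.generate_key()

    def encrypt_and_shard(data: bytes, n_segments: int = 4):
        # Encrypt twice (user key, then developer key), then slice the
        # ciphertext so each storage node only ever sees one fragment.
        ciphertext = Fernet(developer_key).encrypt(Fernet(user_key).encrypt(data))
        size = -(-len(ciphertext) // n_segments)  # ceiling division
        return [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]

    segments = encrypt_and_shard(b"my private file")
    # Each segment goes to a different node; neither a single node nor the
    # platform can read the data without both keys.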
Although shared storage can solve these problems, it also brings many challenges.
  1. How to guarantee the stability of miner nodes? Some nodes will come online and others will go offline at any time, just like PPTV's users before. Building stable products on top of unstable nodes is a challenge, but it is also the thing that P2P itself must do.
  2. The quality of miner nodes: for example, the hard disks of some miner nodes may be old and easy to break; the bandwidth quality of some miner nodes may be terrible; and in some places the power is unstable, with frequent outages.
  3. Cheating by malicious miner nodes: since there are gains to be had, some people will find ways to cheat. It is a weakness of human nature, and it is inevitable.
Although these problems are challenging, for me, having done 10 years of international P2P projects, I believe they can be solved very well in the end, because most of them were encountered before at PPTV.
However, if it's just shared storage, even if you do all of the above, why would anyone believe that your income distribution is fair? That is the value of the blockchain.
My fate with blockchain started in 2010, when I was first exposed to Bitcoin, a P2P currency system. As a commercial P2P practitioner, I quickly read through Bitcoin's code: blocks, block hashes, nonces, and the mining algorithm were quickly understood. However, at the time I only appreciated the greatness of Bitcoin's technology; on the business side, I didn't understand it so deeply.
Later, Ethereum was born. In my opinion, in addition to optimizing Bitcoin's block performance and mining algorithm, Ethereum's most significant improvement is extending Bitcoin's simple OPCODE instructions into complex logic code that runs on a virtual machine and can be programmed in the high-level Solidity language. Such a blockchain can be used not only for digital currencies but also for writing a variety of smart contracts that work in many scenarios.
Later, after carefully studying the technical mechanism of Fabric, the open-source consortium chain made by IBM, I finally found the value of the blockchain itself.
From a technical point of view, a blockchain is essentially a distributed, fully synchronized database. Since the chain is scattered across different computers, disputes need to be resolved, and the consensus algorithm is used to resolve them. From a sociological perspective, the decentralization and consensus of blockchains can generate enormous social value: blockchain can build real trust, and a public blockchain can build true public trust.
In the shared storage scenario, I think it is wrong to store the files themselves on the chain. The function of the blockchain for shared storage is not to solve the data storage problem, but to solve the trust problem. The data stored on the chain should be trust-related data, such as user assets, storage contracts, proofs, rewards, and penalties.
My thinking is getting clearer
With blockchain technology, an effective third-party node certification mechanism can be established to supervise the cheating of malicious nodes through consensus; the rules of income distribution cannot be changed at anyone's will, and there is fairness and transparency.
Because of fairness and transparency, everyone can safely invest resources in the platform network. Because of fairness and transparency, everyone is willing to invest funds and build reliable, competitive, large service centers to provide services in exchange for incentives. Because of fairness and transparency, there is a truly transparent market that drives everyone to look for better and more affordable resources, to provide users with better and more affordable services.
Because of fairness and transparency, we can build an economic model that is enforced purely by computer programs. Malicious miners must be severely punished, until their collateral is forfeited down to 0. Miners whose uptime or service is unstable must be punished in proportion, with part of their collateral forfeited; this stimulates miners to provide stable service. Miners who occasionally fail must receive a warning penalty, to motivate service providers to use more stable hardware. Through an effective economic mechanism, as long as the miners are stable, the entire platform service will be stable and QoS will be guaranteed.
What needs to be done is: use blockchain to create a shared storage network; use incentives to stimulate sharing; use a public blockchain to ensure fair and transparent incentives. The value is: affordability, speed, and privacy. This network is easy to use and friendly to developers, who can write a variety of applications in various computer languages.
Eventually, I started the project PPIO
When I was doing PPTV back in the day, Bill approached me. This time, I approached Bill and told him that I wanted to change the world again. So Bill and I once again launched a new project with the goal of changing the world: PPIO.
PPIO — a decentralized data storage and delivery platform for developers that values speed, affordability, and privacy.
Article author:Wayne Wong
If you want to reprint, please indicate the source
If you want to exchange ideas about blockchain learning, you can contact me in the following ways:
Github: https://github.com/omnigeeker
Telegram: @omnigeeker
Twitter: @omnigeeker
Medium: https://medium.com/@omnigeeker
Steemit: https://steemit.com/@omnigeeker
submitted by omnigeeker to btc [link] [comments]

"POS stands for the future? Qtum brings deep analysis"

Each cryptocurrency adopts some kind of consensus mechanism so that the entire distributed network can stay synchronized. Bitcoin adopted the Proof of Work (PoW) consensus mechanism from the very beginning of its existence, achieving proof of workload through continuous cryptographic hash operations. Since the hashing algorithm is one-way, even a small change in the input data will make the output hash value completely different. If the calculated hash value satisfies certain conditions (referred to as the "mining difficulty"), participants in the Bitcoin network accept the proof of work. The mining difficulty is an ever-changing hash target: when the network generates blocks faster, the difficulty automatically increases so that the entire network keeps an average block interval of 10 minutes.
 
Definition
For those who are not very familiar with the blockchain, here are some basic definitions to help understand the post:
 
PoW and Blockchain Consensus System
Through 8 years of development of Bitcoin, the security of the PoW mechanism has been confirmed. However, PoW has the following problems:
 
  1. PoW has wasted a lot of power resources and is not friendly to the environment;
  2. PoW is only economically advantageous for large players with a lot of hash power (ordinary users can hardly mine a block);
  3. PoW lacks incentives for users to hold or use coins;
  4. PoW has a certain risk of centralization, because miners tend to join large pools, which makes large pools have a greater influence on the network;
 
The Proof of Stake mechanism (hereinafter PoS) can solve many of these problems, because it gives any user who holds tokens in a wallet the opportunity to mine (and, of course, to receive the mining reward). PoS was originally proposed by Sunny King in Peercoin. It was later refined and adopted in a variety of cryptocurrencies. Among these are Pavel Vasin's PoS 2.0, Larry Ren's PoS Velocity, and the recent CASPER proposed by Vlad Zamfir, as well as various other relatively unknown projects.
 
The consensus mechanism adopted by Qtum is based on PoS3.0. PoS3.0 is an upgraded version of PoS2.0, also proposed and implemented by Pavel Vasin. This article will focus on this version of the PoS implementation. Qtum made some changes based on PoS3.0, but the core consensus mechanism is basically the same.
 
For general community members and even some developers, PoS is not particularly easy to understand, because there are currently few documents detailing how networks that use only token ownership to reach consensus can remain secure. This article will elaborate on how the PoS blockchain in PoS3.0 is generated, verified, and secured. The article involves some technical knowledge, but I will try to describe it using the basic definitions provided here. At a minimum, the reader needs a basic idea of a UTXO-based blockchain.
 
Before introducing PoS, let me briefly introduce PoW's working mechanism, which can help the following understanding of PoS. The PoW mining process can be represented by the following pseudocode:  
while (blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
 
The hash operation used here was explained earlier: it takes data of arbitrary length as input and, after a series of operations, produces a fixed-length digest as output, from which it is impossible to recover the corresponding input data. The whole process is a lot like a lottery. You create a "ticket" by hashing the data and compare it with the target hash range to determine whether you "win". If you don't win, you can create a new "ticket" by slightly changing some of the data; the random number nonce in Bitcoin is used to adjust the input data. Once the required hash is found, the block is legitimate and can be broadcast to the distributed network. Once the other miners in the network receive this new block and it passes verification, they add the block to their chain and continue building blocks on top of it.
 
PoS protocol structure and rules
 
Now we begin to introduce PoS. PoS has the following goals :
  1. Cannot fake blocks;
  2. "Large households" will not receive much disproportionately large rewards;
  3. Having strong computing power does not help create blocks;
  4. No one or several members of the network can control the entire blockchain;
The basic concept of PoS is very similar to PoW: it is also like a lottery. The key difference is that in PoS you cannot get new "lottery tickets" just by fine-tuning the input data. Where PoW uses the block hash as the lottery ticket, PoS introduces the concept of a "kernel hash".
The kernel hash takes as input several pieces of data that cannot be modified in the current block. Because miners have no easy way to modify the kernel hash, they cannot obtain a valid new block by iterating through large numbers of candidate hashes.
 
In order to achieve this goal, PoS added many additional consensus rules.
First, unlike PoW, the coinbase transaction of a PoS block (that is, the first transaction in the block) must reward zero tokens. At the same time, in order to reward the staker, a staking transaction is introduced as the second transaction of the block. The staking transaction has the following features:
  1. There is at least 1 legal vin
  2. The first vout must be empty script
  3. The second vout must not be empty
 
In addition, staking transactions must also obey the following rules :
  1. The second vout must be a pubkey script (note: not pubkeyhash) or an unspendable OP_RETURN script that stores data for a public key on the chain;
  2. The timestamp in the transaction must be consistent with the block timestamp;
  3. The total output value of the staking transaction must be less than or equal to the sum of all input values, PoS block awards, and transaction fees (ie output <= (input + block_reward + tx_fees));
  4. The output corresponding to the first vin must pass the confirmation of at least 500 blocks (that is, the currency spent needs at least 500 blocks to confirm);
  5. Although the staking transaction can have multiple input vins, only the first vin is used for the consensus mechanism;
 
These rules make the staking transaction easy to identify and ensure that it provides enough information to verify the block. Note that the empty first vout is not the only possible way to identify a staking transaction, but since Sunny King's original design used this method and long-term practice has proven it reliable, we adopt it as well; a minimal sketch of such an identification check appears below.
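To make these three rules concrete, here is a minimal C++-style sketch of the identification check. The Transaction and Script types are simplified placeholders for illustration, not Qtum's actual classes.

#include <vector>
#include <cstdint>

struct Script {
    std::vector<uint8_t> bytes;
    bool empty() const { return bytes.empty(); }
};
struct TxOut { Script script; int64_t value = 0; };
struct TxIn  { };  // prevout reference omitted for brevity
struct Transaction {
    std::vector<TxIn> vin;
    std::vector<TxOut> vout;
};

// Rules 1-3 above: at least one vin, an empty first vout, a non-empty second vout.
bool isStakingTransaction(const Transaction& tx)
{
    return !tx.vin.empty()
        && tx.vout.size() >= 2
        && tx.vout[0].script.empty()
        && !tx.vout[1].script.empty();
}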
 
Now that we know the definition of the staking transaction and we understand the rules that it must follow, let's introduce the rules of the PoS block :
 
The most important element of these rules is the "kernel hash". The kernel hash plays a role similar to the block hash in PoW: if the hash value meets the difficulty condition, the block is considered valid. However, the kernel hash cannot be manipulated by directly modifying parts of the current block. Next, I will introduce the structure and operating mechanism of the kernel hash, then explain the purpose of this design and what unforeseen consequences changes to it would bring.
 
Kernel Hash in PoS
The kernel hash consists of the following data, in order, as input (these can be read off the pseudocode below):
• the previous block's stake modifier
• the time of the staked UTXO's transaction
• the hash of the staked UTXO's transaction (the "prevout")
• the output number of the prevout
• the current block time
 
The "skate modifier" of a block refers to the hash value of the following data:
There are only two ways to change the current kernel hash (for mining), either change "prevout" or change the current block time.
 
In general, a wallet contains multiple UTXOs. The balance of the wallet is the sum of all available UTXOs in the wallet. This also applies to PoS wallets, and is even more important there, because any output may be used for staking. One of these outputs becomes the prevout in the staking transaction, which is used to generate a valid block.
 
In addition, there is one more important change in the PoS block mining process compared to PoW: the effective mining difficulty is inversely proportional to the number of coins staked (rather than the number of UTXOs). For example, staking a UTXO holding 2 coins faces only half the difficulty of staking 1 coin. If it were not designed this way, users would be encouraged to split their coins into many tiny UTXOs, which would bloat the blockchain and could cause security problems.
 
The calculation of the kernel hash can be expressed in pseudo-code as:
while (true) {
    foreach (utxo in wallet) {
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if (hash < posDifficulty) {
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
 
Through the above process we find a UTXO that can be used to generate a staking transaction. This staking transaction has one vin: the UTXO we found. It also has at least two vouts: the first is empty, which identifies it to the blockchain as a staking transaction; the second is either an OP_RETURN output containing a single public key, or a pay-to-pubkey script. The latter is usually used for simplicity, while the data (OP_RETURN) form enables more advanced uses (such as a separate block-signing machine) without disturbing the original UTXO model.
 
Finally, all transactions in the mempool are added to the block. What we need to do next is generate the signature. This signature must be produced with the key corresponding to the public key in the second vout of the staking transaction; the data actually signed is the block hash. After signing, we can broadcast this block to the network. Other nodes in the network will verify the block; if the block is valid, a node accepts it into its own blockchain and broadcasts the new block to the other nodes it is connected to.
 
Through the above steps, we get a complete and secure PoS3.0 blockchain. PoS3.0 is considered the consensus mechanism most resistant to malicious attacks in a fully decentralized consensus system. Why this conclusion? To understand it, we need to review the history of PoS.
 
The development of PoS
PoS has a long history. Here is a brief description:
 
PoS1.0 — Applied in Peercoin, it relied heavily on coin age (that is, the time elapsed since a UTXO was last spent): the higher the coin age, the lower the mining difficulty. This had the side effect that users would open their wallet only once a month or even less often, because with sufficiently old coins the wallet could produce new staking blocks almost instantly. This makes double-spend attacks much easier. Peercoin itself is not affected by this, because it uses a hybrid PoW/PoS mechanism, and the PoW blocks mitigate the effect.
 
PoS2.0 — Coin age was removed from the consensus mechanism, and a different stake modifier was used than in PoS1.0. The amendments are numerous, but basically they are all about removing coin age and achieving a secure consensus mechanism without relying on a PoW/PoS hybrid.
 
PoS3.0 — PoS3.0 is essentially an upgrade of PoS2.0. In PoS2.0 the stake modifier also included the previous block time; this was removed in 3.0, mainly to prevent the so-called "short-range" attack, in which an attacker could iteratively mine an alternative chain by traversing previous block times. PoS2.0 used block time and transaction time to determine the age of a UTXO, which differs slightly from coin age: it indicates the minimum number of confirmations a UTXO needs before it can be used for staking. The UTXO age in PoS3.0 becomes simpler: it is determined by the depth of the block in the chain. This avoids relying on less-accurate timestamps in the blockchain and is effectively immune to the "timewarp" attack. PoS3.0 also adds OP_RETURN support for staking transactions, so that the vout can contain only the public key, not necessarily the full pay-to-pubkey script.
 
Original:https://mp.weixin.qq.com/s/BRPuRn7iOoqeWbMiqXI11g
submitted by thisthingismud to Qtum [link] [comments]

The missing explanation of Proof of Stake Version 3 - Article by earlz.net

The missing explanation of Proof of Stake Version 3

In every cryptocurrency there must be some consensus mechanism which keeps the entire distributed network in sync. When Bitcoin first came out, it introduced the Proof of Work (PoW) system. PoW is done by cryptographically hashing a piece of data (the block header) over and over. Because of how one-way hashing works, one tiny change in the data can cause an extremely different hash to come of it. Participants in the network determine if the PoW is valid and complete by judging if the final hash meets a certain condition, called difficulty. The difficulty is an ever changing "target" which the hash must meet or exceed. Whenever the network is creating more blocks than scheduled, this target is changed automatically by the network so that the target becomes more and more difficult to meet, and thus requires more and more computing power to find a hash that matches the target within the target time of 10 minutes.

Definitions

Some basic definitions might be unfamiliar to some people not familiar with the blockchain code, these are:

Proof of Work and Blockchain Consensus Systems

Proof of Work is a proven consensus mechanism that has made Bitcoin secure and trustworthy for 8 years now. However, it is not without its fair share of problems. PoW's major drawbacks are:
  1. PoW wastes a lot of electricity, harming the environment.
  2. PoW benefits greatly from economies of scale, so it tends to benefit big players the most, rather than small participants in the network.
  3. PoW provides no incentive to use or keep the tokens.
  4. PoW has some centralization risks, because it tends to encourage miners to participate in the biggest mining pool (a group of miners who share the block reward), thus the biggest mining pool operator holds a lot of control over the network.
Proof of Stake was invented to solve many of these problems by allowing participants to create and mine new blocks (and thus also get a block reward), simply by holding onto coins in their wallet and allowing their wallet to do automatic "staking". Proof Of Stake was originally invented by Sunny King and implemented in Peercoin. It has since been improved and adapted by many other people. This includes "Proof of Stake Version 2" by Pavel Vasin, "Proof of Stake Velocity" by Larry Ren, and most recently CASPER by Vlad Zamfir, as well as countless other experiments and lesser known projects.
For Qtum we have decided to build upon "Proof of Stake Version 3", an improvement over version 2 that was also made by Pavel Vasin and implemented in the Blackcoin project. This version of PoS as implemented in Blackcoin is what we will be describing here. Some minor details of it have been modified in Qtum, but the core consensus model is identical.
For many community members and developers alike, proof of stake is a difficult topic, because there has been very little written on how it manages to accomplish keeping the network safe using only proof of ownership of tokens on the network. This blog post will go into fine detail about Proof of Stake Version 3 and how its blocks are created, validated, and ultimately how a pure Proof of Stake blockchain is possible to secure. This will assume some technical knowledge, but I will try to explain things so that most of the knowledge can be gathered from context. You should at least be familiar with the concept of the UTXO-based blockchain.
Before we talk about PoS, it helps to understand how the much simpler PoW consensus mechanism works. It's mining process can be described in just a few lines of pseudo-code:
while (blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
A hash is a cryptographic algorithm which takes an arbitrary amount of input data, does a lot of processing of it, and outputs a fixed-size "digest" of that data. It is impossible to figure out the input data with just the digest. So, PoW tends to function like a lottery, where you find out if you won by creating the hash and checking it against the target, and you create another ticket by changing some piece of data in the block. In Bitcoin's case, nonce is used for this, as well as some other fields (usually called "extraNonce"). Once a blockhash is found which is less than the difficulty target, the block is valid, and can be broadcast to the rest of the distributed network. Miners will then see it and start building the next block on top of this block.

Proof of Stake's Protocol Structures and Rules

Now enter Proof of Stake. We have these goals for PoS:
  1. Impossible to counterfeit a block
  2. Big players do not get disproportionally bigger rewards
  3. More computing power is not useful for creating blocks
  4. No one member of the network can control the entire blockchain
The core concept of PoS is very similar to PoW, a lottery. However, the big difference is that it is not possible to "get more tickets" to the lottery by simply changing some data in the block. Instead of the "block hash" being the lottery ticket to judge against a target, PoS invents the notion of a "kernel hash".
The kernel hash is composed of several pieces of data that are not readily modifiable in the current block. And so, because the miners do not have an easy way to modify the kernel hash, they can not simply iterate through a large amount of hashes like in PoW.
Proof of Stake blocks add many additional consensus rules in order to realize its goals. First, unlike in PoW, the coinbase transaction (the first transaction in the block) must be empty and reward 0 tokens. Instead, to reward stakers, there is a special "stake transaction" which must be the 2nd transaction in the block. A stake transaction is defined as any transaction that:
  1. Has at least 1 valid vin
  2. Its first vout must be an empty script
  3. Its second vout must not be empty
Furthermore, staking transactions must abide by these rules to be valid in a block:
  1. The second vout must be either a pubkey (not pubkeyhash!) script, or an OP_RETURN script that is unspendable (data-only) but stores data for a public key
  2. The timestamp in the transaction must be equal to the block timestamp
  3. the total output value of a stake transaction must be less than or equal to the total inputs plus the PoS block reward plus the block's total transaction fees. output <= (input + block_reward + tx_fees)
  4. The first spent vin's output must be confirmed by at least 500 blocks (in otherwords, the coins being spent must be at least 500 blocks old)
  5. Though more vins can be used and spent in a staking transaction, the first vin is the only one used for consensus parameters.
These rules ensure that the stake transaction is easy to identify, and ensures that it gives enough info to the blockchain to validate the block. The empty vout method is not the only way staking transactions could have been identified, but this was the original design from Sunny King and has worked well enough.
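As a rough illustration (not Qtum's or Blackcoin's actual validation code), the timestamp, value, and maturity rules above could be checked like this; every type and field name in the sketch is an assumption made for clarity:

#include <cstdint>

// Illustrative types and field names; real validation code differs.
struct StakeCheckInput {
    int64_t totalIn;         // sum of all spent inputs
    int64_t totalOut;        // sum of all outputs of the stake transaction
    int64_t blockReward;     // PoS block reward
    int64_t txFees;          // total fees from the block's transactions
    uint32_t txTimestamp;    // timestamp inside the stake transaction
    uint32_t blockTimestamp; // timestamp of the block itself
    int prevoutHeight;       // height of the block containing the first spent output
    int currentHeight;       // height of the block being validated
};

bool checkStakeRules(const StakeCheckInput& s)
{
    // Rule 2: the transaction timestamp must equal the block timestamp.
    if (s.txTimestamp != s.blockTimestamp) return false;
    // Rule 3: output <= (input + block_reward + tx_fees).
    if (s.totalOut > s.totalIn + s.blockReward + s.txFees) return false;
    // Rule 4: the coins being spent must be at least 500 blocks old.
    if (s.currentHeight - s.prevoutHeight < 500) return false;
    return true;
}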
Now that we understand what a staking transaction is, and what rules they must abide by, the next piece is to cover the rules for PoS blocks:
There are a lot of details here that we'll cover in a bit. The most important part that really makes PoS effective lies in the "kernel hash". The kernel hash is used similar to PoW (if hash meets difficulty, then block is valid). However, the kernel hash is not directly modifiable in the context of the current block. We will first cover exactly what goes into these structures and mechanisms, and later explain why this design is exactly this way, and what unexpected consequences can come from minor changes to it.

The Proof of Stake Kernel Hash

The kernel hash specifically consists of the following exact pieces of data (in order):
• the previous block's stake modifier
• the timestamp from the "prevout" transaction
• the hash of the prevout transaction
• the output number of the prevout
• the current block time
The stake modifier of a block is a hash of exactly:
The only way to change the current kernel hash (in order to mine a block), is thus to either change your "prevout", or to change the current block time.
A single wallet typically contains many UTXOs. The balance of the wallet is basically the total amount of all the UTXOs that can be spent by the wallet. This is of course the same in a PoS wallet. This is important though, because any output can be used for staking. One of these outputs are what can become the "prevout" in a staking transaction to form a valid PoS block.
Finally, there is one more aspect that is changed in the mining process of a PoS block. The difficulty is weighted against the number of coins in the staking transaction. The PoS difficulty ends up being twice as easy to achieve when staking 2 coins, compared to staking just 1 coin. If this were not the case, then it would encourage creating many tiny UTXOs for staking, which would bloat the size of the blockchain and ultimately cause the entire network to require more resources to maintain, as well as potentially compromise the blockchain's overall security.
So, if we were to show some pseudo-code for finding a valid kernel hash now, it would look like:
while (true) {
    foreach (utxo in wallet) {
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if (hash < posDifficulty) {
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
This code isn't so easy to understand as our PoW example, so I'll attempt to explain it in plain english:
Do the following over and over for infinity:
• Calculate the blockTime to be the current time minus itself modulus 16 (modulus is like dividing by 16, but instead of taking the result, taking the remainder).
• Calculate the posDifficulty as the network difficulty, multiplied by the number of coins held by the UTXO.
• Cycle through each UTXO in the wallet. With each UTXO, calculate a SHA256 hash using the previous block's stake modifier, as well as some data from the UTXO, and finally the blockTime. Compare this hash to the posDifficulty. If the hash is less than the posDifficulty, then the kernel hash is valid and you can create a new block.
• After going through all UTXOs, if no hash produced is less than the posDifficulty, then wait 16 seconds and do it all over again.
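To make the difficulty weighting in the second step concrete, here is a tiny illustrative program; the numbers are made up, and real targets are 256-bit values rather than 64-bit integers:

#include <cstdint>
#include <iostream>

int main()
{
    // Illustrative numbers only.
    uint64_t networkDifficulty = 1000;                // base per-coin target
    uint64_t posDifficulty1 = networkDifficulty * 1;  // staking a 1-coin UTXO
    uint64_t posDifficulty2 = networkDifficulty * 2;  // staking a 2-coin UTXO: any kernel
                                                      // hash below 2000 now wins, doubling
                                                      // the chance on each 16-second attempt
    std::cout << posDifficulty1 << " " << posDifficulty2 << std::endl;
    return 0;
}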
Now that we have found a valid kernel hash using one of the UTXOs we can spend, we can create a staking transaction. This staking transaction will have 1 vin, which spends the UTXO we found that has a valid kernel hash. It will have (at least) 2 vouts. The first vout will be empty, identifying to the blockchain that it is a staking transaction. The second vout will either contain an OP_RETURN data transaction that contains a single public key, or it will contain a pay-to-pubkey script. The latter is usually used for simplicity, but using a data transaction for this allows for some advanced use cases (such as a separate block signing machine) without needlessly cluttering the UTXO set.
Finally, any transactions from the mempool are added to the block. The only thing left to do now is to create a signature, proving that we have approved the otherwise valid PoS block. The signature must use the public key that is encoded (either as pay-pubkey script, or as a data OP_RETURN script) in the second vout of the staking transaction. The actual data signed is the block hash. After the signature is applied, the block can be broadcast to the network. Nodes in the network will then validate the block and if it finds it valid and there is no better blockchain then it will accept it into its own blockchain and broadcast the block to all the nodes it has connection to.
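In the same pseudocode style as the mining loop above, the signing and verification steps might look like this sketch; Block, Key, PubKey, and the helper names (blockHash, extractPubkey) are placeholders, not actual Blackcoin or Qtum APIs:

// Staker side: the key must match the pubkey in the stake transaction's 2nd vout.
Signature signBlock(const Block& block, const Key& stakerKey)
{
    Hash256 h = blockHash(block);  // the data actually signed is the block hash
    return stakerKey.sign(h);
}

// Validating node side: recover the pubkey from the 2nd vout, whether it is a
// pay-to-pubkey script or an OP_RETURN data output, then check the signature.
bool verifyBlockSignature(const Block& block)
{
    PubKey pk = extractPubkey(block.stakeTx.vout[1]);
    return pk.verify(blockHash(block), block.signature);
}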
Now we have a fully functional and secure PoSv3 blockchain. PoSv3 is what we determined to be most resistant to attack while maintaining a pure decentralized consensus system (ie, without master nodes or curators). To understand why we arrived at this conclusion however, we must understand its history.

PoSv3's History

Proof of Stake has a fairly long history. I won't cover every detail, but cover broadly what was changed between each version to arrive at PoSv3 for historical purposes:
PoSv1 - This version is implemented in Peercoin. It relied heavily on the notion of "coin age", or how long a UTXO has not been spent on the blockchain. Its implementation would basically make it so that the higher the coin age, the more the difficulty is reduced. This had the bad side-effect however of encouraging people to only open their wallet every month or longer for staking. Assuming the coins were all relatively old, they would almost instantaneously produce new staking blocks. This however makes double-spend attacks extremely easy to execute. Peercoin itself is not affected by this because it is a hybrid PoW and PoS blockchain, so the PoW blocks mitigated this effect.
PoSv2 - This version removes coin age completely from consensus, as well as using a completely different stake modifier mechanism from v1. The number of changes are too numerous to list here. All of this was done to remove coin age from consensus and make it a safe consensus mechanism without requiring a PoW/PoS hybrid blockchain to mitigate various attacks.
PoSv3 - PoSv3 is really more of an incremental improvement over PoSv2. In PoSv2 the stake modifier also included the previous block time. This was removed to prevent a "short-range" attack where it was possible to iteratively mine an alternative blockchain by iterating through previous block times. PoSv2 used block and transaction times to determine the age of a UTXO; this is not the same as coin age, but rather is the "minimum confirmations required" before a UTXO can be used for staking. This was changed to a much simpler mechanism where the age of a UTXO is determined by its depth in the blockchain. This thus doesn't incentivize inaccurate timestamps to be used on the blockchain, and is also more immune to "timewarp" attacks. PoSv3 also added support for OP_RETURN coinstake transactions which allows for a vout to contain the public key for signing the block without requiring a full pay-to-pubkey script.

References:

  1. https://peercoin.net/assets/paper/peercoin-paper.pdf
  2. https://blackcoin.co/blackcoin-pos-protocol-v2-whitepaper.pdf
  3. https://www.reddcoin.com/papers/PoSV.pdf
  4. https://blog.ethereum.org/2015/08/01/introducing-casper-friendly-ghost/
  5. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/kernel.h#L11
  6. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.cpp#L2032
  7. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.h#L279
  8. http://earlz.net/view/2017/07/27/1820/what-is-a-utxo-and-how-does-it
  9. https://en.bitcoin.it/wiki/Script#Obsolete_pay-to-pubkey_transaction
  10. https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address_.28pay-to-pubkey-hash.29
  11. https://en.bitcoin.it/wiki/Script#Provably_Unspendable.2FPrunable_Outputs
Article by earlz.net
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
submitted by B3TeC to Moin [link] [comments]

A Bitcoin Cash geek contest lasting 3 days (6/1–6/3) has come to a satisfying end. Here are the top 3 teams' works for BCH applications.

Top 1 mempool team
Top 2 ViaBTC team & GON
Top 3 西安哈希战队,比特币运输船,神雕侠侣
mempool: a Bitcoin Cash-based browser plug-in and server system for paying per URI. By extending the HTTP/1.1 status code 402 (Payment Required) protocol, the browser plug-in can automatically or manually pay for access to a payment-gated URI when the server requires it.
ViaBTC: a BCH-based authenticator designed to enhance the software experience, now deployed on iOS and Android. This authenticator can store data permanently, synchronize across multiple devices, recover data quickly, and requires no server; it is tamper-resistant. Its working principle is: acquire the GA information, encrypt the private key, shard it, add markers, and generate the corresponding transactions.
GON: TokenDice, a fully on-chain, decentralized 1v1 wagering game based on BCH. Its implementation enables the opcode OP_DATASIGNVERIFY and the RPCs signmessage, createrawtransaction, buildscript, and signtx, plus a locking script and unlocking script (the contract code): each player enters a random value, the result is taken modulo n, and the outcome is computed.
神雕侠侣: IFPassword, password management software (ifpass.cash) designed to let more people benefit from the blockchain. The application stems from everyday problems: too many passwords, weak password security, passwords reused across platforms, forgotten passwords, and distrust of third parties such as iCloud. It enables users to manage passwords (traditional websites), manage private keys (for new websites), and log in with one click (browser plug-in), with permanent on-chain storage, import at any time, and lightweight use through browsers and applets.
比特币运输船: Shell books, developed by a team of sophomore students from the Computer College of Chongqing University of Posts and Telecommunications. This is a product based on blockchain technology aimed at users aged 15-29. You can record your own fading diary, read other people's interesting diaries, write diaries with interesting people, and build memories over time.
西安哈希战队: a BCH investment ecosystem that aims to use BCH to invest in the world, including traditional financial markets. The gameplay is simple and direct: information is put on chain and cannot be changed; users can go long or short, buy in with BCH, settle against a complete on-chain contract, and have their order information recorded on chain.
If you'd like to know more details about the contest, you can search "bchgeek" on the Internet, but the language is Chinese.
submitted by didang to Bitcoincash [link] [comments]

Serialization: Qtum Quantum Chain Design Document (6): x86 Virtual Machines Reshaping Smart Contract Ecosystem

Qtum Original Design Document Summary (6) -- Qtum x86 Virtual Machine

https://mp.weixin.qq.com/s/0pXoUjXZnqJaAdM4vywvlA
As we mentioned in the previous chapters, Qtum uses a layered design: through the Qtum AAL, the Ethereum virtual machine EVM can run on the underlying UTXO model, making Qtum compatible with Ethereum's smart contracts. However, the EVM itself has many limitations and currently supports only high-level languages such as Solidity for writing smart contracts; its security and maturity still need to be proven over time. The Qtum AAL was designed from the start to be compatible with multiple virtual machines, so after initial EVM compatibility, the Qtum team committed to supporting a virtual machine with a more mainstream architecture, and in turn mainstream programming languages and toolchains.
The Qtum x86 virtual machine is the development focus of the Qtum project in 2018. It aims to create a virtual machine compatible with the x86 instruction set that provides operating-system-like calls, pushing smart contract development into the mainstream.
The following section excerpts some of the Qtum development team's original design documents for the Qtum x86 virtual machine (with Chinese translation) (ps: QTUM<#> or QTUMCORE<#> in the documents are internal design document numbers):
 
QTUMCORE-103:[x86lib] Add some missing primary opcodes
Description: There are several missing opcodes in the x86 VM right now. For this story, complete the following normal opcodes (it should just be a lot of connecting code, nothing too intense)
//op(0x9C, op_pushf);
//op(0x9D, op_popf);
//op(0xC0, op_group_C0); //186
// C0 group: _rm8_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
//op(0xC1, op_group_C1); //186
// C1 group: _rmW_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
Task: [x86lib] add some missing main operation code
Description: Some opcodes are currently missing from x86 virtual machines. In this task, complete the following standard opcode (should only be some connection code, not too tight)
//op(0x9C, op_pushf);
//op(0x9D, op_popf);
//op(0xC0, op_group_C0); //186
// C0 group: _rm8_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
//op(0xC1, op_group_C1); //186
// C1 group: _rmW_imm8; rol, ror, rcl, rcr, shl/sal, shr, sal/shl, sar
note:
• Make sure to see existing similar code examples in VM (virtual machine) code
• View x86 design documents to better understand each opcode
• Ask any question directly to Earlz
• At the top of opcode_def.h, there is a large section of comments explaining the opcode function name criteria and the meaning of keywords such as "rW"
• Implement the first opcode listed, and then let Earlz check to make sure the code looks correct.
QTUMCORE-106: [x86lib] Add some more missing primary opcodes
Description: There are a few missing opcodes in the x86 VM right now. For this story, complete the following normal opcodes (it should just be a lot of connecting code, nothing too intense)
//op(0x60, op_pushaW); //186
//op(0x61, op_popaW); //186
//op(0x6C, op_insb_m8_dx); //186
//op(0x6D, op_insW_mW_dx); //186
//op(0x6E, op_outsb_dx_m8); //186
//op(0x6F, op_outsW_dx_mW); //186
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
Task: [x86lib] add some missing main operation code
Description: Some opcodes are currently missing from the x86 virtual machine. In this task, complete the following standard opcode (should only be some connection code, not too tight)
//op(0x60, op_pushaW); //186
//op(0x61, op_popaW); //186
//op(0x6C, op_insb_m8_dx); //186
//op(0x6D, op_insW_mW_dx); //186
//op(0x6E, op_outsb_dx_m8); //186
//op(0x6F, op_outsW_dx_mW); //186
note:
• Make sure to see existing similar code examples in VM (virtual machine) code
• View x86 design documents to better understand each opcode
• Ask any question directly to Earlz
• At the top of opcode_def.h, there is a large section of comments explaining the opcode function name criteria and the meaning of keywords such as "rW"
• Implement the first opcode listed, and then let Earlz check to make sure the code looks correct.
QTUMCORE-104: [x86lib] Add some missing extended opcodes
Description: There are several missing opcodes in the x86 VM right now. For this story, complete the following extended (0x0F prefix) opcodes (it should just be a lot of connecting code, nothing too intense)
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAF, op_imul_rW_rmW); //386
opx(0xB0, op_cmpxchg_rm8_al_r8); //486
opx(0xB1, op_cmpxchg_rmW_axW_rW); //486
for(int i=0;i<8;i++)
{ opx(0xC8 + i, op_bswap_rW); }
Notes:
• Make sure to look at existing examples of similar code in the VM code.
• Look at the x86 design document references for some good descriptions of each opcode
• Ask earlz directly about any questions
• At the top of opcode_def.h there is a big comment block explaining the opcode function name standard and what things like "rW" mean
• Implement the first opcode listed and then have earlz review to make sure things look correct
Task: [x86lib] Adding Some Missing Extended Opcodes
Description: Some opcodes are currently missing from the x86 virtual machine. In this task, complete the following extension (0x0F prefix) opcode (should be just some connection code, not too tight)
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAF, op_imul_rW_rmW); //386
opx(0xB0, op_cmpxchg_rm8_al_r8); //486
opx(0xB1, op_cmpxchg_rmW_axW_rW); //486
for(int i=0;i<8;i++)
{ opx(0xC8 + i, op_bswap_rW); }
note:
• Make sure to see the existing similar code example in the virtual machine code
• View x86 design documents to better understand each opcode
• Ask any question directly to Earlz
• At the top of opcode_def.h, there is a large section of comments explaining the opcode function name criteria and the meaning of keywords such as "rW"
• Implement the first opcode listed, and then let Earlz check to make sure the code looks correct.
The above series of tasks implements most of the necessary opcodes for the x86lib kernel. These form the basis for the virtual machine to recognize and run x86 instructions, functioning as an emulator for the x86 instruction set.
QTUMCORE-105: [x86lib] Research how to do automated testing for x86lib
Description: Research and look for viable ways to do automated testing of x86lib's supported opcodes
Task: How to Automatically Test x86lib
Description: Study and find possible ways to automate x86lib supported opcodes
Through the above task the Qtum team achieved automated testing of the x86 virtual machine kernel. Parsing and execution errors in the underlying instructions are often difficult to find through manual debugging, so automated testing tools are essential; they ensure the correctness of the x86lib kernel.
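One plausible shape for such an automated test is a harness that sets up register state, feeds raw opcode bytes to the emulator, and asserts on the result. The sketch below is hypothetical: x86lib's real classes and method signatures differ, and the execute() body is assumed to be supplied by the emulator.

#include <cassert>
#include <cstdint>
#include <cstddef>

// Hypothetical harness, not x86lib's actual API.
struct TestCPU {
    uint32_t eax = 0, ebx = 0;
    void execute(const uint8_t* code, size_t len);  // decode + run, supplied by the emulator
};

void test_add_eax_ebx()
{
    TestCPU cpu;
    cpu.eax = 2;
    cpu.ebx = 3;
    const uint8_t code[] = { 0x01, 0xD8 };  // add eax, ebx
    cpu.execute(code, sizeof(code));
    assert(cpu.eax == 5);  // result written to the destination register
    assert(cpu.ebx == 3);  // source register left unchanged
}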
QTUMCORE-109: [x86] Add "reason" field for all memory requests
Description: In order to prepare for the upcoming gas model, a new field needs to be added to every memory access. This field basically gives the reason for why memory is being accessed so that it can be given a proper gas cost. Possible reasons:
Code fetching (used for opcode reading, ModRM parsing, immediate arguments, etc)
Data (used for any memory reference in the program, such as mov [1234], eax. also includes things like ModRM::WriteWord() etc)
Internal (used for any internal memory reading that shouldn't be given a price... probably not used right now outside of testbench/testsuite code)
This "reason" code can be placed in MemorySystem(). It shouldn't go in each individual MemoryDevice object
Task: [x86] Add "reason" field to all memory requests
Description: In preparation for the gas model to be used, a new field needs to be added to each memory access. This field basically gives the reason why the memory was accessed so that the appropriate gas cost can be given.
Possible reasons are:
• Capture code (for opcode reads, ModRMB parsing, instant parameters, etc.)
• Data (used for memory references in programs such as mov[1234], eax, and operations similar to ModRM::WriteWord(), etc.)
• Internal request (for any internal memory read that does not need to consume gas... currently only used in testbench/testsuite code)
The "reason" code can be placed in MemorySystem(). It should not be placed in any single MemoryDevice object.
The above task prepares for the upcoming Qtum x86 gas model by reserving a separate field for each type of memory access request. For now it is only used to verify feasibility; in the future it will be used to calculate the actual gas cost.
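A minimal sketch of what this could look like, with invented names (the actual x86lib types differ): the reason is an enum passed to MemorySystem's entry points rather than stored on each MemoryDevice, so every access is tagged exactly once.

#include <cstdint>
#include <cstddef>

// One possible shape for the "reason" tag; names are illustrative.
enum class MemAccessReason {
    CodeFetch,  // opcode bytes, ModRM parsing, immediate arguments
    Data,       // explicit program accesses, e.g. mov [1234], eax
    Internal    // emulator-internal reads that should never be charged gas
};

class MemorySystem {
public:
    // The reason lives on the system-wide entry points, not on each device.
    void Read(uint32_t address, void* buffer, size_t size, MemAccessReason reason);
    void Write(uint32_t address, const void* buffer, size_t size, MemAccessReason reason);
};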
QTUMCORE-114: [x86] Add various i386+ instructions
Description: Implement (with unit tests for behavior) the following opcodes and groups:
//op(0x62, op_bound_rW_mW); //186
//op(0x64, op_pre_fs_override); //386
//op(0x65, op_pre_gs_override); //386
// op(0x69, op_imul_rW_rmW_immW); //186 (note: uses /r for rW)
// op(0x6B, op_imul_rW_rmW_imm8); //186 (note: uses /r for rW, imm8 is sign extended)
//op(0x82, op_group_82); //rm8, imm8 - add, or, adc, sbb, and, sub, xor, cmp
Task:[x86]Add various i386+ instructions
Description: Implement (and unit test) the following opcodes and groups:
//op(0x62, op_bound_rW_mW); //186
//op(0x64, op_pre_fs_override); //386
//op(0x65, op_pre_gs_override); //386
// op(0x69, op_imul_rW_rmW_immW); //186 (note: uses /r for rW)
// op(0x6B, op_imul_rW_rmW_imm8); //186 (note: uses /r for rW, imm8 is sign extended)
//op(0x82, op_group_82); //rm8, imm8 - add, or, adc, sbb, and, sub, xor, cmp
QTUMCORE-115: [x86] Implement more i386+ opcodes
Description: Implement with unit tests the following opcodes:
(notice opx is extended opcode)
//op(0xC8, op_enter); //186
for(int i=0;i<16;i++)
{
    opx(0x80+i, op_jcc_relW); //386
    opx(0x90+i, op_setcc_rm8); //386
}
opx(0x02, op_lar_rW_rmW);
opx(0x03, op_lsl_rW_rmW);
opx(0x0B, op_unknown); //UD2 official unsupported opcode
opx(0x0D, op_nop_rmW); //nop, but needs a ModRM byte for proper parsing
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA2, op_cpuid); //486
opx(0xA3, op_bt_rmW_rW); //386
opx(0xA4, op_shld_rmW_rW_imm8); //386
opx(0xA5, op_shld_rmW_rW_cl); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAA, op_rsm); //386
opx(0xAB, op_bts_rmW_rW); //386
opx(0xAC, op_shrd_rmW_rW_imm8); //386
opx(0xAD, op_shrd_rmW_rW_cl); //386
Make sure to remove these opcodes from the commented todo list as they are implemented
Task: [x86] Implement More i386+ Instructions
Description: Implements the following opcodes and unit tests:
(Note that opx is an extended opcode)
//op(0xC8, op_enter); //186
for(int i=0;i<16;i++)
{
    opx(0x80+i, op_jcc_relW); //386
    opx(0x90+i, op_setcc_rm8); //386
}
opx(0x02, op_lar_rW_rmW);
opx(0x03, op_lsl_rW_rmW);
opx(0x0B, op_unknown); //UD2 official unsupported opcode
opx(0x0D, op_nop_rmW); //nop, but requires a ModRM byte for proper parsing
opx(0xA0, op_push_fs); //386
opx(0xA1, op_pop_fs); //386
opx(0xA2, op_cpuid); //486
opx(0xA3, op_bt_rmW_rW); //386
opx(0xA4, op_shld_rmW_rW_imm8); //386
opx(0xA5, op_shld_rmW_rW_cl); //386
opx(0xA8, op_push_gs); //386
opx(0xA9, op_pop_gs); //386
opx(0xAA, op_rsm); //386
opx(0xAB, op_bts_rmW_rW); //386
opx(0xAC, op_shrd_rmW_rW_imm8); //386
opx(0xAD, op_shrd_rmW_rW_cl); //386
After these opcodes are implemented, make sure to remove them from the commented TODO list.
QTUMCORE-118: Implement remaining opcodes in x86lib
Description: The remaining opcodes that do not result in an error or change of behavior should be implemented with unit tests. Take particular care and use many references for some of the weird opcodes, like nop_rm32.
Task: Implementing remaining x86lib opcodes
Description: The remaining opcodes that do not cause errors or behavior changes should be implemented through unit tests. Take special care and refer to some weird opcodes, such as nop_rm32.
The above series of tasks adds support for further i386+ opcodes and implements the remaining necessary ones. At this point x86lib already supports most i386+ instructions.
QTUMCORE-117: Begin leveldb-backed database for x86 contracts
Description: For this story, the code work should be done as a sub-project of Qtum Core, and can be done directly in the Qtum Core github. For now, unit and integration tests should be used to confirm functionality. It will be integrated into Qtum Core later. You might need to modify Qtum Core some so that the project is built with proper dependencies. This story will implement the beginnings of a new database that will be used for smart contracts. This will only store meta-data, contract bytecode, and constructor data for right now:
The leveldb dataset for this data should be named "contracts". The key for this dataset should be a 256-bit contract address (exact format will be specified later) encoded as a hex string.
The value data should contain the following:
• txid of contract creation (with this the chainstate db can be used to lookup blockhash)
• VM version
• contract creation parameters (see "contract deployment" page in design)
• contract creation data (the constructor data)
• contract bytecode
The interface for reading and writing into this database should be clear and extensible. Although it is being designed for the x86 VM, other VMs in the future will also use it.
Task: Implementing a leveldb database in an x86 contract
Description: For this task, code should be written in the Qtum Core subproject and can be done directly on the Qtum Core github. Currently, unit tests and integration tests should be used to confirm the correctness of the function. The following code will be integrated into Qtum Core. It may be necessary to modify the Qtum Core appropriately so that the project has the appropriate dependencies. This task will implement a prototype of a new database that can be used for smart contracts. This database currently only stores meta-data, contract bytecode, and constructor data.
The leveldb dataset for this data should be named "contracts". The key of the dataset should be a 256-bit contract address encoded as a hexadecimal string (the specific format will be specified later).
Value data should contain the following sections:
• Contract created transaction id (chain state database can use it to find block hash)
• Virtual Machine Version
• Contract creation parameters (see "Contract Deployment" page in the design)
• Contract creation data (constructor data)
• Contract bytecode
The interface for database reads and writes should be clear and extensible. Although designed for x86 virtual machines, other virtual machines can be used in the future.
The above task implements the most basic leveldb-backed database for x86 contracts. Currently this database only stores specific data such as contract code, and it can be expanded in the future. The design also emphasizes a clear, general-purpose interface, so that other virtual machines can use it later.
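A hedged sketch of such an interface is shown below; the real Qtum implementation would wrap Bitcoin's database helpers (as the task suggests) rather than expose raw leveldb, and all names here are illustrative.

#include <cstdint>
#include <string>
#include <vector>

// Value fields follow the list in the task above.
struct ContractInfo {
    std::string creationTxid;       // txid of the contract-creation transaction
    uint32_t vmVersion = 0;         // VM version
    std::vector<uint8_t> params;    // contract creation parameters
    std::vector<uint8_t> ctorData;  // constructor data
    std::vector<uint8_t> bytecode;  // contract bytecode
};

class ContractDB {
public:
    // Keys are 256-bit contract addresses encoded as hex strings, per the task.
    bool Write(const std::string& addressHex, const ContractInfo& info);
    bool Read(const std::string& addressHex, ContractInfo& info) const;
    bool Exists(const std::string& addressHex) const;
};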
QTUMCORE-119: Research needed functions in Qtum's version of libc
Description: We should evaluate the C99 standard library specifications to determine which functions should be supported in the x86 VM, with easy to use tooling provided to developers (ie, a custom toolchain). List the headers and functions that are common enough to warrant support , as well as is agnostic to the operating system, or can some way fit into the operating system like model of Qtum's x86 VM.
Task: To study the functions required in the libc version of Qtum
Description: We should evaluate the C99 standard library specification to determine which features should be supported in the x86 virtual machine and make it easier to use the tools provided to the developer (for example, a customized tool chain). Lists the most common function headers and functions that must be supported. These function headers and functions are agnostic to the operating system, or to some extent suitable for operating systems similar to the Qtum x86 virtual machine model.
Based on the C99 standard library specification, the Qtum x86 virtual machine implements a simplified version of the libc library for smart contract developers.
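As an illustration of the kind of OS-agnostic subset this research targets, the snippet below uses only pure memory and formatting routines. Whether these exact functions made the approved list is an assumption of this sketch; the actual list is precisely what the research task was meant to produce.

/* Illustrative contract-side code, assuming memcpy/strlen/snprintf are supported. */
#include <string.h>
#include <stdio.h>

int main(void)
{
    char name[32];
    const char *token = "QTUM";
    memcpy(name, token, strlen(token) + 1);        /* pure memory routine, no OS dependency */

    char msg[64];
    snprintf(msg, sizeof(msg), "token=%s", name);  /* formatting, still no syscalls */
    return 0;
}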
QTUMCORE-126: [x86] [Compiler] Figure out and document a way of compiling/packaging the QtumOS GCC toolchain for Windows, Linux, and OSX
Description: As a contract developer, I don't want to have to compile the QtumOS toolchain myself when developing contracts for the x86 VM. For this story, research and document a way of compiling/packaging the QtumOS GCC toolchain for Windows, Linux, and OSX, so that using the toolchain is the same experience on all platforms. Following this documentation, anyone should be able to build the pre-built version of GCC.
Task:[x86][Compiler] Finding and documenting a way to compile/package QtumOS GCC toolchain for Windows, Linux and OSX
Description: As a contract developer, I don't want to compile the QtumOS toolchain when developing x86 virtual machine contracts.
For this task, study and document how to build the QtumOS GCC toolchain for Windows, Linux and OSX. Using this toolchain on all platforms should have the same experience. Following this document, anyone should be able to compile the pre-built version of GCC.
In order to provide the same compiler tooling on every common platform, the above task delivers a cross-platform, pre-compiled GCC toolchain for smart contract developers.
QTUMCORE-127: [x86] [libqtum] Add basic blockchain data APIs
Description: As a contract developer, I want to be capable of getting basic blockchain data like network weight without needing to know how to write assembly code.
For this story, create a new project for libqtum to compile to libqtum.a using the QtumOS compiler, and place all definitions in a qtum.h file. The first operations to be added are some basic system calls for the following:
• Access to past 256 block hashes
• Block gas limt
• MPoS staking address for block (only the 0th address indicating the block creator)
• Current block difficulty
• Previous block time
• Current block height
These functions are not yet built into the x86 VM or Qtum, so these will just be mocks for now that can't be tested until later.
API list:
• previousBlockTime() -> int32 – syscall(0)
• blockGasLimit() -> int64 – syscall(1, &return);
• blockCreator() -> address_t – syscall(2, &return);
• blockDifficulty() -> int32 – syscall(3);
• blockHeight() -> int32 – syscall(4);
• getBlockHash(int number) -> hash_t (32 bytes) – syscall(5, number, &return);
Note, this inline assembly code can be used as a template for safely using the "int" opcode from C code, and should be capable of being put into a .S assembly file and used via:
//in C header
extern long syscall(long syscall_number, long p1, long p2, long p3, long p4, long p5, long p6);

//in .S file
// user mode
.global syscall
// long syscall(long number, long p1, long p2, long p3, long p4, long p5, long p6)
syscall:
    push %ebp
    mov %esp, %ebp
    push %edi
    push %esi
    push %ebx
    mov 8+0*4(%ebp), %eax
    mov 8+1*4(%ebp), %ebx
    mov 8+2*4(%ebp), %ecx
    mov 8+3*4(%ebp), %edx
    mov 8+4*4(%ebp), %esi
    mov 8+5*4(%ebp), %edi
    mov 8+6*4(%ebp), %ebp
    int $0x40
    pop %ebx
    pop %esi
    pop %edi
    pop %ebp
    ret
Task:[x86][libqtum]Add basic blockchain data APIs
Description: As a contract developer, I hope to obtain basic blockchain data, such as network weight, without writing assembly code.
For this task, create a new project for libqtum, compile to libqtum.a using the QtumOS compiler, and put all definitions in the qtum.h file. The first operation that needs to be added is the basic system call to the following:
• Access to past 256 block hashes
• Block gas limit
• MPoS staking address of the block (only the creator of the 0th address indicator block)
• Current block difficulty
• Time of previous block
• The height of the current block
These features have not yet been built into x86 virtual machines or Qtum, so these are only temporary simulations that can be tested later
API list:
• previousBlockTime() -> int32 – syscall(0)
• blockGasLimit() -> int64 – syscall(1, &return);
• blockCreator() -> address_t – syscall(2, &return);
• blockDifficulty() -> int32 – syscall(3);
• blockHeight() -> int32 – syscall(4);
• getBlockHash(int number) -> hash_t (32 bytes) – syscall(5, number, &return);
Note that this inline assembly code can be used as a template for safely using the "int" opcode of C code and should be able to be put into an .S assembly file and used by:
//in C header
extern long syscall(long syscall_number, long p1, long p2, long p3, long p4, long p5, long p6);

//in .S file
// user mode
.global syscall
// long syscall(long number, long p1, long p2, long p3, long p4, long p5, long p6)
syscall:
    push %ebp
    mov %esp, %ebp
    push %edi
    push %esi
    push %ebx
    mov 8+0*4(%ebp), %eax
    mov 8+1*4(%ebp), %ebx
    mov 8+2*4(%ebp), %ecx
    mov 8+3*4(%ebp), %edx
    mov 8+4*4(%ebp), %esi
    mov 8+5*4(%ebp), %edi
    mov 8+6*4(%ebp), %ebp
    int $0x40
    pop %ebx
    pop %esi
    pop %edi
    pop %ebp
    ret
Basic blockchain data is very useful when writing smart contracts, but without additional tooling it is hard for ordinary contract developers to obtain. The above task provides an API for acquiring basic block data, enabling developers to fetch relevant block data quickly and improving the efficiency of smart contract development.
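A hypothetical contract using this API might look like the following; it assumes qtum.h declares the functions and types (hash_t, address_t) exactly as listed above, and that the program is linked against libqtum.a with the mocked syscalls.

#include "qtum.h"

int main(void)
{
    int height = blockHeight();            // syscall(4)
    int diff = blockDifficulty();          // syscall(3)
    long long gasLimit = blockGasLimit();  // syscall(1, &return)
    hash_t prev = getBlockHash(height - 1);  // one of the past 256 block hashes

    // A contract could branch on chain state here, e.g. refuse to act
    // until a certain height has been reached.
    if (height < 100000)
        return 1;
    return 0;
}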
QTUMCORE-128: [x86] [VM] Add very basic gas system
Description: As a contract developer, I want to test how intensive my prototype x86 smart contracts will be on a real blockchain.
For this story, add a very basic gas model to the x86 VM. There should be a new option added to Execute() that allows for specifying an absolute gas limit that execution will error upon hitting. It should also be possible to retrieve how much Gas was used during the execution of the program. For this basic gas model, each instruction is 1 gas. It is ok if there are edge cases where an instruction might not be counted.
Task: [x86][VM] Add the Most Basic Gas System
Description: As a contract developer, I want to test the strength of my prototype x86 smart contract on the real blockchain.
For this task, add a very basic gas model to the x86 virtual machine. A new option should be added to Execute() that allows specifying an absolute gas limit; execution errors as soon as that limit is reached. It should also be possible to retrieve how much gas was used during program execution. In this basic gas model, each instruction costs 1 gas. Edge cases where an instruction is not counted are acceptable.
The above task implements the x86 virtual machine's most basic gas system, which can be used to measure how much gas a contract consumes on a real blockchain.
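A minimal sketch of this flat "1 gas per instruction" model follows; the class and method names are invented, and x86lib's real Execute() signature differs.

#include <cstdint>
#include <stdexcept>

struct OutOfGas : std::runtime_error {
    OutOfGas() : std::runtime_error("gas limit exceeded") {}
};

class VM {
public:
    void Execute(uint64_t gasLimit) {
        while (!halted()) {
            if (gasUsed_ >= gasLimit)
                throw OutOfGas();  // execution errors upon hitting the limit
            step();                // decode and run exactly one instruction
            gasUsed_ += 1;         // flat cost: every instruction is 1 gas
        }
    }
    uint64_t GasUsed() const { return gasUsed_; }  // queryable after execution
private:
    bool halted();
    void step();
    uint64_t gasUsed_ = 0;
};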
QTUMCORE-129: [x86] [DeltaDB] Add very basic prototype version of DeltaDB
Description: As a contract developer, I want my prototype x86 contracts to persist within my own personal blockchain so that I can do more than just execute them. I need to be able to call them after deployment.
Right now, we will only concern ourselves with loading and writing contract bytecode. The key thus should be "bytecode_%address%" and the value should be the raw contract bytecode. The contract bytecode will have an internal format later so that bytecode, constant data, and contract options are distinguishable in the flat data.
The exposed C++ class interface should simply allow for the smart contract VM layer to look up the size of an address's code, load the address's code into memory, and write code from memory into an address's associated data store.
Look at how the leveldb code works for things like "txindex" in Qtum and model this using the Bitcoin database helper if possible. There is no need for this to be tied to consensus right now. It is also OK to ignore block disconnects and things that would cause the state to be reverted in the database.
Please do all work based on the time/qtumcore0.15 branch in Qtum Core for right now. Also, for the format of an "address", please look for "UniversalAddress" in the earlz/x86-2 branch, and copy the related code if needed.
Task: [x86][DeltaDB] Add the most basic version of the DeltaDB prototype
Description: As a contract developer, I hope that my prototype x86 contracts can persist within my own personal blockchain, so that I can do more than just execute them; I want to be able to call them after deployment.
For now, we only care about loading and writing contract bytecodes. Therefore, the key should be "bytecode_%address%" and the value should be the original contract bytecode. Contract bytecodes will have an internal format so that bytecode, constant data, and contract options can be distinguished in flat data.
The exposed C++ class interface should allow the smart contract virtual machine layer to look up the size of the address code, load the address code into memory, and write the code from memory into the address's associated data store.
Look at how the leveldb code for things like "txindex" in Qtum works, and if possible, model it using the Bitcoin database helper. There is no need to associate this issue with consensus. It is also possible to temporarily ignore the disconnection of the block and cause the state of the database to recover.
Now do all the work based on Qtum Core's time/qtumcore0.15 branch. In addition, for the "address" format, look for "UniversalAddress" in the earlz/x86-2 branch and copy the relevant code if necessary.
The above task adds the most basic database, DeltaDB, to the x86 virtual machine. It can be used to store contract state and is a necessary precondition for contract invocation.
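The key scheme from the task can be illustrated with a short sketch; toHex and the db handle are placeholders, not actual Qtum helpers.

#include <cstdint>
#include <string>
#include <vector>

std::string toHex(const std::vector<uint8_t>& bytes);  // placeholder hex encoder

// "bytecode_%address%" maps to the raw contract bytecode.
std::string bytecodeKey(const std::vector<uint8_t>& universalAddress)
{
    return "bytecode_" + toHex(universalAddress);
}

// Usage against an abstract key-value store (illustrative):
//   db.Write(bytecodeKey(addr), rawBytecode);                    // on deployment
//   std::vector<uint8_t> code; db.Read(bytecodeKey(addr), code); // when the VM loads code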
QTUMCORE-130: [x86] [UI] Add "createx86contract" RPC call
Description: As a smart contract developer, I want to be capable of easily deploying my contract code without too much worrying.
In this story, add a new RPC call named "createx86contract" which accepts 4 arguments: gas price, gas limit, filename (ELF file), and sender address.
The ELF file should be torn apart into a flat array of data representing the contract data to be put onto the blockchain.
  1. int32 size of options
  2. int32 size of code
  3. int32 size of data
  4. int32 (unused)
  5. options data (right now, this can be empty, and size is 0)
  6. code memory data
  7. data memory data
Similar ELF file processing exists in the x86Lib project and that code can be adapted for this. Note that there should be some way of returning errors and warnings back to the user in case of problems with the ELF file.
After the contract data is extracted and built, a transaction output of the following type should be constructed (similar to createcontract):
OP_CREATE
The RPC result should be similar to what is returned from createcontract, except that a warnings/errors field should be included, the contract address should be a base58 x86 address, and any other fields invalid for x86 should be excluded.
Task: [x86][UI] Add "createx86contract" RPC Call
Description: As a developer of smart contracts, I hope to be able to deploy my contract code very simply.
In this task, a new RPC call named "createx86contract" is added, which accepts four parameters: gas price, gas limit, filename (ELF file), and the sender address.
The ELF file should be split into a flat set of data representing the contract data to be placed on the blockchain:
  1. int32 size of options (32-bit integer size of the options)
  2. int32 size of code (32-bit integer size of the code)
  3. int32 size of data (32-bit integer size of the data)
  4. int32 (not used)
  5. options data (currently empty, size 0)
  6. code memory data
  7. data memory data
There is similar ELF file processing in the x86Lib project, and its code can be adapted to the requirements here. Note that when there is a problem with the ELF file, there should be some way to return errors and warnings to the user.
After extracting and building the contract data, a transaction output of the following type will be constructed (similar to createcontract)
OP_CREATE
The RPC result should be similar to the one returned in createcontract, except that it contains a warning/error field, and the contract's address should be a base58 encoded x86 address. Any other fields that are invalid for x86 should be excluded.
The above task adds a new Qtum RPC call, createx86contract. This RPC can load a smart contract directly from an ELF executable file and deploy it to the blockchain. It also adds an error-return mechanism that lets developers know the status of contract deployment.
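As a rough illustration of the flat format listed in the ticket (sizes first, then the three sections), packing the ELF-derived data could look like the sketch below. This is not the actual Qtum code; the function name is invented, and little-endian int32 encoding is an assumption chosen to match Bitcoin's usual integer serialization.
{code:cpp}
// Hypothetical packer for the flat contract-data format described above.
#include <cstdint>
#include <vector>

std::vector<uint8_t> PackContractData(const std::vector<uint8_t>& options,
                                      const std::vector<uint8_t>& code,
                                      const std::vector<uint8_t>& data) {
    std::vector<uint8_t> out;
    auto putInt32 = [&out](uint32_t v) {
        // Little-endian encoding (assumption).
        for (int i = 0; i < 4; ++i) out.push_back((v >> (8 * i)) & 0xff);
    };
    putInt32(static_cast<uint32_t>(options.size()));  // 1. size of options
    putInt32(static_cast<uint32_t>(code.size()));     // 2. size of code
    putInt32(static_cast<uint32_t>(data.size()));     // 3. size of data
    putInt32(0);                                      // 4. unused
    out.insert(out.end(), options.begin(), options.end());  // 5. options data
    out.insert(out.end(), code.begin(), code.end());        // 6. code memory data
    out.insert(out.end(), data.begin(), data.end());        // 7. data memory data
    return out;
}
{code}
The resulting byte array is what would be embedded in the OP_CREATE output.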
summary
This chapter contains the most critical implementation details of the Qtum x86 virtual machine. Based on these tasks, the Qtum team has implemented the first virtual machine that can deploy and run x86 smart contracts, and the prototype already provides tooling for writing contracts in C. Implementation of the Qtum x86 virtual machine is ongoing, and the Qtum project team will continue to publish the original design documents in follow-up articles; interested readers should stay tuned.
submitted by thisthingismud to Qtum [link] [comments]

Qtum Quantum Chain Design Document (3): The Account Abstraction Layer (AAL) brings layered design while supporting EVM and X86 virtual machines

https://mp.weixin.qq.com/s?__biz=MzI2MzM2NDQ2NA==&mid=2247485904&idx=1&sn=6509d404ca5dba5ccdf2d21fd3db5274&chksm=eabc43cfddcbcad9ab539c480c3b89e5080ab4249cc068a2056c9a34282f12b80690f16cdcae&scene=21#wechat_redirect
Qtum original design document summary (3) -- Account Abstraction Layer (AAL)
In the first two articles, we reviewed the birth of Qtum's newly defined opcodes and Qtum's preliminary preparations for integrating the Ethereum virtual machine (EVM). Ethereum is an account-based blockchain system, while Qtum is based on the Bitcoin UTXO model. To bridge the difference between the two transaction models, the account abstraction layer (AAL) was added to the design of the quantum chain, preserving as much of Bitcoin's original functionality as possible while remaining compatible with existing Ethereum smart contracts.
In fact, the role of the AAL goes far beyond bridging Bitcoin and Ethereum. In theory, the account abstraction layer can be used on any blockchain project that, like Bitcoin, uses the UTXO model, and it is compatible with any virtual machine based on the account model. The Qtum development team will subsequently refactor the AAL to be compatible with more virtual machines, such as the x86 virtual machine Qtum is developing and other virtual machines that may be adapted in the future.
In this chapter, we restore the Qtum development team's original design documents related to the account abstraction layer AAL (with Chinese translation) as follows (ps: QTUM<#> or QTUMCORE<#> in the document is the internal design document number):
QTUMCORE-47: Add Account Abstraction Layer for contracts
Description: The account abstraction layer is a way for EVM contracts to work on the UTXO based blockchain. This story should cover nearly the entire module.
Funds can be sent to a contract using OP_CALL. When a contract receives or sends funds, this should result in a "condensing transaction". The condensing transaction will spend any existing contract vouts which require their balance to be changed, and the outputs will be the new balance of the contracts.
The condensing transaction is created so that a contract never owns more than 1 UTXO. This significantly simplifies coin picking, and prevents many attack vectors that involve filling up a block.
There may be more than 1 condensing transaction per block. In the case of a single contract having multiple balance changes in a block, a condensing transaction might spend a previous condensing transaction's outputs in the same block. This is slightly wasteful but reduces the complexity of the logic required, and allows for easily adding more contract execution transactions without needing to rewrite any previous transactions.
An ITD for the full behavior with all known edge cases is here: https://github.com/qtumproject/qtum-itds/blob/masteaal/condensing-transaction.md
Task: Add the contract's account abstraction layer (AAL)
Description: The Account Abstraction Layer (AAL) is a solution that enables EVM contracts to work on UTXO-based blockchains. This task covers almost the entire module.
You can use OP_CALL to send funds to the contract. When a contract receives or sends funds, it should result in a "condensing transaction." The transaction will cost any existing contract vouts and cause their balance to change, and outputs will become the new balance of the contract.
The condensing transaction is created so that the contract never has more than one UTXO. This greatly simplifies coin picking and prevents attacks that work by filling up blocks.
There can be more than one condensing transaction in each block. In the case where a single contract has multiple balance changes in a block, a condensing transaction may spend the output of a previous condensing transaction in the same block. This is a bit wasteful but reduces the complexity of the logic and makes it easy to add more contract-execution transactions without rewriting any of the previous transactions.
See the ITD for the operation of all known boundary scenarios:
https://github.com/qtumproject/qtum-itds/blob/masteaal/condensing-transaction.md
The above task describes the account abstraction layer as a whole. Since many of the preparations had already been completed, the task does not go into much implementation detail; interested readers can refer to the relevant design documents in the previous two articles of this series. The condensing transaction mentioned here is a very important concept: it prevents consensus failures caused by differences in the order in which contract balances are selected.
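To make the mechanism concrete, here is a conceptual sketch of how a condensing transaction could be assembled from the balance changes produced by a block's contract executions. CMutableTransaction, CTxIn, CTxOut, and COutPoint are Bitcoin Core types; BalanceChange and ScriptForContract are hypothetical names introduced only for this illustration, and the actual Qtum logic (see the ITD linked above) handles many more edge cases.
{code:cpp}
// Conceptual condensing-transaction builder (illustrative, not Qtum's code).
#include <map>
#include <primitives/transaction.h>  // CMutableTransaction, CTxIn, CTxOut
#include <script/script.h>           // CScript

struct BalanceChange {
    bool hasUtxo;            // did the contract hold funds before this block?
    COutPoint existingUtxo;  // the contract's single current UTXO, if any
    CAmount newBalance;      // the contract's balance after execution
};

// Assumed helper: builds the output script assigning funds to a contract.
extern CScript ScriptForContract(const UniversalAddress& addr);

// Assumes UniversalAddress defines operator< so it can key a std::map.
CMutableTransaction BuildCondensingTx(
    const std::map<UniversalAddress, BalanceChange>& changes)
{
    CMutableTransaction tx;
    for (const auto& entry : changes) {
        const BalanceChange& change = entry.second;
        // Spend the contract's previous UTXO so that, afterwards, the
        // contract never owns more than one.
        if (change.hasUtxo)
            tx.vin.push_back(CTxIn(change.existingUtxo));
        // Emit a single output carrying the contract's entire new balance.
        if (change.newBalance > 0)
            tx.vout.push_back(CTxOut(change.newBalance,
                                     ScriptForContract(entry.first)));
    }
    return tx;
}
{code}
Note how the one-UTXO invariant falls out of the construction: every affected contract's old output is consumed and replaced by exactly one new output.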
QTUMCORE-54: Add tests for Account Abstraction Layer for contracts
Description: Cover the Account Abstraction Layer changes from QTUMCORE-47 with tests. DONE
Task: Test the account abstraction layer of the contract
Description: The account abstraction layer modifications from QTUMCORE-47 must also be covered by tests.
The above task is mainly about thoroughly testing the newly designed account abstraction layer to ensure that it is safe before going live. Since the Qtum mainnet launch, the account abstraction layer has worked stably, largely thanks to sufficient testing beforehand.
QTUMCORE-58: Add additional AAL and Condensing TX validation rules
Description: We have just implemented the AAL. However, its validation rules are not completely secure yet. We should do the following when receiving a block with condensing executions:
Implementing this will allow us to remove the previous one-by-one OP_TXHASH checks (no need for double execution), and will allow the validation of all future VM tx types (including the recent condensing transaction validation ITD).
  1. First make sure stateRoot and utxoRoot hashes are part of the block hash computation.
  2. Process block using checkblock, and all other checks needed before validating transaction scripts and contract execution
  3. Create a completely new block, copying all header info from the original block
  4. Add coinbase and stake transactions from original block
  5. Add 1 transaction at a time from original block, executing contracts as needed
  6. Any expected OP_TXHASH transactions should be added to the block in order of creation. If there is an unexpected OP_TXHASH tx in the original block, it should be rejected (this might be changed later for soft-fork compatibility)
  7. Process every transaction until none remain
  8. Create a new merklehash and place it in the blockheader
  9. Compute the state root hashes and other EVM state data
  10. Compare the new block's hash to the old block's hash
  11. If the new block's hash matches, then accept it. Otherwise, reject the block
Pseudo-code:
{code:java}
block = receiveBlock();
checkBlock(block);
testBlock = new Block();
testBlock.header = block.header;
testBlock.merkleRoot = 0; // zero because the block holds no transactions yet
foreach (tx in block) {
    if (tx.hasExec()) {
        testBlock.add(tx); // add contract tx to block
        result = tx.exec();
        if (result.hasOpHashTx()) {
            foreach (opHashTx in result) {
                // add the resulting condensing tx from contract execution
                // (there may be more than 1 if a single tx has multiple EVM execs)
                testBlock.add(opHashTx);
            }
        }
    } else if (tx.isContractSpend()) {
        // don't add spends; they are regenerated by contract execution above
    } else {
        testBlock.add(tx); // add any other tx type: non-standard, pubkeyhash, etc.
    }
}
testBlock.calculateMerkleRoot();
testBlock.calculateStateRoots();
assert(testBlock.hash() == block.hash());
{code}
Task: Add additional account abstraction layer and condensing transaction validation rules
Description: We have just implemented the Account Abstraction Layer (AAL). However, its validation rules are not completely secure. When we receive a block containing condensing executions, we should do the following:
Implementing this feature will allow us to remove the previous one-by-one OP_TXHASH checks (no need to run them twice) and allow verification of future transaction types for all virtual machines (including the recent condensing transaction ITD verification).
  1. First make sure the stateRoot and utxoRoot hashes are part of the block hash operation
  2. Use checkblock to process the block, and run all other checks required before transaction script validation and contract execution
  3. Create a brand new block and copy all block header information from the original block
  4. Add coinbase and stake transactions from the original block
  5. Add 1 transaction from the original block at a time, run the contract as needed
  6. Any expected OP_TXHASH transactions should be added to the block in the order they were created. If an unexpected OP_TXHASH appears in the original block, it should be rejected (this may need to be modified later, in order to support soft fork compatibility)
  7. Process each transaction until all transactions have been processed
  8. Create a new merklehash and place it in the block header
  9. Calculate EVM state data such as the state root hashes
  10. Compare the new block hash with the old block hash
  11. If the new block hash matches, accept it. Otherwise, reject the block
Pseudo code:
{code:java}
block = receiveBlock();
checkBlock(block);
testBlock = new Block();
testBlock.header = block.header;
testBlock.merkleRoot = 0; // zero because the block holds no transactions yet
foreach (tx in block) {
    if (tx.hasExec()) {
        testBlock.add(tx); // add contract tx to block
        result = tx.exec();
        if (result.hasOpHashTx()) {
            foreach (opHashTx in result) {
                // add the resulting condensing tx from contract execution
                // (there may be more than 1 if a single tx has multiple EVM execs)
                testBlock.add(opHashTx);
            }
        }
    } else if (tx.isContractSpend()) {
        // don't add spends; they are regenerated by contract execution above
    } else {
        testBlock.add(tx); // add any other tx type: non-standard, pubkeyhash, etc.
    }
}
testBlock.calculateMerkleRoot();
testBlock.calculateStateRoots();
assert(testBlock.hash() == block.hash());
{code}
The above task details the node's validation rules for blocks containing contract transactions after the introduction of the account abstraction layer AAL. These rules are an important part of the Qtum blockchain consensus.
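As a footnote to step 1 of the validation list above: making stateRoot and utxoRoot part of the block hash computation amounts to extending the block header, since the block hash is computed over the serialized header. Below is a sketch; the field names mirror what Qtum's header appears to use, but treat the whole thing as illustrative rather than the actual definition.
{code:cpp}
// Illustrative extended block header. Because the block hash is computed
// over the serialized header, the two extra roots automatically become
// part of the hash, satisfying step 1 of the validation rules.
#include <cstdint>
#include <uint256.h>  // Bitcoin Core's 256-bit hash type

class CBlockHeader {
public:
    int32_t nVersion;
    uint256 hashPrevBlock;
    uint256 hashMerkleRoot;
    uint32_t nTime;
    uint32_t nBits;
    uint32_t nNonce;
    uint256 hashStateRoot;  // root of the EVM global state trie
    uint256 hashUTXORoot;   // root committing to the UTXO set

    // Serialization (and thus GetHash()) must include the two new fields;
    // the boilerplate is omitted here for brevity.
};
{code}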
summary
The original documents in this chapter mainly describe the design details of the Qtum account abstraction layer AAL. Although shorter than the previous installments, the AAL is an important innovation that makes Qtum compatible with both Bitcoin and the Ethereum virtual machine. Interested readers can further read "How to Bridge the Bitcoin and Ethereum Ecosystems through the Qtum Account Abstraction Layer", which analyzes the AAL in detail at the code level.
submitted by thisthingismud to Qtum [link] [comments]
