# Notes on Header Accumulator and the Portal Network

An accurate view of the header chain is essential for secure portal client operation. All validation of data from the portal network links back to the header chain. For example:

- State:
  - Requires validation of a proof against `Header.state_root`
- History
  - Transactions:
    - Requires validation of a transaction against `Header.transactions_root`
  - Receipts:
    - Requires validation of a receipt against `Header.receipts_root`

The header chain is >6GB and grows linearly. This exceeds the storage allowances of portal clients, and thus we need a network design that does not require clients to store the full chain. A reasonable expectation is that clients will store *recent* headers but *forget* headers beyond some recent time horizon, perhaps keeping as few as 256 recent headers.

This poses a problem for retrieval of a header at a specific block number. At any height there can be many valid block headers, but only one that is canonical. The non-canonical ones are *uncles* (or were perhaps never included in the chain at all). The only way to validate that a header is part of the canonical chain would be to trace backwards using `Header.parent_hash` from the HEAD of the chain to the block number in question. This is a problem because it is an `O(N)` operation, and proving the canonicalness of a header sufficiently far back in history becomes infeasible within a reasonable amount of time.

The proposed solution is to use a double-batched merkle accumulator: https://ethresear.ch/t/double-batched-merkle-log-accumulator/571

Assuming a client has an accurate and recent view of this accumulator, any arbitrary header from history can be proven to be part of the canonical chain by providing a merkle proof that the header was included in the accumulator. This gives us a solid solution for proving the canonicalness of historical headers, but shifts the problem to acquisition and maintenance of the "header accumulator".
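To make the data structure concrete, here is a minimal single-level sketch of the batching idea in Python: header hashes are appended to a fixed-size epoch, each completed epoch is reduced to one merkle root, and an inclusion proof (epoch index, position, merkle branch) is checked against the stored epoch root. The names, the toy `EPOCH_SIZE`, and the use of SHA-256 over a simple binary tree are assumptions for illustration only; the actual design linked above (and the portal network specification) uses much larger epochs, a two-level batching scheme, and a different hashing/encoding.

```python
import hashlib
from dataclasses import dataclass, field

EPOCH_SIZE = 8  # toy value; real epochs batch far more headers

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def _pad(leaves: list[bytes]) -> list[bytes]:
    """Pad leaves to a power of two with zero hashes."""
    nodes = list(leaves)
    while len(nodes) & (len(nodes) - 1):
        nodes.append(b"\x00" * 32)
    return nodes

def merkle_root(leaves: list[bytes]) -> bytes:
    nodes = _pad(leaves)
    while len(nodes) > 1:
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def merkle_branch(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to recompute the epoch root for leaves[index]."""
    nodes = _pad(leaves)
    branch = []
    while len(nodes) > 1:
        branch.append(nodes[index ^ 1])
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
        index //= 2
    return branch

@dataclass
class Accumulator:
    epoch_roots: list[bytes] = field(default_factory=list)    # roots of completed epochs
    current_epoch: list[bytes] = field(default_factory=list)  # header hashes in the open epoch

    def add(self, header_hash: bytes) -> None:
        """Append the next canonical header hash; close the epoch when it fills."""
        self.current_epoch.append(header_hash)
        if len(self.current_epoch) == EPOCH_SIZE:
            self.epoch_roots.append(merkle_root(self.current_epoch))
            self.current_epoch = []

def verify_inclusion(acc: Accumulator, header_hash: bytes,
                     epoch: int, index: int, branch: list[bytes]) -> bool:
    """Check an inclusion proof (epoch, index, merkle branch) against the accumulator."""
    if epoch >= len(acc.epoch_roots):
        return False
    node = header_hash
    for sibling in branch:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == acc.epoch_roots[epoch]

if __name__ == "__main__":
    header_hashes = [h(number.to_bytes(8, "big")) for number in range(20)]
    acc = Accumulator()
    for hh in header_hashes:
        acc.add(hh)
    # Prove that the header at height 5 (epoch 0, index 5) is canonical.
    proof = merkle_branch(header_hashes[:EPOCH_SIZE], 5)
    assert verify_inclusion(acc, header_hashes[5], epoch=0, index=5, branch=proof)
```

The key property this is meant to show is that a client only needs the small list of epoch roots (plus the open epoch) to verify any historical header, not the full header chain itself.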
## Build it from scratch

The *trustless* way to do this is to build the entire accumulator from scratch. We start by assuming the client has a block header that it considers to be the head of the chain. The client then recursively uses `Header.parent_hash` to fetch the parent header all the way back to the genesis block of the chain. Once it has constructed a valid sequence of headers from the genesis block to its head block, it can construct the accumulator from those headers. Keeping the accumulator up-to-date involves following the head of the chain as it progresses and updating the accumulator accordingly.

This process can be parallelized by speculatively fetching a "skeleton" of the header chain, such as fetching every 1000th header, and then fetching the segments between these headers in parallel.

This approach is *easy* to implement, but imposes UX problems that make it non-viable for most portal client use cases. Even with the skeleton approach, it is prohibitively expensive with respect to computation, bandwidth, and storage for a resource-constrained device.

## Paths towards viable options

All of the options that I (piper) have imagined for acquiring an accurate and up-to-date view of the accumulator within portal client constraints involve some level of trust or uncertainty, but I believe they are viable.

We'll start with these base assumptions:

- The network allows us to request headers by their hash.
- The network allows us to ask someone for the *hash* of their accumulator at recent heights.
- The network allows us to ask someone for a copy of their accumulator at recent heights.

### Sampling

One approach to acquiring an accurate view of the accumulator would be to simply ask a random set of nodes for their accumulator and keep the one which is most popular. If we assume an honest majority of nodes, this approach works and is quite simple. It can, however, be griefed by malicious peers returning *junk* data.

### Heuristics for Validation

Assuming we acquire a recent view of the accumulator and want to validate that it is accurate, we should be able to accomplish this with some heuristics. If we acquire the accumulator from "Alice" and then request a random historical header and proof from "Bob" and validate that proof against the accumulator, we have the following possibilities:

- Proof is invalid:
  - A: Bob is malicious and gave us an invalid proof
  - B: Alice is malicious and gave us an invalid accumulator
- Proof is valid:
  - C: Bob and Alice are both malicious and both lying to us
  - D: The accumulator is valid

#### Testing "A" vs "B"

We can test `A` vs `B` by requesting the same header+proof from more peers. If all of the peers we request the proof from return an invalid proof, then we can with high probability assume that we are dealing with option `B`, and should discard our accumulator and try another peer. Otherwise we can assume we are dealing with option `A`. This test is relatively simple to execute.

#### Testing "C" vs "D"

We can test `C` vs `D` by performing a number of random samplings from the historical headers (a rough sketch of this sampling loop is included at the end of this note). The amount of work necessary to pass off a fraudulent accumulator grows exponentially as more and more samples from history are validated.

At each sample you would pick a random block number `n` and request the header at that height. If no proof can be retrieved from the network, you assume the accumulator is invalid. If the proof can be retrieved and is valid, **and** the returned header is also valid, then we know one of two things:

- E: The accumulator is correct
- F: The accumulator is incorrect **but** an attacker has generated a valid header at the requested height

The cost of executing situation `F` can be measured, since each header has a proof-of-work seal, and that seal can be validated. This means that each time we perform this check by requesting a random header+proof and validating them, we increase the amount of work that an attacker must do to *trick* us into accepting an invalid accumulator. By requesting headers at random, we increase the cost of performing this attack because the attacker has to actually generate valid proof-of-work headers at the requested heights. After performing a sufficient number of these random validations we can probabilistically assume that we are operating in case `E` and that the accumulator is indeed valid.

#### Additional security

Clients can also do any of the following to further increase their confidence:

- Validate headers against centralized providers.
  - When checking a random header to validate an accumulator, the returned header can also be validated against centralized sources like Infura or Etherscan.
- Validate headers against known checkpoints.
  - Clients can include specific checkpoints, either for the accumulator or as hard-coded header hashes at certain heights.
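Finally, here is the promised rough sketch of the "C vs D" sampling loop, under some assumptions: the helpers for fetching a header+proof from a peer, verifying a proof against our accumulator, and checking the proof-of-work seal are hypothetical placeholders (they are not part of any portal network specification), and the sample count is arbitrary. A real client would also fold in the "A vs B" re-querying and the checkpoint checks described above.

```python
import random
from typing import Callable, Optional, Tuple

# Hypothetical client helpers, injected as callables:
#   fetch_header_with_proof(number) -> (header, proof) from a random peer, or None
#   verify_accumulator_proof(accumulator, header, proof) -> bool
#   verify_pow_seal(header) -> bool  (ethash seal check; expensive for an attacker to forge)

def accumulator_looks_valid(
    accumulator,
    chain_height: int,
    fetch_header_with_proof: Callable[[int], Optional[Tuple[object, object]]],
    verify_accumulator_proof: Callable[[object, object, object], bool],
    verify_pow_seal: Callable[[object], bool],
    samples: int = 20,
) -> bool:
    """Probabilistically validate an accumulator received from a peer.

    Each round samples a random historical block number and asks the network for
    the header at that height plus an inclusion proof against our accumulator.
    A missing or invalid proof is treated as evidence that the accumulator is bad
    (cases A/B, which a real client would disambiguate by re-querying more peers).
    Every round that returns a valid proof *and* a valid proof-of-work seal raises
    the cost of case F (a fabricated chain segment) for an attacker.
    """
    for _ in range(samples):
        number = random.randrange(chain_height)
        result = fetch_header_with_proof(number)
        if result is None:
            # Nobody on the network can prove this height: assume the accumulator is invalid.
            return False
        header, proof = result
        if not verify_accumulator_proof(accumulator, header, proof):
            return False
        if not verify_pow_seal(header):
            # The header itself is forged, so the proof proves nothing.
            return False
    # After `samples` successful rounds we assume case E: the accumulator is valid.
    return True
```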
