# Portal Network Brain Dump

> ignore this, it's just me jotting down structure for a talk....

- What is it?
- Where did it come from?
- How does it work (at a high level)?
- What does this mean for: users, clients, wallets?

## Layering features/apis on top of base Portal Network functionality

How can we build layers on top of the portal network APIs? The value added should be found in things that:

- can be parallelized:
    - Many concurrent requests performed by independent actors coordinating around a single task.
- are of common interest:
    - Many actors benefiting from deduplicating common work and taking advantage of equivalent work that was already performed.
- are provable or verifiable:
    - The result must be easy to verify.

Candidates:

- log filtering
    - search can be parallelized
    - cooperative ongoing filters for *new* occurrences might be tractable
        - requires an "index" of the logs.
    - easy to verify that a provided result is valid...
    - prohibitively difficult to verify that something wasn't omitted.
        - but easy to prove someone omitted something if the system is appropriately designed.
- proof generation
    - parallelizing trie traversal in a `GetNodeData` style network.
    - there are commonly accessed parts of the trie
- finding transactions for a specific sender
    - the `nonce` values of known transactions, together with the current account nonce, can bound the search space (see the sketch at the end of these notes).
    - some sort of advertisement that someone is seeking specific transactions.
    - maybe use bloom filters for aggregating information

## Thoughts on incentives

Many of the layered functions can be served more efficiently by a larger node with full local indices over the data being queried. If we try to *pay* people for their work, then these sorts of nodes will "win" any race to provide the requested data. This naturally turns this type of functionality into a client/server model, which may be ok, but it also goes against the idea that even nodes with a small amount of resources can contribute...

One possible conclusion to be drawn from this is that adding incentives will have a negative effect on the network, since a small number of participants will likely be the ones able to capture the majority of the profit.

# Ways to speed up portal network data access

Some parts of the trie are going to be accessed more often than others. Traversing the network is expensive because of the extra round trips needed to drill down into a section of the neighborhood that you aren't familiar with. Specifically, the top few layers of the trie tend to be full.

The total trie size can be *inferred* by probing the trie in random locations to determine the average depth. Knowing the total size of the trie and the location of a node within it, we should be able to probabilistically determine the likelihood of that node being hit by any random trie access. This number can then be used to determine how many times a piece of data should be replicated.

For replication, we change the *storage* rules to use some *modulo* style operator to divide the full address space up into evenly sized sections. The idea being that there would be `N` sections, where `N` is the replication factor, and that within each of the `N` sections there would be one location in which the data belonged (sketched below).
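
To make that concrete, here's a minimal sketch of the modulo-style storage rule, assuming 256-bit content ids; `replica_locations` is a hypothetical name, not from any spec:

```python
# Minimal sketch of the modulo-style replication rule described above.
# Assumes 256-bit content ids; names here are hypothetical.
ADDRESS_SPACE = 2**256

def replica_locations(content_id: int, replication_factor: int) -> list[int]:
    """Return one storage location per section for a piece of content.

    The address space is split into `replication_factor` evenly sized
    sections, and the content maps to the same relative offset in each.
    """
    section_size = ADDRESS_SPACE // replication_factor
    offset = content_id % section_size
    return [i * section_size + offset for i in range(replication_factor)]
```

Since every node can compute all `N` locations locally, a lookup that misses in one section can retry in another without any extra coordination.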
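
And a rough sketch of the size-inference idea from the paragraph above, assuming a hex (16-ary) trie whose upper layers are effectively full, so a random leaf sits at depth ~log16(size); `probe_depth` is a stand-in for whatever network walk actually measures depth:

```python
import random

# Rough sketch: estimate trie size from the average depth of random probes,
# assuming a 16-ary trie with full upper layers. `probe_depth` is a
# hypothetical callable that walks toward a random key and returns the
# depth at which it reached a leaf.

def estimate_trie_size(probe_depth, samples: int = 100) -> float:
    """Estimate leaf count as 16**(average probed depth)."""
    depths = [probe_depth(random.getrandbits(256)) for _ in range(samples)]
    return 16 ** (sum(depths) / len(depths))

def access_probability(depth: int) -> float:
    """Chance a uniformly random access passes through a given node,
    assuming full upper layers: 1/16 per level of depth."""
    return 16 ** -depth
```

A node's replication count could then be set proportional to `access_probability(depth)`, so the hot top-of-trie nodes get many copies and deep leaves get few.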
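
Back up in the candidates list, the nonce-bounding idea for finding a sender's transactions can be a plain binary search. `get_nonce` is a hypothetical accessor for an account's nonce as of a given block, not a real Portal Network API:

```python
# Sketch: locate the block containing a sender's transaction with a given
# nonce. Relies on the fact that an account's nonce only ever increases,
# so get_nonce(address, block) is monotonic in the block number.

def find_block_for_nonce(get_nonce, address: str, target_nonce: int,
                         lo: int, hi: int) -> int:
    """Binary search for the block where `address` sent the transaction
    with nonce `target_nonce`, given bounds such that
    get_nonce(address, lo) <= target_nonce < get_nonce(address, hi)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if get_nonce(address, mid) > target_nonce:
            hi = mid        # tx was included at or before `mid`
        else:
            lo = mid + 1    # tx comes after `mid`
    return lo
```

Each probe halves the range, so roughly log2(hi - lo) lookups pin down one transaction, and independent actors can split the nonce range between them.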
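
And for the advertisement/bloom-filter bullet, a toy filter that nodes could merge to aggregate "transactions I'm seeking" announcements; the sizing here is illustrative, not tuned:

```python
import hashlib

# Toy bloom filter for advertising sought-after items (e.g. tx hashes).
# Merging two filters is a bitwise OR, which is what makes aggregating
# interest across many independent actors cheap.

class InterestFilter:
    def __init__(self, num_bits: int = 2048, num_hashes: int = 3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest, "big") % self.num_bits

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: bytes) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

    def merge(self, other: "InterestFilter") -> None:
        self.bits |= other.bits
```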