1. 12 Jan, 2016 1 commit
  2. 14 Dec, 2015 1 commit
  3. 05 Nov, 2015 1 commit
  4. 27 Oct, 2015 1 commit
  5. 23 Oct, 2015 1 commit
  6. 18 Oct, 2015 1 commit
  7. 12 Oct, 2015 1 commit
  8. 03 Oct, 2015 1 commit
  9. 25 Sep, 2015 1 commit
  10. 16 Sep, 2015 1 commit
  11. 15 Sep, 2015 2 commits
  12. 09 Sep, 2015 2 commits
  13. 05 Sep, 2015 1 commit
  14. 23 Aug, 2015 2 commits
  15. 04 Aug, 2015 1 commit
    • bitswap/provide: improved rate limiting · 06b49918
      Juan Batiz-Benet authored
      This PR greatly speeds up providing and add.
      
      (1) Instead of idling workers, we move to rate-limited workers.
      The limit is set to 512, which means _up to_ 512 goroutines. This
      is a very small load on the node, as each worker is providing to
      the DHT, which mostly means waiting. It DOES put a large load on
      the DHT, but I want to try this out for a while and see whether
      it's a problem. We can decide later if it is a problem for the
      network (nothing stops anyone from re-compiling, but the defaults
      of course matter).
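      
      A minimal Go sketch of the rate-limited provide workers described
      in (1), using a counting semaphore to cap in-flight provides at
      512; the names (provideCollector, provideKeys, provideOne) are
      illustrative, not the actual bitswap identifiers.
      
        package main
        
        import (
            "context"
            "fmt"
            "sync"
            "time"
        )
        
        const provideWorkerMax = 512 // upper bound on concurrent provides
        
        // provideCollector drains provideKeys, running provideOne for each
        // key while never exceeding provideWorkerMax in-flight calls.
        func provideCollector(ctx context.Context, provideKeys <-chan string,
            provideOne func(context.Context, string) error) {
        
            sem := make(chan struct{}, provideWorkerMax) // counting semaphore
            var wg sync.WaitGroup
        
            for k := range provideKeys {
                sem <- struct{}{} // blocks once the cap is reached
                wg.Add(1)
                go func(k string) {
                    defer wg.Done()
                    defer func() { <-sem }()
                    // Each worker mostly waits on the DHT, so even hundreds
                    // of goroutines are a small load on the local node.
                    _ = provideOne(ctx, k)
                }(k)
            }
            wg.Wait()
        }
        
        func main() {
            keys := make(chan string, 4)
            go func() {
                for i := 0; i < 8; i++ {
                    keys <- fmt.Sprintf("key-%d", i)
                }
                close(keys)
            }()
            provideCollector(context.Background(), keys,
                func(ctx context.Context, k string) error {
                    time.Sleep(10 * time.Millisecond) // stand-in for a DHT provide
                    fmt.Println("provided", k)
                    return nil
                })
        }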
      
      (2) We add a buffer for provideKeys, which means we block the add
      process much less. This is a very cheap buffer, as it only stores
      keys (it may be even cheaper with a lock + ring buffer instead of
      a channel...). This makes add blazing fast: it was being rate
      limited by providing. Add should not be rate limited by providing
      (much, if at all), since the user just wants to store the data in
      the local node's repo. This buffer is initially set to 4096
      entries, which means:
      
        4096 * keysize (~258 bytes + Go overhead) ~ 1-1.5 MB
      
      This buffer only lasts a few seconds to minutes, and is an
      acceptable cost for the sake of very fast adds (this could be made
      a configurable parameter, certainly for low-memory-footprint use
      cases). At the moment this is not much memory, compared to block
      sizes.
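      
      A short sketch of the buffered provideKeys hand-off described in
      (2): the add path only has to enqueue a key, and blocks only when
      the 4096-entry buffer is full. Names and sizes mirror the
      description above but are illustrative.
      
        package main
        
        import "fmt"
        
        // ~4096 keys * ~258 bytes per key (plus Go overhead) ~ 1-1.5 MB
        const provideKeysBufferSize = 4096
        
        func main() {
            // The add path pushes keys here and returns immediately while
            // the buffer has room; provide workers drain it in the background.
            provideKeys := make(chan string, provideKeysBufferSize)
        
            // add path: effectively non-blocking
            for i := 0; i < 10; i++ {
                provideKeys <- fmt.Sprintf("key-%d", i)
            }
            close(provideKeys)
        
            // provide path: drains at whatever rate the DHT allows
            for k := range provideKeys {
                fmt.Println("providing", k)
            }
        }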
      
      (3) We make providing emit EventBegin() + Done(), so that we can
      track how long a provide takes and remove workers as they finish
      in bsdash and similar tools.
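      
      A generic illustration of the begin/done event pattern in (3),
      using a stand-in eventBegin helper rather than the project's
      actual event logger.
      
        package main
        
        import (
            "fmt"
            "time"
        )
        
        // eventBegin is a stand-in for an event logger's Begin call: it
        // records the start time and returns a Done-style function that
        // logs the elapsed duration when called.
        func eventBegin(name string) (done func()) {
            start := time.Now()
            fmt.Println("begin", name)
            return func() {
                fmt.Println("done", name, "after", time.Since(start))
            }
        }
        
        func main() {
            done := eventBegin("provide")
            defer done()
            time.Sleep(50 * time.Millisecond) // stand-in for the actual DHT provide
        }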
      
      License: MIT
      Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
  16. 16 Jul, 2015 1 commit
  17. 14 Jul, 2015 3 commits
  18. 13 Jul, 2015 2 commits
  19. 10 Jul, 2015 1 commit
  20. 07 Jul, 2015 1 commit
  21. 18 Jun, 2015 2 commits
  22. 12 Jun, 2015 2 commits
  23. 11 Jun, 2015 2 commits
  24. 01 Jun, 2015 1 commit
  25. 30 May, 2015 5 commits
    • handle error · 8cd12955
      Jeromy authored
    • parallelize block processing · bc186b26
      Jeromy authored
    • 89c950aa
      Jeromy authored
    • adjust naming · 5056a837
      Jeromy authored
    • Move findproviders out of main block request path · e5aa2acc
      Jeromy authored
      This PR moves the addition of new blocks to our wantlist (and
      their subsequent broadcast to the network) outside of the
      clientWorker loop. This allows blocks to propagate more quickly
      to peers we are already connected to; before, we had to wait for
      the previous findProviders call in clientWorker to complete
      before we could notify our partners of the next blocks we want.
      I then renamed clientWorker and related variables to fit the
      model better, although the new name (providerConnector) still
      feels a bit awkward and should probably be changed.
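      
      A rough Go sketch of the decoupling described above: wants are
      broadcast to already-connected peers immediately, while provider
      lookups run in a separate worker so they no longer gate the
      broadcast. All names (wantKeys, findKeys, the provider-connector
      goroutine) are illustrative, not the actual bitswap code.
      
        package main
        
        import (
            "fmt"
            "time"
        )
        
        func main() {
            wantKeys := make(chan string, 16)
            findKeys := make(chan string, 16)
        
            // provider connector: slow provider searches run off the hot path.
            done := make(chan struct{})
            go func() {
                defer close(done)
                for k := range findKeys {
                    time.Sleep(100 * time.Millisecond) // stand-in for DHT findProviders
                    fmt.Println("found providers for", k)
                }
            }()
        
            go func() {
                for _, k := range []string{"k1", "k2", "k3"} {
                    wantKeys <- k
                }
                close(wantKeys)
            }()
        
            // hot path: broadcast the want right away, queue the provider search.
            for k := range wantKeys {
                fmt.Println("broadcast want for", k, "to connected peers")
                findKeys <- k
            }
            close(findKeys)
            <-done
        }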
      
      fix test assumption
  26. 26 May, 2015 1 commit
  27. 22 May, 2015 1 commit