btdht.dht
Classes: DHT, DHT_BASE, Node, Bucket, RoutingTable
DHT
DHT_BASE
bind_ip (str)
    interface the DHT is bound to
bind_port (int)
    port the DHT is bound to
debuglvl (int)
    the DHT instance verbosity level
last_msg
    last time we received any message
last_msg_rep
    last time we received a response to one of our messages
ignored_ip (set)
    set of ignored IPs, in dotted notation
ignored_net (list)
    list of default ignored IP networks
myid (utils.ID)
    the DHT instance id, 160 bits long (20 bytes)
prefix (str)
    prefix prepended to all debug messages
root (RoutingTable)
    the RoutingTable instance in use
sock (socket.socket)
    the current DHT socket
stoped
    the state (stopped?) of the DHT
threads (list)
    list of the threading.Thread of the DHT instance
token
    Tokens sent with get_peers responses. Maps each ip address to a list of random tokens. At most one new token per ip is generated every 5 minutes, and a single token is valid for 10 minutes. On reception of an announce_peer query from an ip, the query is only accepted if that ip holds a valid token (one generated less than 10 minutes ago).
mytoken
    Tokens received in get_peers responses. Maps each ip address to the token received from that ip; needed to send announce_peer queries to that particular ip.
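The 5-minute generation / 10-minute validity rule described for the token map can be sketched as follows. This is a minimal illustration, not the library's actual code; the `TokenStore` name and its methods are invented here.

```python
import os
import time

# Hypothetical sketch of the token bookkeeping described above:
# at most one new token per ip every 5 minutes, each token valid 10 minutes.
GENERATE_EVERY = 5 * 60
VALID_FOR = 10 * 60

class TokenStore:
    def __init__(self):
        # ip -> list of (token, creation timestamp), newest last
        self._tokens = {}

    def token_for(self, ip, now=None):
        """Token to send in a get_peers response to `ip`."""
        now = time.time() if now is None else now
        tokens = self._tokens.setdefault(ip, [])
        if not tokens or now - tokens[-1][1] >= GENERATE_EVERY:
            tokens.append((os.urandom(4), now))
        # drop tokens older than the validity window
        self._tokens[ip] = [t for t in tokens if now - t[1] < VALID_FOR]
        return self._tokens[ip][-1][0]

    def is_valid(self, ip, token, now=None):
        """Accept an announce_peer from `ip` only with a still-valid token."""
        now = time.time() if now is None else now
        return any(tok == token and now - ts < VALID_FOR
                   for tok, ts in self._tokens.get(ip, []))
```

Keeping a list of tokens per ip (rather than a single value) is what lets a token issued up to 10 minutes ago stay valid even after a newer one has been generated.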
transaction_type
    Map between transaction ids and message types (to be able to match responses to the queries that caused them)
to_send
    A PollableQueue of messages (data, (ip, port)) to send
to_schedule
    A list of looping iterators to schedule, passed to _scheduler
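Matching a response back to its query relies on the transaction id carried by every KRPC message. A minimal sketch of that bookkeeping, with invented helper names (not the library's code):

```python
import os

# Hypothetical sketch: remember the message type for each transaction id
# so an incoming response can be matched to the query that caused it.
transaction_type = {}

def register_query(msg_type):
    """Pick a fresh transaction id and remember the query type."""
    tid = os.urandom(2)  # KRPC transaction ids are short byte strings
    while tid in transaction_type:
        tid = os.urandom(2)
    transaction_type[tid] = msg_type
    return tid

def match_response(tid):
    """Return the type of the query this response answers, or None."""
    return transaction_type.pop(tid, None)
```

Popping the entry on match keeps the map from growing without bound; in practice stale entries also need a periodic cleanup (compare the `clean` methods below).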
zombie
save(filename=None, max_node=None)
load(filename=None, max_node=None)
start(start_routing_table=True, start_scheduler=True)
stop
stop_bg
init_socket
is_alive
debug(lvl, msg)
sleep(t, fstop=None)
bootstarp(addresses=[("router.utorrent.com", 6881), ("grenade.genua.fr", 6880), ("dht.transmissionbt.com", 6881)])
build_table
announce_peer(info_hash, port, delay=0, block=True)
get_peers(hash, delay=0, block=True, callback=None, limit=10)
get_closest_nodes(id, compact=False)
sendto(msg, addr)
clean
clean_long
register_message(msg)
on_announce_peer_response(query, response)
on_announce_peer_query(query)
on_find_node_query(query)
on_find_node_response(query, response)
on_get_peers_query(query)
on_get_peers_response(query, response)
on_ping_query(query)
on_ping_response(query, response)
on_error(error, query=None)
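get_closest_nodes relies on the Kademlia XOR metric: the distance between two 160-bit ids is their bitwise XOR, read as an integer. A self-contained sketch of the selection, not the library's implementation:

```python
def xor_distance(id1, id2):
    """Kademlia distance between two 20-byte ids: XOR read as an integer."""
    return int.from_bytes(id1, "big") ^ int.from_bytes(id2, "big")

def closest_nodes(target, node_ids, limit=8):
    """The `limit` ids closest to `target` under the XOR metric."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:limit]
```

Because XOR distance is symmetric and unidirectional (every id has exactly one id at each distance), sorting by it gives a total order with no ties between distinct ids.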
Node
port
    UDP port of the node
last_response
    Unix timestamp of the last response received from this node
last_query
    Unix timestamp of the last query received from this node
failed
    Number of pending responses (incremented each time a query is sent to the node, reset to 0 on reception of any message from it)
id
    160-bit (20-byte) identifier of the node
good
    True if the node is a good node. A good node is a node that has responded to one of our queries within the last 15 minutes, or that has ever responded to one of our queries and has sent us a query within the last 15 minutes.
bad
    True if the node is a bad node (communication with the node is not possible). A node becomes bad when it fails to respond to 3 queries in a row.
ip
    IP address of the node, in dotted notation
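The good/bad rules above translate directly into timestamp and counter checks. A minimal sketch under the stated 15-minute and 3-failure thresholds; the `node_state` helper is invented here, not part of Node:

```python
import time

FIFTEEN_MINUTES = 15 * 60

def node_state(last_response, last_query, failed, now=None):
    """Classify a node as 'good', 'bad' or 'questionable' following the
    rules above: good on recent activity, bad after 3 unanswered queries."""
    now = time.time() if now is None else now
    good = (now - last_response < FIFTEEN_MINUTES) or (
        # has ever responded, and queried us recently
        last_response > 0 and now - last_query < FIFTEEN_MINUTES
    )
    if good:
        return "good"
    if failed >= 3:
        return "bad"
    return "questionable"
```

Nodes that are neither good nor bad are "questionable" in BEP 5 terms: they are kept but pinged before being trusted or evicted.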
compact_info
from_compact_infos(infos)
from_compact_info(info)
announce_peer(dht, info_hash, port)
find_node(dht, target)
get_peers(dht, info_hash)
ping(dht)
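compact_info and from_compact_infos follow BEP 5's compact node format: 26 bytes per node, the 20-byte node id followed by a 4-byte IPv4 address and a 2-byte big-endian port. A sketch of the codec (not the library's code; function names are illustrative):

```python
import socket
import struct

def to_compact_info(node_id, ip, port):
    """Encode a node as 26 bytes: 20-byte id + 4-byte IPv4 + 2-byte port."""
    return node_id + socket.inet_aton(ip) + struct.pack("!H", port)

def decode_compact_infos(infos):
    """Decode a concatenation of 26-byte compact node entries."""
    nodes = []
    for i in range(0, len(infos) - 25, 26):
        node_id = infos[i:i + 20]
        ip = socket.inet_ntoa(infos[i + 20:i + 24])
        (port,) = struct.unpack("!H", infos[i + 24:i + 26])
        nodes.append((node_id, ip, port))
    return nodes
```

This is the format carried in the "nodes" field of find_node and get_peers responses, which is why the class offers both a single-entry and a concatenated-blob decoder.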
Bucket
max_size
    Maximum number of elements in the bucket
last_changed
    Unix timestamp of the last time the bucket was updated
id
    A prefix identifier, from 0 to 160 bits long, for the bucket
id_length
    Number of significant bits in id
to_refresh
random_id
add(dht, node)
get_node(id)
own(id)
split(rt, dht)
merge(bucket)
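own(id) asks whether an id falls under the bucket's bit prefix, and split produces two children with one more significant bit each. A sketch of those checks, assuming the prefix is represented as a string of '0'/'1' characters of length id_length (a representation chosen here for illustration, not necessarily the library's):

```python
def id_to_bits(node_id):
    """20-byte id as a 160-character string of '0'/'1' bits."""
    return format(int.from_bytes(node_id, "big"), "0160b")

def bucket_owns(bucket_prefix, node_id):
    """True if node_id starts with the bucket's bit prefix."""
    return id_to_bits(node_id).startswith(bucket_prefix)

def split_prefixes(bucket_prefix):
    """Splitting a bucket yields two children, one more bit each."""
    return bucket_prefix + "0", bucket_prefix + "1"
```

The empty prefix (id_length 0) owns every id, which is the state of the single initial bucket before any split.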
RoutingTable
debuglvl (int)
    the routing table instance verbosity level
trie (datrie.Trie)
    the routing table storage data structure, an instance of datrie.Trie
stoped
    the state (stopped?) of the routing table
need_merge
    is a merge scheduled?
threads (list)
    list of the threading.Thread of the routing table instance
to_schedule
    a list of couples (weightless thread name, weightless thread function)
prefix
    prefix used in logs and thread names
zombie
start
stop
stop_bg
is_alive
register_torrent(id)
release_torrent(id)
register_torrent_longterm(id)
release_torrent_longterm(id)
register_dht(dht)
release_dht(dht)
empty
debug(lvl, msg)
stats()
heigth
find(id)
get_node(id)
get_closest_nodes(id, bad=False)
add(dht, node)
split(dht, bucket)
merge
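find(id) locates the bucket whose prefix covers a given id; with buckets keyed by bit-prefix strings, that is a longest-prefix match, which is what the trie makes fast. A self-contained sketch using a plain dict in place of datrie.Trie (an assumption for illustration only):

```python
def find_bucket(buckets, node_id_bits):
    """Longest-prefix match: the deepest bucket whose prefix covers the id.
    `buckets` maps bit-prefix strings to bucket objects; `node_id_bits`
    is the id as a string of '0'/'1' characters."""
    best = None
    for prefix in buckets:
        if node_id_bits.startswith(prefix) and (best is None or len(prefix) > len(best)):
            best = prefix
    return buckets.get(best)
```

In a well-formed routing table the prefixes partition the id space, so exactly one bucket matches any id; a real trie answers this in time proportional to the prefix length instead of scanning every key.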