Python requests for Hive JSON-RPC


Context

Expanding on a reply I wrote yesterday.
Built on code from an earlier post.


JSON-RPC Client

Building your own custom Hive client is the most direct, bare-bones way to interface with Hive's native API, which uses the JSON-RPC protocol.
You don't have to depend on any third-party software or infrastructure.

Example:

import requests

api_url = 'https://api.hive.blog'

def hive_api_get_properties(url):
    # JSON-RPC 2.0 request body, sent as a raw string via HTTP POST
    data = '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "id":1}'
    response = requests.post(url, data=data)
    return response.json()["result"]

print(hive_api_get_properties(api_url))

This example uses the Python requests library to connect to one of Hive's publicly available nodes.
You could go as far as building your own HTTP client, but for this guide I will use requests.
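
For illustration only, a minimal sketch of the same call with nothing but the standard library's http.client; this is not part of the guide's code:

import http.client
import json

# hypothetical no-dependency variant of the example above
conn = http.client.HTTPSConnection('api.hive.blog')
payload = '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "id":1}'
conn.request('POST', '/', body=payload)
print(json.loads(conn.getresponse().read())['result'])
conn.close()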

JSON-RPC Sessions

A single session can carry more than one JSON-RPC call and response over the same connection.
This can make your client faster and more reliable than anything you could build with higher-level, more abstract libraries like Beem.

Example:

def get_response(data, url, session: requests.Session):
    # prepare the request by hand, so it can be sent over an existing session
    request = requests.Request("POST", url=url, data=data).prepare()
    response = session.send(request, allow_redirects=False)
    return response

This function prepares a requests.Request, needs a requests.Session and returns a response.
This might sound needlessly complicated, but I hope it makes more sense in the next step.

def get_properties(url, session: requests.Session):
    data = '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "id":1}'
    response = get_response(data, url, session)
    properties = response.json()['result']
    return properties

This function makes the same call as the first example above, but with extra steps, so I can pass a Session through the stack.
The goal is to be able to send two Requests through the same session.

def get_block(num, url, session: requests.Session):
    data = '{"jsonrpc":"2.0", "method":"block_api.get_block", "params":{"block_num":' + str(num) + '},"id":1}'
    response = get_response(data, url, session)
    block = response.json()['result']['block']
    return block

This API call needs another parameter: a block number.
I want to 'chain' the two calls and pass the last_irreversible_block_num into the get_block call, getting to the latest block in one single step, within a single HTTPS session.

def get_last_block(url):
    with requests.Session() as session:
        # both calls travel over this one session: first fetch the
        # last_irreversible_block_num, then fetch that block
        last_irreversible_block_num = get_properties(url, session)['last_irreversible_block_num']
        last_block = get_block(last_irreversible_block_num, url, session)
        return last_block

The last layer of abstraction.
get_last_block is a custom function that does something I have seen in no library, no tutorial, no guide.
It's actually quite a big deal in my opinion.

Save as api.py and then use it like this:

import api

api_url = 'https://api.hive.blog'

print(api.get_last_block(api_url))

So in your main.py you can use a simple, single statement.

Bad Example

All other libraries need two HTTPS handshakes for the same result.
To compare, I built a bad_api.py:

import requests

def get_last_block_num(url):
    data = '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "id":1}'
    response = requests.post(url, data=data)
    num = response.json()['result']['last_irreversible_block_num']
    return num

def get_block(num, url):
    data = '{"jsonrpc":"2.0", "method":"block_api.get_block", "params":{"block_num":' + str(num) + '},"id":1}'
    response = requests.post(url, data=data)
    block = response.json()['result']['block']
    return block

Comparison

import api
import bad_api
import time

api_url = 'https://api.hive.blog'

start = time.time()
block = api.get_last_block(api_url)
print(time.time() - start)

time.sleep(5)

start = time.time()
num = bad_api.get_last_block_num(api_url)
block = bad_api.get_block(num, api_url)
print(time.time() - start)

Result:

0.9454114437103271
1.1649067401885986

It works and is consistently faster.
0.2 seconds faster may not be much, but you can:

  • reduce stress on the nodes
  • create a more reliable app (see the sketch after this list)
  • avoid getting banned from a node for making too many HTTPS calls in too little time
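
For example, a hedged sketch (not part of the original api.py) of how get_response could take a timeout and a simple retry while still reusing the open session:

import requests

def get_response(data, url, session: requests.Session, retries=3):
    # hypothetical variant of get_response with a timeout and a simple retry;
    # each retry reuses the same open session, so no new handshake is needed
    request = requests.Request("POST", url=url, data=data).prepare()
    for attempt in range(retries):
        try:
            return session.send(request, allow_redirects=False, timeout=10)
        except requests.RequestException:
            if attempt == retries - 1:
                raise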

Notes

  • The method assumes that both database_api and block_api are accessible on the same node (a simple guard for that is sketched below).
  • If you accessed your own node locally, the difference would probably fade.
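
Per the JSON-RPC 2.0 spec, a failed call returns an "error" member instead of "result", so a hypothetical helper (not part of the code above) could check for that before indexing:

def get_result(response):
    # raise loudly when the node returns a JSON-RPC error object,
    # instead of letting the ['result'] lookup fail with a KeyError
    payload = response.json()
    if 'error' in payload:
        raise RuntimeError(payload['error'])
    return payload['result']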

In the end, it all depends on what you want to build.

Maybe you don't like my style or flavor of coding;
feel free to contribute to this, build upon it, or ignore it.

Final Thoughts

@slobberchops' post triggered me to post this.

I am tired now and don't want to continue.
Maybe I should refine the above more, but for whom?
Using sessions as above is just an example of what I have taught myself.
It took me a long time to learn all this, and I don't know how valuable it is to others.
Maybe it's so trivial that nobody else bothered mentioning it.

I have built my own framework from scratch with the above methods, and it seems bulletproof.
Cycling nodes, an unbreakable blockstream with resync... everything you could wish for, really.
It can basically do the same as HAF, in only 200-300 lines of code, and without the need to operate your own node.
You could build your own custom API, or Hive-SQL, or Hive-MongoDB, in a few more steps, and I could demonstrate how.
I don't want to give it all away for free, though.

I have no chance to get funding from the DHF, so I won't even bother.

If you need help with Python or JS for Hive, and found the above helpful, find me on Discord; maybe I can help.

Comments

Thanks mate, I will need to run this up to see what you have done, otherwise it's cryptic to read. I always like to understand things properly.

I have no chance to get funding from the DHF, so I won't even bother.

Probably not 😀

Had a shitty day at work, it's some 'Assassin's Creed Odyssey' for me, a waste of time I know...

Gaming beats coding any day of the week. I'm off playing Paladins

Bookmarked and dropped a follow: tonight I'm going to study your code!


I think it's hard to grasp why Beem gets referenced so much more than lighthive in posts like this. I'm absolutely not a Beem fan, but lighthive is pretty good.

I wrote a Python lib for this way back in steemit days, but it was Python 2.x using the Twisted async framework.

I'm currently working on a less generic Python lib called aiohivebot, not really suitable for command-line tools and other basic tools, but quite powerful if the project fits the abstractions and requires a proper async solution. It's not yet production ready, but maybe you'd like to give it a look.

With respect to making multiple requests, keeping the session open is great, but JSON-RPC defines batches too, which is even better. The problem with JSON-RPC batches, though, is that something about the tech stack of the public API nodes seems to be non-uniform across nodes. I know at least one node supports them, but most don't, which is a shame. But if you are going to rely on just one node (many nodes seem to have regular availability hiccups), you could pick one that supports JSON-RPC batches.
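
For illustration, a JSON-RPC batch is just a JSON array of request objects sent in a single POST; a minimal hypothetical sketch, assuming a node that accepts batches:

import requests

# hypothetical batch: two request objects in one array, one POST;
# as noted above, most public Hive nodes do not accept batches
batch = '''[
    {"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "id":1},
    {"jsonrpc":"2.0", "method":"block_api.get_block", "params":{"block_num":1}, "id":2}
]'''
responses = requests.post('https://api.hive.blog', data=batch).json()
print(responses)  # one response object per request, matched up by id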

I have looked at aiohivebot before.

I also know about batch calls.

If you want, for example, more data than the batch size allows (pagination), you could use the above approach to keep the connection open for the next call; it could be faster than lighthive, or any other library.
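
For instance, a hypothetical sketch of fetching several consecutive blocks over one open session, reusing get_properties and get_block from the api.py above:

import requests
import api  # the api.py from the post above

api_url = 'https://api.hive.blog'

# one open session, several sequential calls, no repeated handshakes
with requests.Session() as session:
    num = api.get_properties(api_url, session)['last_irreversible_block_num']
    blocks = [api.get_block(n, api_url, session) for n in range(num - 4, num + 1)]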

As for async and other stuff, I have a better approach and just posted about it.
For some approaches, it doesn't really matter whether you use Beem or lighthive or requests or build your own lib...