Before we can do anything, we need to be sure that the software Subsquid relies on is installed.
Install the base dependencies first:
sudo apt update && sudo apt upgrade && sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
Now, we'll need to update our apt sources to include the Docker-CE repository.
These commands must be run as root.
sudo su
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
Once we have added the repo and the necessary key, we want to exit the root session.
exit
Note: Your shell prompt displays a $ if you're a non-root user; in an elevated (root) TTY session it displays a # instead.
Since we've modified our apt sources, we'll need to update them before we can install docker-ce:
sudo apt update && sudo apt install docker-ce
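To confirm the Docker daemon is up and working, you can run a quick sanity check with Docker's standard hello-world test image:
sudo docker run hello-world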
Now that we have Docker-CE installed, we'll need to add our user to the docker group.
(This prevents us from having to run docker commands with elevated (root) permissions.)
sudo usermod -aG docker $USER
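Note that group membership is only re-evaluated at login, so log out and back in (or start a new group session) before testing that docker works without sudo; for example:
newgrp docker
docker run hello-world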
With Docker-CE installed, we simply need to run the following command to install docker-compose:
sudo apt install docker-compose
With docker-compose installed, we need to create a link to the binary so it's on the default PATH (if your package manager already placed it in /usr/bin, you can skip this):
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
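You can verify that the binary resolves correctly with:
docker-compose --version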
First, we'll install Node.js along with the node package manager (npm). (On Debian-based distros the package is named nodejs, not node.)
sudo apt install nodejs npm
Once we have those installed, we'll install node version manager (nvm)
curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
That will have made changes to your ~/.bashrc file. Let's source that so that they take effect immediately.
source ~/.bashrc
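You can confirm that nvm was loaded into the current shell with:
nvm --version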
Now, we'll use nvm to install a newer version (we need 16 or greater for Subsquid)
nvm install node
We can check our installed version with
node --version
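If the reported version is below 16, you can install and switch to a specific release instead; for example:
nvm install 16
nvm use 16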
Now that we have our dependencies resolved, we'll move on to installing and using Subsquid
We'll use npm to install subsquid:
sudo npm i -g @subsquid/cli@latest
We can confirm successful installation by checking the version
sqd --version
With Subsquid successfully installed, we'll go ahead and create a new project. In this example, we'll be indexing data from the ETH / USD price feed on Ethereum Mainnet.
sqd init creates the project directory for us (no mkdir needed), so we initialize from the template, change into the new directory, and install dependencies:
sqd init squid-eth-usd-ethereum-proxy --template https://github.com/subsquid/squid-abi-template
cd squid-eth-usd-ethereum-proxy
npm i
Before moving on to the next step, we'll need the contract's ABI in a locally stored JSON file.
If you don't have one, you can fetch the ABI of a deployed contract by following the directions in the next section. If you already have the ABI, skip ahead a bit.
Save the below into a file named fetch_abi.py:
#!/usr/bin/python
import argparse
import json

import requests

# Exports a deployed contract's ABI as JSON via the Etherscan API
ABI_ENDPOINT = 'https://api.etherscan.io/api?module=contract&action=getabi&address='

parser = argparse.ArgumentParser()
parser.add_argument('addr', type=str, help='Contract address')
parser.add_argument('-o', '--output', type=str, help='Path to the output JSON file', required=True)

def main():
    args = parser.parse_args()
    # Etherscan returns the ABI as a JSON-encoded string in the 'result' field
    response = requests.get('%s%s' % (ABI_ENDPOINT, args.addr))
    response_json = response.json()
    abi_json = json.loads(response_json['result'])
    result = json.dumps({"abi": abi_json}, indent=4, sort_keys=True)
    with open(args.output, 'w') as f:
        f.write(result)

if __name__ == '__main__':
    main()
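Note that the script relies on the third-party requests library; if it isn't already present, install it with:
pip3 install requests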
Once saved, we can run it with the python3 command. The -o flag sets the file name you'd like the downloaded ABI to have; example below:
python3 fetch_abi.py 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 -o EACAggregatorProxy.json
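To sanity-check the download, you can pretty-print the file and confirm it contains an abi key:
python3 -m json.tool EACAggregatorProxy.json | head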
Now that we have the necessary ABI file, we can begin ingestion.
Run the generate command with the necessary flags (each flag explained below):
sqd generate \
--address 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 \
--archive https://$ARCHIVE-RPC-ENDPOINT \
--abi ~/stuff/abiFiles/EACAggregatorProxy.json \
--function '*' \
--event '*' \
--from 10606500
Options:
--address <contract> contract address
--archive <url> archive endpoint
--abi <path> (Optional) path or URL to the abi file. If omitted, the Etherscan API is used.
-e, --event <name...> one or multiple events to be indexed. '*' will index all events
-f, --function <name...> one or multiple contract functions to be indexed. '*' will index all functions
--from <block> start indexing from the given block.
--etherscan-api <url> (Optional) an Etherscan-compatible API to fetch contract ABI by a known address. Default: https://api.etherscan.io/
In the above example, we specified the contract address of the ETH / USD proxy, the archive RPC endpoint, the ABI file, the desired functions and events, and the block number to begin ingestion from (we referenced Etherscan to see which block the contract was deployed in and subtracted one).
You should see output similar to below:
GENERATE
14:52:05 INFO sqd:squidgen running typegen...
14:52:05 INFO sqd:squidgen processing "contract" contract...
14:52:05 INFO sqd:evm-typegen saved src/abi/abi.support.ts
14:52:05 INFO sqd:evm-typegen processing /home/$USERNAME/PATH/abiFiles/EACAggregatorProxy.json
14:52:05 INFO sqd:evm-typegen saved src/abi/EACAggregatorProxy.abi.ts
14:52:05 INFO sqd:evm-typegen saved src/abi/EACAggregatorProxy.ts
14:52:07 WARN sqd:squidgen readonly function "accessController" skipped
14:52:07 WARN sqd:squidgen readonly function "aggregator" skipped
14:52:07 WARN sqd:squidgen readonly function "decimals" skipped
14:52:07 WARN sqd:squidgen readonly function "description" skipped
14:52:07 WARN sqd:squidgen readonly function "getAnswer" skipped
14:52:07 WARN sqd:squidgen readonly function "getRoundData" skipped
14:52:07 WARN sqd:squidgen readonly function "getTimestamp" skipped
14:52:07 WARN sqd:squidgen readonly function "latestAnswer" skipped
14:52:07 WARN sqd:squidgen readonly function "latestRound" skipped
14:52:07 WARN sqd:squidgen readonly function "latestRoundData" skipped
14:52:07 WARN sqd:squidgen readonly function "latestTimestamp" skipped
14:52:07 WARN sqd:squidgen readonly function "owner" skipped
14:52:07 WARN sqd:squidgen readonly function "phaseAggregators" skipped
14:52:07 WARN sqd:squidgen readonly function "phaseId" skipped
14:52:07 WARN sqd:squidgen readonly function "proposedAggregator" skipped
14:52:07 WARN sqd:squidgen readonly function "proposedGetRoundData" skipped
14:52:07 WARN sqd:squidgen readonly function "proposedLatestRoundData" skipped
14:52:07 WARN sqd:squidgen readonly function "version" skipped
14:52:07 INFO sqd:squidgen running codegen...
14:52:07 INFO sqd:squidgen generating processor...
Lastly, run the following commands:
sqd build
sqd up
Once you have the PostgreSQL container running (check with docker ps), you'll need to edit your .env file to contain an additional value for DB_HOST:
DB_HOST=$PG_CONTAINER_IP_OR_HOST_LOCAL_IP
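If you want to point DB_HOST at the container directly, you can look up its address with docker inspect (substitute the actual container name shown by docker ps; the name below is only an example):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' squid-eth-usd-ethereum-proxy_db_1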
Once that's updated, we can run the following commands
sqd migration:generate
sqd process
If you'd like to explore the indexed data in a GraphQL web interface, simply run the following command:
sqd serve
and the GraphQL playground will be available at
http://localhost-or-ip-address:4350/graphql
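As a quick smoke test, you can also query the endpoint from the command line. The entity name below is only illustrative; use whichever entities sqd generate produced in your schema.graphql:
curl -s -X POST http://localhost:4350/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ answerUpdatedEvents(limit: 5) { id } }"}'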