Hi Doc, there is no need to change the 8545 port in this case.
Advanced setup: customize the port of your Node
I guess I need to remove the port forwarding rule for that port then?
I followed the same rule setup I had on my AWS test server, so I had all 3 ports being forwarded to one machine on the LAN… so now that I will have 2 machines on my LAN, what do I need to do with this port, and where should it forward to? Or was this rule on AWS just to have the port open for outbound, and not an inbound port that the vNode has to listen on?
In my experience, 9334, 9433, and 8545 are the default ports (both on AWS and on your Node).
My teammate will explain it further. Please wait.
Ok, just to make sure we are on the same page… Both of my nodes are on my personal LAN (no longer hosted on AWS), and with two nodes on the same LAN I wasn’t sure if I need to add a port forwarding rule for 8545. The other two ports are mentioned in another post, with documentation on how/where to change them… but port 8545 is never mentioned. If the rule on AWS was just to open the port, and no incoming network requests arrive on it, then there is no issue. But if a server is hosted on that port listening for connections, then I would need to customize the port so both vNodes can get their own requests behind a single router. I am not sure I would know how to have both vNodes listening on the same port and get the traffic routed properly behind a single router.
Hey @doc,
Taking a deep dive into the run.sh file: your Incognito node(s) require at least 1 Ethereum light client. Either Infura or geth will work:
# eth with parity
docker run -ti --restart=always --net inc_net -d -p 8545:8545 -p 30303:30303 -p 30303:30303/udp -v $PWD/${eth_data_dir}:/home/parity/.local/share/io.parity.ethereum/ --name eth_mainnet parity/parity:stable --light --jsonrpc-interface all --jsonrpc-hosts all --jsonrpc-apis all --mode last --base-path=/home/parity/.local/share/io.parity.ethereum/
# OR eth with geth
# docker run --restart=always --net inc_net -d --name eth_mainnet -p 8545:8545 -p 30303:30303 -v $PWD/${eth_data_dir_geth}:/geth -it ethereum/client-go --syncmode light --datadir /geth --rpcaddr 0.0.0.0 --rpcport 8545 --rpc --rpccorsdomain "*"
So, you should open port 8545 (NAT/port forwarding rule).
Please also check this post on how to set up multiple nodes on a single server.
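Once the forwarding rule is in place, one way to confirm it works is to probe the port from outside your LAN (a sketch; YOUR_PUBLIC_IP is a placeholder for your router's WAN address, and netcat is assumed to be installed):

```shell
# Run this from a machine OUTSIDE your LAN (e.g. a phone hotspot or a VPS).
# Exit code 0 means the router forwarded the connection to the light client.
nc -vz -w 5 YOUR_PUBLIC_IP 8545 || echo "port 8545 not reachable from outside"
```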
Hey @khanhj, the post you suggested is the one I already read about getting the first two ports forwarded, but it is specific to running two clients on one machine. I am asking about having 2 separate devices on a single network behind a single router. From your reply I am not sure whether this configuration is even possible.
Since I have two separate machines, each running its own instance of the client, can I adjust the 8545 port in the run.sh file so I can forward traffic to each machine? Or is the 8545 port only needed for outbound connections, with no need to listen for incoming ones, so the port only needs to be open but not forwarded?
hi @doc,
I got your idea. The ETH light client is used for both inbound and outbound: it uses port 8545 to sync headers, and the Incognito node connects to the local ETH client to verify some data of bridge transactions before it inserts a new block.
There are 2 solutions to fix your problem:

- Keep Machine 1 with the ETH light client running on port 8545; change the ETH light client port on Machine 2, then configure port forwarding on your router. (This is what you said in your last post.)

- Register an account at https://infura.io/. In both run.sh files, set the GETH_NAME field to your mainnet.infura.io/v3 endpoint as in the sample below, and remove the command that runs the ETH light client.
# eth with parity
# docker run -ti --restart=always --net inc_net -d -p 8545:8545 -p 30303:30303 -p 30303:30303/udp -v $PWD/${eth_data_dir}:/home/parity/.local/share/io.parity.ethereum/ --name eth_mainnet parity/parity:stable --light --jsonrpc-interface all --jsonrpc-hosts all --jsonrpc-apis all --mode last --base-path=/home/parity/.local/share/io.parity.ethereum/
# OR eth with geth
# docker run --restart=always --net inc_net -d --name eth_mainnet -p 8545:8545 -p 30303:30303 -v $PWD/${eth_data_dir_geth}:/geth -it ethereum/client-go --syncmode light --datadir /geth --rpcaddr 0.0.0.0 --rpcport 8545 --rpc --rpccorsdomain "*"
docker run --restart=always --net inc_net -p $node_port:$node_port -p $rpc_port:$rpc_port -e NODE_PORT=$node_port -e RPC_PORT=$rpc_port -e BOOTNODE_IP=$bootnode -e GETH_NAME=mainnet.infura.io/v3/XXX86da1fdca4asgdgsgsXXX -e GETH_PROTOCOL=https -e GETH_PORT= -e MININGKEY=${validator_key} -e TESTNET=false -v $PWD/${data_dir}:/data -d --name inc_mainnet incognitochain/incognito-mainnet:${latest_tag}
This configures the Incognito node to connect to an external ETH full node to verify data.
- Sample script to run multiple validators on the same server, using the 3rd-party Infura service: run_3.sh
Let me know if it works!
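If you go with the Infura option, it may be worth confirming the endpoint responds before pointing the node at it (a sketch using the standard Ethereum JSON-RPC eth_blockNumber call; the project ID in the URL is a placeholder, substitute your own):

```shell
# A healthy endpoint returns the latest block number as a hex string,
# e.g. {"jsonrpc":"2.0","id":1,"result":"0x..."}
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://mainnet.infura.io/v3/YOUR_PROJECT_ID
```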
@khanhj
So I can just modify this line:
… --restart=always --net inc_net -d -p 8545:8545 …
with 8546:8546 and be good to go in run.sh? Then just set up port forwarding?
If I do that, I get the following output on “docker ps”
parity/parity:stable   "/bin/parity --   26 hours ago   Up 50 seconds   5001/tcp, 8080/tcp, 8082-8083/tcp, 8180/tcp, 0.0.0.0:8546->8546/tcp, 8545/tcp, 0.0.0.0:30303->30303/tcp, 0.0.0.0:30303->30303/udp   eth_mainnet
Specifically, I still see 8545/tcp even though I have modified run.sh for 8546.
I have not done any port forwarding or modifications for ports 30303, 5001, 8080, or 8082-8083; is that going to be an issue?
So, you went with method 1.
- you don't have to do port forwarding for those ports
- on your Machine 2, please try to stop eth_mainnet:
docker stop eth_mainnet
- then execute this command:
docker run -ti --restart=always --net inc_net -d -p 8546:8545 -p 30303:30303 -p 30303:30303/udp -v $PWD/${eth_data_dir}:/home/parity/.local/share/io.parity.ethereum/ --name eth_mainnet parity/parity:stable --light --jsonrpc-interface all --jsonrpc-hosts all --jsonrpc-apis all --mode last --base-path=/home/parity/.local/share/io.parity.ethereum/
=> this is how it looks on my system:
khanhlh@staking-khanhle:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7213ea635ab4 parity/parity:stable "/bin/parity --light…" 4 seconds ago Up 2 seconds 5001/tcp, 8080/tcp, 8082-8083/tcp, 8180/tcp, 8546/tcp, 0.0.0.0:30303->30303/tcp, 0.0.0.0:30303->30303/udp, 0.0.0.0:8546->8545/tcp eth_mainnet
d79526ad33e6 incognitochain/incognito:20200225_1 "/bin/sh run_incogni…" 8 days ago Up 8 days 0.0.0.0:9360->9334/tcp, 0.0.0.0:9460->9433/tcp inc_miner_stake20
khanhlh@staking-khanhle:~$ sudo netstat -nltp | grep docker-proxy
tcp6 0 0 :::8546 :::* LISTEN 12329/docker-proxy
tcp6 0 0 :::9360 :::* LISTEN 31838/docker-proxy
tcp6 0 0 :::9460 :::* LISTEN 31824/docker-proxy
tcp6 0 0 :::5432 :::* LISTEN 22026/docker-proxy
tcp6 0 0 :::30303 :::* LISTEN 12300/docker-proxy
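The key detail in the command above is the -p 8546:8545 mapping: the host listens on 8546 while parity keeps listening on its default 8545 inside the container, so parity's own configuration does not change. You can sanity-check the mapping from the host itself (a sketch; assumes curl is installed and the container is running):

```shell
# Host port 8546 forwards to the container's internal 8545, where parity listens.
# net_version is a lightweight JSON-RPC call; Ethereum mainnet returns "1".
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}' \
  http://127.0.0.1:8546 || echo "light client not reachable on 8546"
```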
I wanted to touch base with you again about this, as it's been 8 days and neither vNode's role has moved from "waiting", and in the 6 weeks I have been running them that has never happened… I have always had at least 1 committee selection between the two within that period, so I wanted to verify everything looks correct. Below are the outputs from docker ps and the port forwarding scheme on my router.
vNode 1, default ports (docker ps output)
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
b3a66831c3fd   incognitochain/incognito-mainnet:20200226_1   "/bin/sh run_incogni…"   9 days ago   Up 3 days   0.0.0.0:9334->9334/tcp, 0.0.0.0:9433->9433/tcp   inc_mainnet
328edf14671f   parity/parity:stable   "/bin/parity --light…"   9 days ago   Up 3 days   5001/tcp, 8080/tcp, 8082-8083/tcp, 8180/tcp, 0.0.0.0:8545->8545/tcp, 8546/tcp, 0.0.0.0:30303->30303/tcp, 0.0.0.0:30303->30303/udp   eth_mainnet
sudo netstat -nltp | grep docker-proxy
tcp6 0 0 :::9334 :::* LISTEN 2097/docker-proxy
tcp6 0 0 :::9433 :::* LISTEN 2085/docker-proxy
tcp6 0 0 :::30303 :::* LISTEN 2110/docker-proxy
tcp6 0 0 :::8545 :::* LISTEN 2141/docker-proxy
vNode 2, custom ports
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
730d5f255ec7   incognitochain/incognito-mainnet:20200226_1   "/bin/sh run_incogni…"   9 days ago   Up 8 days   0.0.0.0:9335->9335/tcp, 0.0.0.0:9434->9434/tcp   inc_mainnet
e1199e4ca95c   parity/parity:stable   "/bin/parity --light…"   9 days ago   Up 8 days   5001/tcp, 8080/tcp, 8082-8083/tcp, 8180/tcp, 0.0.0.0:8546->8546/tcp, 8545/tcp, 0.0.0.0:30303->30303/tcp, 0.0.0.0:30303->30303/udp   eth_mainnet
sudo netstat -nltp | grep docker-proxy
tcp6 0 0 :::9335 :::* LISTEN 1845/docker-proxy
tcp6 0 0 :::9434 :::* LISTEN 1833/docker-proxy
tcp6 0 0 :::30303 :::* LISTEN 1858/docker-proxy
tcp6 0 0 :::8546 :::* LISTEN 1884/docker-proxy
vNode 1: 192.168.1.128
vNode 2: 192.168.1.129
port forwarding configuration:
Hi Doc, all the data you have is correct. As there are more than 1000 Nodes now, it is normal to wait longer than before.
We are going to publish a post so you can know the probability of earning on a given day. Will keep you posted.
Any update on this? My vNode appears to remain with the following status:
Beacon height - variable, increasing.
Layer - Shard
Role - Committee
Shard ID - 0
Thanks.
If the beacon height is around 350k, then there is no problem: you are in the committee of Shard 0 and you will earn some PRV soon.
If the beacon height is much lower than 350k, there is still no problem. It means your node is still syncing. Please see this post. Currently, synchronization is quite slow.
Hey, I want to connect to an external parity/Infura endpoint. I checked the docker image, but I don't see where GETH_NAME is used inside it.
@abduraman, maybe you have some ideas here?
My node has settled down now and appears to be working reliably.
Correct me if I'm wrong, but I think 'Shard ID -2' means it's syncing, 'Shard ID -1' means it's waiting, and 'Shard ID [shard number]' is displayed when it's assigned to a shard in operation.
If an expert can confirm, @abduraman, hopefully this helps others too.
Correct, but you may earn when the shard ID is -1 or -2 (normally, you shouldn't). This abnormal case may be specific to the syncing state. Most of the time, the correct shard ID is shown. Please follow this topic for other details. As I remember, @mesquka was trying to find the issue.
Yep, this seems to be correct. I haven't had time to work on the app, but I'm hoping to make some changes to the 'status' field so it lets you know that the node is syncing, among other things. I hope to get back to this in the next few days.