Safe Network Container (including CLI)

GitHub source, Docker Hub Image

During development I thought a few times about containerizing the network. Over the weekend I played around with Docker (or rather the interchangeable Podman) and created an image that runs the network and auto-magically initializes the CLI. Run it with --network=host so that the container uses the host’s network interface and the network runs on localhost (bootstrap node on port 12000).

$ docker run --detach --network=host --name sn bzeeman/sn_container
$ # wait for a while (e.g. 1 minute) for the network to boot and setup
$ docker exec -it sn safe
Welcome to Safe CLI interactive shell!
$ docker stop sn && docker rm sn

I think there’s also an image produced by MaidSafe, but it does not include the CLI. I also can’t find the sources for that container, and I’m not sure how it’s supposed to be used.

Perhaps I can also turn the image into one that is already running the network, like a snapshot of a running network, so that starting the container immediately gives you the network up and running with an authorized CLI (instead of the current situation, where the container might be booting for some tens of seconds before it’s usable).
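If committing a container snapshot turns out to be viable, a rough sketch could look like this (the :warm tag is hypothetical, and note that docker commit captures only the filesystem, not running processes, so the network would still have to restart from whatever state was persisted to disk):

```shell
# Boot the container and give the network time to initialize
docker run --detach --network=host --name sn bzeeman/sn_container
sleep 60

# Snapshot the container's filesystem into a new image; a container
# started from it would skip any one-time setup that was written to disk
docker commit sn bzeeman/sn_container:warm
docker stop sn && docker rm sn
```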


I just found out about a topic on the main forum seeking to do something similar. I commented there about a solution I’m trying to find for not using --network=host as that is apparently only supported on Linux.

I thought --publish would do something similar. I’m running the nodes within the container on those ports, so I thought that would work, but clients from the host can’t connect… I assume I’m not accounting for something to do with bootstrapping, or something Docker-specific, here.
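For reference, my --publish attempt looks roughly like this (the port is an assumption based on the bootstrap node running on 12000; also note that --publish defaults to TCP, and if qp2p speaks QUIC over UDP the /udp suffix would be needed):

```shell
# Map the bootstrap node's port from the container to the host.
# Clients on the host would then try 127.0.0.1:12000.
docker run --detach \
  --publish 12000:12000/udp \
  --name sn bzeeman/sn_container
```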

Perhaps @tfa or @Scorch have an immediate idea about this? I know there has been some confusion and discussion about setting the (internal/external) IPs.

I’m using --local to boot the nodes, which will bind them to localhost. Passing --local-ip= doesn’t seem to work because it complains about IGD.

Edit: I think @tfa is hinting to a similar problem here.


Yeah, basically this is the run-down of the IGD error and the local-net support as I understand it (these are two separate issues from what I can see):

  • sn_node only supports, from what I gather, networks that run entirely on localhost or networks forwarded across NAT. LAN nets and the like, which don’t run on localhost but don’t require port forwarding either, don’t seem to be supported, I think… This is not a limitation of sn_node itself but comes from qp2p. This is what I understand @tfa and @mav were wrestling with, in part.
  • The IGD error you’re receiving is not actually a product of the above, but it is related. It seems like a bug with automatic port forwarding via UPnP, which I wrote about in this issue. This applies to nets that are port-forwarded across NAT, which has nothing really to do with running a LAN net, but it is a misleading error you’ll receive when you try. Manual port forwarding can be used to get around this (specify both --local-ip and --external-ip after setting up the port forwarding). If you’re running a LAN net or something, you’d still be dead in the water, as per the above point, probably due to lack of support (not due to this error).
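
To make the manual-forwarding workaround from the second point concrete (the addresses and port here are placeholders, not tested):

```shell
# After forwarding UDP port 12000 on the router to this machine,
# give the node both addresses explicitly so it never attempts UPnP/IGD:
sn_node \
  --local-ip=192.168.1.10 --local-port=12000 \
  --external-ip=203.0.113.7 --external-port=12000
```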

If this is also due to IGD, then I have a theory that might be worth testing…

I’ve little-to-no experience with Docker, so I don’t know if I can help necessarily. Consider everything that follows to be speculation. But the difference between --publish and --network=host seems related to the second issue: my guess is that --publish creates a “software network bridge” that probably looks like NAT traversal to the Docker instance (I’m getting all of this from this Stack Overflow question, btw).

But because that port mapping seems like a manual mapping to me, I think you can use --local-ip and --local-port, and then also specify --external-ip and --external-port to skip UPnP, which will dodge your IGD error. The --network=host option doesn’t virtualize the net interface with such a software bridge afaik, so I presume that’s why it works.

I’ll admit I don’t know enough about the software network bridge stuff to know exactly what those arguments should look like, but perhaps try something like this as an idea? This is pure speculation; I haven’t run this, nor do I know for sure that it will run, but this is where I’d start my investigation, at least.

sn_node --local --local-port=$port --external-port=$port --external-ip=${host_local_ip} --root-dir "${node_dir}/$port" --hard-coded-contacts='[""]' &

I don’t believe localhost resolves to the same local IP from the perspective of the Docker instance (unless you specify --network=host, perhaps), so my thinking is you’d need to pass the host’s local IP to the script and store it in $host_local_ip. This effectively maps the localhost of your Docker image to the localhost of the host while still virtualizing the interface. It would bypass UPnP, get rid of your IGD error, and still forward the port across the software bridge… Maybe… :upside_down_face:
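A sketch of what passing the host’s local IP into the container might look like (the HOST_LOCAL_IP variable name and the hostname -I extraction are my assumptions, and hostname -I is Linux-only):

```shell
# Take the first address reported by the host (`hostname -I` lists
# all local addresses separated by spaces)
host_local_ip=$(hostname -I | awk '{print $1}')

# Hand it to the container so the node start-up script can pass it
# along as --external-ip
docker run --detach \
  --env HOST_LOCAL_IP="$host_local_ip" \
  --publish 12000:12000/udp \
  --name sn bzeeman/sn_container
```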

If I’ve got some time, I’ll reinstall docker and maybe try it out myself though. I’m curious enough to play with it a bit :thinking:


No, that was not a real problem. If you read a later post, you will see that the problem was that the client needed another configuration file. Maybe you can try playing with the file I mentioned (~/.safe/node/node_connection_info.config)

At that time I succeeded in creating a Docker bridge network and a Safe network with 9 nodes. Each node was a Docker container in this bridge network and had its own IP address. I guess the default Docker bridge network would have worked the same; I created my own only to test IPv6.
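For anyone wanting to try something similar, a custom bridge network with one container per node could be sketched like this (the subnet, image name, and container names are made up, and IPv6 would additionally need --ipv6 plus an IPv6 subnet):

```shell
# Custom bridge network; each node container gets its own address on it
docker network create --subnet 172.25.0.0/24 sn-net

# Nine nodes, each in its own container with a fixed IP
for i in $(seq 0 8); do
  docker run --detach --network sn-net \
    --ip "172.25.0.$((10 + i))" \
    --name "sn-node-$i" hypothetical/sn_node_image
done
```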

EDIT: I forgot to mention I was using Docker on Windows (with Linux containers).

But with recent limitations added by @maidsafe, everything I could do at that time no longer works. And not just an IPv6 network: I cannot create any network (see Cannot start node 0.28.0 due to error: Routing(Network( IgdNotSupported)) - #6 by tfa).


I am happy to test out stuff with Podman on Fedora if anyone needs it.

I am moving in the direction of containerising everything with Podman (well, everything that makes sense at least, e.g. web sites), but a SAFE Podman container sounds like a great idea!


The container I introduced in my post should work fine in that context. I also mainly use Podman (on Arch Linux). You’re very welcome to test it out to see if it works.

Thanks @tfa and @Scorch, for looking into this. I’ve continued experimenting a bit with the node/qp2p configuration values (manually, with a Rust binary project), but failed to make it work without --network=host. It might be because I want to do it without bridged mode (I use Podman rootless, meaning it doesn’t configure network interfaces on the host).


I’ve worked on a bit of a different route to accomplish a kind of effortless testing with the network. What I had in mind was being able to clone a project repository and immediately run tests against a fully functioning network without too much effort.

The container I made earlier is quite handy for launching a network quickly for testing, allowing a project to test against a containerized network. But because I can’t really make it work outside of Linux, I thought about also running the tests within the container.

I ended up with an image that I made specifically for the Safe Node.js binding I’m working on. The image includes the nodes, CLI and authenticator! It pre-builds the dependencies, so that if you’re developing and make a small code change, the container only re-compiles the final library. When the container runs, a network is launched and the authenticator started; then the final code is compiled and the tests are run against that network and authenticator.
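The dependency pre-building relies on Docker’s layer cache, so the incremental workflow would look something like this (the image name is illustrative):

```shell
# First build compiles all dependencies (slow); later builds reuse those
# cached layers and only re-run the final compile step after a code change
docker build -t sn-nodejs-test .

# Running the container boots the network and authenticator, then builds
# the final library and runs the tests against them
docker run --rm sn-nodejs-test
```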

I also got interested in GitHub Actions, which ended up being really easy to use. I’ve just successfully seen the tests running.

I’ve used containers in the past, but using them for CI has been really interesting. I feel like I have reached the point where it’s indeed worth the effort I’ve put into it; it’s quite nice to be able to test in a reproducible and effortless way. Though I also feel there’s a lot more to learn and optimize.