The future of SAFE Launcher

These are difficult problems and the proposal is a great step in the right direction. It’s stimulated a lot of purposeful discussion already.

Here are my very-high-level (possibly impractical) thoughts:


  1. The safe network is a data store. Look at existing libraries for storing data: file system, HTTP, database etc. Also look at existing libraries for communicating data: HTTP, websocket, IRC etc. Dev libraries / bindings to interact with the safe network should look similar to these for ease of use. Data flow on the safe network is permissioned and bidirectional so has inherent complexity, but ultimately the dev just wants to dump some data somewhere (or fetch it). Let’s keep that use case in mind when designing the launcher interfaces. Screwing around with namespaces and token expiry etc is a sure way to complicate the life of devs (but is probably a necessary evil).

  2. Having to ‘create an app’ for the safe network just to save some data sounds terrible to me as a dev (high cognitive overhead). I feel the launcher needs to operate at a much simpler abstraction than that. At least, the interfaces and documentation should be at a simpler abstraction. Might not be possible…?

  3. GET will be much more common than PUT, so make it really easy for a dev to get public data off the network. It should be as easy as reading a file or performing an HTTP request. Ideally the launcher should not be required for GETs on the safe network; it should just be built in to the library the dev uses.

  4. Code is a simpler interface than network. I’m glad to see SWIG mentioned.
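
To make the “as easy as reading a file” ideal in point 3 concrete, here is a small sketch. Everything in it is an assumption for illustration — the `safe://` URL layout, the `parse_safe_url`/`get` helpers, and the in-memory stand-in for the network — not an existing SAFE library:

```python
# Hypothetical sketch of "GET should be as easy as a file read or HTTP
# request". A real client library would resolve names against the SAFE
# network; the dict below is only an in-memory stand-in for that lookup.
from urllib.parse import urlparse

FAKE_NETWORK = {("blog.mav", "posts/1.md"): b"# hello safe"}

def parse_safe_url(url):
    """Split e.g. safe://blog.mav/posts/1.md into (public_name, path)."""
    parts = urlparse(url)
    if parts.scheme != "safe":
        raise ValueError("not a safe:// URL: " + url)
    return parts.netloc, parts.path.lstrip("/")

def get(url):
    """Fetch public data with no launcher and no token, like open().read()."""
    return FAKE_NETWORK[parse_safe_url(url)]
```

The point is the shape of the call: `get("safe://blog.mav/posts/1.md")` reads like `open(path).read()` or a one-line HTTP GET, with the auth machinery only appearing once the dev needs to PUT.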


  1. Don’t assume they have a gui. Headless can easily be extended with a gui wrapper but not vice-versa. Headless should be the primary interface for the launcher.

  2. It’s interesting to consider the launcher as part of a unix pipeline. To me this represents the perfect combination of simplicity and power, which can be extended or wrapped with gui or other interfaces as needed. Can the safe network have a cURL-like interface?

  3. The purpose of apps is to be a ‘wall’ between data. Auth Tokens issued by the launcher are the raw material these walls are built from. There must be doors between the walls (ie the user must be able to share / shield data between different apps and users). How these walls and doors are managed is a difficult problem, especially considering ux differences across platforms, eg desktop, mobile, server environments. Management of auth tokens and apps should be as transparent as possible to both users and devs, which segues to:

  4. Permissions ux should be designed as a spectrum. Let everything through at the most permissive; manually approve every request at the most restrictive. In the middle, approve an app at first use and then automatically approve it every time after that, etc (lots of possible variations). Making this experience smooth across the various platforms whilst remaining fully-featured is hard. I think it must start as a config file and then possibly extend with gui wrappers around that file.
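
A minimal sketch of that permissions spectrum as a config-driven policy — the mode names, fields, and structure are all invented here for illustration, not any actual launcher format:

```python
# Toy model of the permissions "spectrum": allow everything, ask once
# then remember, or ask on every request. All names are hypothetical.
ALLOW_ALL, APPROVE_ONCE, ASK_EVERY_TIME = "allow_all", "approve_once", "ask_every_time"

class PermissionPolicy:
    def __init__(self, mode, prompt):
        self.mode = mode        # one of the three modes above
        self.prompt = prompt    # callback app_name -> bool (the user's answer)
        self.approved = set()   # apps the user has already approved

    def allow(self, app_name):
        if self.mode == ALLOW_ALL:
            return True                      # most permissive end
        if self.mode == APPROVE_ONCE and app_name in self.approved:
            return True                      # remembered first-use approval
        if self.prompt(app_name):            # ask the user (gui or terminal)
            if self.mode == APPROVE_ONCE:
                self.approved.add(app_name)
            return True
        return False                         # most restrictive end
```

Since the whole policy is driven by two plain values (a mode string and a prompt), it maps naturally onto a config file first, with gui wrappers layered on later.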

Very glad to see this discussion, and all suggestions so far sound like positive steps. Hopefully these points help simplify the ecosystem :slight_smile:


Hey @mav,

I agree with many of the things you’ve posted here, and disagree with others. But I won’t respond to them here now, as many of them are not specific to the Launcher concept we are focusing on here and will be addressed in one way or another through the data-handling RFC/Proposal I mentioned earlier. I’d love to continue the discussion about those then, if you don’t mind :slight_smile: .


For us, mobile phones are not an edge case: they will be a primary means of internet access in the future – heck, for a big part of the planet they are already the only access to the internet at all. Being able to fully support apps, like messengers, from smartphones is a version 1 blocker. Not having all the same features of the network available on mobile, too, is a deal-breaker: if you can’t spend safecoin from your app, or while you are browsing, all ideas of “pay-the-producer” models and the like will be impossible.

I personally believe that mobile is an especially important, if not the most important, case: desktop is easy – everyone can build a peer-to-peer network for desktop. But enabling “the next billion”, who are and will be primarily on mobile, to use your network is a game-changer.

Well, that must change for mobile anyway – there won’t be any intermediaries on mobile. So, if we don’t have intermediaries, what happens to the “last line of defense” you are talking about? We can’t let it disappear: all apps must still be authenticated and controlled. So we must move that “firewall” out of the launcher and into the network itself.

That’s a big part of the architectural change we are planning: many things that so far happened in the launcher (so we could develop faster) but should be done by the network (like stronger enforcement of permissions) will finally move into vaults. So even if the authenticator still provides the same REST API, it won’t do that enforcement by itself anymore; that will be done by the network behind it. That change is necessary and overdue.

I’ll not go into too much depth here, as this must be part of the other documents we are still preparing for release later this week, but the key idea is that everything currently enforced per user (number of PUTs, safecoin spent, etc.) would then be enforced by the vaults per app. Of course we can’t allow any app to “go rogue” and spend all your safecoin – and it won’t: apps will still not have your actual authentication credentials but their own, and an app will only be able to spend the allowance the user gave it explicitly. In many aspects, this will allow even finer control than we had previously.
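
As a rough illustration of that per-app enforcement, here is a toy model. The structure and every name below are my own assumptions for the sketch, not the actual vault design:

```python
# Toy model: the vault holds the user's balance, and each app can only
# spend the allowance the user explicitly delegated to it.
class Vault:
    def __init__(self, user_balance):
        self.user_balance = user_balance
        self.allowances = {}                 # app_id -> remaining allowance

    def grant(self, app_id, amount):
        """The user explicitly delegates spending power to one app."""
        self.allowances[app_id] = min(amount, self.user_balance)

    def spend(self, app_id, amount):
        """An app's request is checked against its own allowance, so a
        rogue app is capped at whatever the user granted it."""
        allowance = self.allowances.get(app_id, 0)
        if amount > allowance or amount > self.user_balance:
            return False                     # refused by the network
        self.allowances[app_id] = allowance - amount
        self.user_balance -= amount
        return True
```

An app that was never granted anything (or has exhausted its allowance) simply gets refused, without the vault ever exposing the user’s full balance to it.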

No one ever said anything about just relying on a third party for anything. Quite the opposite: we want to provide the same amount of security and privacy on all platforms. Some constraints, like the spending of safecoin and PUTs, will be hard-enforced by the vaults in the network, while others – like logging facilities – will be ingrained in safe_core to provide dashboards and introspection for the user. Deeper than it is now, as a matter of fact – as the current one only works per session per instance, among other problems.

But that has nothing to do with whether a REST-API is provided or not. Those can be done independently from one another.

I think we should keep this part of the discussion for when we have finished drafting and published the other docs around this topic. Because that draws a more complete picture. If you then still think that running an app is a minefield (and can show evidence about that :stuck_out_tongue_winking_eye: ), we must look into what can be done about this.


I’m very in favor of this one. Especially for desktop. It’s a bit like TOR where you start the browser and you’re online. It might be a bit different for mobile. This is a simple mock-up I posted on that other forum:

Maybe some apps could be stored locally? As long as it’s encrypted we should be good. That way you make sure that apps haven’t become evil when you open them again. And I would use the same approach as mobile apps from Google and others: just ask people for permission once. Maidsafe could deliver a bunch of screened apps with the download bundle. So it could look something like this:

  • Someone downloads the SAFE Beaker Browser from the Maidsafe website.
  • After installation the browser is started from the desktop.
  • The browser opens up with around 6 or 7 approved apps which are stored locally. Just like the mock-up.
  • A program called “account manager” shows up as the first app-icon so people understand that this is where they log in.
  • The other apps will ask for permission once.

There could be an option to install extra apps locally which show up as icons as well. After SAFE 1.0 an online appstore could be built. But we shouldn’t focus on that one for beta.


I expect the OP is better considered than anything I can suggest in reply, but I’ll put my 2 cents here:

So long as access doesn’t become political – Apple-style lists of allowed apps etc – then it looks like a good idea to reduce the workload on any app developer.

I’ve been playing with APIs as an excuse to learn Rust, but I wonder whether jumping through the API gets in the way of an app dev just putting time into whatever bright idea they might have. Also, I wonder whether removing the API might reduce bundle size. Coupled with ABC examples of how to use the interface to get something useful done, that would be a good step forward.

That said, looking at SWIG Compatibility, I wonder whether the list of supported languages is comprehensive… REST just works, and ideally so would any SAFE interface – a SAFE interface for everyone. Would a noob dev with a bright idea be able to see past something like SWIG that they are not familiar with?..

As a dull reaction I’d say I like what I understand – and even I understand REST – but looking at SWIG I suspect I’d need better examples than they put in their tutorial… at the very least for javascript, where there is none atm.

From a quick reading, I suppose the big positive is a cross-platform answer that REST cannot provide on mobiles; otherwise I’d just encourage that API libraries and executables are crafted that are useful to all languages and interests. A counterargument perhaps resolves to whether it’s worth the extra effort to maintain a REST API just for accessibility to devs’ expertise… if it’s the first thing they look for and they see what they understand, is that almost at the level of marketing, and worth the grind to maintain? From what I’ve seen of the API design it’s relatively simple, so perhaps there’s a balance to be considered?.. Also, once it’s there, does it need much work – could it be an option that is not the default??

Surprised at that… I would have thought some Linux repository would be a good route to visibility and adoption. The Software Sources tool in Ubuntu and similarly simple, usable Linux systems exists precisely to be a step beyond “Users could install … via the command line”.

I was probably being too polite/subtle there. I know that native binding is going to be faster, which, put bluntly, means that people who have written to the REST API will almost certainly feel compelled to rewrite.

I am actually pleased with the suggested approaches, but just a little sad we had to go round the REST houses first to get there.

FWIW, the overhead of a localhost REST server is unlikely to be a major latency contributor though - it is when the requests leave localhost that the delays start to occur. I would wager that unencrypted REST calls to a well optimised web service would introduce orders of magnitude less latency than the first hop onto the network.

This is academic though. We can still have a REST server on top of the new architecture, while facilitating access from additional key platforms. This is all good stuff.


Let’s stay on topic, folks! I’ve split these 3 posts into a new topic: Why is MaidSafe even working so hard on these types of problems?

Well that’s new. Since I started following this project mobile has never been a priority. It’s always been “let’s make it work on desktop first and adapt for mobile after”.

I’m not saying there aren’t a lot of people using mobile who could benefit from this, just that until now it’s never been the focus. I’m also not saying that we shouldn’t build things to make it easier to migrate to mobile, just that supporting mobile shouldn’t overshadow getting it done on desktop.

The idea of unloading all permissions onto the network does sound interesting but could also have quite an impact on performance. Every single call travelling around the world for every user, instead of being handled locally, will increase the CPU load of every node and the latency of each call. Even more so if you add the billions of mobile phones that don’t do actual work for the network. Sounds like quite a big can of worms to open at this stage of the project.

Of course you guys are allowed to change your priorities, that’s not my call to make and I guess we’ll see how it turns out in the end.

Let’s not trivialize the work MaidSafe has done until now, because a lot of effort has been put into making it work on desktop and we are still not there yet, so I don’t think it’s fair to call this easy…

I’m confused, what is the plan exactly for the REST-API?

Of course, but the conversation until now suggested that only app permissions would be running on the network – not all the theoretical features people have been brainstorming about when talking about the launcher. You should probably paste that in the OP to avoid confusion.


So to understand a bit better:

  1. Does the Authenticator interact with the network through safe_core as well? …I guess so
  2. When you say multiple keys, app/user keys, have to be managed by the network, what are they exactly? Is the app’s key a record that the app has been authorized by the user?
  3. The Authenticator stores these keys in the network in order to prevent auth fatigue, so authorizing/revoking an app means registering its key or removing it from the network?
  4. Apps’ keys are kept in the network and auth tokens are kept in safe_core lib memory?

At a higher level, and related to the discussion about the disappearance of the Launcher’s API, it’s known that it was never thought to be useful for mobile. I guess the Launcher UI can still be maintained, but no longer used as the proxy for app authorization – perhaps just as the dashboard to manage the user’s authorizations (i.e. listing and revoking them, not for user login or authorizing apps), PUT statistics, etc. Somebody mentioned vpn – how about something like the OpenVPN app for mobile?

Now for desktops, my understanding (and personal preference when thinking about developing a desktop app) is that native apps have been disappearing and most apps are web based, so as long as something like the SAFE-js library is still provided for webapps, there isn’t really an issue there.

I personally don’t like to install native apps on my laptop. I prefer webapps since I don’t have to worry about updates and I don’t have to reserve some of my personal storage (especially now that I can earn some safecoins with it :slight_smile: ), so I presume most people will think the same way, obviously with some exceptions, as they exist today.

Thinking about exceptions leads me to think that if you are building a native desktop app, it is because you are concerned about performance, or because you need to access some resources not available from within the browser; in such a case I’m sure you will consider accessing the network directly through the safe_core lib rather than through a REST API, and will probably be willing to do so.

Lastly, it’s certain that developers need to find it easy to create and/or integrate apps with the safe network, but it’s imperative that the end user finds it even easier to use the safe net and safe apps. Very humbly: if more effort is needed on the app-development side in order to help/assure mass adoption, we need to invest in it.

I’m very glad you open this type of discussion to the community; many of us are here to really be part of this, and this is one of the best ways to make us feel we indeed are.


Yes, you are right. The APIs used by the authenticator can be very minimal, and we are thinking of having a separate FFI layer for the authenticator and the SDK, but both will of course use safe_core.

The vault validates the user based on the signature of the request. At present the requests are signed by the user’s MAID keys. The user account is validated for available PUTs (later safecoin) etc. based on the user’s MAID keys. The MAID keys cannot be shared with applications, thus we must create a keypair specific to the application when the user authorises it. The vaults must now be able to verify a request from an app and associate it with the user’s account. This feature is not available at present, and a mechanism like this might be needed to manage/map the app and user keys.
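
A simplified model of that app-key scheme, with HMAC standing in for the real asymmetric signatures purely to keep the sketch self-contained (the actual network would use proper keypairs, and every name here is an assumption):

```python
# Toy model: an app gets its own key on authorisation; the vault
# verifies requests with that key and maps them back to the user's
# account, without the app ever holding the MAID keys.
import hashlib
import hmac
import os

def sign(key, message):
    """HMAC stand-in for a real signature scheme."""
    return hmac.new(key, message, hashlib.sha256).digest()

class Vault:
    def __init__(self):
        self.app_to_user = {}                  # app key -> user account

    def authorise_app(self, user_id, app_key):
        """Registered when the user authorises the app."""
        self.app_to_user[app_key] = user_id

    def handle_request(self, app_key, message, signature):
        """Verify with the app's own key, then return the account to
        validate/charge (PUT balance, safecoin, ...)."""
        if app_key not in self.app_to_user:
            return None                        # unknown or revoked app
        if not hmac.compare_digest(sign(app_key, message), signature):
            return None                        # bad signature
        return self.app_to_user[app_key]

vault = Vault()
app_key = os.urandom(32)                       # app-specific key, not MAID
vault.authorise_app("user-maid-account", app_key)
```

Revoking an app would then just mean removing its key from the mapping, after which its requests are refused by the vaults.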

Yes, you are correct again.

Nothing is stored in memory. Everything should be in the network. The app authentication proposal will be detailed soon, and that should elaborate on this part further.

It should remain the same.


@DavidMtl Mobile has always been an important part of the plan. It hasn’t been as visible as other aspects in general discussions, but it was always an important aim so it isn’t new. For example, this is why support for ARM has remained such a priority all along.

Going into why would be off topic, so take that up separately if you want; I just want to correct this point.


Nah, we could be arguing about it forever, everyone’s time is better invested elsewhere. If that’s where they want to put their priority there isn’t much to be said, I’ll get over it. Let’s see where it leads us.


Sure, we should definitely consider doing this. What I was trying to say is that the installation methods for macOS and Windows would probably be different than the ones we recommend on Linux. I just updated that paragraph to make this clearer :slight_smile:


Just had a thought, would updating permission cost Safecoins?

When investigating possible solutions for mobile, of course VPN came to mind, too, and I spent the better part of a day digging through the corresponding Android and iOS documentation to understand how they make that case possible. Here is my resulting report from that investigation:

In conclusion, there is a VPN-specific interface available for mobile, but it is a) very, very low level (IP low… two stacks lower than the HTTP we are currently offering) and b) highly targeted at the VPN use case. Which we aren’t, really – thus we’d clearly be bending that system, if we are capable of doing what we want to do at all, and most of the time the provided facilities (like closing the connection and re-establishing on app switches, or only allowing one connection at a time) clearly stand in the way. Lastly, if we were providing the VPN connection (without actually VPNing) we’d prevent users from using SAFE through an actual system-wide VPN, thus exposing them more than necessary.

While I won’t rule out that this could work, considering that just the cost to provide a PoC would be a lot (as it takes potentially weeks to build something comprehensive enough to assess the facility – per platform), that it would be a vastly different approach than on desktop (which means increased costs of development and maintenance), and that it still doesn’t solve the headless/embedded use cases (which we’d still need to solve in yet another way), I advised against continuing the pursuit of this approach.


When visiting a website which was trying to learn my location, Chrome showed me a dialog box which I found very easy to see (and deny). Is something like this what you have in mind in relation to the browser-embedded authenticator?


Late in reading this thread, but I definitely love the move from Launcher --> Authenticator. Allowing apps to talk to the network directly is such a natural move, and I thought it was the plan all along. The whole Launcher approach always felt cumbersome and clunky, and hearing things like “bundled into the Browser, so users only have to download one app” gets me very excited and helps me picture a very successful launch :slight_smile:

That being said, one important point was brought up and I feel like it received no attention whatsoever so far:


Just another thought, which probably is what you have in mind already, since it sounds very similar to what you described the SAFE authenticator would be like.

Would it make sense to have the SAFE authenticator implement the OAuth 2.0 protocol?

This can help in technology adoption since I assume there are plenty of clients for it out there that can be used out of the box.


Quoting from the RFC 46:

OAuth itself doesn’t really work for us, however, as we don’t have a single server that an app can just do its auth requests against, nor do we require an app key and secret (at this moment) or provide proper callbacks from the authenticator. We have a simpler but very similar model.

And don’t get me started on the OAuth 2.0 protocol itself… Let’s just say that neither Twitter (version 1.1), Google (partly 2.0), nor Facebook (version 0.9) has fully implemented it, nor does any of them intend to.