The future of SAFE Launcher

Let’s use this topic to discuss the future of SAFE Launcher :smiley:

We need to consider how SAFE Launcher can be scaled to various platforms (e.g. mobile, embedded, etc.). For example, SAFE Launcher currently needs to run in the background and continuously proxy HTTP requests to the SAFE Network, but on some mobile platforms that wouldn’t necessarily be possible. It would be preferable if SAFE apps could communicate directly with the SAFE Network.

As for app permissions and app management, we want to choose a solution that provides a good user experience and won’t annoy users with lots of authorization prompts (also known as “auth fatigue”). We are currently comparing two approaches and we haven’t agreed on one of them yet. In the meantime, we can start by discussing app authentication and support for mobile apps :slight_smile:

SAFE Authenticator

Our idea is to change the SAFE Launcher to a SAFE Authenticator, which would only be used for the authentication/authorization process: once an app has a token, it will be able to interact directly with the SAFE Network (using a language binding we provide).

We found an approach that could help us speed up the deployment to mobile. @Ben is testing a project called CrossBuilder. It could allow us to target Electron (for desktop) and Cordova (for Android and iOS). Using CrossBuilder, we could provide a single code base for the authenticator that is the same on all platforms: mobile, desktop and within the SAFE Browser directly.

App authentication

SAFE Authenticator registers a custom URL scheme safe-auth://. Similarly, every app (that uses one of our language bindings) registers its own safe-prefixed URL scheme (probably based on the app ID). When an app wants to access SAFE on behalf of a user, it invokes that URL with an authentication payload, as follows:

  • An app invokes a URL which has some authorization information in its payload.
  • SAFE Authenticator is invoked as a result of this URL invocation.
  • The authenticator asks the user to confirm the requested permissions (or grants them directly if configured to do so).
  • The authenticator invokes the app’s custom URL, which contains an access token.
  • The app is invoked with this URL, reads the access token and can access the network directly with it.

Persistence of this token could be configured in the settings of SAFE Authenticator.
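As a sketch, the round trip might look like this; the payload encoding, parameter names, and token format are all assumptions for illustration, not the actual protocol:

```python
# Illustrative sketch of the URL-scheme handshake. An app encodes its auth
# request into a safe-auth:// URL; the authenticator answers by invoking the
# app's own safe-<app-id>:// scheme with a token. All formats are made up.
import base64
import json
from urllib.parse import parse_qs, quote, unquote, urlparse

def build_auth_request(app_id, permissions):
    """App side: encode the request payload into a safe-auth:// URL."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"app_id": app_id, "permissions": permissions}).encode()
    ).decode()
    return f"safe-auth://authorise?payload={payload}"

def handle_auth_request(url, grant=True):
    """Authenticator side: decode the payload and, if the user approves,
    invoke the app's own scheme with an access token."""
    query = parse_qs(urlparse(url).query)
    payload = json.loads(base64.urlsafe_b64decode(query["payload"][0]))
    if not grant:
        return f"safe-{payload['app_id']}://auth-response?status=denied"
    token = "token-for-" + payload["app_id"]  # stand-in for a real signed token
    return f"safe-{payload['app_id']}://auth-response?token={quote(token)}"

def read_token(response_url):
    """App side: extract the token from the callback URL."""
    return unquote(parse_qs(urlparse(response_url).query)["token"][0])
```

With persistence enabled in the authenticator's settings, the app would skip this round trip on subsequent launches and reuse the stored token.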

Language bindings

Instead of making requests to a local REST server, apps would talk directly to the SAFE Network via safe_core. Our idea is to carve everything that is authentication-specific out of safe_core and move it into the authenticator, leaving a purer safe_core that apps would bind to directly. While we want to provide bindings for as many languages and frameworks as possible (iOS, Android, Node, Python, etc.), those would be slim and focus only on the binding itself, while all the actual logic – including the URL-registration handling mentioned before – happens safely within our Rust code base.

Of course, we intend to automate the creation of these bindings as much as possible (e.g. by using SWIG).
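To illustrate the intended split, here is a toy sketch of such a "slim" binding; every name in it is hypothetical and the core library is stubbed out, since the real safe_core FFI surface isn't defined in this post:

```python
# Hypothetical shape of a slim language binding: the wrapper only marshals
# arguments and results, while all real logic lives in the (here stubbed)
# core library. None of these names are real safe_core APIs.
class FakeCoreLib:
    """Stand-in for the FFI surface a SWIG-generated module would expose."""
    def __init__(self):
        self._store = {}

    def core_put(self, token, key, value):
        self._store[(token, key)] = value
        return 0  # C-style status code

    def core_get(self, token, key):
        return self._store.get((token, key))

class SafeClient:
    """The binding layer: thin and stateless, with no policy of its own."""
    def __init__(self, core, token):
        self._core = core
        self._token = token

    def put(self, key, value):
        status = self._core.core_put(self._token, key, value.encode())
        if status != 0:
            raise IOError(f"core_put failed with status {status}")

    def get(self, key):
        raw = self._core.core_get(self._token, key)
        return None if raw is None else raw.decode()
```

The point of keeping the wrapper this thin is that a generated binding (e.g. via SWIG) can stay in sync with the core automatically, with no per-language logic to maintain.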


Pros

  • Easily scalable on mobile
  • Apps have the flexibility to update to new APIs at their own pace
  • Improved performance
    • Removes indirection (APP -> REST -> FFI). Apps can use FFI directly.
    • Dedicated connections for apps instead of shared common connection


Cons

  • The Network has to manage multiple keys (app/user)
    • Vaults have to manage the user’s MAID keys and also map subkeys corresponding to the apps. Vaults must be aware of the user’s account even if the request is made using a subkey (app’s key)
  • FFI bindings must be provided for many platforms

Integration with SAFE Browser

We could bundle an authenticator directly within SAFE Browser. That way users wouldn’t have to download two separate apps. They could simply download SAFE Browser and use it to create an account or log in to an existing account. The browser would also allow for authentication of local and remote apps.


Even if we bundle the authenticator directly within SAFE Browser, it could make sense to have an installer (for Windows and macOS) that makes it easy to install SAFE Browser. It could also give the option (e.g. the user could choose via a checkbox) to install additional example apps and demos (e.g. SAFE Demo App).

For Linux, we would probably recommend different installation methods. For example, users could install SAFE Browser via their package manager (e.g. apt-get), a zip file that contains a Linux binary, or an AppImage file.

SAFE Authenticator on mobile

We could offer a dedicated authenticator for mobile. We also need to consider how to support SAFE websites on mobile. We could potentially develop a SAFE Browser for mobile.

All the ideas presented are still open to debate and we’d love to hear your thoughts and feedback :slight_smile: If some of them progress, they might turn into RFCs.


Good Timing :slight_smile:

For a while it has looked to me like the shift from “launcher” to “authenticator” was de facto, at least for now, and last week I was thinking that unless this was going to change we should stop calling it “launcher”. I’m pleased we’re getting into both the what and the how of this because I’ve been wondering about it for ages.

I held back my renaming suggestion, because I wanted a better name than “authenticator” and have not yet come up with one. So, while we ponder the what and how, I suggest we also think about that, although, if it turns out that “authenticator” becomes a largely hidden component it may not matter - so I won’t lose any sleep over it until this becomes clearer.

Don’t Be Evil

One immediate question that may seem superficial (sorry :slight_smile:) is that using Electron, which I acknowledge comes with lots of pros, means using Chrome, I think?

Maybe not a big issue, I’m not sure, especially as Electron is also used to build Signal desktop. Which is where I heard the reservations: some people simply won’t trust Google’s codebase (at least I think that’s the reason for objections).

So I’d like someone to confirm that Electron implies Chrome, which implies Google code, and then for us to consider what potential impact this has on:

a) security risks and ways to ameliorate them, and
b) perception of security, considering that I know of one high profile privacy advocate who won’t use Signal desktop for this reason, and who obviously would have problems with SAFEnetwork if Chrome was a key component. (I know, I know, same issue with SAFE Beaker, so really I should have raised this earlier! :blush: )

I don’t know how serious the security implications are, or how widespread any negative perception might be, but it does seem to me a big decision to lock ourselves into before considering this angle.

I think the above is a discussion in itself so I’ve started a topic for replies related to: Security of Electron Based Apps: Launcher, SAFE Beaker & third party

The Pros Have It

The pros seem weighty, and of the two cons only the first seems significant, but I don’t have the understanding to judge that one - I look forward to following the RFC discussion on this! :slight_smile:

Elaboration Please

I’d like further explanation of the scheme, perhaps from different perspectives because I don’t understand this in enough detail to think properly about it yet.

So more detail on the granularity of permissions being considered here (because my understanding is we’d like a lot more than we have so far), and how these would work from (particularly) a user experience perspective, an application process perspective, and a developer implementation perspective would be helpful.

I could try and make a guess, and then ask questions to fill in blanks, but if someone has a vision of these perspectives and is willing to try writing any of them out, it might speed things up.

It does sound like a good way to go and I’m excited we’re getting to this, and taking the time to work out a good solution from various perspectives. Thank you MaidSafe and Francis.

I’m also wondering if I missed something:

We are currently comparing two approaches and we haven’t agreed on one of them yet.

What are the two approaches? I’m reading the OP as one thing rather than two, but perhaps I understand even less than I thought :slight_smile:

Whoops, Licensing

EDIT: I almost forgot, does this mean apps would be linking to the core libraries, and if so how do we preserve the non-GPL option for developers who wish it?


Chromium – the open source version of it. And for mobile we wouldn’t use Electron. However, Beaker is also on Electron, so…

I don’t think the problems are as concerning as you said. While it is a lot of Google code, it is ultimately – unlike Chrome – publicly available and a lot of eyes are on it.

We are hoping to present at least one later this week and can discuss the authorized direct access then. The main aspect about this topic was to start the discussion about this shift in the model in the first place: that apps and devices would directly interact with SAFE rather than through an intermediate.


I suspect we may end up creating modules for libraries via SWIG or similar. In that case I feel these should perhaps be LGPL, so they can be used as-is in any project, but any changes must go back to the codebase. Any direct changes inside the codebase will still be GPL and so protected in that way.

This may give us the protection that the code stays free and always free, but developers have an opportunity to join us or use another license when using the library files we distribute.

It’s an internal wrangle (time and resources), but I think if we can automate a SWIG-type process then we should issue libs for Node/Python/Java etc. with each deployment (release).

I also feel that will be a much simpler experience for devs – forget all the wrangling with REST and the hassles of securing it, and of course running it on mobile (a no-no, or very difficult, and especially hard to predict the future of what’s allowed there).

Anyway, glad you like it initially. I feel we will also work with existing projects to help them cut over to this mechanism. It should also make examples look much more native to programmers familiar with language X. As we progress, adding language bindings should be automated, which will help. Rust’s C ABI compatibility really helps us out here.


I wonder why such a drastic change in direction when it seemed like we were so close to having a full-featured network ready for use. Is the REST indirection so bad that it overshadows everything else? There were quite a few benefits to having one gateway that controls all traffic.

Which mobile platforms are we trying to support exactly? Surely Android was fine, since it’s possible to run a proxy VPN client on it. And a headless server could just use a command-line interface instead of the whole UI. Is this just for iOS?

Btw, for a dev, using the REST API is a charm. Much simpler than figuring out which libraries to link to and making sure they are always up to date.



I’m probably missing the obvious, but what about authenticating on a headless (Linux) machine with only a command line?
Will there be (eventually) a command-line SAFE Authenticator?

And on a desktop: if you only want to start the SAFE Authenticator (in the background), without the browser, will that be possible?


First impressions - I like it.

If I understand correctly, apps will authenticate against the network as well as the user, which I like. This was discussed in a recent thread and it will help cross device app permissions.

I assume migrating permissions to the network level will also allow full partitioning between app data, which will greatly improve app security.

It also sounds like these tokens can persist, which will allow a generator app to create them and allocate them to apps. This will be a boon for embedded/service apps, etc.

This should also mean that a client with a REST interface can still be added as a layer above. This would retain backwards compatibility. Alternatively, binding directly with safe_core should give more integration options.


I believe a REST layer could still be added on top, but it would be simpler. It would just help to manage permissions at the network level and provide a REST wrapper against safe_core (which I assume would more directly interface).

I do worry about the timing of this too though. I mentioned my concerns over mobile integration on a thread a month or so ago, as I was concerned the approach may shape cross-platform functionality.

I know development can often be evolutionary and exploratory, but I hope that the strategic design work is being done. It is reassuring to see this thread being posted and discussed though.


It is a common misunderstanding that this was a limitation purely imposed by iOS. While all major mobile platforms (including Windows Phone) offer background process features in their latest iterations, all of them also make it explicit that you can’t rely on them to be running: on both iOS and Android (which I have looked at most closely and will therefore focus on), you can provide background services but they might be frozen or stopped by the system or the user at any given point in time.

A very common case is to give resources to foreground tasks: just open a bunch of tabs in your browser and even on a modern mobile phone you’ll find the CPU ticks scheduled for apps running in the background drop dramatically – to the point that those apps are frozen or killed. (While stock Google Android offers some facilities to kind of restart the service when this happens, most custom builds (like Samsung’s) don’t really.) This is all intentional and good, as it improves performance on the tasks the user cares about more: the things they are looking at.

Secondly, even if we were able to have a process running constantly in the background, resources on mobile devices are very limited and keeping a socket open in the background – even if barely in use – is bound to drain your battery: your app will constantly wake up the phone just to read nothing from said socket and put it back to sleep. An app that drains your battery – easily exposed with all those measuring tools built into the platforms nowadays – is bound to be deleted by the user quickly. It’s one of the big mistakes every app developer makes once and learns from quickly: you simply can’t keep a socket open.

But for the sake of argument, let’s assume we could: that we could provide a background service that is somewhat reliably available to other apps and could somewhat reliably keep the connection open (for the record, we can do neither): how would we even provide the REST API? The launcher is currently built using Electron and the server is provided through Node. While we could conceivably ship Node for Android (yikes…), we won’t be able to do the Electron part on either platform – and they are closely intertwined. There is no way we can port the current launcher, as it works today, onto mobile. We would have to restructure bigger parts of the system architecture and build things differently even if those things were possible.

This whole idea started by exploring the “headless launcher” discussion, which made clear that if an API was provided we should/must move it into the Rust part of the app, or we won’t be able to provide it on mobile at all. But considering that we can’t reliably provide it (and even if we could, we’d drain the battery super quickly), other concepts had to be explored. Add to that the use case of stand-alone embedded devices – like a weather station that wants to push the current temperature into your SAFE account without requiring you to have your laptop running just to proxy requests through – and it becomes evident that apps must have a more direct way of connecting to SAFE.

And while I agree that this means on mobile you are actually increasing the number of connections to n per app (rather than just n per device), the resources will be handled better, as the connections the user cares about most – the ones in the foreground – will be the most responsive. By moving more logic into safe_core we can provide a lot of helpers for optimising these connections and ensuring they are dealt with as well as possible.

(Btw, if you look at it from a high-level point of view, this is also how Facebook and Google handle authentication on mobile – and to some degree on desktop/server/the browser: you authorise against their app once, they hand the app a special auth token and the app then manages connections directly, authenticating itself with said token.)

That is very much correct. There is nothing stopping anyone from still providing a REST-style API on top of safe_core. As mentioned previously, this also opens the door to providing it in Rust directly – which with the current launcher isn’t possible – and thus being more performant and safe than the current Node implementation.

What’s your timing worry about exactly? Just trying to understand…

The current approach was taken in full knowledge that it might work on desktop only, not on mobile. The priority was to get an ALPHA network up and running, perform the tests that happened over the last few months, and use it as a starting point for exploring what APIs, apps and use cases come up at the higher level. We are a lot further down this road and also have more people on the team with deeper mobile experience now – like myself.

We are doing our best to architect a pattern that will take us a long way. As mentioned I personally always have the embedded and headless case – which I consider the hardest at the moment – in the back of my head as well as the browser and mobile apps, when talking about any of those things. We are trying to propose and implement something this time that we believe will last way beyond these use cases and allow us to implement things for the next year(s) to come.

And while we might not implement all features just yet, as we are trying to design things in an evolutionary pattern too, we already anticipate how we believe they will work – and make minor PoCs (like the CrossBuilder project mentioned above) to showcase their feasibility. Lastly, we are bringing these up as RFCs to gather more feedback and a wider viewing angle on them: to make sure we don’t forget any important use case. And if we have, we must address it!


My worry is that the REST API wasn’t pitched as a temporary solution, while research was being done. It was presented as a standard way to access the Safe Network, especially if you didn’t want to comply with GPL.

Timing wise, it is more that it feels like time was wasted. People are writing browsers and various other apps, thinking that this REST strategy was going to stick. While I can see that the REST API can still be supported, this proposal seems to suggest it won’t be as ideal as the alternatives.

I understand experimentation must be done and solutions must evolve, but how to interface with the Safe Network is fairly fundamental. We knew that Electron would not work on Android or iOS. We also knew that limiting permissions via the launcher (when non-launcher apps can bypass it) would not be adequate. It was also identified that linking via FFI was not recommended and would not be suitable for non-GPL apps.

Maybe it was obvious to the team that even the architecture of these interfaces was experimental, but I don’t think this was obvious outside of the team. If Safe Net is going to entice developers in the wider community, I think a clear message is important. If developers are writing code based on these published interfaces, the team has to be respectful of that.

It isn’t my intention to sound over critical, but this stuff is the relatively ‘easy’ part (compared to the innovative networking bits!). Clients have been interfacing with libraries and services for decades. As we have always known that multiple platforms must be supported, including embedded and mobile, I don’t think there were too many unknowns here.

I hope this is taken as positive feedback, as I really want to see the community developer pool grow to meet the ambition of the project.


Believe me, when I say it is! And it is highly appreciated!

I agree that in the past, the communication around this has been less than ideal. The team was under a lot of pressure to get an Alpha out of the door and that left some other aspects – like proper communication or the CEP – lacking … to say the least. @frabrunelle and I joined the team especially to fix these aspects and do things better going forward. This discussion is part of that. But I do hear and understand your frustration about these things and can assure you we take them seriously.

I don’t think time was wasted. Quite the opposite, we made a lot of headway. I spent the better part of the last month writing examples using the APIs and we’ve learnt a lot. Among other things, we’ve realised that the way the LOW_LEVEL_API works right now (with the handlers and all) is quite fragile to expose over REST (not to say ‘insecure’), for example. So by all means, I would like to phase this version of the API out and replace it with something better and simpler. Wherever possible, we are trying to keep the friction of adoption low, of course. In this case, for example, I believe the old API calls could still easily be mocked on top of the new, safer API.

Similarly, for all apps that run in the new browser and use the provided DOM API (which has been the only way web apps can work for a couple of weeks now), we have already spoken with @joshuef and are pretty certain we can provide a compatibility layer, if not implement them the exact same way as they are right now – that’s the plan so far. For web apps, a replacement of the REST API is planned to be completely transparent.

Which leaves only external apps exposed to the update problem. And, unless I am mistaken, almost all of them (Demo App, Email Example) are currently published by us. Of course, when updating our apps, we will also publish guides on how to do it for your own app. If there were more apps and there was higher interest in a compatibility layer, we could for sure reprioritise the efforts regarding providing a REST server – maybe even within the authenticator. That is not out of the question at all!

Similarly, we will also find and recommend a solution that allows you to use/connect to SAFE without having to publish your App under the GPL of course. We are not changing course on that aspect either.

Well, from a technical standpoint, it simply isn’t:

When you use the REST API you – right now – copy all your data in an HTTP request to Node, which then parses and deserialises it and copies it via RPC over to a subprocess, which then parses it and eventually puts it on the wire to SAFE. The same applies for everything that comes back, but the other way round. And although NodeJS and our Rust code both work concurrently, those memcopies do block the process. So if you are writing or reading big data, just the transfer between processes will “block” and cost a lot of time.

We haven’t done much performance analysis of this (and there are for sure things that can be optimised here), but it will never be as optimal as just making a call into Rust with the local memory of your own process and putting that directly onto the wire. Fewer memcopies, no parsing, no serialising and deserialising, and much more fine-grained control directly in the API rather than through a pretend-stateless REST API.
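A toy sketch (not a benchmark, and not real launcher code) of the two call paths described above; the point is just that the REST hop adds encode/decode passes and buffer copies at each process boundary:

```python
# Illustrative only: contrast the copies involved in a REST-style hop with a
# direct in-process call. The encodings stand in for the real serialisation.
import json

def put_via_rest(payload: bytes):
    """REST path: app -> HTTP body (encode) -> launcher (decode) -> core."""
    body = json.dumps({"data": payload.hex()})          # copy 1: serialise into the HTTP body
    received = bytes.fromhex(json.loads(body)["data"])  # copy 2: launcher deserialises it
    return received, 2  # the core finally sees `received`, after 2 extra copies

def put_direct(payload: bytes):
    """Direct path: the app's own buffer is handed straight to the core."""
    return payload, 0
```

Both paths deliver the same bytes; the REST path just does strictly more work per request, which is what the paragraph above means by memcopies blocking the process.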

But it is also really hard to find a good security model that can be successfully enforced when each app connects on its own. And at the time – also because there was a lack of time to research and investigate this further – it was unclear how that could be done, if at all. Thus the main principle, that we don’t want to give apps direct access to the user’s credentials, was more important to us than the performance gained from another model.

Today, we think we might be able to do it all: get good performance and stay secure – have the cake and eat it, too.

Again, I’d like to emphasise, that this is very valuable feedback and we appreciate it a lot. And if there are many Apps that require a REST-API to function properly I will take that information to the team and we will discuss what we can do about that.


Thanks for the great answer @ben. So maybe mobile is a special case (I still wonder how a VPN client can work without missing a beat though…). Anyway, assuming you are correct, I think it would be a mistake to drop the REST API on desktop because of that.

You may make a strong case for the changes for mobile and embedded devices, but I don’t see any reason why the REST API can’t be the official way desktop apps interact with the network. Embedded devices and mobile phones are just edge cases that need special consideration.

See, the REST API is not just a handy tool for making apps. Much more importantly, it is the only way we can monitor how apps are using the network. It’s like a firewall, the last line of defense against apps going rogue and emptying our wallet.

With the REST API we know how apps behave. We know how many PUTs they make, we know who they send money to and how much. We can blacklist wallet IDs and stop any app from even sending coins to them. We can monitor app activity to detect behavior that could be harmful (an infinite loop with a PUT request…). We can have stats to see how efficient different apps are. We can allocate a fixed amount of Safecoin for consumption and allow the app to use just that. We can monitor where our Safecoins are going and build monthly spending reports. It’s our dashboard.
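The kind of per-app accounting described here could be sketched like this (purely illustrative; this is not an existing launcher feature, and all names are made up):

```python
# Hypothetical gateway-side monitor: count PUTs per app, track spending, and
# refuse transfers to blacklisted wallets. Not real launcher code.
class AppMonitor:
    def __init__(self, blacklist=None):
        self.puts = {}                      # app_id -> number of PUTs seen
        self.spent = {}                     # app_id -> total Safecoin spent
        self.blacklist = set(blacklist or [])

    def record_put(self, app_id):
        self.puts[app_id] = self.puts.get(app_id, 0) + 1

    def send_coins(self, app_id, wallet_id, amount):
        if wallet_id in self.blacklist:
            raise PermissionError(f"wallet {wallet_id} is blacklisted")
        self.spent[app_id] = self.spent.get(app_id, 0) + amount
```

A single gateway can only do this for traffic that actually flows through it, which is the crux of the disagreement in this thread.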

It sucks for mobile not to have that, but it’s kind of a big deal not to have it on desktop. It’s not something you can toss aside and just rely on a third party to provide. This is the most important piece of software made by MaidSafe at the user level. Without it we are blind. Running an app will be like walking in a minefield. So yeah, it’s kind of a big deal eheh.


And yet the performance is still pretty decent. So I’d argue that for everyday normal use, sub-optimal performance is not that big of a deal. I would gladly trade 5-10% of my loading speed for the advantages of funneling all my traffic through the launcher.

Power users who want to download gigabytes of data can use an app that bypasses the launcher and get the best possible speed. But for normal use, it is pretty good and, more importantly, so much more secure.

Alright, I’m a bit feisty over this, but I really thought the launcher and the REST API were there for good. To me it’s such a no-brainer that I didn’t see this coming. You guys are doing a terrific job and including us in the conversation is awesome, so I hope I’m not too pushy. Keep up the good job.


These are difficult problems and the proposal is a great step in the right direction. It’s stimulated a lot of purposeful discussion already.

Here are my very-high-level (possibly impractical) thoughts:


  1. The safe network is a data store. Look at existing libraries for storing data: file system, HTTP, database etc. Also look at existing libraries for communicating data: HTTP, websocket, IRC etc. Dev libraries / bindings to interact with the safe network should look similar to these for ease of use. Data flow on the safe network is permissioned and bidirectional so has inherent complexity, but ultimately the dev just wants to dump some data somewhere (or fetch it). Let’s keep that use-case in mind when designing the launcher interfaces. Screwing around with namespaces and token expiry etc. is a sure way to complicate the life of devs (but is probably a necessary evil).

  2. Having to ‘create an app’ for the safe network just to save some data sounds terrible to me as a dev (high cognitive overhead). I feel the launcher needs to operate at a much simpler abstraction than that. At least, the interfaces and documentation should be at a simpler abstraction. Might not be possible…?

  3. GET will be much more common than PUT, so make it really easy for a dev to get public data off the network. It should be as easy as reading a file or performing a http request. Ideally the launcher should not be required for GETs on the safe network, it should just be built-in to the library the dev uses.

  4. Code is a simpler interface than network. I’m glad to see SWIG mentioned.


  1. Don’t assume they have a GUI. Headless can easily be extended with a GUI wrapper but not vice versa. Headless should be the primary interface for the launcher.

  2. It’s interesting to consider the launcher as part of a unix pipeline. To me this represents the perfect combination of simplicity and power, which can be extended or wrapped with gui or other interfaces as needed. Can the safe network have a cURL-like interface?

  3. The purpose of apps is to be a ‘wall’ between data. Auth Tokens issued by the launcher are the raw material these walls are built from. There must be doors between the walls (ie the user must be able to share / shield data between different apps and users). How these walls and doors are managed is a difficult problem, especially considering ux differences across platforms, eg desktop, mobile, server environments. Management of auth tokens and apps should be as transparent as possible to both users and devs, which segues to:

  4. Permissions UX should be designed as a spectrum. Let everything through at the most permissive. Manually approve every request at the most restrictive. In the middle, approve an app at first use then approve automatically after that, etc. (lots of possible variations). Making this experience smooth across the various platforms whilst remaining fully featured is hard. I think it must start as a config file and then possibly extend with GUI wrappers around that file.
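The spectrum could be sketched as a small config-driven policy function; the mode names and fields below are invented for illustration:

```python
# Hypothetical permission-spectrum policy: from allow-everything down to
# prompt-on-every-request, with first-use approval in the middle.
def decide(policy, app_id, approved_apps, ask_user):
    """Return (allow, approved_apps) for one incoming request.

    policy        -- dict loaded from a config file, e.g. {"mode": "allow_all"}
    approved_apps -- set of app IDs the user has already approved
    ask_user      -- callback that prompts the user and returns True/False
    """
    mode = policy.get("mode", "prompt_each")
    if mode == "allow_all":                    # most permissive end
        return True, approved_apps
    if mode == "approve_first_use":            # middle of the spectrum
        if app_id in approved_apps:
            return True, approved_apps
        if ask_user(app_id):
            return True, approved_apps | {app_id}
        return False, approved_apps
    return ask_user(app_id), approved_apps     # most restrictive: prompt each time
```

Because the decision is pure data in and data out, a GUI wrapper and a headless config file can share the exact same logic.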

Very glad to see this discussion, and all suggestions so far sound like positive steps. Hopefully these points help simplify the ecosystem :slight_smile:


Hey @mav,

I agree with many things, and disagree with others, you’ve posted here. But I won’t respond here now, as many of them are not specific to the Launcher concept we are focussing on and will be addressed in one way or another through the data-handling RFC/proposal I mentioned earlier. I’d love to continue the discussion about those then, if you don’t mind :slight_smile:.


For us, mobile phones are not an edge case; they will be a primary means of internet access in the future – heck, for a big part of the planet they are already the only access to the internet at all. Being able to fully support apps, like messengers, from smartphones is a version 1 blocker. Not having all the same features of the network available on mobile, too, is a deal-breaker: if you can’t spend Safecoin from your app, or while you are browsing, all ideas of “pay-the-producer” models and the like will be impossible.

I personally believe that mobile is an especially important, if not the most important, case: desktop is easy; everyone can build a peer-to-peer network for desktop. But enabling “the next billion”, who are and will be primarily on mobile, to use your network is a game-changer.

Well, that must change for mobile anyway – there won’t be any intermediaries on mobile. So, if we don’t have intermediaries, what happens to the “last line of defense” you are talking about? We can’t let it go: all apps must still be authenticated and controlled. So, we must move that “firewall” out of the launcher and into the network itself.

That’s a big part of the architectural change we are planning: many things that so far happened in the launcher (so we could develop faster) but should be done by the network (like stronger enforcement of permissions) will finally move into vaults. So, even if the authenticator still provides the same REST API, it won’t do that enforcement by itself anymore; that will be done by the network behind it. That change is necessary and overdue.

I won’t go into too much depth here, as this must be part of the other documents we are still preparing for release later this week, but the key idea is that everything currently enforced per user (number of PUTs, safecoin spent, etc.) would then be enforced by the vaults per app. Of course we can’t allow any app to “go rogue” and spend all your safecoin – and it won’t: apps still won’t have your actual authentication credentials, only their own, and they will only be able to “spend” the allowance the user gave them explicitly. In many respects, this will allow even finer control over these aspects than we had previously.
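To make the per-app allowance idea concrete, here is a minimal Rust sketch of the kind of bookkeeping a vault might do. All the names (`AppAllowance`, `try_put`, `try_spend`) are illustrative assumptions of mine, not the actual SAFE API – the point is simply that an app can never exceed what the user explicitly granted:

```rust
// Hypothetical sketch: per-app allowance a vault might enforce.
// These names are illustrative only, not the actual SAFE API.

#[derive(Debug)]
struct AppAllowance {
    puts_remaining: u64,
    safecoin_remaining: u64,
}

impl AppAllowance {
    /// Deduct one PUT from the app's allowance; refuse once it is exhausted.
    fn try_put(&mut self) -> Result<(), &'static str> {
        if self.puts_remaining == 0 {
            return Err("PUT allowance exhausted");
        }
        self.puts_remaining -= 1;
        Ok(())
    }

    /// Deduct safecoin; a rogue app can never exceed the user's grant.
    fn try_spend(&mut self, coins: u64) -> Result<(), &'static str> {
        if coins > self.safecoin_remaining {
            return Err("safecoin allowance exhausted");
        }
        self.safecoin_remaining -= coins;
        Ok(())
    }
}

fn main() {
    // The user explicitly granted this app 100 PUTs and 10 safecoin.
    let mut allowance = AppAllowance { puts_remaining: 100, safecoin_remaining: 10 };
    assert!(allowance.try_put().is_ok());
    assert!(allowance.try_spend(4).is_ok());
    assert!(allowance.try_spend(7).is_err()); // would exceed the user's grant
    println!("{:?}", allowance);
}
```

Because the check lives in the vault rather than in a local launcher process, it holds even when the app talks to the network directly from a mobile device.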

No one ever said anything about just relying on a third party for anything. Quite the opposite: we want to provide the same amount of security and privacy on all platforms. Some constraints, like spending of safecoin and PUTs, will be hard-enforced by the vaults in the network, while others – like logging facilities – will be ingrained in safe_core to provide dashboards and introspection for the user. Deeper than it is now, as a matter of fact – the current one only works per session per instance, among other problems.

But that has nothing to do with whether a REST API is provided or not. Those can be done independently of one another.

I think we should save this part of the discussion for when we have finished drafting and published the other docs around this topic, because they draw a more complete picture. If you then still think that running an app is a minefield (and can show evidence of that :stuck_out_tongue_winking_eye: ), we must look into what can be done about it.


I’m very much in favor of this one, especially for desktop. It’s a bit like Tor, where you start the browser and you’re online. It might be a bit different for mobile. This is a simple mock-up I posted on that other forum:

Maybe some apps could be stored locally? As long as they’re encrypted we should be good. That way you make sure apps didn’t turn evil between the times you open them. And I would use the same approach as mobile apps from Google and others: just ask people for permission once. Maidsafe could deliver a bunch of screened apps with the download bundle. So it could look something like this:

  • Someone downloads the SAFE Beaker Browser from the Maidsafe website.
  • After installation the browser is started from the desktop.
  • The browser opens up with around 6 or 7 approved apps which are stored locally. Just like the mock-up.
  • A program called “account manager” shows up as the first app icon, so people understand that this is where they log in.
  • The other apps will ask for permission once.

There could be an option to install extra apps locally, which would show up as icons as well. After SAFE 1.0 an online app store could be built. But that shouldn’t be a focus for beta.


I expect the OP is better considered than any reply I can suggest, but I’ll put my 2 cents here:

So long as access doesn’t become political – Apple-like lists of allowed apps, etc. – it looks like a good idea to reduce the workload on any app developer.

I’ve been playing with the APIs as an excuse to learn Rust, but I wonder whether the hurdle of the API gets in the way of app devs just putting time into whatever bright idea they might have. I also wonder whether removing the REST API might reduce bundle size. Coupled with ABC examples of how to use the interface to get something useful done, that would be a good step forward.

That said, looking at SWIG compatibility, I wonder whether the list of supported languages is comprehensive… REST just works, and ideally so would any SAFE interface – a SAFE interface for everyone. Would a noob dev with a bright idea be able to see past something like SWIG that they’re not familiar with?

As a dull reaction, I’d say I like what I understand – and even I understand REST – but looking at SWIG I suspect I’d need better examples than they put in their tutorial… at the very least for JavaScript, where there is none at the moment.

From a quick reading, I suppose the big positive is a cross-platform answer that REST cannot provide on mobile; otherwise I’d just encourage crafting API libraries and executables that are useful to all languages and interests. A counterargument perhaps resolves to whether it’s worth the extra effort to maintain a REST API just for accessibility to devs’ existing expertise. If it’s the first thing they look for and they see something they understand, that’s almost at the level of marketing – is that worth the grind of maintaining it? From what I’ve seen, the API design is relatively simple, so perhaps there’s a balance to be considered? Also, once it’s there, does it need much ongoing work – could it be an option that is not the default?

Surprised at that… I would have thought some Linux repository would be a good route to visibility and adoption. Software Sources in Ubuntu, and similarly simple, usable Linux systems, exist precisely to be a step beyond “users could install … via the command line”.

I was probably being too polite/subtle there. I know that a native binding is going to be faster, which, put bluntly, means that people who have written to the REST API will almost certainly feel compelled to rewrite.

I am actually pleased with the suggested approaches, but just a little sad we had to go round the REST houses first to get there.

FWIW, the overhead of a localhost REST server is unlikely to be a major latency contributor though – it is when requests leave localhost that the delays start. I would wager that unencrypted REST calls to a well-optimised local web service introduce orders of magnitude less latency than the first hop onto the network.
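To put that claim in rough perspective, here is a minimal std-only Rust sketch that measures the round-trip time of a TCP echo over 127.0.0.1, standing in for a local REST endpoint (no actual SAFE or HTTP code involved). On typical hardware each loopback round trip is on the order of tens of microseconds, versus milliseconds for a real network hop:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Instant;

fn main() {
    // Tiny echo server on localhost, standing in for a local REST service.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || {
        for stream in listener.incoming() {
            let mut s = stream.unwrap();
            let mut buf = [0u8; 64];
            loop {
                let n = s.read(&mut buf).unwrap();
                if n == 0 { break; }           // client closed the connection
                s.write_all(&buf[..n]).unwrap(); // echo everything back
            }
        }
    });

    let mut client = TcpStream::connect(addr).unwrap();
    client.set_nodelay(true).unwrap(); // don't batch the tiny writes

    let iterations: u32 = 1000;
    let mut buf = [0u8; 4];
    let start = Instant::now();
    for _ in 0..iterations {
        client.write_all(b"ping").unwrap();
        client.read_exact(&mut buf).unwrap();
    }
    let avg = start.elapsed() / iterations;
    println!("avg localhost round trip: {:?}", avg);
}
```

This only measures raw loopback cost, not HTTP parsing or serialization, so treat it as a lower bound – but it illustrates why the first hop onto the network, not the local server, dominates latency.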

This is somewhat academic though. We can still have a REST server on top of the new architecture, while facilitating access from additional key platforms. This is all good stuff.


Let’s stay on topic, folks! I’ve split these 3 posts into a new topic: Why is MaidSafe even working so hard on these types of problems?