Future of a SAFE Browser and node/web APIs


Why are we looking at changing the browser?

In the first instance, we decided to look at the various possibilities going forwards with the browser because:

  • It would be nice not to have to maintain browser code or fight against upstream changes (Beaker has been reorganised a lot since we forked it, and has become much more DAT-focussed).
  • It would be nice not to have to reinvent the wheel, browser-wise.
  • It would be nice to offer a more fully featured / better-performing browser.
  • It would be nice to have a codebase that offered mobile alternatives in a similar fashion.

Given those ideas we started looking at what possibilities exist:

  • Chrome
  • Firefox
  • Brave
  • A ground-up rewrite of an Electron browser.

Traditional browsers could be used. But there are many caveats:

  • We’d need to use extensions (to avoid trying to hack core code).
  • We’d need to lock down extensions for release / when in use (to avoid security issues).
  • They are large foreign codebases.
  • Is the code reliable / consistent, or are the APIs subject to frequent change? (Might the way they implement changes lock us out at some future date? [changing licensing, implementation details etc.])

The problem with extensions:

Looking further at the possibility of Chrome extensions (which all three mainstream browsers’ implementations are based on), there are a few things of note:

  • Security (or the lack of it).
  • Protocol: the Chrome extension setup, which all three browsers use, does not allow creation of a safe: protocol.
  • API injection: the Chrome extension setup does not allow code injection / IPC to non-standard protocols.

Investigating further, we found we could easily / cleanly get safe: working in Brave, without too many core tweaks (though some are still needed).

We could easily knee-cap Brave to prevent HTTP and we can lock down extensions there.

BUT. Brave, as a fork of electron, would require us to implement a lot of changes to achieve a SAFE-API as we have it now.

Otherwise, any implementation for Chrome/Firefox would likely be purely via extensions, which would limit the protocol to the web+safe:// variety and also require some whitelisted HTTP workarounds for recreating the APIs. Both of these are less than ideal.

Thus, looking at the three olde browsers, Brave emerges as a frontrunner, especially considering its anti-ad/tracking, pro-privacy standpoints. (Although they do want to serve targeted ads. Could we remove this easily? Does that become a maintenance nightmare?)

A question:

All of which leads us, in roundabout fashion, to a question: what do we want from a SAFE Browser? This has been a subject of some debate in the community.

Right now, though, I’d wager no-one is using SAFE Browser as their main browser to access clearnet.

Even given a more polished browser, would they want to?

Would that be secure?

It’s been noted that even Brave, with many devs and an anti-fingerprinting, privacy-centred goal, can still be ID’d as a browser (although I’m not sure to what extent). Is that a security risk?

Technical issues.

If we look at Brave in the least hacky fashion, providing a fork with protocol and code injection for the APIs, there are some technical things of note:

  • We need to rework the API wrapper (safe-node-js). Our current node implementation is based upon node-ffi, which is incompatible with Brave’s Muon Electron fork (it can’t build against the unknown version of ‘electron’, it seems).
  • We’d have to port to Neon, or perhaps WebAssembly.
  • This requires creation of a Rust API (which we don’t currently have).

Neon seems like a reasonably straightforward replacement for FFI, though it’s not been tested or benchmarked with SAFE code as yet. POCs were positive.


WASM could be great: using built Rust code directly in the browser would remove a layer of abstraction and unify the API interface (instead of having the safe-node-app and window.safeXXX interfaces, which are inconsistent and more things to maintain).

It would also mean no cross-compiling for different systems, simplifying the build process and potentially removing some layers of code.

It could also mean clearnet sites could implement some SAFE functionality (by uploading WebAssembly files to the clearnet site, which could potentially run in any browser). (Whether or not you think that should happen, with an OS project and the possibility of easily compiling to WebAssembly… it likely will happen at some juncture.)

Web Assembly Questions

WebAssembly is relatively new. Our brief testing was positive, although we still need to test further / more conclusively.

Given the potential for unifying APIs (WASM is in development for node too, under experimental flags), and the potential to compile directly from Rust, it’s worth looking at more.

Back to a browser

After this wee technical sojourn, we come back to the question: ‘how should we continue with a browser?’.

Grow Your Own

We’ve learnt a lot from Beaker, and it has some great patterns, but it’s also been under very active development (which is cool!), and has become much more DAT focussed (also cool!), but that limits its usefulness for our purposes.

We’re fighting to maintain an upstream branch that is no longer super relevant, all the while complicating feature development. We should not be doing this.

So the final idea is setting up a custom SAFE Browser using Electron. The benefit of doing this ourselves is a simpler, cleaner, more modular UI, which should let us move faster.

There are aspects of reinventing the wheel (well… that’s pretty much the whole aspect), but if we consider what a basic browser is, it’s also not that complicated an endeavor. A lot of code could be lifted from the current SAFE implementations of the store etc. in the browser.

to HTTP, or not to HTTP

In the end, the way forward is grounded in this question.

If we want it to be a main clearnet browser… there are obstacles, but it’s probably doable with a fork of some kind. There are many workarounds needed for Brave due to their custom (undocumented) Electron fork. But it should work.

But would a clearnet browser actually be used this way? And should we be doing this workaround/hack development at all? (And how much time does that take vs starting a fresh electron build?)

If we did decide to go down the Brave route, WebAssembly would be required for the DOM APIs (which requires more Rust work). And if we have that, well, a clearnet extension could be enabled much more easily. Then users would have the choice… and we would offer only a basic, but secure, option.

And if we don’t HTTP… well, then we take our learnings from forking Beaker and rebuild a simpler, cleaner codebase, removing features we assume are unused (like HTTP) and closing up the security surface… There are some aspects of wheel reinvention, yes. But probably only as much work as updating to the latest Beaker would be, without the loss of control… And once we’re level… we’re free to iterate much more easily.

We could then investigate WebAssembly on the side. Taking it up when we have the capacity for implementing the required rust layer changes and incorporating it then…

Wait… what about mobile?

Ah yes. Well, while there are Brave/Firefox versions for mobile, it seems they are, in the end, separate codebases, so there’s no benefit in choosing a desktop browser to match.

Safe_client_libs and use in Rust (native)

Forking Firefox (or Chromium) and making minimal changes (kept as much as possible in distinct files/directories) is probably not an option, because it’s too complicated / too much work?
Also, if the code of the next Firefox/Chromium version doesn’t change drastically, then updating to the next version should not be that difficult, or is this assumption completely wrong?
Also, Firefox already has some Rust in its code base, if I’m not mistaken…


Thank you very much for this very detailed review of the current situation and of what the goals, expectations and projections are.
This relieves many of the concerns that I may have expressed a bit quickly or harshly last week; I’m sorry about that…
You give much insight into the complexity of the problem.

I still have the impression that one important aspect would be to make sure the whole code of a base platform is known and understood before choosing one way or another. That seems (at least to me?) to be more the case for Brave or an Electron-from-scratch version than for Firefox / Chrome.

if we don’t HTTP…

that would be a way to take things seriously. This part of the internet is screwed; let’s open fresh roads in clean lands. :slight_smile:


The Brave feature for serving ads is supposed to be anonymized. I haven’t really looked into it very much, but perhaps it could actually be a useful feature to have on SAFE. Brave also has a payment feature which I suppose could be made to support SafeCoins. Perhaps the Brave guys would even include changes to add SAFE support to Brave at some point in the future, if they see SAFE taking off.

If HTTP is going to be supported in the browser, it would be very important that SAFE tabs cannot contain clickable HTTP links (or would at least show a big warning when clicking one) and cannot make any XMLHTTP requests.
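The rule proposed above can be expressed as a small predicate. A minimal sketch, assuming a hypothetical link-click hook in the browser (the function name and shape are illustrative, not an existing SAFE Browser API):

```javascript
// Hypothetical guard: from a SAFE tab, only safe: targets may be followed
// silently; anything else is blocked (or would trigger a big warning).
function shouldBlockLink(currentTabProtocol, targetHref) {
  const target = new URL(targetHref).protocol;
  return currentTabProtocol === 'safe:' && target !== 'safe:';
}

console.log(shouldBlockLink('safe:', 'http://tracker.example/')); // true
console.log(shouldBlockLink('safe:', 'safe://somesite/'));        // false
```

The same check, applied to XMLHTTP/fetch targets rather than clicks, would cover the second requirement.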


I don’t fully understand this part. Does it mean you could achieve safe:// protocol with chrome or not?

My overall feeling about this topic:
You are right with the assumption that I currently don’t use the SAFE Browser as my main one. It just lacks important features and has performance issues. My main browser has to be fast and should be bleeding edge from a technology perspective (These attributes are the reasons why I use Chrome).

Yes! Actually I would love a version of Chrome (which I use on desktop and mobile) which would allow me to also access safe sites. I think this is an important point from a strategic perspective: as SAFE gains more and more relevance due to apps and safe sites, there will be more and more references to such safe services from the clearnet. Take the safe forum as an example. There you already have many links to safe sites. Every time I click on a safe link I wish it would just open a new “SAFE tab” in my browser. Instead it launches another browser, and if I’m not already logged in there (which is the case most of the time) I have to log in before being able to use the service. From an end-user perspective this is very bad UX, in my opinion.

Don’t get me wrong, I would LOVE a future where SAFENet is more relevant than clearnet, but let’s be honest, that’s a long way down the road! I think as long as our main browsing activity is on the clearnet we should be able to continue using our favourite browser (e.g. Chrome) and try to extend the functionality with SAFE features.

In an ideal world I would therefore like to continue using Chrome browser where I can download an extension which gives me the possibility to browse both safe and clearnet and have the option to disable clearnet. I think this would minimize adoption hurdles.

I don’t get why extensions are such an issue, because I always have the choice of which extensions I want to use and which not. I’m not a fan of dictating / restricting user behavior.


This is quite the conundrum. I think there is going to be a very distinct divide between the purists and the “pragmatists”. Not to say that in a negative way. I think that some have a view that SAFE will be better/more quickly adopted if it can be accessed via the clear web, and I won’t deny thinking the same, BUT I simply can’t say that it’s the right thing for SAFE or its users. I think SAFE needs to stay just that, secure access for everyone, and let SAFE’s features draw people in. This is just my opinion of course, but I keep it because it’s what drew me in, and everyone else involved.

If I could suggest an option with no regard to cost or work I’d say custom browser babay! Reading through the pros and cons it honestly sounds mostly like cons trying to adapt existing browsers. Of course I hold no experience here so can’t judge properly but that is what I get from it.

So if I can ask, why can’t the current beaker browser fork just go its own way as though it was a fresh start? Make necessary feature fixes etc. and call it good? Is this infeasible or just out of the question and why?

For existing browsers if that was the only option, +1 for Brave


Just replying quickly this morning. I’m actually out until Tuesday, but I’ll get right back onto any more points/Qs asap then. Thanks for the replies so far.

@draw Firefox is about to have some Rust code, but it’s for the layout engine, not the network side of it. If things stay ‘in line’, updates wouldn’t be too painful. But you never know. And that’s assuming we managed to do the required changes (and cleanly / securely), which, given the size of the codebases, might not be that small a task.

It means no safe: protocol in these browsers, nope. We cannot have what we know as safe: links, they would have to appear as web+safe: links in the address bar.

Largely the issue is that we cannot offer an extension as the main ‘SAFE’ entry point due to security concerns. If a large benefit of the SAFE network is anonymity, offering only an extension would probably undermine this.

  • We cannot control the main features of the browser. Chrome’s nav bar, for example, sends info to Google by default.
  • Other extensions can see/manipulate the browser/dom/tabs/extensions.

So while an extension is a feasible way to access the safenet, it’s unlikely to be as secure as we’d want.

Yeh, us neither. The main point is that it’s another thing we’d need to check, and a potential vector for problems. But if everything works as advertised, Brave is certainly interesting for HTTP access (and as you note, SAFE is not contrary to their aims, so there’s some potential for synergy down the line).

Currently, any time you’re in ‘SAFE’ mode, no HTTP requests are possible. When you turn this off, you could make HTTP requests from a ‘safe’ site.

I think what you suggest is clearer/more consistent. And with a clear indication of what is ‘safe’ could work well. Although it does limit anyone wanting to link/show things from the clearnet on safe. (Which is not to presume that it should be allowed… just to note it’s a limitation on what’s possible there as a consequence).

It could. But to make Beaker work as we’d want is actually quite a large refactor. Probably (almost definitely, IMO) a larger one than starting from scratch / taking what we’ve learned from Beaker and adapting it to a fresh codebase more in line with how we work internally (single store, React-y, top-down data flows).


Fair point. What about Chromium? I don’t know how close the codebase is to Chrome or if it has the same tracking behavior.

However, I definitely want to warn about underestimating the effort of building (and maintaining) a new browser from scratch (or a fork without upstream support): features, security bugs etc… I really don’t want to undermine MaidSafe’s capabilities, but I think there is a reason why there are dedicated companies/teams that have been doing nothing but building a browser for years now. Although I have to admit I can only judge from an outside perspective, as I have never taken a deeper look into browser development.


Definitely, AND no external, non-clickable requests either: webfonts, jQuery, CSS, images, anything really. One such external request goes out, and we get an “AFE” browser: Access For Everyone, with the “Secure” part stripped out.
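A sketch of the subresource filter this implies: every request a SAFE page makes for a non-safe: resource is dropped before it leaves the browser. The function and request shape here are hypothetical, just to show the idea:

```javascript
// Illustrative only: drop any subresource request (webfont, script, image,
// stylesheet...) from a SAFE page that targets a non-safe: URL, so nothing
// ever leaks out to the clearnet.
function filterSubresourceRequests(requests) {
  return requests.filter((req) => {
    const scheme = new URL(req.url).protocol;
    if (scheme !== 'safe:') {
      console.warn(`blocked external request: ${req.url}`);
      return false;
    }
    return true;
  });
}

const allowed = filterSubresourceRequests([
  { url: 'safe://mysite/style.css' },
  { url: 'https://cdn.example/jquery.js' }, // would leak to the clearnet
]);
console.log(allowed.map((r) => r.url)); // ['safe://mysite/style.css']
```

In a real browser this would hang off the network layer (e.g. a request interception hook), not a list filter, but the allow/deny decision is the same.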

I suppose down the road there will exist two kinds of SAFE browsers: one family that is HTTP/clearnet disabled (i.e. there is no code inside to process HTTP clearnet links), sealed off from the clearnet, with hardened and audited code. This family would passively handle security for users, and would be used in higher-risk situations where security / privacy is the main goal.
A second family would allow both clearnet and SAFE access, and would leave the responsibility of security choices to its users: you are provided with a tool that can give you a secure environment, but you can open the fences if you want to, at your own risk. By mitigating security tolerance, this family would certainly help grow awareness and adoption of the SAFE network, with emphasis on the serverless aspect.

It should always be kept in mind that convenience is the no. 1 enemy when it comes to security and privacy.


What about a Rust browser (https://servo.org/) for a Rust network? :wink:


Sounds like, with safe mode on/off, a safe site could sit in the background of a tab, wait for the user to turn off safe mode, and then start connecting to HTTP servers to sniff out the user’s IP address.

I think it shouldn’t be a problem to support both safe and HTTP in a single browser, but if it’s going to be secure then the tabs would have to be completely isolated, so that it wouldn’t really be any different from opening sites in different browsers. To make it convenient, the tab could switch mode based on the URL: if a user typed safe://something the tab could switch to safe mode, and then if the user typed http:// it could switch to HTTP mode. It shouldn’t be possible to switch mode by clicking links, though, as sites could attach identifiers to the URL to link the HTTP and safe identities.

edit: perhaps a reasonably secure way to make links clickable when visiting a SAFE site could be to open them in a Tor tab?
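The typed-vs-clicked distinction above can be sketched as a small state function. Everything here (names, the 'typed'/'click' sources, returning null to refuse navigation) is hypothetical, just to make the rule concrete:

```javascript
// Illustrative tab-mode rule: a tab's mode follows the scheme the user
// *types*, but link clicks may never switch mode (a site could otherwise
// attach identifiers to the URL and link the http and safe identities).
function nextTabMode(currentMode, url, source) {
  const scheme = new URL(url).protocol;
  const target = scheme === 'safe:' ? 'safe' : 'http';
  if (source === 'typed') {
    return target; // explicit user navigation: switching is allowed
  }
  // Clicks may only navigate within the current mode; null = refuse.
  return currentMode === target ? currentMode : null;
}

console.log(nextTabMode('http', 'safe://site/', 'typed'));        // 'safe'
console.log(nextTabMode('safe', 'http://site/?id=123', 'click')); // null
```

Refusing (rather than warning on) cross-mode clicks is the stricter reading of the post; a warning dialog would slot in where null is returned.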


It would be great if the default API in the browser and what users would be encouraged to use would be Rust WebAssembly API. Of course many people want to use JavaScript, that’s fine for many use cases and there should be nothing stopping them from doing so.

The problem with JavaScript is when you make an app that deals with money, especially irreversible transactions like you have with cryptocurrencies.

On the web today it is rarely a complete disaster if your online bank has a couple of JavaScript bugs; they’re extremely easy to make and can be hard to find unless you have very thorough test coverage of everything, but most of the important stuff happens on the server, and transactions can be reversed after all.

With SAFE, the important stuff will happen on the client and transactions can’t be reversed. JavaScript is a recipe for disaster in this case. It’s not easy to write secure code in Rust, but it’s much easier than doing so in JavaScript. With Rust at least you have a compiler that enforces some things; with JavaScript nothing is enforced. You can have extensive test coverage, use a JavaScript linter etc., but nothing forces you to do so, so many won’t. Even with all this, sneaky bugs that the Rust compiler would detect can go undetected.
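A concrete example of the kind of silent bug meant here (an invented toy function, not real SAFE code): an amount arrives as a string, say from a form field, and + concatenates instead of adding. A typed language like Rust rejects this at compile time; JavaScript runs it happily.

```javascript
// Toy example of a money bug JavaScript won't catch: '+' on a string
// concatenates rather than adds, with no warning and no crash.
function addFee(balance, fee) {
  return balance + fee; // no type enforcement here
}

console.log(addFee(100, 5));   // 105
console.log(addFee('100', 5)); // '1005' - a 10x error, silently
```

With irreversible transactions, that "1005" could be a sent payment, not a display glitch, which is the crux of the argument above.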

People will make all kinds of apps and sites on SAFE and for many it won’t matter if they have even tons of bugs, but there will also be many where it is a big deal and for those Rust would be better.

Already Rust is kinda the default for native apps. Sure, you can use Node.js and that’s great, but having Rust be the default language, what people are encouraged to use throughout the whole stack, would have lots of advantages, not only in terms of security but also code reusability etc.


Did you check out Vivaldi? It also has a strong privacy focus.


A dedicated browser would be good, and could probably use WebKit or Servo, but what about all the functionality around just displaying sites? I don’t know which of them are included in rendering engines and which are not, but implementing things like developer tools, IndexedDB, a calendar for date input, video playback, WebRTC… isn’t that too much?

Perhaps the only way is to drop browser support and just limit to native apps? That’s one API less to maintain :slight_smile: When somebody wants to use HTML/CSS/JS, they always have Electron. What’s the advantage of web apps that we so desperately need? One post on the topic from a month ago:


That post suggests improving the web rather than reverting back to old-fashioned native apps. That is an interesting option, although if it’s too different from what people are used to, it might be difficult to get a lot of people to develop for it, at least in the beginning.

There are several interesting points in that post which could be made true for SAFE

A clear notion of app identity

SAFE apps have ids

A unified data representation from backend to frontend

A dynamic site on SAFE is always rendered in the browser by JavaScript or WebAssembly, rather than on the server, and the site connects directly to the database, so this would be a natural way of doing things on SAFE. Perhaps a binary version of JSON or something like that would work well.
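To make "a binary version of JSON" concrete, here is a toy length-prefixed encoding. Purely illustrative; a real choice would be an existing format (CBOR, MessagePack, protocol buffers):

```javascript
// Toy binary framing for JSON payloads: a 4-byte big-endian length prefix
// followed by the UTF-8 JSON body. Not a proposal, just an illustration.
function encode(obj) {
  const body = Buffer.from(JSON.stringify(obj), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}

function decode(buf) {
  const len = buf.readUInt32BE(0);
  return JSON.parse(buf.subarray(4, 4 + len).toString('utf8'));
}

const msg = { id: 'abc123', amount: 42 };
console.log(decode(encode(msg))); // { id: 'abc123', amount: 42 }
```

A self-describing binary format (rather than this JSON-in-a-wrapper toy) would be what actually carries the "unified data representation" benefit end to end.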

Binary protocols and APIs

Perhaps protocol buffers could work for this.

Platform level user authentication

SAFE has this

IDE oriented development

This is a question of the tools being developed. JavaScript isn’t well suited for advanced IDEs, but Rust could improve the process.

Components, modules and a great UI layout system — just like regular desktop or mobile development.

This touches on one of the core problems of the web. HTML and CSS are a big hack that’s not all that well suited for making apps, but it is improving all the time, and improving this shouldn’t be the focus of MaidSafe, I think.


Not read the post yet, but just came across Phuzzy Browser related to the SOLID / Linked Data project and reading this page had me wondering if there are some relevant ideas here. I’ll read more about it once I’ve read the OP but too busy atm:




@joshuef, did you try to contact the developer teams of these browsers? They might be the best people to judge whether their product suits SAFE’s needs…


A fully featured browser is an inevitability for SAFE, without the need to piggyback off existing projects. Let’s not attempt to endlessly tame large, complicated beasts like Firefox. That is effort better directed towards building a custom and efficient browser whose inner workings are of our own creation, meaning fewer unknowns.

For a successful launch browser, we require a few things done well for rapid adoption. It should be able to stream music and video, follow links, and post and display static/dynamic data at the very least. The control/options and display framework can be lifted from other open source projects. At launch, our commitment to security, privacy, and freedom should be clear to users, most of whom will be purists.

As time advances, so too will the browser’s features, likely keeping pace with the influx of novice or pampered users. HTTP access should be secondary, in accordance with our desired paradigm change.

We’re pursuing change. This is not an endeavor to conform to current standards, but to break through while strategically balancing SAFE’s presence by adapting to those who need hand-holding. If we do as most projects do, we miss an opportunity to create something easily sustainable as an entry point to the network. Being open source in the first place means that these features (HTTP, advanced rendering, etc.) will be built, or their creation assisted, by the greater collaborative community. No doubt behemoths like Firefox will become entry points into SAFE at some point, but those that create the extensions will have spared @maidsafe frustrating and wasteful maintenance, in addition to embarrassing security breaches.

Going with the current mammoth browsers is analogous to driving a tricked-out RV when all you need to do is get to work daily. Too many inefficiencies, complexities, and considerations just to accomplish simple tasks. With a small modular vehicle we can elegantly add advanced systems, with the advantage of having it designed with high, uncompromising security in mind. Let the ideals of SAFE run through its veins! Plant our flag! Let’s not embed ourselves in a reckless host and risk being eaten along with it. Instead plant the seed in a fertile womb and watch SAFE browser grow :wink:


Based on observed ‘majority usage patterns’ most people most of the time want content delivery. Simple to install, simple to use, performant, no bullshit. Just attach the firehose to my face and let me drink up all the content. See 1% rule, but beware there are some real philosophical conundrums to be faced when exploring this idea.

Everything else is secondary to content consumption. Not saying I agree with this usage, but it’s what has been observed.

Rephrased as ‘would that be secure enough’, I would say very much yes, for most people most of the time. I’m going to combine security and privacy together here, but without a ‘threat model’ the idea of ‘secure enough’ is pretty intangible. Who is this for, and what risks are they trying to protect against? This is not actually defined, but needs to be.

I think a safe browser should not have http. Most consumption happens on mobile / tablet, and that already means users are going into different apps for facebook / instagram / web browsing etc anyhow, so I think having web browser for http:// and safe browser for safe:// works best for most people most of the time.

Should there be workarounds / hacks? No. Build the firehose for seamless content delivery, and when people drink enough and want to start adding their own content they’ll be motivated to work it out. I really do think there’s a significant market for a read-only safe browser, at least to get people started and confident.

Mostly, I think a lot of user design / research is missing: usage patterns, threat models, perceived risks, actual risks, concerns, etc. Without this, the design is for ‘nobody’.

To start the collection of data with my own preferences: I would be happy with a very simple, fast, read-only browser that allowed private and secure consumption of all content on the safe network (html, video, audio, etc). For anything more than that (eg commenting, posting, uploading, buying etc) I’m happy to go to a specialized app or a different ‘more advanced’ safe browser with tweaks for privacy and interaction etc.

But make my consumption experience perfect. That is nonnegotiable.


As I read more, I feel the case for a SAFE Browser built from scratch only gets stronger.