SAFE Drive - help with testing

I don’t even know what that means! :slight_smile:

If anyone would like to assist with that, or create an issue requesting it with more details (what, why etc.), that would be great.

Oh, I meant a Personal Package Archive (PPA) as described here:
https://help.launchpad.net/Packaging/PPA
It would just mean everybody who puts the PPA in their sources.list would always get the latest packages and dependencies installed automatically. I guess it’s a bit like AUR for Arch. I wish all SAFE stuff could be packaged like that. Just sudo apt-get … and there you go testing the latest stuff.
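For anyone unfamiliar, the user side of a PPA is just a couple of commands. A rough sketch, with a made-up PPA and package name since no such archive exists for SAFE Drive yet:

    sudo add-apt-repository ppa:some-maintainer/safe-drive   # hypothetical PPA name
    sudo apt-get update
    sudo apt-get install safe-drive                           # hypothetical package name
    sudo apt-get upgrade   # later runs pull in new releases automatically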

Ah, thanks for explaining.

Indeed, but the reason it isn’t is the work involved. I haven’t tried to do this yet, so step 1 is days of researching and trying things out, then implementing it properly, which I suspect is tricky to get right, and then of course more testing and maintenance in the release process.

So it is great for users, but for a one-person project it’s quite a big ask.

If I had time, my first priority would be to put together some regression tests to catch things like the issue above, which it looks like John has found, but… time.

Also, go through all the comments and documentation I’ve written that are now out of date, or need fixing for JSDoc, better explanations etc.

But there are also vital features that need writing, lots of bugs to flush out and fix and so on.

So don’t hold your breath :wink:

1 Like

I’ve never managed a PPA but I can imagine it requires some work. It’s probably too much to ask of a one person project, but maybe Maidsafe as a company could look into this. Easier testing would mean more testers and more bugs discovered. Also, the whole back and forth about whether somebody is using the right dependencies or the latest packages when testing could always be answered by a simple: Have you done apt-get update? Just a thought.

Having thought a little more myself :slight_smile: I’m not sure a PPA helps SAFE Drive much with this, because for end users I’m already shipping a packaged executable to avoid the need to check the Node version etc. It would help with dependencies though, but then you have to package for each different OS (and John is not on Ubuntu).

It is worth noting that whatever packaging method is used, we still have to do this testing - exactly what John has been helping with, and which revealed I wasn’t including debug for safenetworkjs. That’s what gets us to the stage where we can put out a built package.

1 Like

I did a little research on how to implement regression tests for SAFE Drive and created an issue with some pointers. I found a package called BATS that looks like it will help a lot. I haven’t evaluated it, but it looks like a good place to start.
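To give a flavour, a BATS test is just an annotated shell function. A rough sketch (the mount point, fixture file and its contents are made up for illustration):

    #!/usr/bin/env bats
    # assumes SAFE Drive is already mounted at ~/SAFE and that
    # fixtures/hello.txt contains the single line 'hello world'

    @test "copy a file onto the mounted drive" {
      run cp fixtures/hello.txt ~/SAFE/_public/test/hello.txt
      [ "$status" -eq 0 ]
    }

    @test "read the file back" {
      run cat ~/SAFE/_public/test/hello.txt
      [ "$status" -eq 0 ]
      [ "$output" = "hello world" ]
    }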

If anyone fancies a go at this, reply to the issue to let us know you’re picking it up.

The good thing about writing tests for SAFE Drive is that they will also do a good job testing many features of safenetworkjs until that has its own API based regression tests, which isn’t likely to happen any time soon!

I think you mean

1 Like

I have just fixed this (fingers crossed) and pushed the change so you can pick up the latest code and should find it now works much better. The issue you spotted could have affected a lot of functionality.

To get the change, in your safenetwork-fuse directory, you should just need to do: git pull

EDIT: …and the same in your safenetworkjs directory!
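In other words, something like this (adjust the paths to wherever you cloned each repo):

    cd ~/path/to/safenetwork-fuse && git pull
    cd ~/path/to/safenetworkjs && git pull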

1 Like

Applause for @JPL please :clap: :clap: :clap:, he’s helped me identify and fix three issues in the last week.

I’m a happy :rabbit:

And he’s offered to have a look at setting up regression tests, which will be amazing! Others can help with this I expect, so let us know if you are willing. See: https://github.com/theWebalyst/safenetwork-fuse/issues/13

4 Likes

Looking good so far …

4 Likes

Works a treat - really quick too, considering it’s not optimised. A brief summary of my meanderings.
Copy a file hello2 to a pre-existing folder with files in it. :white_check_mark:
cat hello2 to read the file :white_check_mark:
firefox hello2 :white_check_mark:
ls to see the contents of the directory :white_check_mark:
cat a much longer file - very quick :white_check_mark:

Copy a 30 KB image file to the mounted SAFE drive: 3 seconds
200 KB image: 5 seconds
2 MB image: 50 seconds

Copying a 4.4 MB file from the mounted SAFE drive to the local drive: 7 seconds

echo Hello world > ~/SAFE/_public/jpl/root-safe-blues/helloworld.html :white_check_mark:

Make a directory, e.g. mkdir ~/SAFE/_public/jpl/root-www, worked but I can’t copy a file into it
cp hello2 SAFE/_public/jpl/root-www/hello2 :negative_squared_cross_mark: (I think this is known and just not done yet)
error: cp: cannot create regular file 'SAFE/_public/jpl/root-www/hello2': No such file or directory
After trying that, the new folder is no longer there: ls SAFE/_public/jpl/ does not show the folder.

Observation: new folders created via Web Hosting Manager are not recognised on the mounted SAFE drive until they contain something

Automounting containers as per this post works a treat :white_check_mark:

Accessing the mounted SAFE drive via file manager (Thunar), Firefox, VLC, etc all work :white_check_mark:

Observation: all this tinkering only cost me 35 PUTs. Pretty sure it would have been more uploading files via Web Hosting Manager?

I think @happybeing’s cracked it you know. :tada::champagne:

8 Likes

One thing I forgot. Running node bin starts Peruse but does not mount the SAFE drive. That requires issuing the command again from another console window.
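In other words (the second run goes in a separate console window):

    node bin   # starts Peruse, but the drive doesn't mount
    node bin   # run again from another console: this one mounts the SAFE drive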

2 Likes

This is great @JPL, thanks yet again.

Not only for the very practical help. Seeing other people explore and use stuff I’ve built or worked on is one of the biggest rewards I get from making things, and I expect other ‘makers’ will agree.

Correct. You can make directories fine, but they aren’t real until you copy a file into them (same for WHM). This is why you don’t see an empty directory created by WHM in SAFE Drive (or vice versa), until you copy a file to it. I imagine that won’t make sense to anyone who hasn’t delved into the API, which makes it very hard to help a user understand what’s going on here.

The above (create directory, copy file into it) would work in an existing container created by WHM, but I have not provided a way to make the containers using SAFE Drive yet.

_public is a container of containers, but doesn’t hold any files itself. So you need to create a container inside _public, and then you can save files to that. My thought was to create the file container automatically when you copy the first file into a new directory (for which there is not yet a container), but I’m not sure about that. I wonder what you and others think? I think it makes it really easy, but maybe too easy to create lots of containers without realising it.
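To make that concrete, the idea is that this sequence would just work, with the missing container created behind the scenes on the first copy (paths are illustrative):

    mkdir ~/SAFE/_public/jpl/new-site        # so far the directory exists only locally
    cp index.html ~/SAFE/_public/jpl/new-site/
    # the first copy would create the container inside _public automatically,
    # then save index.html into it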

The alternative would be to provide a separate command line utility to create a container, but I think that’s going to be too hard for users.

I have tried to keep PUTs and particularly GETs to a minimum, but I’m not sure whether there’s a real difference here - they do pretty much the same thing. Maybe somebody will do some comparisons.

Did you try mounting website containers (_webMounts/cat.ashi, _webMounts/heaven), or automounting SAFE containers (_public, _publicNames), or both?

I tried git init --bare last night and it almost works - fails right at the end, so I can’t yet publish a repo this way, but anyone can now share files. Who will be first to share files over SAFE Network using SAFE Drive?

How about it? :slight_smile:

4 Likes

Both!

What does git init --bare do? Are you trying to create a Git repo in SAFE?

1 Like

Yes, a bare repository works like a central repository - GitHub without the fancy stuff, just git push/pull etc. So I could have a repo that I publish changes to, and others could pull them. It’s how people used git before GitHub, with a shared drive for example.

It falls over with file operations at certain points, and has helped me find and fix a few bugs along the way, but it’s still not completing. So we may be a way off that working yet.
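For anyone who hasn’t used one, the workflow once this is fixed would look roughly like this (paths and branch names are illustrative):

    # publisher, with SAFE Drive mounted:
    git init --bare ~/SAFE/_public/jpl/myrepo.git
    cd ~/work/myproject
    git remote add safe ~/SAFE/_public/jpl/myrepo.git
    git push safe master

    # anyone else who can read the container:
    git clone ~/SAFE/_public/jpl/myrepo.git myproject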

Another bug? I tried copying a file into ~/SAFE/_webMounts/cat.ashi and it’s hanging (debug output below). How do you create the _webMounts container anyway?

Summary
  safe-fuse:ops wrote 1689 bytes +1ms
   write[1] 1689 bytes to 53248
   unique: 196, success, outsize: 24
unique: 197, opcode: FLUSH (25), nodeid: 13, insize: 64, pid: 6798
flush[1]
  safe-fuse:stub TODO: implement fuse operation: flush(/_webMounts/cat.ashi/tired-face.jpg, fd) 1 +2s
   unique: 197, success, outsize: 16
unique: 198, opcode: RELEASE (18), nodeid: 13, insize: 64, pid: 0
release[1] flags: 0x8001
  safe-fuse:ops release('/_webMounts/cat.ashi/tired-face.jpg', 1) +0ms
  safe-fuse:vfs:index getHandler(/_webMounts/cat.ashi/tired-face.jpg) +1ms
  safe-fuse:vfs:index getHandler(/_webMounts/cat.ashi) +0ms
  safe-fuse:vfs:root getHandlerFor(/_webMounts/cat.ashi) - containerRef: { safeUri: 'safe://cat.ashi' }, mountPath: /_webMounts/cat.ashi +1ms
  safe-fuse:vfs:root getHandlerFor(/_webMounts/cat.ashi/tired-face.jpg) - containerRef: { safeUri: 'safe://cat.ashi' }, mountPath: /_webMounts/cat.ashi +0ms
  safe-fuse:vfs:root RootHandler for { safeUri: 'safe://cat.ashi' } mounted at /_webMounts/cat.ashi close('/_webMounts/cat.ashi/tired-face.jpg') +0ms
  safenetworkjs:container NfsContainer.closeFile('tired-face.jpg', 1) +1ms
  safenetworkjs:container NfsContainer._clearResultForPath(tired-face.jpg) +0ms
  safenetworkjs:file NfsContainerFiles.closeFile('tired-face.jpg', 1) +0ms
  safenetworkjs:file doing insert('tired-face.jpg') +583ms
  safenetworkjs:cli fromUri(app, [object Object]) +3m
  safenetworkjs:cli ipcReceive(6685) +816ms
Server path not specified, so defaulting to ipc.config.socketRoot + ipc.config.appspace + ipc.config.id /tmp/app.6685
starting server on  /tmp/app.6685 
starting TLS server false
starting server as Unix || Windows Socket

As you don’t yet have write access to the cat.ashi container, it requests access via Peruse. So that’s why it’s waiting.

Is cat.ashi yours? I’m not sure what would happen if you ask Peruse for access to a container that isn’t yours. Is it possible that you didn’t notice Peruse asking you to authorise?

Ah, now you’re asking. It’s not real. Is anything? :wink:

BTW I fixed a few bugs today so it may be worth pulling the changes in both projects. Same as last time.

Peruse popped up but it didn’t ask for authorisation. I guess that’s the problem. cat.ashi isn’t mine - I don’t have a cat :wink:

OK, will try

EDIT: OK got it.

ls ~/SAFE/_webMounts/www.safenetworkprimer lists:
index.html introbg1.png styles.css
ls ~/SAFE/_webMounts/cat.jpl1 lists:
cat.jpg index.html

So _webMounts is a virtual mounted container and I only have write access to my own containers.

It does need to fail gracefully if someone tries, though. At the moment it seems it’s waiting for authorisation but not asking for it.

Agreed, but I often get this at random times during testing - so it may not be the _webMounts related code that’s the cause. I’m hopeful the browser and API will get better at this over time because I’m not sure if there’s anything I can do about it.

We need more data points.

It’s a good one to add to the ‘torture tests’ list :wink:

I’ve had a look into BATS and it requires a knowledge of Bash scripting, which I don’t possess or have time to learn at the moment, so I’ll have to leave it to someone more knowledgeable I’m afraid. Hopefully someone will pick it up.

1 Like