I don’t have more on LD rules yet, but can ask. Or by all means ask in the LinkedData gitter yourself.
I got the impression that a lot of what you describe above is already possible with LD.
I haven’t, though, read the material I’ve linked to thoroughly myself, so I’d like to do that now that I realise there’s more to it than I had appreciated. If you turn anything up on this topic please summarise or link to it here, and I’ll do the same.
So I did a bit of Googling and found a recent project called the “Function Ontology” that can be used to define functions in a linked data graph, this seems to be similar to what I made a simple version of in my prototype.
Combine this with a Q&A app similar to Stack Overflow, but where you were required to use the Function Ontology to mark up your functions, and you would have the basis for natural language queries. Today, people often Google a programming question, click through to Stack Overflow, and then have to copy the code out manually; with functions marked up using the Function Ontology this process could, to some extent, be automated. You could write a natural language query, search the index from the Q&A app, and get back a list of candidate functions. In a more limited domain like SAFE apps this might work especially well. People who publish answers to the app could also include things like a license for the function.
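For illustration, a function description in the Function Ontology looks roughly like this in Turtle (a sketch based on the fno examples; the ex: terms and parameter names are invented here):

```turtle
# Illustrative sketch only - the ex: vocabulary and names are invented,
# and the fno: property names should be checked against the spec.
@prefix fno: <https://w3id.org/function/ontology#> .
@prefix ex:  <http://example.org/functions#> .

ex:sum a fno:Function ;
    fno:name "sum" ;
    fno:expects ( ex:intParameterA ex:intParameterB ) ;
    fno:returns ( ex:intOutput ) .
```

A Q&A app could index descriptions like this alongside the natural language answer text, so a query can match on both.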
Good technical overview of Solid/Crosscloud platform (client/server, user/application developer, features and example applications). Best intro yet IMO:
A Demonstration of the Solid Platform for Social Web Applications (PDF)
Notes to self:
LDP PATCH is required according to this, whereas I thought it was an optional server feature; in any case it can be simulated in the client (within the limitations of the client and connection).
LDP SPARQL and link following are optional server features, but clearly powerful, and so we might consider how SAFE API can provide for their implementation - either client side, or perhaps by the network in the future.
Access Control Lists and WebID authentication are key parts of the Solid platform, and it would be worthwhile looking at how the SAFE API could support similar or equivalent functionality in a compatible way (for example, fine-grained control over access to individual resources through an LDP compatible API).
For SAFE API support of Solid features, much can probably be achieved using a solid-safenetwork client application library, which could later be moved into the main SAFE app APIs, but certain features may require changes to SAFE network client libraries if they are to be supported.
To begin with, it would be very useful to know what the current SAFE network and APIs can and cannot accommodate, and how this would limit or block Solid app features when used with SAFEnetwork.
That’s pretty much it. Glad you are looking at this Josh as I would value any feedback, observations, ideas, advice etc
rdflib.js is forked and modified very slightly so I can intercept its fetch()
a Solid app expects to use HTTP to talk to a RESTful (Solid) server
a Solid app can use fetch(), or the rdflib.js Fetcher (which adds functionality)
if it uses other means (e.g. XMLHttpRequest) they must first be converted to fetch()
the Solid/SAFE Plume demo uses both window.fetch() and rdflib.js Fetcher
implements LDP (a subset of Solid server)
lacks WebID auth or access control lists
intercepts any window.fetch() or Fetcher and handles any safe: URLs
is written with more than Solid/LDP in mind. My idea is that additional services could be supported by just adding a ServiceInterface class (file sharing, remoteStorage.io etc)
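The fetch() interception above can be sketched roughly like this (all names here are illustrative, not the library’s actual API):

```javascript
// Rough sketch of the fetch() interception idea. interceptFetch and
// handleSafe are illustrative names, not the library's real API.
// We wrap an existing fetch so safe: URIs go to our own handler and
// everything else passes through untouched.
function interceptFetch (realFetch, handleSafe) {
  return function (input, init) {
    const url = typeof input === 'string' ? input : input.url
    return url.startsWith('safe:')
      ? handleSafe(url, init)   // our LDP-on-SAFE implementation
      : realFetch(input, init)  // http:, https: etc. untouched
  }
}
```

In a browser you would install it with something like `window.fetch = interceptFetch(window.fetch.bind(window), handleSafeFetch)`, and because the rdflib.js Fetcher goes through fetch() too, it gets the same behaviour for free.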
rdflib.js is extensive and I don’t know it very well. The Fetcher handles different data formats, and returns a ‘graph’ which can then be queried directly. Also, I think the Fetcher may be able to crawl multiple resources where one resource refers to another, including on other domains - a bit like a scraper where you set the depth. I’m not sure about that, so take with a pinch of salt, but if so how cool is that!
The SAFE Plume demo uses Fetcher, but doesn’t do anything complex, so I think it could just as well only use fetch() but I may be underestimating it!
Yeh nice. Sounds similar to how I was imagining search indexes to function (some time ago, when I was trying to whip that up back when Structured Data was a thing).
@happybeing, it’s a great idea for smoothing out things for SOLID devs to open up adoption etc
The only thing that pops up is that right now you’re working in one JS file. For end use it should probably be split up, so that you need only include the LDP portion and won’t pull in the rest of the code, services etc. that are irrelevant for you.
Mostly for me it leads into questions about how we should be storing the data on SAFE, and if/how we can treat LDP as something first class perhaps… ? So in the end we could aim to be building apps directly on the network using this setup for data.
How are you storing/writing data on the network at the moment?
An array of objects on the MD?
```javascript
{
  // ... rest of MD stuff
  RDF: [
    { s: 'SAFE',       p: 'is',   o: 'rad'  },
    { s: 'happybeing', p: 'made', o: 'this' }
    // ... more triples
  ]
}
```
Is there any more info that needs to be stored for Linked Data?
Some other thoughts:
I wonder how LD triples lend themselves to site indexing (although probably there’d be a need for more detailed info than RDF triples provides?)
What would be the expected / potential structures of LD for a website or a user profile? (is that standardised anywhere?)
Could we query the network directly for triples? Is that even possible? Is that even desirable?
Thanks for the feedback and support Josh, I’m glad you’re interested in this and it would be great to have as much input from you as you want to / can give
You raise important questions and ideas which are food for thought. Some answers & comments…
The library is one file just for convenience atm, but the code is structured so it can be split up easily, new services added, implementations replaced etc., even by non coders. So yes, that’s to be done.
I’ve also just fleshed out my thoughts on how to standardise this for SAFE, by using RESTful file based services, as you may have seen:
The LDP and Solid standards use a file / document metaphor, which are resources in RDF terminology. So this is how the storage is implemented, and the transactions boil down to CRUD operations on containers and files, but using RDF (usually Turtle for any payload and response body). Turtle is essentially a text format for Subject/Predicate/Object triples (e.g. Josh/works for/Maidsafe etc - BTW that’s not Turtle, just a hint about what a triple is ).
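For illustration, a triple like that would look something like this in actual Turtle (the ex: prefix and property are invented for the example):

```turtle
# Illustrative Turtle only - the ex: vocabulary is invented.
@prefix ex: <http://example.org/terms#> .

ex:Josh ex:worksFor ex:Maidsafe .
```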
Linked Data is generally implemented in a Solid or LDP server as directories of files where each file represents an RDF resource that can be addressed with its own URI. Responses are therefore often parsed into a graph which is then queried and manipulated.
SPARQL queries return the triples which might be a subset of one resource or drawn from multiple resources, even spread across different servers.
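A SPARQL query over triples like the example above might look like this (the ex: vocabulary is invented for the example):

```sparql
# Hypothetical query: find everyone who works for Maidsafe,
# wherever the triples happen to be stored.
PREFIX ex: <http://example.org/terms#>

SELECT ?person WHERE {
  ?person ex:worksFor ex:Maidsafe .
}
```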
So my implementation stores each RDF resource as a file (using SAFE NFS). Content negotiation (not yet implemented) allows the requester to receive results in different forms, depending on the representation that is stored (Turtle, N3, JSON-LD etc) and whatever conversions the server supports from each stored representation. In addition, rdflib.js Fetcher has several conversions built in too - so client side.
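The content negotiation step could be sketched like this (a minimal illustration, not the library’s code; the function name and mapping are invented):

```javascript
// Minimal sketch of content negotiation (not yet implemented in the
// library): pick a representation the client accepts from those we can
// actually serve or convert to. Names here are illustrative.
function negotiate (acceptHeader, available) {
  // Parse "text/turtle;q=0.9, application/ld+json" into bare types,
  // ignoring quality parameters for this sketch.
  const wanted = (acceptHeader || '*/*')
    .split(',')
    .map(part => part.split(';')[0].trim())
  for (const type of wanted) {
    if (type === '*/*') return available[0]   // no preference: first we have
    if (available.includes(type)) return type
  }
  return null // would become a 406 Not Acceptable
}
```

A real implementation would also honour q-values and wildcard ranges like `text/*`, but the shape is the same.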
Note that a resource can be in any format, not just RDF, so plain text, .png, .mp3 etc, but using RDF leverages the semantic Web, SPARQL etc and if you upload a Word document to Tim’s Solid server, a kitten dies.
I’ll be showing the RESTful LDP interface at work, and the Turtle it returns for container listings, which is used to store blog posts in my demo. So if you want to see some really simple LD, maybe take a look at that - it’s all there in the console output if you visit plumetest atm.
I don’t think there’s any limit to what triples can represent, though what is the best implementation is a different matter. Some use file system back ends, some graph databases etc.
For leveraging the existing semantic Web (the data out there, applications, tools, libraries and skills), and to create an ecosystem where SAFE apps can share, reuse and reach across a sea of data, it’s the protocols Solid, LDP and the standards RDF/LinkedData, WebID etc that we should support. And things like the RESTful principles and the [Architecture of the World Wide Web](https://www.w3.org/TR/webarch/). Sitting in on the chats between Tim and his colleagues has taught me a lot about these things, and the value of building with them.
HTML has various ways of integrating with LinkedData and there is a standard way for users to publish a profile as part of a Web identity (a WebID URI), which is something a user owns and controls.
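A minimal WebID profile document, in the style of common Solid examples (all URIs and names here are invented), looks like this:

```turtle
# Illustrative only - URIs and names are invented.
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<https://alice.example/profile/card#me>
    a foaf:Person ;
    foaf:name "Alice" ;
    foaf:knows <https://bob.example/profile/card#me> .
```

The `#me` URI is the WebID itself: something the user owns, hosts where they choose, and can point apps at to identify themselves.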
Now you’re thinking! Lots of ways to play with this, and I haven’t really got the basics down yet. Step by step I plod. I look forward to working with folks to figure things like this out.
Although I’m far from understanding the concepts of the semantic web, the most obvious piece is that of resources being linked and accessed via URIs, which brings us back again to @bochaco’s idea of being able to access data on the network as a SAFE URI via XOR name, like safe://4291165faf85f9964d4c8f5d12e4b0dc31ffda4509d2049704c77a018f922c98 (a hexadecimal slice of a 32 byte XOR name).
Additionally, it’s necessary for a URI to convey the format and metadata of the data it represents, for which https://multiformats.io can provide inspiration.
I think it will be useful both to have a mechanism such as the one you refer to here, and the ability for RDF resources to be accessed by the browser as if they are a Web service. The former will allow linking directly to Immutable Data, and more general forms of file sharing without using a public ID, so will be very valuable. The latter may prove useful in itself, because a short readable URI is a better UI, and can be updated for example, and is available now.
In the DevCon demo you can see me doing the second kind, but I hacked it by building an LDP service on top of a SAFE www service, rather than a separate new LDP SAFE service type - which is necessary if we want to generalise this. I made a proposal for how to do that here:
Where files are stored using NFS we can use the file extension to imply the ContentType (which is how a typical Solid server works), but if the SAFE NFS API is extended to allow this to be set as metadata, we could use that instead.
I’m not sure how important that is for ContentType because using the file extension works well enough, but I think it’s still going to be useful to allow NFS file metadata to be set, and I believe this is already planned.
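The extension-to-ContentType mapping could be sketched like this (the function name is invented and the mapping table is illustrative, not complete):

```javascript
// Sketch: infer a Content-Type from an NFS file extension, the way a
// typical Solid server does. The table below is illustrative only.
const extToType = {
  ttl: 'text/turtle',
  jsonld: 'application/ld+json',
  html: 'text/html',
  png: 'image/png'
}

function contentTypeFor (path, fallback = 'application/octet-stream') {
  const ext = path.includes('.') ? path.split('.').pop().toLowerCase() : ''
  return extToType[ext] || fallback
}
```

If NFS file metadata becomes settable, a stored Content-Type would simply take precedence over this inferred one.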
At the same time I’d like to add the option to have MD entries for containers so they can also have metadata, and can also be explicitly created and deleted, because this improves compatibility with LDP and therefore Solid.
Thanks for your input and please keep digging in, it’s great to see. I am due to chat with @Viv and @Krishna about going forward so tagging them with your interest.