rho at devc.at
Fri Aug 19 20:32:03 CEST 2011
On Fri, Aug 19, 2011 at 09:49:02AM +0100, Axel Polleres wrote:
> HDT has two purposes mainly:
> 1) being a binary compressed RDF serialization format: that's the
> part you guys seem to be interested in, IIUC?
> On 18 Aug 2011, at 17:18, Gregory Williams wrote:
> > I read the submission when it was published in May, and the whole
> > thing strikes me as rather convoluted and over-architected (and most
> > likely having design decisions based on a specific
> > On Aug 18, 2011, at 7:23 AM, Jakob Voss wrote:
> > > I just stumbled upon binary RDF at http://www.rdfhdt.org/ - there was a
> > > first W3C submission in May: http://www.w3.org/Submission/2011/03/
> > >
> > > As far as I understand, HDT includes Bitmap triples as a very compact
> > > representation of RDF graphs,
> > > which still can be queried for triples of the form (s,?p,?o), (s,p,?o),
> > > and (s,p,o) in constant time.
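To make that constant-time claim concrete, here is a toy sketch of the bitmap-triples idea in Python. This is my own illustration, not code from the HDT submission: a real implementation uses succinct bitmaps with O(1) rank/select, whereas I use plain offset arrays, but the supported access patterns are the same.

```python
from bisect import bisect_left

class BitmapTriples:
    """Toy HDT-style bitmap-triples index over integer IDs.

    Triples are sorted; predicates of each subject form one run,
    objects of each (s,p) pair form one run. Offset arrays stand in
    for the rank/select bitmaps a real HDT would use.
    """

    def __init__(self, triples):
        triples = sorted(set(triples))
        self.preds = []        # one entry per distinct (s,p) run
        self.obj_start = []    # where each run's objects begin in self.objs
        self.objs = []
        self.s_range = {}      # subject -> [first, last) run index
        prev_sp = None
        for s, p, o in triples:
            if s not in self.s_range:
                self.s_range[s] = [len(self.preds), len(self.preds)]
            if (s, p) != prev_sp:
                self.preds.append(p)
                self.obj_start.append(len(self.objs))
                self.s_range[s][1] = len(self.preds)
                prev_sp = (s, p)
            self.objs.append(o)
        self.obj_start.append(len(self.objs))   # sentinel

    def pattern(self, s, p=None, o=None):
        """Yield matches for (s,?,?), (s,p,?) or (s,p,o)."""
        if s not in self.s_range:
            return
        lo, hi = self.s_range[s]
        if p is not None:
            # predicates within one subject's run are sorted
            i = bisect_left(self.preds, p, lo, hi)
            if i == hi or self.preds[i] != p:
                return
            lo, hi = i, i + 1
        for i in range(lo, hi):
            for j in range(self.obj_start[i], self.obj_start[i + 1]):
                if o is None or self.objs[j] == o:
                    yield (s, self.preds[i], self.objs[j])
```

Jumping to a subject's predicate run and from there to the object run is a constant number of array accesses; only the binary search over the (usually short) predicate run adds a log factor in this simplification.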
When I looked at the Trine <-> AllegroGraph bridge a while back I was
also confronted with the fact that the current Trine/SPARQL
architecture does not easily allow particular store implementations
to "trickle in" details that could benefit, or even shortcut, certain
steps of SPARQL query evaluation.
AllegroGraph, for instance, would allow pattern matches to be eval'ed
inside the AG server. Instead, we currently let Trine handle the query
mechanics and use the store only to bulk-load triples as they are
needed. Very wasteful.
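Roughly the difference, as a Python sketch (everything here is hypothetical illustration, not actual Trine or AllegroGraph API; the `shipped` counter just makes the waste visible):

```python
class ToyStore:
    """Stand-in for a remote store (think: the AG server); counts how
    many triples travel over the 'wire'."""

    def __init__(self, triples):
        self._triples = list(triples)
        self.shipped = 0

    def all_triples(self):
        # bulk-load: every triple crosses the wire
        self.shipped += len(self._triples)
        return list(self._triples)

    def match(self, s=None, p=None, o=None):
        # server-side pattern match: only the hits cross the wire
        hits = [t for t in self._triples
                if all(q is None or q == v for q, v in zip((s, p, o), t))]
        self.shipped += len(hits)
        return hits


def match_client_side(store, s=None, p=None, o=None):
    """What the bridge does today: fetch everything, filter generically."""
    return [t for t in store.all_triples()
            if all(q is None or q == v for q, v in zip((s, p, o), t))]


def match_pushed_down(store, s=None, p=None, o=None):
    """What a store-aware engine could do: delegate to the server."""
    return store.match(s, p, o)
```

Both paths return the same bindings; the pushed-down one just ships the matches instead of the whole graph.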
I still think (I said something along these lines at the London
Hackathon) that Trine would benefit from refactoring into a
role-based (trait) system. With such roles we could take a
particular store and have it replace certain steps in the query
evaluation. I never managed to get such an architecture working with
a plain-vanilla OO class structure.
So from this angle I can understand Greg's current scepticism.
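What I have in mind with roles, sketched in Python for brevity (Trine is Perl, and none of these class names exist there; the predicate-index role below is just a stand-in for any store-specific shortcut, such as delegating the match to the AG server):

```python
def is_var(x):
    return isinstance(x, str) and x.startswith("?")


class BaseEngine:
    """Query evaluation broken into steps; a role can override one
    step without touching the shared machinery."""

    def __init__(self, triples):
        self.triples = list(triples)

    # --- the step a capable store would replace ------------------
    def match_pattern(self, pat):
        return [t for t in self.triples
                if all(is_var(q) or q == v for q, v in zip(pat, t))]

    # --- generic machinery, shared by every store ----------------
    def evaluate(self, patterns):
        rows = [{}]                       # start with one empty binding
        for pat in patterns:
            nxt = []
            for binding in rows:
                # substitute already-bound variables into the pattern
                bound = tuple(binding.get(q, q) if is_var(q) else q
                              for q in pat)
                for t in self.match_pattern(bound):
                    nb = dict(binding)
                    nb.update({q: v for q, v in zip(pat, t) if is_var(q)})
                    nxt.append(nb)
            rows = nxt
        return rows


class PredicateIndexRole:
    """Role: swap in an indexed match step; everything else is
    inherited unchanged."""

    def match_pattern(self, pat):
        s, p, o = pat
        cands = self.pindex.get(p, []) if not is_var(p) else self.triples
        return [t for t in cands
                if all(is_var(q) or q == v for q, v in zip(pat, t))]


class IndexedEngine(PredicateIndexRole, BaseEngine):
    def __init__(self, triples):
        super().__init__(triples)
        self.pindex = {}
        for t in self.triples:
            self.pindex.setdefault(t[1], []).append(t)
```

The point is the composition: `IndexedEngine` replaces exactly one evaluation step by mixing a role in front of the base class, which is what I would want a Trine store to be able to do with Moose-style roles.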
I know it's a pain :-) But it's worth it!