The default Jena reasoner performs inference over an entire graph, which can be fairly expensive for a very large graph. I was asked today whether there is any way to run inference on just a small subset of such a graph.
I am wondering whether it would be feasible, and would make sense, to create a new in-memory graph, copy the relevant triples from the very large graph into it, and then perform inference on that small graph alone. The purpose is to answer a query over a small subset of the data without incurring the overhead of reasoning over the entire graph.
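For concreteness, here is a rough sketch of the kind of thing I have in mind, assuming a TDB2-backed dataset, a hypothetical ex:Patient class as the selection criterion, and a hypothetical ontology.ttl file holding the schema (none of these names are from my actual setup):

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.tdb2.TDB2Factory;

public class SubsetInference {
    public static void main(String[] args) {
        // Open the (hypothetical) TDB2 store holding the very large graph.
        Dataset dataset = TDB2Factory.connectDataset("/path/to/tdb2");

        // Copy just the relevant triples into a plain in-memory model,
        // here everything said about instances of a hypothetical ex:Patient class.
        String construct =
                "PREFIX ex: <http://example.org/> " +
                "CONSTRUCT { ?s ?p ?o } " +
                "WHERE { ?s a ex:Patient ; ?p ?o }";

        Model subset;
        dataset.begin(ReadWrite.READ);
        try (QueryExecution qe =
                     QueryExecutionFactory.create(construct, dataset.getDefaultModel())) {
            // execConstruct materializes only the matching triples in memory.
            subset = qe.execConstruct(ModelFactory.createDefaultModel());
        } finally {
            dataset.end();
        }

        // The reasoner still needs the class/property axioms, so load the
        // ontology separately (hypothetical ontology.ttl) and bind it as the schema.
        Model schema = RDFDataMgr.loadModel("ontology.ttl");
        InfModel inf = ModelFactory.createInfModel(
                ReasonerRegistry.getRDFSReasoner(), schema, subset);

        // Inference now runs only over the small extracted model plus the schema.
        inf.listStatements().forEachRemaining(System.out::println);
    }
}

The thinking behind the sketch is that the CONSTRUCT query does the copying, and passing the ontology as the schema argument of createInfModel keeps the reasoner's working set limited to the extracted subset plus the axioms it needs.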
Is this a common practice? A best practice? Are there recommended ways to efficiently copy the relevant triples from the stored graph into the in-memory graph?
If I could get a response by noon EST Friday, that would be great, as I have a presentation at 1 where this may come up.